Category Archives: computers

LMFDB!

Very happy to see that the L-functions and Modular Forms Database is now live!

When I was a kid we looked up our elliptic curves in Cremona’s tables, on paper.  Then William Stein created the Modular Forms Database (you can still go there but it doesn’t really work) and suddenly you could look at the q-expansions of cusp forms in whatever weight and level you wanted, up to the limits of what William had computed.

The LMFDB is a sort of massively souped up version of Cremona and Stein, put together by a team of dozens and dozens of number theorists, including too many friends of mine to name individually.  And it’s a lot more than what the title suggests:  the incredibly useful Jones-Roberts database of local fields is built in; there’s a database of genus 2 curves over Q with small conductor; it even has Maass forms!  I’ve been playing with it all night.  It’s like an adventure playground for number theorists.

One of my first trips through Stein’s database came when I was a postdoc and was thinking about Ljunggren’s equation y^2 + 1 = 2x^4.  This equation has a large solution, (13,239), which has to do with the classical identity

\pi/4 = 4\arctan(1/5) - \arctan(1/239).

It turns out, as I explain in an old survey paper, that the existence of such a large solution is “explained” by the presence of a certain weight-2 cuspform in level 1024 whose mod-5 Galois representation is reducible.

With the LMFDB, you can easily wander around looking for more such examples!  For instance, you can very easily ask the database for non-CM elliptic curves whose mod-7 Galois representation is nonsurjective.  Among those, you can find this handsome curve of conductor 1296, which has very large height relative to its conductor.  Applying the usual Frey curve trick you can turn this curve into the Diophantine oddity

3*48383^2 – (1915)^3 = 2^13.

Huh — I wonder whether people ever thought about this Diophantine problem, when can the difference between a cube and three times a square be a power of 2?  Of course they did!  I just Googled

48383 Diophantine

and found this paper of Stanley Rabinowitz from 1978, which finds all solutions to that problem, of which this one is the largest.

Now whether you can massage this into an arctan identity, that I don’t know!



AI challop

When I was a kid people thought it would be a long time before computers could adequately translate natural language text, or play Go against a human being, because you’d need some kind of AI to do those things, and AI seemed really hard.

Now we know that you can get pretty decent translation and Go without anything like AI.  But AI still seems really hard.


Gendercycle: a dynamical system on words

By the way, here’s another fun word2vec trick.  Following Ben Schmidt, you can try to find “gender-neutralized synonyms” — words which are close to each other except for the fact that they have different gender connotations.   A quick and dirty way to “mascify” a word is to find its nearest neighbor which is closer to “he” than “she”:

def mascify(y): return [x[0] for x in model.most_similar(y,topn=200) if model.similarity(x[0],'she') < model.similarity(x[0],'he')][0]

“femify” is defined similarly.  We could put a threshold away from 0 in there, if we wanted to restrict to more strongly gender-coded words.
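To make that concrete, here is one way the mirror-image helper might look, with the optional threshold baked in (just a sketch, assuming the same loaded word2vec model object as in the snippet above):

def femify(y, threshold=0.0):
    # Nearest neighbor of y that is closer to 'she' than to 'he', by at least the threshold.
    return [x[0] for x in model.most_similar(y, topn=200)
            if model.similarity(x[0], 'she') > model.similarity(x[0], 'he') + threshold][0]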

Anyway, if you start with a word and run mascify and femify alternately, you can ask whether you eventually wind up in a 2-cycle:  a pair of words which are each other’s gender counterparts in this loose sense.

e.g.

gentle -> easygoing -> chatty -> talkative -> chatty -> …..

So “chatty” and “talkative” are a pair, with “chatty” female-coded and “talkative” male-coded.

beautiful -> magnificent -> wonderful -> marvelous -> wonderful -> …

So far, I keep hitting 2-cycles, and pretty quickly, though I don’t see why a longer cycle wouldn’t be possible or likely.  Update:  Kevin in comments explains very nicely why it has to terminate in a 2-cycle!
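The iteration itself is only a few lines (a sketch, using the mascify/femify helpers above; since the vocabulary is finite and each step is deterministic, the trajectory has to repeat eventually):

def gendercycle(word, max_steps=20):
    # Alternate mascify and femify, stopping the first time we revisit a word.
    seen = [word]
    for i in range(max_steps):
        word = mascify(word) if i % 2 == 0 else femify(word)
        if word in seen:
            return seen + [word]
        seen.append(word)
    return seen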

Some other pairs, female-coded word first:

overjoyed / elated

strident / vehement

fearful / worried

furious / livid

distraught / despondent

hilarious / funny

exquisite / sumptuous

thought_provoking / insightful

kick_ass / badass

Sometimes it’s basically giving the same word, in two different forms or with one word misspelled:

intuitive / intuitively

buoyant / bouyant

sad / Sad

You can do this for names, too, though you have to make the “topn” a little longer to find matches.  I found:

Jamie / Chris

Deborah / Jeffrey

Fran / Pat

Mary / Joseph

Pretty good pairs!  Note that you hit a lot of gender-mixed names (Jamie, Chris, Pat), just as you might expect:  the male-biased name word2vec-closest to a female name is likely to be a name at least some women have!  You can deal with this by putting in a threshold:

>>> def mascify(y): return [x[0] for x in model.most_similar(y,topn=1000) if model.similarity(x[0],'she') < model.similarity(x[0],'he') - 0.1][0]

This eliminates “Jamie” and “Pat” (though “Chris” still reads as male.)

Now we get some new pairs:

Jody / Steve (this one seems to have a big basin of attraction, it shows up from a lot of initial conditions)

Kasey / Zach

Peter / Catherine (is this a Russia thing?)

Nicola / Dominic

Alison / Ian



Messing around with word2vec

Word2vec, developed by Tomas Mikolov and his team at Google, is a way of representing words and phrases as vectors in medium-dimensional space; you can train it on any corpus you like (see Ben Schmidt’s blog for some great examples), but the version of the embedding you can download was trained on about 100 billion words of Google News, and encodes words as unit vectors in 300-dimensional space.

What really got people’s attention, when this came out, was word2vec’s ability to linearize analogies.  For example:  if x is the vector representing “king,” and y the vector representing “woman,” and z the vector representing “man,” then consider

x + y – z

which you might think of, in semantic space, as being the concept “king” to which “woman” has been added and “man” subtracted — in other words, “king made more female.”  What word lies closest in direction to x+y-z?  Just as you might hope, the answer is “queen.”
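In gensim the query is a one-liner (assuming, as in all the snippets in this post, that the Google News vectors have been loaded into an object called model); the top hit comes back as “queen”:

>>> model.most_similar(positive=['king','woman'],negative=['man'],topn=1)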

I found this really startling.  Does it mean that there’s some hidden linear structure in the space of words?

It turns out it’s not quite that simple.  I played around with word2vec a bunch, using Radim Řehůřek’s gensim package that nicely pulls everything into python; here’s what I learned about what the embedding is and isn’t telling you.

Word2Vec distance isn’t semantic distance

The Word2Vec metric tends to place two words close to each other if they occur in similar contexts — that is, w and w’ are close to each other if the words that tend to show up near w also tend to show up near w’.  (This is probably an oversimplification, but see this paper of Levy and Goldberg for a more precise formulation.)  If two words are very close to synonymous, you’d expect them to show up in similar contexts, and indeed synonymous words tend to be close:

>>> model.similarity('tremendous','enormous')
0.74432902555062841

The notion of similarity used here is just cosine similarity (which is to say, the dot product of the unit vectors).  It’s positive when the words are close to each other, negative when the words are far.  For two completely random words, the similarity is pretty close to 0.
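You can check that this is just a normalized dot product by hand (a quick sketch; numpy and the loaded model are assumed):

import numpy as np
v, w = model['tremendous'], model['enormous']
print(np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w)))  # agrees with model.similarity('tremendous','enormous')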

On the other hand:

>>> model.similarity('tremendous','negligible')
0.37869063705009987

Tremendous and negligible are very far apart semantically; but both words are likely to occur in contexts where we’re talking about size, and using long, Latinate words.  ‘Negligible’ is actually one of the 500 words closest to ’tremendous’ in the whole 3m-word database.

You might ask:  well, what words in Word2Vec are farthest from “tremendous?”  You just get trash:

>>> model.most_similar(negative=['tremendous'])
[(u'By_DENISE_DICK', 0.2792186141014099), (u'NAVARRE_CORPORATION', 0.26894450187683105), (u'By_SEAN_BARRON', 0.26745346188545227), (u'LEGAL_NOTICES', 0.25829464197158813), (u'Ky.Busch_##-###', 0.2564955949783325), (u'desultorily', 0.2563159763813019), (u'M.Kenseth_###-###', 0.2562236189842224), (u'J.McMurray_###-###', 0.25608277320861816), (u'D.Earnhardt_Jr._###-###', 0.2547803819179535), (u'david.brett_@_thomsonreuters.com', 0.2520599961280823)]

If 3 million words were distributed randomly in the unit ball in R^300, you’d expect the farthest one from “tremendous” to have dot product about -0.3 from it.  So when you see a list whose largest score is around that size, you should think “there’s no structure there, this is just noise.”
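Where does the -0.3 ballpark come from?  The dot product of a fixed unit vector with a random unit vector in R^d is roughly Gaussian with standard deviation 1/sqrt(d), so the most extreme of n of them should be around sqrt(2 log n / d).  You can evaluate that heuristic directly (a sketch):

import math
n, d = 3e6, 300
print(math.sqrt(2 * math.log(n) / d))  # about 0.32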

Antonyms

Challenge problem:  Is there a way to accurately generate antonyms using the word2vec embedding?  That seems to me the sort of thing the embedding is not capturing.  Kyle McDonald had a nice go at this, but I think the lesson of his experiment is that asking word2vec to find analogies of the form “word is to antonym as happy is to?” is just going to generate a list of neighbors of “happy.”  McDonald’s results also cast some light on the structure of word2vec analogies:  for instance, he finds that

waste is to economise as happy is to chuffed

First of all, “chuffed” is a synonym of happy, not an antonym.  But more importantly:  The reason “chuffed” is there is because it’s a way that British people say “happy,” just as “economise” is a way British people spell “economize.”  Change the spelling and you get

waste is to economize as happy is to glad

Non-semantic properties of words matter to word2vec.  They matter a lot.  Which brings us to diction.

Word2Vec distance keeps track of diction

Lots of non-semantic stuff is going on in natural language.  Like diction, which can be high or low, formal or informal, flowery or concrete.    Look at the nearest neighbors of “pugnacity”:

>>> model.most_similar('pugnacity')
[(u'pugnaciousness', 0.6015268564224243), (u'wonkishness', 0.6014434099197388), (u'pugnacious', 0.5877301692962646), (u'eloquence', 0.5875781774520874), (u'sang_froid', 0.5873805284500122), (u'truculence', 0.5838015079498291), (u'pithiness', 0.5773230195045471), (u'irascibility', 0.5772287845611572), (u'hotheadedness', 0.5741063356399536), (u'sangfroid', 0.5715578198432922)]

Some of these are close semantically to pugnacity, but others, like “wonkishness,” “eloquence”, and “sangfroid,” are really just the kind of elevated-diction words the kind of person who says “pugnacity” would also say.

In the other direction:

>>> model.most_similar('psyched')
[(u'geeked', 0.7244787216186523), (u'excited', 0.6678282022476196), (u'jazzed', 0.666187584400177), (u'bummed', 0.662735104560852), (u'amped', 0.6473385691642761), (u'pysched', 0.6245862245559692), (u'exicted', 0.6116108894348145), (u'awesome', 0.5838013887405396), (u'enthused', 0.581687331199646), (u'kinda_bummed', 0.5701783299446106)]

“geeked” is a pretty good synonym, but “bummed” is an antonym.  You may also note that contexts where “psyched” is common are also fertile ground for “pysched.”  This leads me to one of my favorite classes of examples:

Misspelling analogies

Which words are closest to “teh”?

>>> model.most_similar('teh')
[(u'ther', 0.6910992860794067), (u'hte', 0.6501408815383911), (u'fo', 0.6458913683891296), (u'tha', 0.6098173260688782), (u'te', 0.6042138934135437), (u'ot', 0.595798909664154), (u'thats', 0.595078706741333), (u'od', 0.5908242464065552), (u'tho', 0.58894944190979), (u'oa', 0.5846965312957764)]

Makes sense:  the contexts where “teh” is common are those contexts where a lot of words are misspelled.

Using the “analogy” gadget, we can ask: which word is to “because” as “teh” is to “the”?

>>> model.most_similar(positive=['because','teh'],negative=['the'])
[(u'becuase', 0.6815075278282166), (u'becasue', 0.6744950413703918), (u'cuz', 0.6165347099304199), (u'becuz', 0.6027254462242126), (u'coz', 0.580410361289978), (u'b_c', 0.5737690925598145), (u'tho', 0.5647958517074585), (u'beacause', 0.5630674362182617), (u'thats', 0.5605655908584595), (u'lol', 0.5597798228263855)]

Or “like”?

>>> model.most_similar(positive=['like','teh'],negative=['the'])
[(u'liek', 0.678846001625061), (u'ok', 0.6136218309402466), (u'hahah', 0.5887773633003235), (u'lke', 0.5840467214584351), (u'probly', 0.5819578170776367), (u'lol', 0.5802655816078186), (u'becuz', 0.5771245956420898), (u'wierd', 0.5759704113006592), (u'dunno', 0.5709049701690674), (u'tho', 0.565370500087738)]

Note that this doesn’t always work:

>>> model.most_similar(positive=['should','teh'],negative=['the'])
[(u'wil', 0.63351970911026), (u'cant', 0.6080706715583801), (u'wont', 0.5967696309089661), (u'dont', 0.5911301970481873), (u'shold', 0.5908039212226868), (u'shoud', 0.5776053667068481), (u'shoudl', 0.5491836071014404), (u"would'nt", 0.5474458932876587), (u'shld', 0.5443994402885437), (u'wouldnt', 0.5413904190063477)]

What are word2vec analogies?

Now let’s come back to the more philosophical question.  Should the effectiveness of word2vec at solving analogy problems make us think that the space of words really has linear structure?

I don’t think so.  Again, I learned something important from the work of Levy and Goldberg.  When word2vec wants to find the word w which is to x as y is to z, it is trying to find w maximizing the dot product

w . (x + y – z)

But this is the same thing as maximizing

w.x + w.y – w.z.

In other words, what word2vec is really doing is saying

“Show me words which are similar to x and y but are dissimilar to z.”

This notion makes sense applied to any notion of similarity, whether or not it has anything to do with embedding in a vector space.  For example, Levy and Goldberg experiment with maximizing

log(w.x) + log(w.y) – log(w.z)

instead, and get somewhat superior results on the analogy task.  Optimizing this objective has nothing to do with the linear combination x+y-z.
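If you want to experiment with this multiplicative version yourself, gensim exposes essentially this objective as most_similar_cosmul (a sketch; same loaded model as before):

>>> model.most_similar_cosmul(positive=['king','woman'],negative=['man'],topn=1)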

None of which is to deny that the analogy engine in word2vec works well in many interesting cases!  It has no trouble, for instance, figuring out that Baltimore is to Maryland as Milwaukee is to Wisconsin.  More often than not, the Milwaukee of state X correctly returns the largest city in state X.  And sometimes, when it doesn’t, it gives the right answer anyway:  for instance, the Milwaukee of Ohio is Cleveland, a much better answer than Ohio’s largest city (Columbus — you knew that, right?)  The Milwaukee of Virginia, according to word2vec, is Charlottesville, which seems clearly wrong.  But maybe that’s OK — maybe there really isn’t a Milwaukee of Virginia.  One feels Richmond is a better guess than Charlottesville, but it scores notably lower.  (Note:  Word2Vec’s database doesn’t have Virginia_Beach, the largest city in Virginia.  That one I didn’t know.)

Another interesting case:  what is to state X as Gainesville is to Florida?  The answer should be “the location of the, or at least a, flagship state university, which isn’t the capital or even a major city of the state,” when such a city exists.  But this doesn’t seem to be something word2vec is good at finding.  The Gainesville of Virginia is Charlottesville, as it should be.  But the Gainesville of Georgia is Newnan.  Newnan?  Well, it turns out there’s a Newnan, Georgia, and there’s also a Newnan’s Lake in Gainesville, FL; I think that’s what’s driving the response.  That, and the fact that “Athens”, the right answer, is contextually separated from Georgia by the existence of Athens, Greece.

The Gainesville of Tennessee is Cookeville, though Knoxville, the right answer, comes a close second.

Why?  You can check that Knoxville, according to word2vec, is much closer to Gainesville than Cookeville is.

>>> model.similarity('Cookeville','Gainesville')
0.5457580604439547

>>> model.similarity('Knoxville','Gainesville')
0.64010456774402158

But Knoxville is placed much closer to Florida!

>>> model.similarity('Cookeville','Florida')
0.2044376252927515

>>> model.similarity('Knoxville','Florida')
0.36523836770416895

Remember:  what word2vec is really optimizing for here is “words which are close to Gainesville and close to Tennessee, and which are not close to Florida.”  And here you see that phenomenon very clearly.  I don’t think the semantic relationship between ‘Gainesville’ and ‘Florida’ is something word2vec is really capturing.  Similarly:  the Gainesville of Illinois is Edwardsville (though Champaign, Champaign_Urbana, and Urbana are all top 5) and the Gainesville of Indiana is Connersville.  (The top 5 for Indiana are all cities ending in “ville” — is the phonetic similarity playing some role?)

Just for fun, here’s a scatterplot of the 1000 nearest neighbors of ‘Gainesville’, with their similarity to ‘Gainesville’ (x-axis) plotted against their similarity to ‘Tennessee’ (y-axis):

[Scatterplot: the 1000 nearest neighbors of ‘Gainesville’, with similarity to ‘Gainesville’ on the x-axis and similarity to ‘Tennessee’ on the y-axis.]

The Pareto frontier consists of “Tennessee” (that’s the one whose similarity to “Tennessee” is 1, obviously), Knoxville, Jacksonville, and Tallahassee.
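Here is one way to make a plot like that, if you want to try other word pairs (a sketch; matplotlib and the loaded model are assumed):

import matplotlib.pyplot as plt
# 1000 nearest neighbors of 'Gainesville', plotted by similarity to 'Gainesville' (x) and to 'Tennessee' (y).
neighbors = [w for w, s in model.most_similar('Gainesville', topn=1000)]
xs = [model.similarity(w, 'Gainesville') for w in neighbors]
ys = [model.similarity(w, 'Tennessee') for w in neighbors]
plt.scatter(xs, ys)
plt.xlabel("similarity to 'Gainesville'")
plt.ylabel("similarity to 'Tennessee'")
plt.show()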

Bag of contexts

One popular simple linear model of word space is given by representing a word as a “bag of contexts” — perhaps there are several thousand contexts, and each word is given by a sparse vector in the space spanned by contexts:  coefficient 0 if the word is not in that context, 1 if it is.  In that setting, certain kinds of analogies would be linearized and certain kinds would not.  If “major city” is a context, then “Houston” and “Dallas” might have vectors that looked like “Texas” with the coordinate of “major city” flipped from 0 to 1.  Ditto, “Milwaukee” would be “Wisconsin” with the same basis vector added.  So

“Texas” + “Milwaukee” – “Wisconsin”

would be pretty close, in that space, to “Houston” and “Dallas.”

On the other hand, it’s not so easy to see what relations antonyms would have in that space. That’s the kind of relationship the bag of contexts may not make linear.

The word2vec space is only 300-dimensional, and the vectors aren’t sparse at all.  But maybe we should think of it as a random low-dimensional projection of a bag-of-contexts embedding!  By the Johnson-Lindenstrauss lemma, a 300-dimensional projection is plenty big enough to preserve the distances between 3 million points with a small distortion factor; and of course it preserves all linear relationships on the nose.
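Here’s a toy version of that picture, using the Texas/Milwaukee example from above (a sketch with made-up contexts, just to illustrate that linear analogy structure survives a random projection exactly):

import numpy as np
rng = np.random.RandomState(0)
# Toy bag-of-contexts vectors: Houston = Texas + 'major_city', Milwaukee = Wisconsin + 'major_city'.
contexts = ['US_state', 'midwest', 'south', 'major_city', 'lake_michigan']  # coordinate labels, in order
bag = {
    'Wisconsin': np.array([1., 1., 0., 0., 1.]),
    'Milwaukee': np.array([1., 1., 0., 1., 1.]),
    'Texas':     np.array([1., 0., 1., 0., 0.]),
    'Houston':   np.array([1., 0., 1., 1., 0.]),
}
P = rng.randn(3, 5)  # a random projection down to low dimension
proj = {w: P.dot(v) for w, v in bag.items()}
target = proj['Texas'] + proj['Milwaukee'] - proj['Wisconsin']
print(np.allclose(target, proj['Houston']))  # True: the linear relation is preserved by the projection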

Perhaps this point of view gives some insight into which kind of word relationships manifest as linear relationships in word2vec.  “flock:birds” is an interesting example.  If you imagine “group of things” as a context, you can maybe imagine word2vec picking this up.  But actually, it doesn’t do well:

>>> model.most_similar(positive=['fish','flock'],negative=['birds'])
[(u'crays', 0.4601619839668274), (u'threadfin_salmon', 0.4553075134754181), (u'spear_fishers', 0.44864755868911743), (u'slab_crappies', 0.4483765661716461), (u'flocked', 0.44473177194595337), (u'Siltcoos_Lake', 0.4429660737514496), (u'flounder', 0.4414420425891876), (u'catfish', 0.4413948059082031), (u'yellowtail_snappers', 0.4410281181335449), (u'sockeyes', 0.4395104944705963)]

>>> model.most_similar(positive=['dogs','flock'],negative=['birds'])
[(u'dog', 0.5390862226486206), (u'pooches', 0.5000904202461243), (u'Eminem_Darth_Vader', 0.48777419328689575), (u'Labrador_Retrievers', 0.4792211949825287), (u'canines', 0.4766522943973541), (u'barked_incessantly', 0.4709487557411194), (u'Rottweilers_pit_bulls', 0.4708423614501953), (u'labradoodles', 0.47032350301742554), (u'rottweilers', 0.46935927867889404), (u'forbidding_trespassers', 0.4649636149406433)]

The answers “school” and “pack” don’t appear here.  Part of this, of course, is that “flock,” “school”, and “pack” all have interfering alternate meanings.  But part of it is that the analogy really rests on information about contexts in which the words “flock” and “birds” both appear.  In particular, in a short text window featuring both words, you are going to see a huge spike of “of” appearing right after flock and right before birds.  A statement of the form “flock is to birds as X is to Y” can’t be true unless “X of Y” actually shows up in the corpus a lot.

Challenge problem:  Can you make word2vec do a good job with relations like “flock:birds”?  As I said above, I wouldn’t have been shocked if this had actually worked out of the box, so maybe there’s some minor tweak that makes it work.

Boys’ names, girls’ names

Back to gender-flipping.  What’s the “male version” of the name “Jennifer”?

There are various ways one can do this.  If you use the analogy engine from word2vec, finding the closest word to “Jennifer” + “he” – “she”, you get as your top 5:

David, Jason, Brian, Kevin, Chris

>>> model.most_similar(positive=['Jennifer','he'],negative=['she'])
[(u'David', 0.6693146228790283), (u'Jason', 0.6635637283325195), (u'Brian', 0.6586753129959106), (u'Kevin', 0.6520106792449951), (u'Chris', 0.6505492925643921), (u'Mark', 0.6491551995277405), (u'Matt', 0.6386727094650269), (u'Daniel', 0.6294828057289124), (u'Greg', 0.6267883777618408), (u'Jeff', 0.6265031099319458)]

But there’s another way:  you can look at the words closest to “Jennifer” (which are essentially all first names) and pick out the ones which are closer to “he” than to “she”.  This gives

Matthew, Jeffrey, Jason, Jesse, Joshua.

>>> [x[0] for x in model.most_similar('Jennifer',topn=2000) if model.similarity(x[0],'he') > model.similarity(x[0],'she')]
[u'Matthew', u'Jeffrey', u'Jason', u'Jesse', u'Joshua', u'Evan', u'Brian', u'Cory', u'Justin', u'Shawn', u'Darrin', u'David', u'Chris', u'Kevin', u'3/dh', u'Christopher', u'Corey', u'Derek', u'Alex', u'Matt', u'Jeremy', u'Jeff', u'Greg', u'Timothy', u'Eric', u'Daniel', u'Wyvonne', u'Joel', u'Chirstopher', u'Mark', u'Jonathon']

Which is a better list of “male analogues of Jennifer?”  Matthew is certainly closer to Jennifer in word2vec distance:

>>> model.similarity('Jennifer','Matthew')
0.61308109388608356

>>> model.similarity('Jennifer','David')
0.56257556538528708

But, for whatever reason, “David” is coded as much more strongly male than “Matthew” is; that is, it is closer to “he” – “she”.  (The same is true for “man” – “woman”.)  So “Matthew” doesn’t score high in the first list, which rates names by a combination of how male-context they are and how Jennifery they are.  A quick visit to NameVoyager shows that Matthew and Jennifer both peaked sharply in the 1970s; David, on the other hand, has a much longer range of popularity and was biggest in the 1950s.

Let’s do it again, for Susan.  The two methods give

David, Robert, Mark, Richard, John

Robert, Jeffrey, Richard, David, Kenneth

And for Edith:

Ernest, Edwin, Alfred, Arthur, Bert

Ernest, Harold, Alfred, Bert, Arthur

Pretty good agreement!  And you can see that, in each case, the selected names are “cultural matches” to the starting name.

Sidenote:  In a way it would be more natural to project wordspace down to the orthocomplement of “he” – “she” and find the nearest neighbor to “Susan” after that projection; that’s like, which word is closest to “Susan” if you ignore the contribution of the “he” – “she” direction.  This is the operation Ben Schmidt calls “vector rejection” in his excellent post about his word2vec model trained on student evaluations.  

If you do that, you get “Deborah.”  In other words, those two names are similar in so many contextual ways that they remain nearest neighbors even after we “remove the contribution of gender.”  A better way to say it is that the orthogonal projection doesn’t really remove the contribution of gender in toto.  It would be interesting to understand what kind of linear projections actually make it hard to distinguish male first names from female ones.
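A sketch of that projection, in case you want to play with it (hedged: it assumes numpy and the loaded model, and uses gensim’s similar_by_vector to look up neighbors of a raw vector):

import numpy as np

def reject_gender(word):
    # Remove the component of the word vector along the 'he' - 'she' direction.
    g = model['he'] - model['she']
    g = g / np.linalg.norm(g)
    v = model[word]
    return v - np.dot(v, g) * g

model.similar_by_vector(reject_gender('Susan'), topn=5)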

Google News is a big enough database that this works on non-English names, too.  The male “Sylvie”, depending on which protocol you pick, is

Alain, Philippe, Serge, Andre, Jean-Francois

or

Jean-Francois, Francois, Stephane, Alain, Andre

The male “Kyoko” is

Kenji, Tomohiko, Nobuhiro, Kazuo, Hiroshi

or

Satoshi, Takayuki, Yosuke, Michio, Noboru

French and Japanese speakers are encouraged to weigh in about which list is better!

Update:  Even a little more messing around with “changing the gender of words” in a followup post.


August linkdump

  • The company that makes OldReader, the RSS reader I fled to after the sad demise of Google Reader, is from Madison!  OK, Middleton.  Still part of Silicon Isthmus.
  • I never knew that Mark Alan Stamaty, one of my favorite cartoonists, did the cover of the first They Might Be Giants album.
  • Hey I keep saying this and now Allison Schrager has written an article about it for Bloomberg.  Tenure is a form of compensation.  If you think tenure is a bad way to pay teachers, and that compensation is best in the form of dollars, that’s fine; but if California pretends that the elimination of tenure isn’t a massive pay cut for teachers, they’re making a basic economic mistake.
  • New “hot hand” paper by Brett Green and Jeffrey Zwiebel, about the hot hand for batters in baseball.  They say it’s there!  And they echo a point I make in the book (which I learned from Bob Wardrop) — some of the “no such thing as the hot hand” studies are way too low-power to detect a hot hand of any realistic size.
  • Matt Baker goes outside the circle of number theory and blogs about real numbers, axioms, and games.  Daring!  Matt also has a very cool new paper with Yao Wang about spanning trees as torsors for the sandpile group; but I want that to have its own blog entry once I’ve actually read it!
  • Lyndon Hardy wrote a fantasy series I adored as a kid, Master of the Five Magics.  I didn’t know that, as an undergrad, he was the mastermind of the Great Caltech Rose Bowl Hoax.  Now that is a life well spent.
  • Do you know how many players with at least 20 hits in a season have had more than half their hits be home runs?  Just two:  Mark McGwire in 2001 and Frank Thomas in 2005.

OKCathy

This week’s Aunt Pythia column features Cathy O’Neil’s take on what questions online daters ought to have to answer in their profiles:

How sexual are you? (super important question)
How much fun are you? (people are surprisingly honest when asked this)
How awesome do you smell? (might need to invent technology for this one)
What bothers you more: the big bank bailout or the idea of increasing the minimum wage?
Do you like strong personalities or would you rather things stay polite?
What do you love arguing about more: politics or aesthetics?
Where would you love to visit if you could go anywhere?
Do you want kids?
Dog person or cat person?
Do you sometimes wish the girl could be the hero, and not always fall for the hapless dude at the end?

I gotta say, thinking back to when I was single, during the second Clinton administration, I don’t think these are the questions I personally would most want to ask of my prospective dates.

On the other hand, I think the questions provide a near-perfect portrait of Cathy!  So let me offer my own suggestion:  maybe profiles shouldn’t have any answers.  Maybe they should just have questions.  And you contact the person whose questions you’d like to answer.

What would your questions be?



We are all Brian Conrad now

The quality of streaming conference talks has improved a ton, to the point where it’s now really worthwhile to watch them, albeit not the same as being there.  Our graduate students and I have been getting together and watching some of the talks from the soiree of the season, the MSRI perfectoid spaces conference.  This has been great and I highly recommend it.

One good thing about watching at home is that you can stop the stream whenever anybody has a question, or whenever you want to expand on a point made by the speaker!  We usually spend 90-100 minutes to watch an hour talk.  One amusing phenomenon:  when we have a question or don’t understand something, we stop and talk it out.  Then, when we start the stream again, we usually see that the speaker has also stopped, because someone in the audience has asked the same question.  This is very reassuring to the graduate students!  What’s confusing to us is invariably also confusing to someone else, even to Brian Conrad, because we decided to always presume that the unseen, unheard questioner was Brian, which is pretty safe, right?  (One time we could sort of hear the question and I’m pretty sure it was Akshay, though.)


Is online education good or bad for equality?

It seems like it would obviously be good — now kids who don’t have money and don’t live near universities have, in principle, access to much of the world’s knowledge as long as they have a cheap computer and an internet connection.

But in math, I’ve heard anecdotally that this isn’t really happening.  I thought we were going to see an influx of mathematical talent, smart kids from Mississippi who couldn’t get any math past calculus from their peers, their local high school, or the public library, but who trained themselves hardcore on Art of Problem Solving or Mathematics Stack Exchange.  But I don’t think this is happening so much.  (Correct me if I’m wrong about this!)

I thought about this when I read this article about MOOCs, which says that they’re primarily used by wealthy people who already have college degrees.  What a depressing outcome that would be, if a platform meant to make elite education available free to everybody and help undo the student-loan disaster instead mostly made life easier for people whose lives are already easy, and saved money for people who already have money.



Learn to be a crappy programmer

“If a thing’s worth doing, it’s worth doing well” is a nice old saying, but is it true?  Cathy’s advice column today reminded me of this question, as regards coding.  I think learning to write good code is quite hard.  On the other hand, learning to write fairly crappy yet functional code is drastically less hard.  Drastically less hard and incredibly useful!  For many people, it’s probably the optimal point on the reward/expenditure curve.

It feels somehow wrong to give advice like “Learn to be a crappy programmer” but I think it might actually be good advice.



In which I have a quarter-million friends of friends on Facebook

One of the privacy options Facebook allows is “restrict to friends of friends.”  I was discussing with Tom Scocca the question of how many people this actually amounts to.  FB doesn’t seem to offer an easy way to get a definitive accounting, so I decided to use the new Facebook Graph Search to make a quick and dirty estimate.  If you ask it to show you all the friends of your friends, it just tells you that there are more than 1000, but doesn’t supply an exact number.  If you want a count, you have to ask it something more specific, like “How many friends of my friends are named Constance?”

In my case, the answer is 25.

So what does that mean?  Well, according to the amazing NameVoyager, between 100 and 300 babies per million are named Constance, at least in the birthdate range that contains most of Facebook’s user base and, I expect, most of my friends-of-friends (hereafter, FoFs) as well.  So under the assumption that my FoFs are as likely as the average American to be named Constance, there should be between 85,000 and 250,000 FoFs.

That assumption is massively unlikely, of course; name choices have strong correlations with geography, ethnicity, and socioeconomic thingamabobs.  But you can just do this redundantly to get a sense of what’s going on.  59 of my FoFs are named Marianne, a name whose frequency ranges from 150-300 parts per million; that suggests a FoF range of about 200-400K.

I did this for a few names (50 Geralds, 18 Charitys (Charities??)) and the overlaps of the ranges seemed to hump at around 250,000, so that’s my vague estimate for the number.
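The arithmetic behind those estimates is easy to script, if you want to try your own names (a sketch; the per-million frequencies are the eyeballed NameVoyager ranges from above):

def fof_range(count, low_per_million, high_per_million):
    # Divide the observed name count by the name's frequency to estimate the FoF population.
    return count / (high_per_million / 1e6), count / (low_per_million / 1e6)

print(fof_range(25, 100, 300))   # Constance: roughly 83,000 to 250,000
print(fof_range(59, 150, 300))   # Marianne: roughly 200,000 to 390,000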

But then I remembered that there was actually a paper about this on the arXiv, “The Anatomy of the Facebook Graph,” by Ugander, Karrer, Backstrom, and Marlow, which studies exactly this question.  They found something which is, to me, rather surprising: that the number of FoFs grows approximately linearly in the number of friends.  The appropriate coefficients have surely changed since 2011, but they get a good fit with

#FoF = 355(#friends) – 15057.

For me, with 680 friends, that’s 226,343.  Good fit!

This 2012 study from Pew (on which Marlow is also an author) studies a sample in which the respondents had a mean 245 Facebook friends, and finds that the mean number of FoFs was 156,569.  Interestingly, the linear model from the earlier paper gives only 72,000, though to my eye it looks like 245 is well within the range where the fit to the line is very good.

The math question this suggests:  in the various random-graph models that people like to use to study social networks, what is the mean size of the 2-neighborhood of x (i.e. the number of FoFs) conditional on x having degree k?  Is it ever linear in k, or approximately linear over some large range of k?
