Tag Archives: ai

Learning a made-up language, III

The “made-up language” task that GPT-3 couldn’t do is handled much better by GPT-4, or at least the version of GPT-4 built into Bing Chat. The prompt:

Henderson’s native language is Gravinic, and when he speaks English, it sounds like this:

“I to the store did go and some cabbage did buy. Now I some dinner am making and I some cake will later eat. I you am not very much liking. I for you to go away am politely asking.”

Write more dialogue where Henderson tells you about his day.

The output:

Henderson continued to speak in his native language, Gravinic, which was very difficult for me to understand. He seemed to have a lot of things to say about his day, but I could only catch some words here and there.

“I today at work did have a meeting with my boss. He me did give a new project that I by next week must finish. It very hard and complicated is. I not sure if I it can do. He me also did say that I more hours must work and less pay will get. He very mean and unfair is. I him do not like at all.”

He paused for a moment and looked at me expectantly. I nodded politely, pretending to follow his story.

“Then I at lunch did go with my friend Bob. He me did tell that he a new girlfriend has. She very beautiful and smart is. He her met online on a dating site. He me showed her picture on his phone. I it did see and very jealous was. I also a girlfriend want but no one me likes.”

He sighed and shook his head sadly.

“Then I after work did go to the gym. I some exercise did do and some weights did lift. I very strong and fit am. But no one me notices or compliments gives. They only at me stare and laugh do behind my back. They very rude and mean are. I them do not like either.”

He frowned and clenched his fists.

“Then I home did come and you here did find waiting for me. You me always questions ask and annoy do with your presence. You very annoying and boring are. I you do not like too.”

He glared at me angrily.

“I you am now telling that you must leave do right now or else I you will hit do with this frying pan.”

He pointed at the frying pan on the stove menacingly.

I decided it was time to end this conversation before he got violent.

I quickly got up from my seat and headed towards the door.

Not perfect, but it really has picked up that in this version of English, verbs go last and past tense is indicated with “did.” It hasn’t really picked up how to do present tense (i.e. it should have said “They only are at me staring and are behind my back laughing”), it is inconsistent in what it does with adverb phrases, and there are some weird uses of “do” towards the end, but altogether a big improvement.


Learning a made-up language, II

Following up on yesterday’s post: one could also ask whether GPT-3 can compose in a made-up human language. For instance, you could take English and modify it a bit:

Henderson’s native language is Gravinic, and when he speaks English, it sounds like this:

“I to the store did go and some cabbage did buy. Now I some dinner am making and I some cake will later eat. I you am not very much liking. I for you to go away am politely asking.”

In other words, the new rules are that verbs go to the end of the sentence, all verbs take an auxiliary verb indicating tense, and adverbs go between auxiliary verb and main verb. Is this short text enough for GPT-3 to locate context in a language that isn’t one of the languages in its training set but rather “English with modifications inferred from the prompt”?

Not yet:

Write more dialogue where Henderson tells you about his day.

GPT-3: “I then went to the park and did some reading. The sun was shining and it made me feel so relaxed. I saw some birds flying about, and it was quite a sight. After that, I went to the beach and enjoyed the sound of the waves. I even went for a swim! Finally, I returned home and watched some television before going to bed.”

It’s totally possible that more prompting would make this work, though – I didn’t have the patience to manually type in a bunch more Gravinic-inflected English. I’m not actually sure what kind of context GPT-3 is placing itself in with this response.


Learning a made-up language

LLMs seem to be really good at swift, reliable context location — e.g. it takes only a little bit of Python or romance novel or instruction manual for the LLM to recognize a context which requires specific choices of syntax and diction. I wondered though whether GPT-3 was recognizing the context of “Python” or a more general context of “programming language” which is further fine-tuned to a subcontext of Python. It seems like it’s the latter! Here was my way of testing. I gave it the prompt

The following is code in a programming language called Ranger that computes the first n triangular numbers.

ask('Please enter a number: '),n)

#initialize the triangular number

triangular_number : 0

as i ranges to n+1
triangular_number : triangular_number + i
write i then triangular_number

Ranger doesn’t exist: I just made something up where for loops are written “as i ranges to” and assignment of variables is done with a : and you don’t enclose the argument of a print (or “write”) statement in parentheses. To be fair, it’s basically python with some dictionary changes. Still, I thought I would see if GPT-3 can write code in Ranger even though there’s no Ranger code in its training set and so no actual “Ranger context” to locate. And it can!
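For comparison, here is the same triangular-numbers program written out in ordinary Python (a sketch of what I take the intended Ranger semantics to be, since the language exists only in the prompt):

n = int(input('Please enter a number: '))

# initialize the triangular number
triangular_number = 0

# "as i ranges to n+1" reads like Python's range(n+1)
for i in range(n + 1):
    triangular_number = triangular_number + i
    print(i, triangular_number)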

Write a program in Ranger to compute the 10th prime number.

#initialize the prime number
prime_number : 1

#initialize the count

count : 0

#iterate from 2 to infinite

as i ranges to infinite
is_prime : true

as j ranges to i-1
    if i % j is 0
        is_prime : false
        break

if is_prime is true
    count : count + 1

    if count is 10
        prime_number : i
        break

write prime_number

So you can see that GPT-3 correctly infers that the right context is “C/Python-like code” and correctly recognizes from the prompt that : means assignment, “as … ranges” is how you do a for loop, etc. It’s interesting that its way of testing primality is to check divisors of i all the way up to i; surely there isn’t a lot of existing code in the training set that does it in this massively inefficient way!
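Rendered as ordinary Python (my reading of the Ranger output above, so the details are guesses), the logic is plain trial division, checking every candidate divisor up to i:

count = 0
i = 1
while True:   # "as i ranges to infinite"
    i += 1
    # check every j from 2 up to i-1: exactly the inefficient test described above
    is_prime = all(i % j != 0 for j in range(2, i))
    if is_prime:
        count += 1
        if count == 10:
            print(i)   # the 10th prime
            break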

To be fair, this is an easy problem, and if you give it a somewhat harder one, it does this:

Write a program in Ranger to count the number of binary strings of length 10 which have no occurrence of the substring ‘0101’.

# Initialize the number of binary strings
num_binary_strings : 0

# Iterate over all binary strings of length 10
as binary_string ranges to 2^10
    # Check if the binary string contains the substring '0101'
    if binary_string does not contain '0101'
        num_binary_strings : num_binary_strings + 1

# Print the result
write 'Number of binary strings of length 10 which have no occurence of the substring "0101": ', num_binary_strings

I guess this is sort of pseudocode? It doesn’t really know how to iterate over binary strings but knows there are 2^10 of them so it just “ranges” to that. Nor does it know how to check string containment in Ranger (how could it?) so it switches to English. Not a bad answer, really!
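Here is a runnable Python version of what the pseudocode seems to be aiming at, with the two missing pieces (iterating over the binary strings and checking containment) filled in by me rather than by GPT-3:

# initialize the number of binary strings
num_binary_strings = 0

# iterate over all 2**10 binary strings of length 10
for x in range(2 ** 10):
    binary_string = format(x, '010b')   # x written as a 10-character binary string
    # check whether the binary string contains the substring '0101'
    if '0101' not in binary_string:
        num_binary_strings = num_binary_strings + 1

print('Number of binary strings of length 10 with no occurrence of "0101":', num_binary_strings)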

It would be interesting to try something like this where the invented language is a little more different from existing languages than “Python with some 1-for-1 word and symbol changes.”


Peter Norvig, the meaning of polynomials, debugging as psychotherapy

I saw Peter Norvig give a great general-audience talk on AI at Berkeley when I was there last month.  A few notes from his talk.

  • “We have always prioritized fast and cheap over safety and privacy — maybe this time we can make better choices.”
  • He briefly showed a demo where, given values of a polynomial, a machine can put together a few lines of code that successfully computes the polynomial.  But the code looks weird to a human eye.  To compute some quadratic, it nests for-loops and adds things up in a funny way that ends up giving the right output.  So has it really “learned” the polynomial?  I think in computer science, you typically feel you’ve learned a function if you can accurately predict its value on a given input.  For an algebraist like me, a function determines but isn’t determined by the values it takes; to me, there’s something about that quadratic polynomial the machine has failed to grasp.  I don’t think there’s a right or wrong answer here, just a cultural difference to be aware of.  Relevant:  Norvig’s description of “the two cultures” at the end of this long post on natural language processing (which is interesting all the way through!)
  • Norvig made the point that traditional computer programs are very modular, leading to a highly successful debugging tradition of zeroing in on the precise part of the program that is doing something wrong, then fixing that part.  An algorithm or process developed by a machine, by contrast, may not have legible “parts”!  If a neural net is screwing up when classifying something, there’s no meaningful way to say “this neuron is the problem, let’s fix it.”  We’re dealing with highly non-modular complex systems which have evolved into a suboptimally functioning state, and you have to find a way to improve function which doesn’t involve taking the thing apart and replacing the broken component.  Of course, we already have a large professional community that works on exactly this problem.  They’re called therapists.  And I wonder whether the future of debugging will look a lot more like clinical psychology than it does like contemporary software engineering.

AI challop

When I was a kid people thought it would be a long time before computers could adequately translate natural language text, or play Go against a human being, because you’d need some kind of AI to do those things, and AI seemed really hard.

Now we know that you can get pretty decent translation and Go without anything like AI.  But AI still seems really hard.


Messing around with word2vec

Word2vec, developed by Tomas Mikolov and his team at Google, is a way of representing words and phrases as vectors in a medium-dimensional space; you can train it on any corpus you like (see Ben Schmidt’s blog for some great examples), but the version of the embedding you can download was trained on about 100 billion words of Google News, and encodes words as unit vectors in 300-dimensional space.

What really got people’s attention, when this came out, was word2vec’s ability to linearize analogies.  For example:  if x is the vector representing “king,” and y the vector representing “woman,” and z the vector representing “man,” then consider

x + y – z

which you might think of, in semantic space, as being the concept “king” to which “woman” has been added and “man” subtracted — in other words, “king made more female.”  What word lies closest in direction to x+y-z?  Just as you might hope, the answer is “queen.”

I found this really startling.  Does it mean that there’s some hidden linear structure in the space of words?

It turns out it’s not quite that simple.  I played around with word2vec a bunch, using Radim Řehůřek’s gensim package that nicely pulls everything into python; here’s what I learned about what the embedding is and isn’t telling you.
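(If you want to play along: with a recent gensim, loading the pretrained vectors looks roughly like this; the file path is a placeholder for wherever you put the downloaded GoogleNews binary.)

from gensim.models import KeyedVectors

# load the pretrained 300-dimensional GoogleNews embedding (about 3 million words and phrases)
model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)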

Word2Vec distance isn’t semantic distance

The Word2Vec metric tends to place two words close to each other if they occur in similar contexts — that is, w and w’ are close to each other if the words that tend to show up near w also tend to show up near w’.  (This is probably an oversimplification, but see this paper of Levy and Goldberg for a more precise formulation.)  If two words are very close to synonymous, you’d expect them to show up in similar contexts, and indeed synonymous words tend to be close:

>>> model.similarity('tremendous','enormous')

0.74432902555062841

The notion of similarity used here is just cosine similarity (which, since the vectors are unit vectors, is the same as their dot product).  It’s positive when the words are close to each other, negative when the words are far.  For two completely random words, the similarity is pretty close to 0.
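Concretely (a sketch, with model loaded as above), the score is nothing more than a dot product of normalized vectors:

import numpy as np

v = model['tremendous'] / np.linalg.norm(model['tremendous'])
w = model['enormous'] / np.linalg.norm(model['enormous'])
print(np.dot(v, w))                                # the same number as...
print(model.similarity('tremendous', 'enormous'))  # ...gensim's similarity score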

On the other hand:

>>> model.similarity('tremendous','negligible')

0.37869063705009987

Tremendous and negligible are very far apart semantically; but both words are likely to occur in contexts where we’re talking about size, and using long, Latinate words.  ‘Negligible’ is actually one of the 500 words closest to ’tremendous’ in the whole 3m-word database.

You might ask:  well, what words in Word2Vec are farthest from “tremendous?”  You just get trash:

>>> model.most_similar(negative=['tremendous'])

[(u’By_DENISE_DICK’, 0.2792186141014099), (u’NAVARRE_CORPORATION’, 0.26894450187683105), (u’By_SEAN_BARRON’, 0.26745346188545227), (u’LEGAL_NOTICES’, 0.25829464197158813), (u’Ky.Busch_##-###’, 0.2564955949783325), (u’desultorily’, 0.2563159763813019), (u’M.Kenseth_###-###’, 0.2562236189842224), (u’J.McMurray_###-###’, 0.25608277320861816), (u’D.Earnhardt_Jr._###-###’, 0.2547803819179535), (u’david.brett_@_thomsonreuters.com’, 0.2520599961280823)]

If 3 million words were distributed randomly in the unit ball in R^300, you’d expect the farthest one from “tremendous” to have dot product about -0.3 from it.  So when you see a list whose largest score is around that size, you should think “there’s no structure there, this is just noise.”
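That back-of-the-envelope figure is easy to sanity-check with a simulation (a sketch; the exact number will wobble a little from run to run):

import numpy as np

rng = np.random.default_rng(0)
d = 300

# a fixed random unit vector standing in for "tremendous"
v = rng.standard_normal(d)
v /= np.linalg.norm(v)

# 3 million random unit vectors, in chunks, tracking the most negative dot product seen
most_negative = 0.0
for _ in range(30):
    chunk = rng.standard_normal((100_000, d))
    chunk /= np.linalg.norm(chunk, axis=1, keepdims=True)
    most_negative = min(most_negative, float((chunk @ v).min()))

print(most_negative)   # comes out around -0.3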

Antonyms

Challenge problem:  Is there a way to accurately generate antonyms using the word2vec embedding?  That seems to me the sort of thing the embedding is not capturing.  Kyle McDonald had a nice go at this, but I think the lesson of his experiment is that asking word2vec to find analogies of the form “word is to antonym as happy is to?” is just going to generate a list of neighbors of “happy.”  McDonald’s results also cast some light on the structure of word2vec analogies:  for instance, he finds that

waste is to economise as happy is to chuffed

First of all, “chuffed” is a synonym of happy, not an antonym.  But more importantly:  The reason “chuffed” is there is because it’s a way that British people say “happy,” just as “economise” is a way British people spell “economize.”  Change the spelling and you get

waste is to economize as happy is to glad

Non-semantic properties of words matter to word2vec.  They matter a lot.  Which brings us to diction.

Word2Vec distance keeps track of diction

Lots of non-semantic stuff is going on in natural language.  Like diction, which can be high or low, formal or informal, flowery or concrete.    Look at the nearest neighbors of “pugnacity”:

>>> model.most_similar('pugnacity')

[(u’pugnaciousness’, 0.6015268564224243), (u’wonkishness’, 0.6014434099197388), (u’pugnacious’, 0.5877301692962646), (u’eloquence’, 0.5875781774520874), (u’sang_froid’, 0.5873805284500122), (u’truculence’, 0.5838015079498291), (u’pithiness’, 0.5773230195045471), (u’irascibility’, 0.5772287845611572), (u’hotheadedness’, 0.5741063356399536), (u’sangfroid’, 0.5715578198432922)]

Some of these are close semantically to pugnacity, but others, like “wonkishness,” “eloquence”, and “sangfroid,” are really just the kind of elevated-diction words the kind of person who says “pugnacity” would also say.

In the other direction:

>>> model.most_similar('psyched')

[(u’geeked’, 0.7244787216186523), (u’excited’, 0.6678282022476196), (u’jazzed’, 0.666187584400177), (u’bummed’, 0.662735104560852), (u’amped’, 0.6473385691642761), (u’pysched’, 0.6245862245559692), (u’exicted’, 0.6116108894348145), (u’awesome’, 0.5838013887405396), (u’enthused’, 0.581687331199646), (u’kinda_bummed’, 0.5701783299446106)]

“geeked” is a pretty good synonym, but “bummed” is an antonym.  You may also note that contexts where “psyched” is common are also fertile ground for “pysched.”  This leads me to one of my favorite classes of examples:

Misspelling analogies

Which words are closest to “teh”?

>>> model.most_similar('teh')

[(u’ther’, 0.6910992860794067), (u’hte’, 0.6501408815383911), (u’fo’, 0.6458913683891296), (u’tha’, 0.6098173260688782), (u’te’, 0.6042138934135437), (u’ot’, 0.595798909664154), (u’thats’, 0.595078706741333), (u’od’, 0.5908242464065552), (u’tho’, 0.58894944190979), (u’oa’, 0.5846965312957764)]

Makes sense:  the contexts where “teh” is common are those contexts where a lot of words are misspelled.

Using the “analogy” gadget, we can ask: which word is to “because” as “teh” is to “the”?

>>> model.most_similar(positive=['because','teh'],negative=['the'])

[(u’becuase’, 0.6815075278282166), (u’becasue’, 0.6744950413703918), (u’cuz’, 0.6165347099304199), (u’becuz’, 0.6027254462242126), (u’coz’, 0.580410361289978), (u’b_c’, 0.5737690925598145), (u’tho’, 0.5647958517074585), (u’beacause’, 0.5630674362182617), (u’thats’, 0.5605655908584595), (u’lol’, 0.5597798228263855)]

Or “like”?

>>> model.most_similar(positive=['like','teh'],negative=['the'])

[(u’liek’, 0.678846001625061), (u’ok’, 0.6136218309402466), (u’hahah’, 0.5887773633003235), (u’lke’, 0.5840467214584351), (u’probly’, 0.5819578170776367), (u’lol’, 0.5802655816078186), (u’becuz’, 0.5771245956420898), (u’wierd’, 0.5759704113006592), (u’dunno’, 0.5709049701690674), (u’tho’, 0.565370500087738)]

Note that this doesn’t always work:

>>> model.most_similar(positive=['should','teh'],negative=['the'])

[(u’wil’, 0.63351970911026), (u’cant’, 0.6080706715583801), (u’wont’, 0.5967696309089661), (u’dont’, 0.5911301970481873), (u’shold’, 0.5908039212226868), (u’shoud’, 0.5776053667068481), (u’shoudl’, 0.5491836071014404), (u”would’nt”, 0.5474458932876587), (u’shld’, 0.5443994402885437), (u’wouldnt’, 0.5413904190063477)]

What are word2vec analogies?

Now let’s come back to the more philosophical question.  Should the effectiveness of word2vec at solving analogy problems make us think that the space of words really has linear structure?

I don’t think so.  Again, I learned something important from the work of Levy and Goldberg.  When word2vec wants to find the word w which is to x as y is to z, it is trying to find w maximizing the dot product

w . (x + y – z)

But this is the same thing as maximizing

w.x + w.y – w.z.

In other words, what word2vec is really doing is saying

“Show me words which are similar to x and y but are dissimilar to z.”

This notion makes sense applied to any notion of similarity, whether or not it has anything to do with embedding in a vector space.  For example, Levy and Goldberg experiment with maximizing

log(w.x) + log(w.y) – log(w.z)

instead, and get somewhat superior results on the analogy task.  Optimizing this objective has nothing to do with the linear combination x+y-z.
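Gensim in fact ships both: most_similar uses the additive objective, while most_similar_cosmul implements Levy and Goldberg’s multiplicative version, so you can compare them directly (a sketch, with model loaded as in the earlier snippet):

# additive objective: maximize w.x + w.y - w.z
print(model.most_similar(positive=['woman', 'king'], negative=['man'], topn=3))

# multiplicative objective (Levy and Goldberg's "3CosMul")
print(model.most_similar_cosmul(positive=['woman', 'king'], negative=['man'], topn=3))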

None of which is to deny that the analogy engine in word2vec works well in many interesting cases!  It has no trouble, for instance, figuring out that Baltimore is to Maryland as Milwaukee is to Wisconsin.  More often than not, the Milwaukee of state X correctly returns the largest city in state X.  And sometimes, when it doesn’t, it gives the right answer anyway:  for instance, the Milwaukee of Ohio is Cleveland, a much better answer than Ohio’s largest city (Columbus — you knew that, right?)  The Milwaukee of Virginia, according to word2vec, is Charlottesville, which seems clearly wrong.  But maybe that’s OK — maybe there really isn’t a Milwaukee of Virginia.  One feels Richmond is a better guess than Charlottesville, but it scores notably lower.  (Note:  Word2Vec’s database doesn’t have Virginia_Beach, the largest city in Virginia.  That one I didn’t know.)

Another interesting case:  what is to state X as Gainesville is to Florida?  The answer should be “the location of the, or at least a, flagship state university, which isn’t the capital or even a major city of the state,” when such a city exists.  But this doesn’t seem to be something word2vec is good at finding.  The Gainesville of Virginia is Charlottesville, as it should be.  But the Gainesville of Georgia is Newnan.  Newnan?  Well, it turns out there’s a Newnan, Georgia, and there’s also a Newnan’s Lake in Gainesville, FL; I think that’s what’s driving the response.  That, and the fact that “Athens”, the right answer, is contextually separated from Georgia by the existence of Athens, Greece.

The Gainesville of Tennessee is Cookeville, though Knoxville, the right answer, comes a close second.

Why?  You can check that Knoxville, according to word2vec, is much closer to Gainesville than Cookeville is.

>>> model.similarity('Cookeville','Gainesville')

0.5457580604439547

>>> model.similarity('Knoxville','Gainesville')

0.64010456774402158

But Knoxville is placed much closer to Florida!

>>> model.similarity('Cookeville','Florida')

0.2044376252927515

>>> model.similarity('Knoxville','Florida')

0.36523836770416895

Remember:  what word2vec is really optimizing for here is “words which are close to Gainesville and close to Tennessee, and which are not close to Florida.”  And here you see that phenomenon very clearly.  I don’t think the semantic relationship between ‘Gainesville’ and ‘Florida’ is something word2vec is really capturing.  Similarly:  the Gainesville of Illinois is Edwardsville (though Champaign, Champaign_Urbana, and Urbana are all top 5) and the Gainesville of Indiana is Connersville.  (The top 5 for Indiana are all cities ending in “ville” — is the phonetic similarity playing some role?)

Just for fun, here’s a scatterplot of the 1000 nearest neighbors of ‘Gainesville’, with their similarity to ‘Gainesville’ (x-axis) plotted against their similarity to ‘Tennessee’ (y-axis):

[Scatterplot: similarity to ‘Gainesville’ (x-axis) against similarity to ‘Tennessee’ (y-axis) for the 1000 nearest neighbors of ‘Gainesville’.]

The Pareto frontier consists of “Tennessee” (that’s the one whose similarity to “Tennessee” is 1, obviously), Knoxville, Jacksonville, and Tallahassee.

Bag of contexts

One popular simple linear model of word space is given by representing a word as a “bag of contexts” — perhaps there are several thousand contexts, and each word is given by a sparse vector in the space spanned by contexts:  coefficient 0 if the word is not in that context, 1 if it is.  In that setting, certain kinds of analogies would be linearized and certain kinds would not.  If “major city” is a context, then “Houston” and “Dallas” might have vectors that looked like “Texas” with the coordinate of “major city” flipped from 0 to 1.  Ditto, “Milwaukee” would be “Wisconsin” with the same basis vector added.  So

“Texas” + “Milwaukee” – “Wisconsin”

would be pretty close, in that space, to “Houston” and “Dallas.”

On the other hand, it’s not so easy to see what relations antonyms would have in that space. That’s the kind of relationship the bag of contexts may not make linear.

The word2vec space is only 300-dimensional, and the vectors aren’t sparse at all.  But maybe we should think of it as a random low-dimensional projection of a bag-of-contexts embedding!  By the Johnson-Lindenstrauss lemma, a 300-dimensional projection is plenty big enough to preserve the distances between 3 million points with a small distortion factor; and of course it preserves all linear relationships on the nose.
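Here is a toy version of both points, with three invented contexts and a random linear projection (everything in it is made up for illustration):

import numpy as np

# toy "bag of contexts": coordinates are (in_texas, in_wisconsin, major_city)
words = {
    'Texas':     np.array([1, 0, 0]),
    'Wisconsin': np.array([0, 1, 0]),
    'Houston':   np.array([1, 0, 1]),
    'Milwaukee': np.array([0, 1, 1]),
}

target = words['Texas'] + words['Milwaukee'] - words['Wisconsin']
print(np.array_equal(target, words['Houston']))   # True: in this toy model the analogy is exactly linear

# a random linear map down to fewer dimensions preserves that linear relation on the nose
rng = np.random.default_rng(0)
P = rng.standard_normal((2, 3))
print(np.allclose(P @ target, P @ words['Houston']))   # still True after projection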

Perhaps this point of view gives some insight into which kind of word relationships manifest as linear relationships in word2vec.  “flock:birds” is an interesting example.  If you imagine “group of things” as a context, you can maybe imagine word2vec picking this up.  But actually, it doesn’t do well:

>>> model.most_similar(positive=['fish','flock'],negative=['birds'])
[(u’crays’, 0.4601619839668274), (u’threadfin_salmon’, 0.4553075134754181), (u’spear_fishers’, 0.44864755868911743), (u’slab_crappies’, 0.4483765661716461), (u’flocked’, 0.44473177194595337), (u’Siltcoos_Lake’, 0.4429660737514496), (u’flounder’, 0.4414420425891876), (u’catfish’, 0.4413948059082031), (u’yellowtail_snappers’, 0.4410281181335449), (u’sockeyes’, 0.4395104944705963)]

>>> model.most_similar(positive=['dogs','flock'],negative=['birds'])
[(u’dog’, 0.5390862226486206), (u’pooches’, 0.5000904202461243), (u’Eminem_Darth_Vader’, 0.48777419328689575), (u’Labrador_Retrievers’, 0.4792211949825287), (u’canines’, 0.4766522943973541), (u’barked_incessantly’, 0.4709487557411194), (u’Rottweilers_pit_bulls’, 0.4708423614501953), (u’labradoodles’, 0.47032350301742554), (u’rottweilers’, 0.46935927867889404), (u’forbidding_trespassers’, 0.4649636149406433)]

The answers “school” and “pack” don’t appear here.  Part of this, of course, is that “flock,” “school”, and “pack” all have interfering alternate meanings.  But part of it is that the analogy really rests on information about contexts in which the words “flock” and “birds” both appear.  In particular, in a short text window featuring both words, you are going to see a huge spike of “of” appearing right after flock and right before birds.  A statement of the form “flock is to birds as X is to Y” can’t be true unless “X of Y” actually shows up in the corpus a lot.

Challenge problem:  Can you make word2vec do a good job with relations like “flock:birds”?  As I said above, I wouldn’t have been shocked if this had actually worked out of the box, so maybe there’s some minor tweak that makes it work.

Boys’ names, girls’ names

Back to gender-flipping.  What’s the “male version” of the name “Jennifer”?

There are various ways one can do this.  If you use the analogy engine from word2vec, finding the closest word to “Jennifer” + “he” – “she”, you get as your top 5:

David, Jason, Brian, Kevin, Chris

>>> model.most_similar(positive=['Jennifer','he'],negative=['she'])
[(u’David’, 0.6693146228790283), (u’Jason’, 0.6635637283325195), (u’Brian’, 0.6586753129959106), (u’Kevin’, 0.6520106792449951), (u’Chris’, 0.6505492925643921), (u’Mark’, 0.6491551995277405), (u’Matt’, 0.6386727094650269), (u’Daniel’, 0.6294828057289124), (u’Greg’, 0.6267883777618408), (u’Jeff’, 0.6265031099319458)]

But there’s another way:  you can look at the words closest to “Jennifer” (which are essentially all first names) and pick out the ones which are closer to “he” than to “she”.  This gives

Matthew, Jeffrey, Jason, Jesse, Joshua.

>>> [x[0] for x in model.most_similar('Jennifer',topn=2000) if model.similarity(x[0],'he') > model.similarity(x[0],'she')]
[u’Matthew’, u’Jeffrey’, u’Jason’, u’Jesse’, u’Joshua’, u’Evan’, u’Brian’, u’Cory’, u’Justin’, u’Shawn’, u’Darrin’, u’David’, u’Chris’, u’Kevin’, u’3/dh’, u’Christopher’, u’Corey’, u’Derek’, u’Alex’, u’Matt’, u’Jeremy’, u’Jeff’, u’Greg’, u’Timothy’, u’Eric’, u’Daniel’, u’Wyvonne’, u’Joel’, u’Chirstopher’, u’Mark’, u’Jonathon’]

Which is a better list of “male analogues of Jennifer?”  Matthew is certainly closer to Jennifer in word2vec distance:

>>> model.similarity('Jennifer','Matthew')

0.61308109388608356

>>> model.similarity('Jennifer','David')

0.56257556538528708

But, for whatever reason, “David” is coded as much more strongly male than “Matthew” is; that is, it is closer to “he” – “she”.  (The same is true for “man” – “woman”.)  So “Matthew” doesn’t score high in the first list, which rates names by a combination of how male-context they are and how Jennifery they are.  A quick visit to NameVoyager shows that Matthew and Jennifer both peaked sharply in the 1970s; David, on the other hand, has a much longer range of popularity and was biggest in the 1950s.

Let’s do it again, for Susan.  The two methods give

David, Robert, Mark, Richard, John

Robert, Jeffrey, Richard, David, Kenneth

And for Edith:

Ernest, Edwin, Alfred, Arthur, Bert

Ernest, Harold, Alfred, Bert, Arthur

Pretty good agreement!  And you can see that, in each case, the selected names are “cultural matches” to the starting name.

Sidenote:  In a way it would be more natural to project wordspace down to the orthocomplement of “he” – “she” and find the nearest neighbor to “Susan” after that projection; that’s like, which word is closest to “Susan” if you ignore the contribution of the “he” – “she” direction.  This is the operation Ben Schmidt calls “vector rejection” in his excellent post about his word2vec model trained on student evaluations.  
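A numpy sketch of that operation (my reconstruction of the vector-rejection idea, not Schmidt’s code; model loaded as above):

import numpy as np

g = model['he'] - model['she']
g /= np.linalg.norm(g)

def reject(v):
    # remove the component of v along the he-she direction
    return v - np.dot(v, g) * g

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

susan = reject(model['Susan'])
candidates = [w for w, _ in model.most_similar('Susan', topn=2000)]
print(max(candidates, key=lambda w: cos(reject(model[w]), susan)))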

If you do that, you get “Deborah.”  In other words, those two names are similar in so many contextual ways that they remain nearest neighbors even after we “remove the contribution of gender.”  A better way to say it is that the orthogonal projection doesn’t really remove the contribution of gender in toto.  It would be interesting to understand what kind of linear projections actually make it hard to distinguish male first names from female ones.

Google News is a big enough database that this works on non-English names, too.  The male “Sylvie”, depending on which protocol you pick, is

Alain, Philippe, Serge, Andre, Jean-Francois

or

Jean-Francois, Francois, Stephane, Alain, Andre

The male “Kyoko” is

Kenji, Tomohiko, Nobuhiro, Kazuo, Hiroshi

or

Satoshi, Takayuki, Yosuke, Michio, Noboru

French and Japanese speakers are encouraged to weigh in about which list is better!

Update:  Even a little more messing around with “changing the gender of words” in a followup post.


Netflix Prize photo finish!

Two hours less than 30 days ago, the team of BellKor’s Pragmatic Chaos submitted the first entry to the Netflix Prize to exhibit a 10% improvement in performance over Netflix’s movie-recommendation algorithm.   That started the final clock for the competition — whoever’s ahead at 2:42 Eastern time today wins the $1 million prize.

One of the really interesting lessons of the competition is that blendings of many algorithms seem to work better than any single algorithm, even when there’s no principled reason to do the blend.  It’s sort of a “wisdom of crowds of computer programs” effect.  As you can imagine, once BellKor’s Pragmatic Chaos (itself a blend of algorithms from two teams that have been leading through most of the competition) crossed the 10% threshold, just about everybody else realized their best — probably only — chance was to work together.

As of yesterday afternoon, a team called The Ensemble,  made up of — well, I can’t really tell how many previously separate competitors, but a lot — has achieved a 10.09% improvement.  BellKor’s Pragmatic Chaos is at 10.08%.  A hundredth of a percentage point might determine who gets the million.   Wow.

Is BPC sitting on a slightly better algorithm they’re planning to submit at the buzzer for the win?  Check the leaderboard later this afternoon to find out.

Update: Double wow.  No announcement yet, but it looks like BPC and The Ensemble both made submissions in the last hour of the contest; BPC made it to 10.09% but Ensemble, four minutes before closing time, bumped up to 10.10%.

Re-update: I didn’t read the rules carefully enough.  It looks like there’s another dataset (“test dataset”), distinct from the one generating the public scores (“quiz dataset”), and the final winner will be the program that does best on the test dataset.  So the shifts in the lead, exciting as they are, aren’t necessarily relevant to the contest; we’ve got two algorithms which are essentially identically good and it ought to be a coin-flip which one does better in the final judgment.


MALBEC: Jerry Zhu, Michael Coen, how to say snake in gibbon

Jerry Zhu will give the  last MALBEC seminar of the year tomorrow (Wednesday) afternoon, at 4pm, in Van Vleck B102:

Jerry Zhu (UW, computer sciences)

HAMLET (Human, Animal, and Machine Learning: Experiment and Theory)

Machine learning studies the principles governing all learning systems. Human beings and animals are learning systems too, and can be explored using the same mathematical tools.  This approach has been fruitful in the last few decades with standard tools such as reinforcement learning, artificial neural networks, and non-parametric Bayesian statistics.  We bring the approach one step further with some latest tools in machine learning, and uncover new quantitative findings.  In this talk, I will present three examples: (1) Human semi-supervised learning. Consider a child learning animal names.  Dad occasionally points to an animal and says “Dog!” (labeled data). But mostly the child observes the world by herself without explicit feedback (unlabeled data).  We show that humans learn from both labeled and unlabeled data, and that a simple Gaussian Mixture Model trained using the EM algorithm provides a nice fit to human behaviors.  (2) Human active learning.  The child may ask “What’s that?”, i.e. actively selecting items to query the target labels.  We show that humans are able to perform good active learning, achieving fast exponential error convergence as predicted by machine learning theory.  In contrast, when passively given i.i.d. training data humans learn much slower (polynomial convergence), also predicted by learning theory.  (3) Monkey online learning.  Rhesus monkeys can learn a “target concept”, in the form of a certain shape or color.  What if the target concept keeps changing?  Adversarial online learning model provides a polynomial mistake bound.  Although monkeys perform worse than theory, anecdotal evidence suggests that they follow the concepts better than some graduate students. Finally, I will speculate on a few lessons learned in order to create better machine learning algorithms.

In the third MALBEC lecture, Michael Coen talked about his work on clustering; he asked me afterwards whether I thought the talk was “mathy enough” for the audience, which was funny, because I thought it was 100% math from start to finish!  Here’s a cartoon of the main idea.  When presented with a giant set of data points, one of the first things you might want to do is cluster it:  that is, partition the points into some disjoint collection of subsets, each one of which consists of points which resemble their clustermates more than they do the points in the other clusters.  You might, for instance, want to identify clusters among U.S. legislators, or images, or gene expression patterns. As is so often the case, Cosma Shalizi supplies a good, succinct introduction to the topic from a statistician’s perspective.

How do you know when your clustering algorithm is good?  Sometimes there’s a natural way to evaluate; if your algorithm for clustering legislators reliably separates Democrats from Republicans, you’re probably doing something right.  But with other data, you might not have any pre-existing classification that helps you gauge the reasonableness of your clustering.  Let’s say, for instance, you have lots of short recordings of a gibbon; maybe you think that rather than being scattered structurelessly around the space of 1-second sound clips, they fall into a small finite set of clusters, which you would naturally be tempted to call phonemes. You can run a clustering algorithm on the clips, and you’ll get an answer.  But is it meaningful?  It’s hard to tell without a population of clips which are classified in advance.  Unfortunately, there’s no corpus of written gibbon texts which you can ask gibbons to read aloud.  So you have to do something else.

The trick, as Coen observes, is to replace the difficult and not-very-well-defined question “Is clustering X good?” with the much more tractable question “Are clusterings X and Y similar?”  Coen presented a really nice, principled way of answering this latter question, which allows him to do something like the following:  given your set of audio clips, apply your clustering algorithm separately  to two random samples of 50% of the data points.  These two samples will overlap in around 25% of the data.  Now you can use Coen’s measure to compare the two clusterings induced on this 25% subsample.  If you do this a lot, and you always get two clusterings which are almost exactly the same in Coen’s sense, that’s a good sign that your clustering algorithm is actually capturing real features of the data.
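Here is a bare-bones version of that stability check using scikit-learn, with k-means as the clustering algorithm, synthetic data standing in for the sound clips, and the adjusted Rand index standing in for Coen’s similarity measure (which isn’t what he actually uses; this is just a sketch of the procedure):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 20))   # stand-in for featurized 1-second sound clips

def stability(X, k, trials=20):
    n = len(X)
    scores = []
    for _ in range(trials):
        a = rng.choice(n, n // 2, replace=False)
        b = rng.choice(n, n // 2, replace=False)
        overlap = np.intersect1d(a, b)             # roughly 25% of the data
        labels_a = KMeans(n_clusters=k, n_init=10).fit(X[a]).predict(X[overlap])
        labels_b = KMeans(n_clusters=k, n_init=10).fit(X[b]).predict(X[overlap])
        scores.append(adjusted_rand_score(labels_a, labels_b))
    return float(np.mean(scores))

# scores consistently near 1 suggest the clusters are real features of the data
print(stability(X, k=5))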

So it turns out that gibbon utterances really do seem to be organized into phonemes.  (A cursory google search suggests that this contradicts conventional wisdom about primate vocalization — can any primatologists weigh in?)  Once you have this finding, and the ability to classify the sound clips, you can do some remarkable things:  for instance, you can look at what combinations of phonemes gibbons emit when a snake comes by.  It turns out that the vocalization elicited by a snake isn’t a consistent combination of phonemes, as it would be in a human language.  Rather, you can write down a finite state automaton, any one of whose outputs seems to be a legitimate gibbon word for “snake”!

Coen had a picture of the automaton on a slide, which is truly cool, but which he is keeping to himself until the paper’s published.  I promise to tell you exactly how to say “snake” in gibbon in a later post.


More MALBEC: Niyogi on geometry of data, Coen on abstract nonsense

Tuesday, April 21 — tomorrow! — brings the third lecture in the MALBEC series:  Michael Coen, of computer sciences and biostat, talks on “Toward Formalizing ‘Abstract Nonsense’,” in Computer Sciences 1221 at 4pm.  Here’s the abstract:

The idea of a category — a set of objects sharing common properties — is a fundamental concept in many fields, including mathematics, artificial intelligence, and cognitive and neuroscience.  Numerous frameworks, for example, in machine learning and linguistics, rest upon the simple presumption that categories are well-defined.  This is slightly worrisome, as the many attempts formalizing categories have met with equally many attempts shooting them down.

Instead of approaching this issue head on, I derive a robust theory of “similarity,” from a biologically-inspired approach to perception in animals.  The very idea of creating categories assumes some implicit notion of similarity, but it is rarely examined in isolation.  However, doing so is worthwhile, and I demonstrate the theory’s applicability to a variety of natural and artificial learning problems.  Even when faced with Watanabe’s “Ugly Duckling” theorem or Wolpert’s stingy cafeteria (serving the famous “No Free Lunch” theorems), one can make significant progress toward formalizing a theory of categories by examining their often unstated properties.

I demonstrate practical applications of this work in several domains, including unsupervised machine learning, ensemble clustering, image segmentation, human acquisition of language, and cognitive neuroscience.

(Joint work with M.H.Ansari)

Delicious food will follow the talk, as if this delicious abstract isn’t enough!

On Friday, Partha Niyogi gave a beautiful talk on “Geometry, Perception, and Learning.”  His work fits into a really exciting movement in data analysis that one might call “use the submanifold.”  Namely:  data is often given to you as a set of points in some massively high-dimensional space.  For instance, a set of images from a digital camera can be thought of as a sequence of points in R^N, where N is the number of pixels in your camera, a number in the millions, and the value of the Nth coordinate is the brightness of the Nth pixel.  A guy like Niyogi might want to train a computer to distinguish between pictures of horses and pictures of fish.  Now one way to do this is to try to draw a hyperplane across R^N with all the horses on one side and all the fish on the other.  But the dimension of the space is so high that this is essentially impossible to do well.

But there’s another way — you can take the view that the N-dimensionality of the space of all images is an illusion, and that the images you might be interested in — for instance, some class of images including all horses and all fish — might lie on some submanifold of drastically smaller dimension.

If you believe that manifold is linear, you’re in business:   statisticians have tons of tools, essentially souped-up versions of linear regression, for fitting a linear subspace to a bunch of data.  But linearity is probably too much to ask for.  If you superimpose a picture of a trout on a picture of a walleye, you don’t get a picture of a fish; which is to say, the space of fish isn’t linear.
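A standard toy illustration of the difference, with scikit-learn’s swiss roll standing in for the fish manifold (this is my example, not anything from Niyogi’s talk):

from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# a 2-dimensional surface rolled up inside R^3
X, t = make_swiss_roll(n_samples=2000, random_state=0)

# a linear fit flattens the roll by crushing it, folding far-apart points on top of each other
X_linear = PCA(n_components=2).fit_transform(X)

# a manifold method instead tries to respect distances measured along the surface
X_manifold = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(X_linear.shape, X_manifold.shape)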

So it becomes crucial to figure out things about the mystery “fish manifold” from which all pictures of fish are sampled; what are its connected components, or more generally its homology?  What can we say about its curvature?  How well can we interpolate on it to generate novel fish-pictures from the ones in the input?  The work of Carlsson, Diaconis, Ghrist, etc. that I mentioned here is part of the same project.

And in some sense the work of Candes, Tao, and a million others on compressed sensing (well-explained on Terry’s blog) has a similar flavor.  For Niyogi, you have a bunch of given points in R^N and a mystery manifold which is supposed to contain, or at least be close to, those points.  In compressed sensing, the manifold is known — it’s just a union of low-dimensional linear subspaces parametrizing vectors which are sparse in a suitable basis — but the points are not!
