Our friend Josh Tenenbaum, a psychologist at M.I.T., is in town to plenarize this Thursday at 3pm at the First Annual UW Cognitive Science Conference. He hasn’t posted a title, but he’ll be talking about his research on Bayesian models of cognition. What makes a model “Bayesian” is a close attention to a priori probabilities, usually called priors.
For instance: suppose you’re an infant, trying to figure out how language works. You notice that when you wake up, your father points out the window and says “Hello, sun!” Pretty soon you figure out that the bright light he’s pointing at is called “sun.” But the evidence presented to you is just as consistent with the theory that, when you’ve just woken up, the word “sun” refers to the bright light out the window — but before bedtime it means “the stuffed monkey on the dresser.” How, infant, do you pick the former theory over the latter? Because you have some set of priors which tells you that words are very unlikely to refer to different things at different times of day. You don’t learn this principle from experience — you start with it, which is what makes it a “prior.” According to Chomsky-style linguistics, you are born with lots of priors about language — you know, for instance, that there’s a fixed order in which the subject, object, and verb of a sentence are supposed to come. If you didn’t have all these priors, you wouldn’t have a chance of learning to talk; there’s an infinitude of theories of language, all consistent with the evidence you encounter. Your priors are what allow you to narrow that infinitude down to the one theory that’s vastly more likely than the sometimes-it’s-a-monkey alternatives.
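If you like to see the arithmetic, here’s a tiny sketch of how Bayes’ rule plays out in the sun-vs.-monkey story. The two theories fit the infant’s data equally well, so the likelihoods are identical; the priors (the numbers here are invented purely for illustration) do all the work of breaking the tie:

```python
# Two hypotheses about what "sun" means, both perfectly consistent
# with every observation the infant has made so far:
#   H1: "sun" always means the bright light out the window.
#   H2: "sun" means the bright light after waking, but the stuffed
#       monkey before bedtime (the sometimes-it's-a-monkey theory).

# The observed data are equally likely under either theory.
likelihood = {"H1": 1.0, "H2": 1.0}

# The infant's priors say time-varying word meanings are very
# unlikely. (These numbers are made up for illustration.)
prior = {"H1": 0.999, "H2": 0.001}

# Bayes' rule: posterior is proportional to likelihood times prior.
unnormalized = {h: likelihood[h] * prior[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}

print(posterior)
# H1 dominates, even though the evidence alone can't tell them apart.
```

The point the example makes concrete: when the data can’t distinguish the hypotheses, the posterior just reproduces the prior, which is exactly why the infant needs to start with one.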
I don’t think Josh is going to talk about language acquisition, but he is going to talk about the ways that lots of interesting cognitive processes can be described in Bayesian terms, and how to check empirically that these Bayesian descriptions are accurate. Recommended to anyone who likes mathy thinking about thinking.