Here are some sequences of vector spaces. In each case, the sequence is indexed by n, and all other variables are understood to be constant. So suppose V_n is the space

H^i(Conf^n M, Q) for M a connected oriented manifold of dimension at least 2.

The (j_1, …, j_r)-multidegree piece of the diagonal coinvariant algebra on r sets of n variables.

H^i(M_{g,n},Q), the cohomology of the moduli space of curves of genus g with n marked points.

The tautological subring of the above.

The space of degree-d polynomials on the rank variety parametrizing n×n matrices of rank at most r.

By a character polynomial we mean a polynomial with integral coefficients in variables X_1, X_2, X_3, … . We interpret these symbols (and thus character polynomials) as class functions on the symmetric group S_n by taking

X_i(s) = number of i-cycles in s

for each permutation s.
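To make this concrete, here is a minimal Python sketch (my own illustration, not from the paper) of X_i as a class function, evaluated on a sample permutation and then plugged into a sample character polynomial:

```python
def cycle_counts(perm):
    """Map i -> number of i-cycles in perm, where perm is a tuple
    with perm[k] the image of k (0-indexed one-line notation)."""
    seen, counts = set(), {}
    for start in range(len(perm)):
        if start in seen:
            continue
        length, k = 0, start
        while k not in seen:
            seen.add(k)
            k = perm[k]
            length += 1
        counts[length] = counts.get(length, 0) + 1
    return counts

def X(i, perm):
    """The class function X_i(s) = number of i-cycles in s."""
    return cycle_counts(perm).get(i, 0)

# s = (0 1)(2 3 4) in S_5, written in one-line notation:
s = (1, 0, 3, 4, 2)
assert X(1, s) == 0 and X(2, s) == 1 and X(3, s) == 1

# A sample character polynomial, P = X_1 + 2*X_2 - 1, evaluated at s:
value = X(1, s) + 2 * X(2, s) - 1
```

Since conjugate permutations have the same cycle type, any such polynomial in the X_i really is a class function on S_n, for every n at once.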

Then we show that, in each of the examples above, there’s a character polynomial P such that the character of the action of S_n on V_n is given by P, for all sufficiently large n. This is one way in which one can say that a sequence of representations of larger and larger symmetric groups is “all the same.” In particular, by plugging in the identity we find that dim V_n is a polynomial in n, for n large enough.

For many of these examples, almost nothing is known about dimensions of individual spaces! So a strong regularity theorem like this is perhaps surprising. Even more surprising (to us at any rate) is that theorems like this require only very meager input from whatever context generates the vector spaces. You get this stability (and many others) almost for free.

Here is the gist. Sometimes life hands you a sequence of vector spaces. Sometimes these vector spaces even come with maps from one to the next. And when you are very lucky, those maps become isomorphisms far enough along in the sequence; because at that point you can describe the entire picture with a finite amount of information, all the vector spaces after a certain point being canonically the same. In this case we typically say we have found a stability result for the sequence.

But sometimes life is not so nice. Say for instance we study the cohomology groups of configuration spaces of n distinct ordered points on some nice manifold M. As one does. In fact, let’s fix an index i and a coefficient field k and let V_n be the vector space H^i(Conf^n M, k).

(In the imaginary world where there are people who memorize every word posted on this blog, those people would remember that I also sometimes use Conf^n M to refer to the space parametrizing unordered n-tuples of distinct points. But now we are ordered. This is important.)

For instance, you can let M be the complex plane, in which case we’re just computing the cohomology of the pure braid group. Or, to put it another way, the cohomology of the hyperplane complement you get by deleting the hyperplanes x_i = x_j from C^n.

This cohomology was worked out in full by my emeritus colleagues Peter Orlik and Louis Solomon. But let’s stick to something much easier; what about the H^1? That’s just generated by the classes of the hyperplanes we cut out, which form a basis for the cohomology group. And now you see a problem. If V_n is H^1(Conf^n C, k), then the sequence {V_n} can’t be stable, because the dimensions of the spaces grow with n; to be precise,

dim V_n = (1/2)n(n-1).

But all isn’t lost. As Tom and Benson explained last year in their much-discussed 2010 paper, “Representation stability and homological stability,” the right way to proceed is to think of V_n not as a mere vector space but as a representation of the symmetric group on n letters, which acts on Conf^n by permuting the n points. And as representations, the V_n are in a very real sense all the same! Each one is

“the representation of the symmetric group given by the action on unordered pairs of distinct letters.”

Of course one has to make precise what one means when one says “V_m and V_n are the same symmetric group representation”, when they are after all representations of different groups. Church and Farb do exactly this, and show that in many examples (including the pure braid group) some naturally occurring sequences do satisfy their condition, which they call “representation stability.”
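One way to see that these V_n really are “all the same” is to note that the character of the action on unordered pairs is itself a character polynomial, namely binomial(X_1, 2) + X_2: a pair {a, b} is fixed setwise exactly when both letters are fixed or the two are swapped by a 2-cycle. A quick brute-force Python check (my own illustration, with helper names of my invention):

```python
from itertools import combinations, permutations

def char_pairs(perm):
    """Trace of perm acting on unordered pairs of distinct letters:
    the number of 2-element subsets {a, b} with {perm[a], perm[b]} == {a, b}."""
    return sum(1 for a, b in combinations(range(len(perm)), 2)
               if {perm[a], perm[b]} == {a, b})

def P(perm):
    """The character polynomial binomial(X_1, 2) + X_2."""
    x1 = sum(1 for k in range(len(perm)) if perm[k] == k)        # X_1: fixed points
    x2 = sum(1 for k in range(len(perm))
             if perm[k] != k and perm[perm[k]] == k) // 2        # X_2: 2-cycles
    return x1 * (x1 - 1) // 2 + x2

# The two class functions agree on all of S_4 (and in fact on every S_n):
for s in permutations(range(4)):
    assert char_pairs(s) == P(s)

# Plugging in the identity of S_6 recovers dim V_6 = 6*5/2:
assert P(tuple(range(6))) == 15
```

Evaluating P at the identity, where X_1 = n and X_2 = 0, recovers the dimension formula n(n-1)/2 from above.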

So what’s in the new paper? In a sense, we start from the beginning, defining representation stability in a new way (or rather, defining a new thing and showing that it agrees with the Church-Farb definition in cases of interest). And this new definition makes everything much cleaner and dramatically expands the range of examples where we can prove stability. This post is already a little long, so I think I’ll start a new one with a list of examples at the top.

This has been your daily Small Sample Size Adventure. Stat courtesy of baseball-reference.

Also: it seems to me that having one regular starter with an ERA of 8, like Brian Matusz, should induce a tendency to have W-L better than Pythagorean W-L, as the Orioles did last year (by 2 games) and which, for the moment, they do again now. You pile up a lot of negative run differential in those games but you can only lose each one once.

Jason Starr asks a great question in the comments to the previous post: if you are a Ph.D. advisor, to what extent do you think you could advise a graduate student who you rarely or never physically met? If you’re a graduate student, to what extent do you think you could thrive if you rarely or never saw your advisor in person?

More about on-line education. One hesitation people have, of course, is that it’s easier to dephysicalize some forms of education than others; and that if higher education gets redefined as something that happens online, the parts of higher education that don’t survive that transition get redefined as “not part of higher education.”

But what about creative writing workshops? Right now, these sit somewhat uncomfortably inside English departments in universities. What are you paying for when you pay tuition to attend a fiction workshop? (I was lucky enough to go to a program with funding, but I think most MFAs don’t work this way.) I think you’re paying to have a known novelist read and think carefully about what you’re writing, and you’re paying to create some official sense that This Is The Year I Write My Novel. (This last part might be the most important. Of course, you could write your novel any time! But having paid a great deal of money with the intent of doing a thing focuses the mind on the task extremely well. Freud always said this was why he charged so much; he didn’t need the money, but the patients needed to spend it.)

What happens if a novelist decides to offer a writing workshop via Google Hangout, to 12 people, charging them much less than university tuition but enough to meet his expenses? Like, say, $3K a person? Does that work? Or, since most novelists probably don’t care to run their own small business, what happens if a startup company collects well-known but poorly paid novelists and runs the marketing/payment processing side of things, in exchange for a cut?

It’s not clear this is interestingly different from existing distance MFAs like Warren Wilson. Certainly I don’t think you can scale up the offering of “serious and admired writer X read my work closely” to hundreds of thousands of people, which I suppose is a reason it might continue being possible to charge serious money for the service.

An online workshop wouldn’t reproduce what I got out of my MFA program at Johns Hopkins, but I was a special case. I was on break between college and graduate school, I was pretty sure I was going to be a mathematician my whole life, and I really needed to be something else for a year. The people I saw every day that year were writers, the professors whose opinions I valued were writers, the people I drank beer with and argued with and dated were writers. And by the end of the year I was able to call myself a writer without feeling like I was half-kidding; not because I’d written a draft of my novel but because I’d lived in Writerworld for a year.

As promised, a few attacks. I’m sure by tomorrow I’ll have thought of several more. Oh, and also, I meant to link to this Crooked Timber thread about Coursera, with a richly combative comment thread.

I, along with lots of other people who succeed in traditional schools, love text and process it really fast. Other people like other media. Streaming video isn’t the same thing as talking to another person, but it’s plainly closer than text, and talking to another person is the way we’re built to take in information. If streaming video weren’t a useful means of educational transmission for a substantial fraction of people, Khan Academy wouldn’t be popular.

Some people would say that we could get by with many fewer scientists than we have now, without compromising the amount of meaningful science that gets done. That seems too simple to me, but I just want to record that it’s a belief held by many, and on that account maybe a small NSF-funded garden of science is sufficient to our needs.

Online credentials, whether from Udacity or future-ETS, could in principle lead to a massive gain in global equality. Nothing is stopping 300 people from China and Brazil from being among the 500 people Google hires. I was going to say the same thing about inequality within the US but here I have to stop myself; my sense is that massive availability of online resources has not e.g. made it just as good to be a 14-year-old math star in Nebraska as it is to be a 14-year-old math star in suburban Boston. Reader comments on this point welcome, since I know there are lots of former 14-year-old math stars out there.

More on within-US equality: it’s easy to see gains flowing to kids whose parents are rich enough to buy them a prep course or just buy them the time to spend a year at home studying. On the other hand, this seems no less rich-kid-friendly than the current system, in which kids whose parents can afford college graduate debt-free, and the rest, who still have little choice but to attend if they want professional jobs, spend decades of their working life chipping away at a massive debt.

Inspired by such successful endeavors as the Santa Fe Institute, MIT’s Media Lab, the Harvard-MIT Broad Institute, new cross-disciplinary centers and initiatives such as the Wisconsin Institute for Discovery are designed to overcome many of the obvious limitations of the aging departmental models, which at worst can act as an impediment to creative thinking and synthetic endeavors, and whose reward and promotion mechanisms often exclude some of our most creative minds. Many of these centers — like our most successful technology companies — recognize the power of social life, building cafes, restaurants and lounges directly into the research environment.

But lots of other people think the physical university, at least apart from a few elite schools, is 100% a dead letter, thanks to our new ability to offer courses online at scale. Maybe the future looks like Khan Academy, or Coursera, or Udacity, whose founder, Sebastian Thrun, foresees only 10 institutions offering something called “higher education” 50 years from now.

But what will this thing be?

Keep this in mind. The ability to distribute information at scale is not new, though the Internet makes that information vastly more widespread and, in the long run, cheaper. You don’t need to take a course online to get that information, and it might not even be the best way. For instance, why watch streaming video? There’s a competing channel which is massively faster, more flexible, random-access, which moves at the students’ own pace, which is accessible to speakers of every language, and which is trivially searchable: namely, text. Streaming video has its uses, but streaming video is television; text is the Internet. And text on every imaginable subject is already available on the Internet, to everyone, for free. Getting that information into the hands of every person on the planet with a mobile device is a solved problem.

But:

Information is not what Udacity is selling. And it’s not what existing universities are selling! What we sell, of course, is a credential; a certification, backed by our expertise, that the credentialee has mastered some body of knowledge. At Stanford, they sell that credential to students to help them get jobs. At Udacity, they’re planning to sell the credentials to businesses, to help them select employees. And in a global sense, Stanford and the University of Wisconsin and everybody else are in that business too, because we operate as part of a grand compact between ourselves and the business community. They have agreed that a substantial chunk of the American population will spend four years in college instead of devoting their labor to increasing the GNP, and I assume this is because they believe in the credentials we offer; that students who complete college are better at their jobs, and students who do better in college are better at those jobs than students who do worse.

We sell credentials; and with the receipts obtained from those sales we educate students and we do research. Udacity hopes to be able to credential just as well (more precisely: maybe just as well and maybe not, but in any event at so much larger a scale that they provide more information to employers) and to use the resulting revenues to educate students.

But why does education need to be involved? For a few fast-moving topics, Udacity may be able to claim that their lock on the most au courant experts means they’re offering something no one else can. But most topics aren’t fast-moving in that way.

What I wonder is whether the future of education won’t look less like Udacity and more like ETS. Education is expensive. Assessment is cheap. I don’t think future-ETS can provide assessment as accurate as Udacity can. But the nature of disruptive technology, if I understand it correctly, isn’t that it provides something better; it’s that it provides something cheaper and faster which is good enough. The toniest companies of the future might want to see a certificate from Udacity; for everybody else, future-SAT will do.

Not that this is necessarily bad news for Udacity, or for education! Something like Udacity may not need much capital to persist; it can carry on as a boutique operation, serving Google or Google’s successors, and still have enough resources to deliver on-line education to millions of people all over the world.

It’s mostly bad news for research, I think. Because the link between credentialing and research is even more contingent and breakable than the link between credentialing and education. Udacity, as far as I know, is not going to pay people to do research in mathematics, or biology, or physics, or history, or linguistics. Those tasks are, at the moment, part of the universities’ missions, but not part of their business model. There doesn’t have to be a massive research apparatus in the United States; for most of our history, there wasn’t one.

So there’s one future to contemplate. No scientific research except for the small, product-directed gardens within companies and a slightly bigger garden funded by the federal government, the latter no doubt a constant target for budget cuts, like PBS. Kids start work at the end of high school, and those who can find the time study for the future-ETS placement test so they can get a better job. How does that sound?

Important note: I am ambivalent about the correctness of much of what I’ve written here; I am posting this as an experiment, to see what happens if I work out thoughts in public. Next post will consist of attacks on this post, the correctness of which attacks I’m also ambivalent about. Special attention to be paid to the superiority of video to text, and the advantages the version of the future described above might have over the status quo, especially as concerns global equality.

Important note 2: Before commenting, please listen to “God Save The Queen,” as I did before beginning this post. It’s sort of a mental prerequisite for talking about the future.

It is said that, after his wife had died in his arms, he rushed to the piano to express his grief; but soon, becoming interested in the airs he was originating, he forgot both his grief and the cause of it so completely, that, when his servant interrupted him to ask about communicating the recent event to the neighbors, Giorgio jumped up in a puzzle, and went to his wife’s room to consult her.

I’m happy to report on another very successful hiring year for the UW-Madison math department! We added Dima Arinkin from UNC, who does algebraic geometry with connections to geometric Langlands; Betsy Stovall in harmonic analysis, from UCLA; our former Ph.D. student Bing Wang, returning to Madison after a postdoc at Princeton; Saverio Spagnolie, in fluid dynamics and biomechanics (who’s so organized he’s already put up a UW homepage!); and probabilist Sebastian Roch, whose job talk I enthusiastically blogged about a few weeks back.

Really liked Terry’s post on cheap nonstandard analysis. I’ll add one linguistic comment. As Terry points out, you lose the law of the excluded middle in this context, and that means you have to be very careful about logical connectives:

Because of the lack of the law of excluded middle, though, sometimes one has to take some care in phrasing statements properly before they will transfer. For instance, the statement “If xy = 0, then either x = 0 or y = 0” is of course true for standard reals, but not for nonstandard reals; a counterexample can be given for instance by x = (1, 0, 1, 0, …) and y = (0, 1, 0, 1, …). However, the rephrasing “If x ≠ 0 and y ≠ 0, then xy ≠ 0” is true for nonstandard reals (why?). As a rough rule of thumb, as long as the logical connectives “or” and “not” are avoided, one can transfer standard statements to cheap nonstandard ones, but otherwise one may need to reformulate the statement first before transfer becomes possible.

I like to keep stuff like this straight by thinking of the cheap-nonstandard statement “x=0” as “I am certain that x=0.” Then it’s plainly wrong to say “If I’m certain that xy=0, then either I’m certain that x=0 or I’m certain that y=0.” On the other hand, “If I’m certain that x is nonzero and I’m certain that y is nonzero, I’m certain that xy is nonzero” is legit. This is of course in keeping with Terry’s analogy between nonstandard reals and random variables, which are also in some sense “those things which are like real numbers yet are not exactly real numbers, and about whose values we might want to express certainty or uncertainty.”
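Here is a toy Python model of that reading (my own sketch, not from Terry’s post): a cheap nonstandard real is a sequence, and “I am certain that x = 0” means x_n = 0 for all sufficiently large n. The one assumption to flag is that “eventually zero” is only checked on a finite window, which is harmless here because the sequences involved are periodic.

```python
# Toy model: cheap nonstandard reals as sequences of reals.

def x(n):
    return 1 - n % 2          # 1, 0, 1, 0, ...

def y(n):
    return n % 2              # 0, 1, 0, 1, ...

def xy(n):
    return x(n) * y(n)        # 0, 0, 0, 0, ...

def certainly_zero(seq, start=10, horizon=1000):
    """'I am certain that seq = 0': seq(n) == 0 for all large n,
    checked on a finite window (enough for these periodic sequences)."""
    return all(seq(n) == 0 for n in range(start, horizon))

assert certainly_zero(xy)        # certain that xy = 0 ...
assert not certainly_zero(x)     # ... but not certain that x = 0,
assert not certainly_zero(y)     # ... nor that y = 0.
```

So “certain that xy = 0” holds while neither disjunct does, which is exactly the failure of the naive “or” transfer above.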

Update: I meant to add: an ultrafilter represents an agent who is certain about everything!