## Gendered conference campaign

A group of philosophers runs a gendered conference campaign, whose goal is to get conference organizers not to plan all-male rosters of speakers.

Does math need a campaign like this?

Discuss.

## Compressed sensing, compressed MDS, compressed clustering, and my talk tomorrow

I’m talking tomorrow at the spanking-new SILO seminar (“Systems, Learning, Information, and Optimization”) in the spanking-new Wisconsin Institute for Discovery.  As you may know, I don’t do any research in any of the four subjects appearing in the acronym!  But I’ve been following some work in the area ever since last year’s very successful Topology and Data seminar.  Here’s one semi-baked idea that will appear in the talk.

World’s most hurried introduction to compressed sensing (see Terry’s blog for a more thorough account.)

Think of an image on an N x N grid of pixels as a vector in R^(N^2).  We know that images can be compressed — that is, we can specify an image with reasonable accuracy using much less information.  That’s because most real-world images lie on or near a manifold in R^(N^2) of dimension much lower than N^2.

It’s already great that we can get away with fewer than N^2 coordinates to specify our image.  But it gets better still.  Maybe it’s expensive to record all N^2 pixels.  If we want to end up with (say) only N log N numbers, maybe we should try making only N log N measurements of our image in the first place?  If you just record a subset of N log N pixels, you’re probably in trouble; there may be localized features in the image that happen to be far from any of the pixels in your sparse set.  But if you measure N log N random linear functions on R^(N^2), it turns out that you’re very likely to have pinned down a single point on the manifold of interesting images!
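Here’s a toy version of that recovery step in numpy.  Everything in it is made up for illustration: a k-sparse vector stands in for the “manifold of interesting images,” and orthogonal matching pursuit stands in for the l1 minimization more commonly used in the compressed sensing literature.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 400        # ambient dimension (think N^2 pixels)
m = 120        # number of random linear measurements (think N log N)
k = 5          # sparsity of the "image"

# A k-sparse signal: zero except for a few localized features.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# m random linear functionals on R^n, and their values on x.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the coordinate most
    correlated with the residual, then least-squares on the chosen support."""
    residual, chosen = y.copy(), []
    for _ in range(k):
        chosen.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
        residual = y - A[:, chosen] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[chosen] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x))  # essentially 0: the signal is pinned down
```

Note that m is much smaller than n, and yet the random measurements single out the sparse signal with high probability — whereas sampling m of the n coordinates directly would usually miss part of the support.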

Finite metric spaces

Suppose you’ve got a set S of objects, and a function d: S x S -> R, which you think of as a distance or dissimilarity function.  Could be texts, could be genomes, whatever.  An arbitrary finite metric space is a hard thing to get one’s head around, so it’s desirable to construct an embedding f: S -> R^k such that the Euclidean distance between f(x) and f(y) agrees as closely as possible with d(x,y).  (Think of k as fixed and small.)  The standard algorithm for this is called multidimensional scaling. It works well.  End of story.
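Classical MDS is short enough to write out in full: double-center the matrix of squared distances to recover a Gram matrix, then take its top-k eigenvectors.  A minimal numpy sketch, on synthetic data that secretly is Euclidean, so the embedding is exact:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake "objects": N points that secretly live in R^2, observed only
# through their pairwise distance function d.
N, k = 50, 2
X = rng.normal(size=(N, k))
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Classical MDS: double-center the squared distances to recover a
# Gram matrix, then embed via its top-k eigenvectors.
J = np.eye(N) - np.ones((N, N)) / N          # centering matrix
B = -0.5 * J @ (d ** 2) @ J                  # recovered Gram matrix
vals, vecs = np.linalg.eigh(B)
idx = np.argsort(vals)[::-1][:k]
Y = vecs[:, idx] * np.sqrt(vals[idx])        # the embedding f: S -> R^k

# The embedding reproduces the distances (up to a rigid motion).
d_hat = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
print(np.max(np.abs(d_hat - d)))  # ~0 for exactly Euclidean input
```

For real dissimilarity data that is only approximately Euclidean, the same recipe gives the best rank-k approximation to the recovered Gram matrix.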

Except — note that once again, we’ve compressed our data set.  It takes N^2 coordinates to specify a distance function, but only kN to specify an embedding.  So you can think of MDS as a compression algorithm for metric spaces!  And why does it work?  Because the set of N-point metrics that embed in R^k is a very low-dimensional submanifold of the N^2-dimensional space of general metrics.

So the example of compressed sensing leads us to ask — if you are willing to grant that your metric is approximately embeddable in R^k, can you reconstruct it from a sparse set of distances?  The answer is yes — you can do this with “Landmark MDS,” developed by Vin de Silva and Josh Tenenbaum, and, just as one might want, one can reconstruct the metric from a number of distances that’s linear in N.  I want to call this “Compressed MDS” — is that appropriate?  Will a randomly chosen set of cN distances specify the embedding with high probability?  Seems like it to me.
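Here is a rough numpy sketch of the landmark idea as I understand the de Silva–Tenenbaum recipe: run classical MDS on a small set of landmarks, then triangulate every other point from its distances to those landmarks.  Only the landmark rows of the distance matrix are touched — a number of distances linear in N.  The data is synthetic and the details are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# N points secretly in R^2; we only look at distances to a few landmarks.
N, k, n_land = 200, 2, 10
X = rng.normal(size=(N, k))
land = rng.choice(N, size=n_land, replace=False)

# The sparse data: landmark-landmark and landmark-everything squared
# distances -- O(n_land * N) numbers, linear in N, not N^2.
D2_ll = np.sum((X[land, None] - X[None, land]) ** 2, axis=-1)
D2_lx = np.sum((X[land, None] - X[None, :]) ** 2, axis=-1)

# Step 1: classical MDS on the landmarks alone.
J = np.eye(n_land) - np.ones((n_land, n_land)) / n_land
B = -0.5 * J @ D2_ll @ J
vals, vecs = np.linalg.eigh(B)
idx = np.argsort(vals)[::-1][:k]

# Step 2: triangulate every point from its squared distances to the
# landmarks, using the pseudoinverse of the landmark embedding.
Lsharp = vecs[:, idx] / np.sqrt(vals[idx])
mean_d2 = D2_ll.mean(axis=0)
Y = -0.5 * (D2_lx - mean_d2[:, None]).T @ Lsharp

# Up to a rigid motion, Y reproduces all N^2 pairwise distances.
d_true = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
d_hat = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.max(np.abs(d_hat - d_true)))  # ~0 for exactly Euclidean input
```

Here the landmarks are chosen at random, which is exactly the “compressed MDS” question: for data that really is near a k-dimensional embedding, cN randomly chosen distances appear to suffice with high probability.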

Multidimensional scaling is just one example of a metric embedding problem — you have a finite set S and a fixed metric space M (or, sometimes, a class of metrics from which M is to be drawn) and you want to find an embedding f: S -> M which respects distance as well as possible.  (This survey by Indyk and Matousek is a good resource for the general problem; and here’s a Newton Institute conference on metric embeddings from two weeks ago, with all the talks online for streaming.)  Of special interest is the case where M is required to be an ultrametric; to embed S in an ultrametric space is to identify S with the leaves of a weighted tree, which is more or less to impose a hierarchical clustering on S.  The space of ultrametrics on S is again of dimension much smaller than N^2, so the hierarchical construction can be thought of as a compressed representation of the distance function d.  And again, one asks — can we reconstruct the ultrametric from a sparse set of distances?
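Single-linkage clustering makes the ultrametric picture concrete: its cophenetic distances — the tree distances between leaves — form the maximal ultrametric sitting below d.  A quick check with scipy, on synthetic points:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, cophenet

rng = np.random.default_rng(3)

# A finite metric space: N points in the plane, with pairwise distances.
N = 12
X = rng.normal(size=(N, 2))
d = pdist(X)                   # condensed distance vector, N(N-1)/2 entries

# Single linkage builds a weighted tree on the N points; the tree
# distance between two leaves is the "cophenetic distance."
Z = linkage(d, method="single")
u = cophenet(Z)

# u is an ultrametric below d: u <= d pointwise, and every triangle
# satisfies the strong inequality u(x,z) <= max(u(x,y), u(y,z)).
U = squareform(u)
assert np.all(u <= d + 1e-12)
for i in range(N):
    for j in range(N):
        for l in range(N):
            assert U[i, l] <= max(U[i, j], U[j, l]) + 1e-12
print("cophenetic distances form an ultrametric dominated by d")
```

Storing the tree takes O(N) numbers, versus N(N-1)/2 for the distance function — the compression the post is talking about.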

Yes — a forthcoming paper of Eriksson, Dasarathy, Singh, and Nowak explains how.   N log N distances is enough.  (Update, Feb 2011:  now on the arXiv.)

Can one call this “compressed clustering”?

## Gowers & co. on the cap set problem

On the off chance you read my blog and not Gowers’ — Tim is talking about one of my very favorite open questions, the affine cap set problem, over at his place.

I’m a little ambivalent about reading his posts — every time I think about this problem, I get sucked in and spend a certain amount of time gnawing at it.  And the sum total of all this gnawing has so far produced not even the tiniest toothmark on the gleaming surface of the affine cap set problem.

And yet…. and yet…..

## Styxocracy

If you had five minutes to explain to a non-American friend how American political culture works, you’d do pretty well just to play them “Rockin’ the Paradise.”  (Starts at about 1:44 of the linked video.)

Just in case you’re in a sonic environment where you can’t play Styx, here are the lyrics:

Whatcha doin’ tonight?
Have you heard that the world’s gone crazy?
Young Americans listen when I say
There’s people puttin’ us down
I know they’re sayin’ that we’ve gone lazy
To tell you the truth we’ve all seen better days.

Don’t need no fast buck, lame duck, profits for fun
Quick trick plans, take the money and run
We need long term, slow burn, getting it done
And some straight talking, hard working son of a gun

So Whatcha doin’ tonight?
I got faith in our generation
Let’s stick together and futurize our attitudes
I ain’t lookin’ to fight
But I know with determination
We can challenge the schemers who cheat all the rules

Come on and take pride, be wise, spottin’ the fools
Big shots, crackpots bending the rules
A fair shot here for me and for you
Knowing that we can’t lose

Were Styx Republicans or Democrats?  That’s what’s great about “Rockin’ the Paradise” — you can’t tell!

## I do like the Beatles! I do!

What is with you and obscure bands, I don’t understand your obsession with nameless, obscure bands. You’re probably one of those guys who hate the Beatles because too many people like them, right?

Actually, no.  When I was 11 the only two albums I owned were Beatles best-ofs, the Blue Album and the Red Album.  I thought the Beatles were the best band in the history of rock music.  And I still think so!

But the Beatles have armies of people to write about their greatness.  Who will blog about Sat. Nite Duets and Fatty Acids, or Yellow Ostrich, or Grammar, or Carsick Cars, or The Stevedores, if not me?  These people work hard on their records, they give them away free or almost free — why shouldn’t they get some tiny signal from the world outside that somebody liked the songs they wrote?

By the way, I went to the show to see Sat. Nite Duets but Fatty Acids turned out to be great too.  Here’s “Astrovan”:

I hope that was not annoying to majordomo.  In case it was, here’s “I am the Walrus.” CJ watches this video a lot and I’ve fallen in love with the song all over again.  How do they get so much stuff in there?

Good Lord, did you know Styx covered “I am the Walrus”?  I did not.  It is awful.

As it happens, I actually had a post about Styx in mind!  I’ll put it up now.  I hope it is mainstream enough for majordomo.

## A primer on mapping class groups

Benson Farb and Dan Margalit have just finished the final version of A Primer on Mapping Class Groups, to appear from Princeton University Press next year.  And it’s available in .pdf at Dan’s web page.  To the extent I know anything about mapping class groups, it’s because of this book!  Highly recommended for anyone interested in finding a way into this very active area of topology, which has heretofore not been so easy to learn about unless you have the luck to sit at the feet of a master.  Is it too much to say I expect the book to become “the Hartshorne of the mapping class group?”

## Revenge of the intersentence double space

Farhad Manjoo, in Slate, delivers a spirited attack on double-spacers like me.  Apparently the intersentence double space is irrational and unbeautiful.  My readers mostly disagree, and more importantly, so does LaTeX.  This argument reminds me of books I used to read as a kid about the forthcoming rationalization of English spelling.  The inventor of the Dewey Decimal System, Melville Dewey — excuse me, Melvil Dui — was a big player here.  Anyway, it didn’t happen.  Our lumpy, irrational language, with its silent gh and its intersentence lacunae, trundles on its own track.


## Do mathematicians have “tiger mothers”? or: that’s funny, Norbert, you don’t look Chinese!

All the world — or at least all the world of parents of young kids — or at least all the world of educated parents of young kids who fret about their kids’ psychic and material well-being — is abuzz about Amy Chua’s article “Why Chinese Mothers Are Superior,” which starts out:

A lot of people wonder how Chinese parents raise such stereotypically successful kids. They wonder what these parents do to produce so many math whizzes and music prodigies, what it’s like inside the family, and whether they could do it too. Well, I can tell them, because I’ve done it.

What follows is a cheerful recounting of Chua’s stern regimen with her daughters.  Here she is with 7-year-old Lulu, who was having trouble with a piano piece:

Back at the piano, Lulu made me pay. She punched, thrashed and kicked. She grabbed the music score and tore it to shreds. I taped the score back together and encased it in a plastic shield so that it could never be destroyed again. Then I hauled Lulu’s dollhouse to the car and told her I’d donate it to the Salvation Army piece by piece if she didn’t have “The Little White Donkey” perfect by the next day. When Lulu said, “I thought you were going to the Salvation Army, why are you still here?” I threatened her with no lunch, no dinner, no Christmas or Hanukkah presents, no birthday parties for two, three, four years. When she still kept playing it wrong, I told her she was purposely working herself into a frenzy because she was secretly afraid she couldn’t do it. I told her to stop being lazy, cowardly, self-indulgent and pathetic.

If a Chinese child gets a B—which would never happen—there would first be a screaming, hair-tearing explosion. The devastated Chinese mother would then get dozens, maybe hundreds of practice tests and work through them with her child for as long as it takes to get the grade up to an A.

As it happens, I was just reading Reuben Hersh and Vera John-Steiner’s enjoyable new book Loving and Hating Mathematics, so of course I was reminded of Norbert Wiener’s childhood recollections of being trained in mathematics by his father:

He would begin the discussion in an easy, conversational tone.  This lasted exactly until I made the first mathematical mistake.  Then the gentle and loving father was replaced by the avenger of blood.  The first warning he gave of my unconscious delinquency was a very sharp and aspirated “What?”…. My lessons ended in a family scene.  Father was raging.  I was weeping and my mother did her best to defend me, although hers was a losing battle.

And afterwards:

Wiener’s student Norman Levinson wrote of his teacher, “Even forty years later when he became depressed and would reminisce about this period, his eyes would fill with tears as he described his feelings of humiliation as he recited his lessons before his exacting father.  Fortunately he also saw his father as a very lovable man and he was aware of how much like his father he himself was.”

Ann Hulbert in today’s Slate has more on Chua as the latter-day Leo Wiener.

I tend to think that getting strong in mathematics requires devoting a lot of time to it.  Hours a day on average, just like piano.  I certainly did that — but not because my parents forced or threatened or tantrummed me into it.  Chua leads off by suggesting that her method tends to produce “math whizzes.”  Is it true?  It goes against all my experience of how mathematics works.  But readers, I am curious — did any of you learn math like this?  Feel free to respond anonymously — I recognize this survey requires more self-revelation than most.

(Also, never fear, we’re not considering giving CJ and AB this treatment.  I’ve lived in an apartment with thin walls where I had to listen to a kid practice piano four hours a day, and friends, nothing would make me go back to that.)

## JMM, Golsefidy, Silverman, Scanlon

Like Emmanuel, I spent part of last week at the Joint Meetings in New Orleans, thanks to a generous invitation from Alireza Salehi Golsefidy and Alex Lubotzky to speak in their special session on expander graphs.  I was happy that Alireza was willing to violate a slight taboo and speak in his own session, since I got to hear about his work with Varju, which caps off a year of spectacular progress on expansion in quotients of Zariski-dense subgroups of arithmetic groups.  Emmanuel’s Bourbaki talk is your go-to exposé.

I think I’m unlike most mathematicians in that I really like these twenty-minute talks.  They’re like little bonbons — you enjoy one and then before you’ve even finished chewing you have the next in hand!  One nice bonbon was provided by Joe Silverman, who talked about his recent work on Lehmer’s conjecture for polynomials satisfying special congruences.  For instance, he shows that a polynomial which is congruent mod m to a multiple of a large cyclotomic polynomial can’t have a root of small height, unless that root is itself a root of unity.  He has a similar result where the implicit G_m is replaced by an elliptic curve, and one gets a lower bound for algebraic points on E which are congruent mod m to a lot of torsion points.  This result, to my eye, has the flavor of the work of Bombieri, Pila, and Heath-Brown on rational points.  Namely, it obeys the slogan:  Low-height rational points repel each other. More precisely — the global condition (low height) is in tension with a bunch of local conditions (p-adic closeness.)  This is the engine that drives the upper bounds in Bombieri-Pila and Heath-Brown:  if you have too many low-height points, there’s just not enough room for them to repel each other modulo every prime!
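For concreteness, the record-holder in Lehmer’s conjecture: the smallest known Mahler measure greater than 1 comes from Lehmer’s degree-10 polynomial.  A standalone numerical check (illustration only — nothing here touches Silverman’s congruence conditions):

```python
import numpy as np

# Lehmer's polynomial, x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1,
# coefficients listed from degree 10 down to degree 0.
coeffs = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]
roots = np.roots(coeffs)

# Mahler measure: product of |root| over the roots outside the unit
# circle (for a Salem number like this one, there is exactly one).
M = np.prod([abs(r) for r in roots if abs(r) > 1])
print(M)  # ~1.17628, the smallest known Mahler measure exceeding 1
```

In the height language of the post, a root of this polynomial has height (log M)/10, which is the smallest known nonzero height of a non-root-of-unity algebraic number.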

Anyway, in Silverman’s situation, the points are forced to nestle very close to torsion points — the lowest-height points of all!  So it seems quite natural that their own heights should be bounded away from 0 to some extent.  I wonder whether one can combine Silverman’s argument with an argument of the Bombieri-Pila-Heath-Brown type to get good bounds on the number of counterexamples to Lehmer’s conjecture….?

One piece of candy I didn’t get to try was Tom Scanlon’s Current Events Bulletin talk about the work of Pila and Wilkie on problems of Manin-Mumford type.  Happily, he’s made the notes available and I read them on the plane home.  Tom gives a beautifully clear exposition of ideas that are rather alien to most number theorists, but which speak to issues of fundamental importance to us.  In particular, I now understand at last what “o-minimality” is, and how Pila’s work in this area grows naturally out of the Bombieri-Pila method mentioned above.  Highly recommended!