How many books do I read in a year?

Data:

2006: 27
2007: 19
2008: 22
2009: 30
2010: 23
2011: 19
2012: 27
2013: 35
2014: 31
2015: 38
2016: 29

Don’t quite know what to make of this.  I’m sort of surprised there’s so much variation!  I’d have thought I’d have read less when my kids were infants, or when I was writing my own book, but it seems pretty random.   I do see that I’ve been clearly reading more books the last few years than I did in 2012 and before.

Lists, as always, are here (2011 on) and here (2006-2010.)


Lo!

“A naked man in a city street — the track of a horse in volcanic mud — the mystery of reindeer’s ears — a huge, black form, like a whale, in the sky, and it drips red drops as if attacked by celestial swordfishes — an appalling cherub appears in the sea —

Confusions.”


When random people give money to random other people

A post on Decision Science about a problem of Uri Wilensky’s has been making the rounds:

Imagine a room full of 100 people with 100 dollars each. With every tick of the clock, every person with money gives a dollar to one randomly chosen other person. After some time progresses, how will the money be distributed?

People often expect the distribution to be close to uniform.  But this isn’t right; the simulations in the post show clearly that inequality of wealth rapidly appears and then persists (though each individual person bobs up and down from rich to poor.)  What’s going on?  Why would this utterly fair and random process generate winners and losers?
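Here’s a minimal simulation sketch, just my own re-implementation of the rule as stated (not the code from the post), if you want to watch the inequality develop yourself:

```python
import random

random.seed(1)
N, START, TICKS = 100, 100, 5000
money = [START] * N

for _ in range(TICKS):
    donors = [i for i in range(N) if money[i] > 0]
    # everyone who still has money picks a random other person; all transfers happen at once
    recipients = [random.choice([j for j in range(N) if j != i]) for i in donors]
    for i in donors:
        money[i] -= 1
    for j in recipients:
        money[j] += 1

money.sort()
print("poorest five:", money[:5])
print("richest five:", money[-5:])
print("share held by the richest 20 people:", sum(money[-20:]) / sum(money))
```

(The rule as stated has everyone give simultaneously; the analysis below treats transfers one dollar at a time, which I’d expect to behave the same way for present purposes, though that’s a gloss.)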

Here’s one way to think about it.  The possible states of the system are the tuples of nonnegative integers (m_1, \ldots, m_{100}) summing to 10,000; if you like, the lattice points inside a simplex.  (From now on, let’s write N for 100 because who cares if it’s 100?)

The process is a random walk on a graph G, whose vertices are these states and where two vertices are connected if you can get from one to the other by taking a dollar from one person and giving it to another.  We are asking:  when you run the random walk for a long time, where are you on this graph?  Well, we know what the stationary distribution for random walk on an undirected graph is; it gives each vertex a probability proportional to its degree.  On a regular graph, you get uniform distribution.
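(If that fact about random walks isn’t in your bloodstream, here’s a five-line check on a small graph; the state graph G itself is of course far too big to write down, so this is a generic illustration, nothing specific to the money process.)

```python
import numpy as np

# a small non-bipartite undirected graph: a triangle 0-1-2 with a tail 2-3-4
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]                 # transition matrix of the simple random walk
pi = np.full(5, 0.2)
for _ in range(10000):               # push the walk's distribution forward a long time
    pi = pi @ P

print(np.round(pi, 4))               # long-run distribution
print(np.round(deg / deg.sum(), 4))  # degree / (sum of degrees): the same numbers
```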

Our state graph G isn’t regular, but it’s nearly so:  a node’s degree is (N-1) times the number of people who aren’t broke in that state, and in a typical state only a handful of people are broke.  The nodes of maximal degree are exactly the states in which nobody’s out of money, and these make up about a 1/e proportion of the whole.  Here’s the count.  The number of states is

\binom{N^2+N-1}{N-1}

and, of these, the ones of maximal degree are counted by first handing each person a dollar:  the number of ways to distribute the remaining N^2 - N dollars is

\binom{N^2-1}{N-1}

and so the proportion of states where nobody’s out of money is about

\frac{(N^2 - 1)^N}{(N^2 + N - 1)^N} \sim (1-1/N)^N \sim 1/e.

So, apart from those states where somebody’s broke, in the long run every possible state is equally likely;  we are just as likely to see $9,901 in one person’s hands and everybody else with $1 as we are to see exact equidistribution again.
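(A quick sanity check on that 1/e, with exact binomial coefficients; the ratio creeps toward 1/e ≈ 0.368 as N grows.)

```python
from math import comb, e

for N in (10, 25, 50, 100):
    total = comb(N * N + N - 1, N - 1)      # all ways to split N^2 dollars among N people
    nobody_broke = comb(N * N - 1, N - 1)   # the splits in which everyone keeps at least $1
    print(N, nobody_broke / total)

print("1/e =", 1 / e)
```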

What is a random lattice point in this simplex like?  Good question!  An argument just like the one above shows that the probability that nobody goes below $c is on the order of e^{-c}, at least when c is small relative to N; in other words, it’s highly likely that somebody’s very nearly out of money.
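(Here’s a little Monte Carlo of both statements, sampling uniform lattice points of the simplex by stars and bars; my code, nothing official.)

```python
import random
from math import exp

random.seed(2)
N = 100
M = N * N

def random_state():
    """A uniformly random way to split M dollars among N people (stars and bars)."""
    bars = sorted(random.sample(range(1, M + N), N - 1))
    cuts = [0] + bars + [M + N]
    return [cuts[i + 1] - cuts[i] - 1 for i in range(N)]

TRIALS = 2000
mins = [min(random_state()) for _ in range(TRIALS)]
print("fraction of states where somebody is flat broke:",
      sum(m == 0 for m in mins) / TRIALS)          # roughly 1 - 1/e
for c in (1, 2, 3, 4):
    frac = sum(m >= c for m in mins) / TRIALS
    print(f"P(nobody is below ${c}) ~ {frac:.3f}   (e^-{c} = {exp(-c):.3f})")
```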

If X is the maximal amount of money held by any player, what’s the distribution of X?  I didn’t immediately see how to figure this out.  You might consider the continuous version, where you pick a point at random from the real simplex

(x_1, \ldots, x_N) \in \mathbf{R}_{\geq 0}^N:   \sum x_i = N^2.

Equivalently:  break a stick of length N^2 at N-1 randomly chosen points; what is the length of the longest piece?  This is a well-studied problem; the mean length of the longest piece is about N log N.  So I guess I think maybe that’s the expected value of the net worth of the richest player?
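(A quick check of the stick-breaking heuristic:  the exact mean of the longest piece is N times the harmonic number H_N, which is N log N to first order.)

```python
import random
from math import log

random.seed(3)
N = 100
L = N * N                     # length of the stick = total amount of money

def longest_piece():
    cuts = sorted(random.uniform(0, L) for _ in range(N - 1))
    ends = [0.0] + cuts + [float(L)]
    return max(ends[i + 1] - ends[i] for i in range(N))

TRIALS = 2000
avg = sum(longest_piece() for _ in range(TRIALS)) / TRIALS
H = sum(1 / k for k in range(1, N + 1))
print("average longest piece:", round(avg, 1))
print("N * H_N              :", round(N * H, 1))        # exact mean of the longest piece
print("N * log N            :", round(N * log(N), 1))   # the first-order approximation
```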

But it’s not obvious to me whether you can safely approximate the finite problem by its continuous limit (which corresponds to the case where we keep the number of players at N but shrink the step size, so that each transfer is a cent, or a picocent, or whatever.)

What happens if you give each of the N players just one dollar?  Now the uniformity really breaks down, because it’s incredibly unlikely that nobody’s broke.  The probability distribution on the set of (m_1, \ldots, m_N) summing to N assigns each vector a probability proportional to the size of its support (i.e. the number of m_i that are nonzero.)  That must be a well-known distribution, right?  What does the corresponding distribution on partitions of N look like?
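(For concreteness, here’s the exact distribution on partitions in a tiny case, N = 5, computed by brute force under the support-proportional weighting just described:)

```python
from collections import Counter
from fractions import Fraction

N = 5   # small enough to enumerate every state exactly

def states(total, parts):
    """All ways to write `total` as an ordered sum of `parts` nonnegative integers."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in states(total - first, parts - 1):
            yield (first,) + rest

weights = Counter()
for s in states(N, N):
    support = sum(1 for m in s if m > 0)
    partition = tuple(sorted((m for m in s if m > 0), reverse=True))
    weights[partition] += support          # stationary mass ~ degree ~ size of support

total = sum(weights.values())
for partition, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(partition, Fraction(w, total))
```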

Update:  Kenny Easwaran points out that this is basically the same computation physicists do when they compute the Boltzmann distribution, which was new to me.


Driftless Father’s Day

This Father’s Day I found that, by some kind of unanticipated-gap-in-the-Red-Sea-level miracle, neither of my children had any events scheduled, so I gave myself a present and did something I’d been meaning to do for a year:  take them to Dubuque.

It’s not far from Madison.  You drive southwest through the Driftless Zone, where the glaciers somehow looped around and missed a spot while they were grinding the rest of the Midwest flat.

At the exit to Platteville there was a sign for a “Mining Museum.”  We had about six seconds to decide whether we all wanted to go to a mining museum but that was plenty of time because obviously we all totally wanted to go to a mining museum.  And it was great!  Almost the platonic ideal of a small-town museum.  Our guide took us down into the old lead mine from the 1850s, now with electric lights and a lot of mannequins caught in the act of blasting holes in the rock.  (One of the mannequins was black; our guide told us that there were African-American miners in southwestern Wisconsin, but not that some of them were enslaved.)

This museum did a great job of conveying the working conditions of those miners:  ankle-deep in water, darkness broken only by the candle wired to the front of their hats, the hammers on the rock so loud you couldn’t talk and had to communicate by hand signals.  Riding up and down to the surface with one leg in the bucket and one leg out so more men could fit in one load, just hoping the bucket didn’t swing wrong and crush your leg against the rock wall.  There’s nothing like an industrial museum to remind you that everything you buy in a store has hours of difficult, dangerous labor built into it.  But it was also labor people traveled miles to get the chance to do!

Only twenty miles further to the Mississippi, my daughter’s first time seeing the river, and across it Dubuque.  Which has a pretty great Op-Art flag:

[image: the city flag of Dubuque]

Our main goal was the National Mississippi River Museum; slick where the Platteville museum was homespun, up-to-date where the Platteville museum was old-fashioned.  The kids really liked both.  I wanted fewer interactive screens, more actual weird river creatures.

The museum is on the Riverwalk; Dubuque, like just about every city on a body of water, is reinventing its shoreline as a tourist hub.  Every harbor a Harborplace.  OK, I snark, but it was a lovely walk; lots of handsome bridges in view, all different, an old-timey band playing in the gazebo, Illinois and Wisconsin and Iowa invisibly meeting across the water….

Only disappointment of the afternoon:  the famous funicular railway was closed.  Maybe they could have posted that on their website or something.  But in a way it’s good they didn’t; if I’d known it was closed, I probably would have decided to put off the trip, and who knows if we’d ever have gone?

On the way back we stopped in Dickeyville to get gas but missed the Dickeyville Grotto; would have stopped there for sure if I’d known about it.  Dinner in Dodgeville at Culver’s, the Midwest’s superior version of In-N-Out, where I got my free Father’s Day turtle.   I like cheese curds and brats as much as the next guy, but I gotta say, I think the turtle is my favorite of the many foods I’d never heard of before I moved to Wisconsin.


What I ate in Toronto

A hamburger, black sesame gelato, breakfast (incl. baked beans) at a diner from the 1940s, hand-pulled noodles, poutine.

Update:  “Related posts” reminds me that the last time I went to a conference in Toronto, I learned a lot of interesting math from Julia Wolf, and the same was true this time!


Rational points on solvable curves over Q via non-abelian Chabauty (with Daniel Hast)

New paper up!  With my Ph.D. student Daniel Hast (last seen on the blog here.)

We prove that hyperelliptic curves over Q of genus at least 2 have only finitely many rational points.  Actually, we prove this for a more general class of high-genus curves over Q, including all solvable covers of P^1.

But wait, don’t we already know that, by Faltings?  Of course we do.  So the point of the paper is to show that you can get this finiteness in a different way, via the non-abelian Chabauty method pioneered by Kim.  And I think it seems possible in principle to get Faltings for all curves over Q this way; though I don’t know how to do it.

Continue reading


Multiple height zeta functions?

Idle speculation ensues.

Let X be a projective variety over a global field K, which is Fano — that is, its anticanonical bundle is ample.  Then we expect, and in lots of cases know, that X has lots of rational points over K.  We can put these points together into a height zeta function

\zeta_X(s) = \sum_{x \in X(K)} H(x)^{-s}

where H(x) is the height of x with respect to the given projective embedding.  The height zeta function organizes information about the distribution of the rational points of X, and in favorable circumstances (e.g. if X is a homogeneous space) it has the handsome analytic properties we have come to expect from something called a zeta function.  (Nice survey by Chambert-Loir.)

What if X is a variety with two (or more) natural ample line bundles, e.g. a variety that sits inside P^m x P^n?  Then there are two natural height functions H_1 and H_2 on X(K), and we can form a “multiple height zeta function”

\zeta_X(s,t) = \sum_{x \in X(K)} H_1(x)^{-s} H_2(x)^{-t}

There is a whole story of “multiple Dirichlet series” which studies functions like

\sum_{m,n} \left( \frac{m}{n} \right) m^{-s} n^{-t}

where (\frac{m}{n}) denotes the Legendre symbol.  These often have interesting analytic properties that you wouldn’t see if you fixed one variable and let the other move; for instance, they sometimes have finite groups of functional equations that commingle the s and the t!

So I just wonder:  are there situations where the multiple height zeta function is an “analytically interesting” multiple Dirichlet series?

Here’s a case to consider:  what if X is the subvariety of P^2 x P^2 cut out by the equation

x_0 y_0 + x_1 y_1 + x_2 y_2 = 0?

This has something to do with Eisenstein series on GL_3 but I am a bit confused about what exactly to say.
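I have no idea whether this particular sum does anything interesting, but it’s at least easy to compute truncations of it.  Here’s a crude brute-force sketch, entirely my own toy code, using the naive height max |x_i| on each P^2 factor (which may or may not be the normalization one ultimately wants), evaluated at (s,t) = (4,4), where the sum ought to converge:

```python
from math import gcd
from itertools import product

def primitive_points(bound):
    """One primitive integer representative for each point of P^2(Q) with naive height <= bound."""
    pts = set()
    for x in product(range(-bound, bound + 1), repeat=3):
        if x == (0, 0, 0):
            continue
        g = gcd(gcd(abs(x[0]), abs(x[1])), abs(x[2]))
        x = tuple(c // g for c in x)
        pts.add(max(x, tuple(-c for c in x)))   # pick one of the two primitive representatives
    return sorted(pts)

def height(x):
    return max(abs(c) for c in x)

def truncated_zeta(s, t, bound):
    """Sum H_1(x)^-s * H_2(y)^-t over points ([x],[y]) of X with both heights <= bound."""
    pts = primitive_points(bound)
    return sum(height(x) ** -s * height(y) ** -t
               for x in pts for y in pts
               if x[0] * y[0] + x[1] * y[1] + x[2] * y[2] == 0)

for bound in (3, 5, 8):
    print(bound, truncated_zeta(4.0, 4.0, bound))
```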


Elif Batuman, “The Idiot”

What a novel!  The best I’ve read in quite a while.


One thing I like:  the way this book takes what’s become a standard bundle of complaints against “literary fiction”:

It’s about overprivileged people with boring lives.  Too much writing about writing, and too much writing about college campuses, and worst of all, too much writing about writers on college campuses.   Nothing really happens.  You’re expected to accept minor alterations of feelings in lieu of plot.  

and gleefully makes itself guilty of all of them, while being nevertheless rich in life and incident, hilarious, stirring, and of its time.


Maybe “hilarious” isn’t quite the right word for the way this book is funny, very very funny.  It’s like this:

“Ralph!” I exclaimed, realizing that he was this guy I knew, Ralph.

Whether you find this funny is probably a good test for whether The Idiot is gonna be your thing.


Given this, it’s slightly startling to me that Batuman wrote this essay in n+1, which endorses the standard critique, and in particular the claim that fiction has been pressed into a bloodless sameness by the creative writing workshop.  The stories the workshop produces bear, as she puts it, “the ghastly imprimatur of the fiction factory.”

What kind of writing bears this stamp?

Guilt leads to the idea that all writing is self-indulgence. Writers, feeling guilty for not doing real work, that mysterious activity—where is it? On Wall Street, at Sloane-Kettering, in Sudan?—turn in shame to the notion of writing as “craft.” (If art is aristocratic, decadent, egotistical, self-indulgent, then craft is useful, humble, ascetic, anorexic—a form of whittling.) “Craft” solicits from them constipated “vignettes”—as if to say: “Well, yes, it’s bad, but at least there isn’t too much of it.” As if writing well consisted of overcoming human weakness and bad habits. As if writers became writers by omitting needless words.

So what’s weird is that Batuman’s writing is exactly the kind that the creative writing workshop leaps to its feet and applauds.  OK, there’s no leaping in creative writing workshop.  It would murmur appreciatively.  Her sentences are pretty damn whittled.  Also clever.  Scenes don’t overspill, they end just before the end.  Batuman’s writing is both crafted and crafty — but not anorexic!  Anorexia isn’t denying yourself what’s needless; it’s a hypertrophy of that impulse, its extension to a more general refusal.

Batuman is really excellent on the convention of the literary short story cold open, which is required to be:

in-your-face in medias res, a maze of names, subordinate clauses, and minor collisions: “The morning after her granddaughter’s frantic phone call, Lorraine skipped her usual coffee session at the Limestone Diner and drove out to the accident scene instead.”  …. A first line like “Lorraine skipped her usual coffee session at the Limestone Diner” is supposed to create the illusion that the reader already knows Lorraine, knows about her usual coffee, and, thus, cares why Lorraine has violated her routine. It’s like a confidence man who rushes up and claps you on the shoulder, trying to make you think you already know him.

Her paradigmatic offender here is the first line of Michael Chabon’s The Amazing Adventures of Kavalier & Clay:

In later years, holding forth to an interviewer or to an audience of aging fans at a comic book convention, Sam Clay liked to declare, apropos of his and Joe Kavalier’s greatest creation, that back when he was a boy, sealed and hog-tied inside the airtight vessel known as Brooklyn New York, he had been haunted by dreams of Harry Houdini.

about which she says:

All the elements are there: the nicknames, the clauses, the five w’s, the physical imprisonment, the nostalgia. (As if a fictional character could have a “greatest creation” by the first sentence—as if he were already entitled to be “holding forth” to “fans.”)

To me this all starts with One Hundred Years of Solitude, which all of us writers read the hell out of in high school, right?  Surely Batuman too?  No kid can read

Many years later, as he faced the firing squad, Colonel Aureliano Buendia was to remember that distant afternoon when his father took him to discover ice.

and not say, oh, that’s how you do it.

Anyway, I’m mostly with Batuman here; once she shows you how it works, the trick is a little corny.  Maybe I already knew this?  Maybe this is why I always preferred the first line of, and for that matter all of, Chabon’s The Mysteries of Pittsburgh to Kavalier & Clay.  Here’s the opening:

At the beginning of summer I had lunch with my father, the gangster, who was in town for the weekend to transact some of his vague business.

In medias res, yes — but not so overstuffed, just one piece of information (the gangster!) presented to start with.  No names.  The word “transact” — boy, there’s nothing I like more than a perfect placement of a boring word.  I think it’s a lot like the first line of The Idiot:

“I didn’t know what email was until I got to college.”

Except Chabon focuses on rhyme (summer-father-gangster) while Batuman is all scansion — perfect trochees!



Of course there are a lot of reasons I’m predisposed to like this.  It’s about bookish, ambitious, romantically confused Harvard undergrads, which Batuman and I both were.  There are a lot of jokes in it.  There are some math scenes.

There’s even a biographical overlap:  Batuman wrote her college novel right after college, just like I did.  And then she finished her Ph.D. and put the manuscript in a drawer for a long time, just like I did.  (I don’t know if she carried out the intermediate step, as I did, of getting the book rejected by every big commercial house in New York.)  And then at some point in the run-up to middle age she looked at those pages again and said words to the effect of “This is not actually that bad…”

So let me say it straight:  The Idiot makes me think about the alternate universe where I stayed a novelist instead of going back to grad school in math, a universe where I spent years working really hard to sharpen and strengthen the work I was doing.  This is the kind of novel I would have been aiming my ambition at writing; and I still wouldn’t have done it this well.  The existence of The Idiot releases me from any regrets.

(I don’t have many.  Math, for me, is fun.  Writing fiction is not.)


What is the median length of homeownership?

Well, it’s longer than it used to be, per Conor Dougherty in the New York Times:

The median length of time people have owned their homes rose to 8.7 years in 2016, more than double what it had been 10 years earlier.

The accompanying chart shows that “median length of homeownership” used to hover at  just under 4 years.  That startled me!  Doesn’t 4 years seem like a pretty short length of time to own a house?

When I thought about this a little more, I realized I had no idea what this meant.  What is the “median length of homeownership” in 2017?  Does it mean you go around asking each owner-occupant how long they’ve lived in their house, and take the median of those numbers?  Probably not:  when people were asked that in 2008, the median answer was 10 years, and whatever the Times was measuring was about 3.7 years in 2008.

Does it mean you look at all house sales in 2017, compute for each one the time since the house previously sold, and take the median of those numbers?

Suppose half of all houses changed hands every year, and the other half changed hands every thirty years.  Are the lengths of ownership we’re medianning half “1 year” and half “30 years”, or 30/31 of them “1 year” and 1/31 of them “30 years”?
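(Toy arithmetic for that example, spelled out; my own bookkeeping, obviously nothing to do with whatever Moody’s actually computed:)

```python
import statistics

FAST_HOLD, SLOW_HOLD = 1, 30     # years between sales for the two kinds of houses
N_FAST = N_SLOW = 50

# (a) one ownership length per house
per_house = [FAST_HOLD] * N_FAST + [SLOW_HOLD] * N_SLOW
print("median, one spell per house:", statistics.median(per_house))      # 15.5

# (b) one ownership length per *sale*: over a 30-year window the fast houses
#     generate 50*30 sales of length 1 and the slow houses 50 sales of length 30
per_sale = [FAST_HOLD] * (N_FAST * SLOW_HOLD) + [SLOW_HOLD] * (N_SLOW * FAST_HOLD)
print("median, one spell per sale: ", statistics.median(per_sale))       # 1
```

Same toy world, two defensible medians, differing by a factor of 15; which is exactly why it matters what the chart is actually medianning.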

There are about 75 million owner-occupied housing units in the US and 4-6 million homes sold per year, so the mean number of sales per unit per year is certainly way less than 1/4; of course, there’s no reason this mean should be close to the median of, well, whatever we’re taking the median of.

Basically I have no idea what’s being measured.  The Times doesn’t link to the Moody’s Analytics study it’s citing, and Dougherty says that study’s not public.  I did some Googling for “median length of homeownership” and as far as I can tell this isn’t a standard term of art with a consensus definition.

As papers run more data-heavy pieces I’d love to see a norm develop that there should be some way for the interested reader to figure out exactly what the numbers in the piece refer to.  Doesn’t even have to be in the main text.  Could be a linked sidebar.  I know not everybody cares about this stuff.  But I do!


Fox-Neuwirth-Fuks cells, quantum shuffle algebras, and Malle’s conjecture for function fields

I’ve gotten behind on blogging about preprints!  Let me tell you about a new one I’m really happy with, joint with TriThang Tran and Craig Westerland, which we posted a few months ago.

Malle’s conjecture concerns the number of number fields with fixed Galois group and bounded discriminant, a question I’ve been interested in for many years now.  We recall how it goes.

Let K be a global field — that is, a number field or the function field of a curve over a finite field.  Any degree-n extension L/K (here L could be a field or just an etale algebra over K — hold that thought) gives you a homomorphism from Gal(K) to S_n, whose image we call, in a slight abuse of notation, the Galois group of L/K.

Let G be a transitive subgroup of S_n, and let N(G,K,X) be the number of degree-n extensions of K whose Galois group is G and whose discriminant has norm at most X.  Every permutation g in G has an index, which is just n minus the number of orbits of g.  So the permutations of index 1 are the transpositions, those of index 2 are the three-cycles and the double-flips, etc.  We denote by a(G) the reciprocal of the minimal index of any non-identity element of G.  In particular, a(G) is at most 1, and is equal to 1 if and only if G contains a transposition.

(Wait, doesn’t a transitive subgroup of S_n with a transposition have to be the whole group?  No, that’s only for primitive permutation groups.  D_4 is a thing!)
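(If you want to see the bookkeeping in a concrete case, here’s a quick pure-Python sketch, mine and not anything from the paper, that generates that copy of D_4 inside S_4 and computes the indices of its non-identity elements, hence a(G):)

```python
from itertools import product

def compose(p, q):
    """(p*q)(i) = p(q(i)); permutations as tuples mapping i -> p[i], 0-indexed."""
    return tuple(p[q[i]] for i in range(len(p)))

def generate(gens):
    """Close a set of permutations under composition (the subgroup they generate)."""
    n = len(gens[0])
    group = set(gens) | {tuple(range(n))}
    while True:
        new = {compose(p, q) for p, q in product(group, repeat=2)} - group
        if not new:
            return group
        group |= new

def index(p):
    """Index of a permutation: n minus its number of orbits (cycles plus fixed points)."""
    n, seen, orbits = len(p), set(), 0
    for i in range(n):
        if i not in seen:
            orbits += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return n - orbits

# D_4 acting on the vertices {0,1,2,3} of a square: a 4-cycle and a reflection
r = (1, 2, 3, 0)          # 0 -> 1 -> 2 -> 3 -> 0
s = (2, 1, 0, 3)          # swap the opposite corners 0 and 2
D4 = generate([r, s])
e = tuple(range(4))

indices = sorted(index(g) for g in D4 if g != e)
print(len(D4), indices)               # 8 [1, 1, 2, 2, 2, 3, 3]
print("a(D_4) =", 1 / min(indices))   # 1.0
```

Sure enough it finds transpositions among the eight elements, so a(D_4) = 1, even though D_4 isn’t all of S_4.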

Malle’s conjecture says that, for every \epsilon > 0, there are constants c,c_\epsilon such that

c X^{a(G)} < N(G,K,X) < c_\epsilon X^{a(G)+\epsilon}

We don’t know much about this.  It’s easy for G = S_2.  A theorem of Davenport-Heilbronn (K=Q) and Datskovsky-Wright (general case) proves it for G = S_3.  Results of Bhargava handle S_4 and S_5, and Wright proved it for abelian G.  I kind of think this new theorem of Alex Smith implies it for K=Q and every dihedral G of 2-power order?  Anyway:  we don’t know much.  S_6?  No idea.  The best upper bounds for general n are still the ones I proved with Venkatesh a long time ago, and are very much weaker than what Malle predicts.

Malle’s conjecture fans will point out that this is only the weak form of Malle’s conjecture; the strong form doesn’t settle for an unspecified X^\epsilon, but specifies an asymptotic X^a (log X)^b.   This conjecture has the slight defect that it’s wrong sometimes; my student Seyfi Turkelli wrote a nice paper which I think resolves this problem, but the revised version of the conjecture is a bit messy to state.

Anyway, here’s the new theorem:

Theorem (E-Tran-Westerland):  Let G be a transitive subgroup of S_n.  Then for all q sufficiently large relative to G and for every \epsilon > 0, there is a constant c_\epsilon such that

N(G,\mathbf{F}_q(t),X) < c_\epsilon X^{a(G)+\epsilon}

for all X>0.

In other words:

The upper bound in the weak Malle conjecture is true for rational function fields.

A few comments.

  1.  We are still trying to fix the mistake in our 2012 paper about stable cohomology of Hurwitz spaces.  Craig and I discussed what seemed like a promising strategy for this in the summer of 2015.  It didn’t work.  That result is still unproved.  But the strategy developed into this paper, which proves a different and in some respects stronger theorem!  So … keep trying to fix your mistakes, I guess?  There might be payoffs you don’t expect.
  2. We can actually show that the X^\epsilon can be taken to be a power of log X, but not the one predicted by Malle.
  3. Is there any chance of getting the strong Malle conjecture?  No, and I’ll explain why.  Consider the case G=S_4.  Then a(G) = 1, and in this case the strong form of Malle’s conjecture predicts that N(S_4,K,X) is on the order of X, not just X^{1+eps}.   But our method doesn’t really distinguish between quartic fields and other kinds of quartic etale algebras.  So it’s going to count all algebras L_1 x L_2, where L_1 and L_2 are quadratic fields with discriminants X_1 and X_2 respectively, with X_1 X_2 < X.  We already know there’s approximately one quadratic field per discriminant, on average, so the number of such algebras is about the number of pairs (X_1, X_2) with X_1 X_2 < X, which is about X log X (see the quick estimate just after this list.)  So there’s no way around it:  our method is only going to touch weak Malle.  Note, by the way, that for quartic extensions, the strong Malle conjecture was proved by Bhargava, and he observes the same phenomenon:

    …inherent in the zeta function is a sum over all etale extensions of Q, including the “reducible” extensions that correspond to direct sums of quadratic extensions. These reducible quartic extensions far outnumber the irreducible ones; indeed, the number of reducible quartic extensions of absolute discriminant at most X is asymptotic to X log X, while we show that the number of quartic field extensions of absolute discriminant at most X is only O(X).

  4.  I think there is, on the other hand, a chance of getting rid of the “q sufficiently large relative to G” condition and proving something for a fixed F_q(t) and all finite groups G.
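(About the X log X in item 3:  ignoring the arithmetic conditions on which integers actually occur as discriminants, the number of pairs of positive integers (X_1, X_2) with X_1 X_2 < X is

\sum_{X_1 < X} \lfloor X / X_1 \rfloor \approx X \sum_{X_1 < X} \frac{1}{X_1} \approx X \log X,

which is just the standard divisor-counting estimate.)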


OK, so how did we prove this?

Continue reading
