The median length of time people have owned their homes rose to 8.7 years in 2016, more than double what it had been 10 years earlier.

The accompanying chart shows that “median length of homeownership” used to hover at just under 4 years. That startled me! Doesn’t 4 years seem like a pretty short length of time to own a house?

When I thought about this a little more, I realized I had no idea what this meant. What is the “median length of homeownership” in 2017? Does it mean you go around asking each owner-occupant how long they’ve lived in their house, and take the median of those numbers? Probably not: when people were asked that in 2008, the median answer was 10 years, and whatever the Times was measuring was about 3.7 years in 2008.

Does it mean you look at all house sales in 2017, subtract the time since last sale, and take the median of *those* numbers?

Suppose half of all houses changed hands every year, and the other half changed hands every thirty years. Are the lengths of ownership we’re medianning half “one year” and half “thirty years”, or 30/31 “one year” and 1/31 “thirty years”?

There are about 75 million owner-occupied housing units in the US and 4-6 million homes sold per year, so the *mean* number of sales per unit per year is certainly way less than 1/4; of course, there’s no reason this mean should be close to the median of, well, whatever we’re taking the median of.
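To see how much the choice of definition matters, here's a toy sketch in Python. The house counts are invented for illustration, and the "ask every owner" tenures are crude midpoints, not real data:

```python
import statistics

# Hypothetical market: 30 houses turn over every year, 30 every thirty years.

# Definition 1: median tenure among this year's SALES.
# The fast houses contribute 30 one-year sales; the slow ones about 1 thirty-year sale.
sale_tenures = [1] * 30 + [30] * 1

# Definition 2: ask every OWNER how long they've lived there so far.
# Crudely model fast owners as mid-tenure (0.5 yr) and slow owners as mid-tenure (15 yr).
owner_tenures = [0.5] * 30 + [15] * 30

print(statistics.median(sale_tenures))   # sales-weighted median: tiny
print(statistics.median(owner_tenures))  # owner-weighted median: much bigger
```

Same market, wildly different "median length of homeownership" depending on whether you weight by sales or by owners — which is exactly the ambiguity in the article.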

Basically I have no idea what’s being measured. The Times doesn’t link to the Moody’s Analytics study it’s citing, and Dougherty says that study’s not public. I did some Googling for “median length of homeownership” and as far as I can tell this isn’t a standard term of art with a consensus definition.

As papers run more data-heavy pieces I’d love to see a norm develop that there should be some way for the interested reader to figure out exactly what the numbers in the piece refer to. Doesn’t even have to be in the main text. Could be a linked sidebar. I know not everybody cares about this stuff. But I do!


Malle’s conjecture concerns the number of number fields with fixed Galois group and bounded discriminant, a question I’ve been interested in for many years now. We recall how it goes.

Let K be a global field — that is, a number field or the function field of a curve over a finite field. Any degree-n extension L/K (here L could be a field or just an etale algebra over K — hold that thought) gives you a homomorphism from Gal(K) to S_n, whose image we call, in a slight abuse of notation, the *Galois group* of L/K.

Let G be a transitive subgroup of S_n, and let N(G,K,X) be the number of degree-n extensions of K whose Galois group is G and whose discriminant has norm at most X. Every permutation g in G has an *index*, which is just n minus the number of orbits of g. So the permutations of index 1 are the transpositions, those of index 2 are the three-cycles and the double-flips, etc. We denote by a(G) the reciprocal of the minimal index of any nontrivial element of G. In particular, a(G) is at most 1, and is equal to 1 if and only if G contains a transposition.

(Wait, doesn’t a transitive subgroup of S_n with a transposition have to be the whole group? No, that’s only for *primitive* permutation groups. D_4 is a thing!)
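Just to make the definition concrete, here's a quick sketch in Python (permutations in one-line image form; `index` is my name for the function, not notation from anyone's paper):

```python
def index(perm):
    """n minus the number of orbits of perm, where perm[i] is the image of i."""
    n, seen, orbits = len(perm), set(), 0
    for i in range(n):
        if i not in seen:
            orbits += 1
            j = i
            while j not in seen:   # walk the cycle through i
                seen.add(j)
                j = perm[j]
    return n - orbits

print(index((1, 0, 2, 3)))  # 1: a transposition in S_4 (orbits {0,1},{2},{3})
print(index((1, 2, 0, 3)))  # 2: a three-cycle
print(index((1, 0, 3, 2)))  # 2: a double-flip
```

So for any G containing a transposition, the minimal index is 1 and a(G) = 1.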

Malle’s conjecture says that, for every eps > 0, there are constants c and C such that

c X^{a(G)} ≤ N(G,K,X) ≤ C X^{a(G)+eps}

for all sufficiently large X.

We don’t know much about this. It’s easy for G = S_2. A theorem of Davenport-Heilbronn (K=Q) and Datskovsky-Wright (general case) proves it for G = S_3. Results of Bhargava handle S_4 and S_5, and Wright proved it for abelian G. I kind of think this new theorem of Alex Smith implies it for K=Q and every dihedral G of 2-power order? Anyway: we don’t know much. S_6? No idea. The best upper bounds for general n are still the ones I proved with Venkatesh a long time ago, and are very much weaker than what Malle predicts.

Malle’s conjecture fans will point out that this is only the *weak* form of Malle’s conjecture; the strong form doesn’t settle for an unspecified X^eps, but specifies an asymptotic N(G,K,X) ~ c X^{a(G)} (log X)^{b(G,K)-1}. This conjecture has the slight defect that it’s wrong sometimes; my student Seyfi Turkelli wrote a nice paper which I think resolves this problem, but the revised version of the conjecture is a bit messy to state.

Anyway, here’s the new theorem:

**Theorem** (E-Tran-Westerland): Let G be a transitive subgroup of S_n, and let eps > 0. Then for all q sufficiently large relative to G, there is a constant C such that

N(G, F_q(t), X) ≤ C X^{a(G)+eps}

for all X>0.

In other words:

*The upper bound in the weak Malle conjecture is true for rational function fields.*

A few comments.

- We are still trying to fix the mistake in our 2012 paper about stable cohomology of Hurwitz spaces. Craig and I discussed what seemed like a promising strategy for this in the summer of 2015. It didn’t work. That result is still unproved. But the strategy developed into this paper, which proves a different and in some respects stronger theorem! So … keep trying to fix your mistakes, I guess? There might be payoffs you don’t expect.
- We can actually show the upper bound with X^eps replaced by a power of log X, though not the power of log predicted by Malle.
- Is there any chance of getting the strong Malle conjecture? No, and I’ll explain why. Consider the case G=S_4. Then a(G) = 1, and in this case the strong form of Malle’s conjecture predicts N(S_4,K,X) is on order X, not just X^{1+eps}. But our method doesn’t really distinguish between quartic fields and other kinds of quartic etale algebras. So it’s going to count all algebras L_1 x L_2, where L_1 and L_2 are quadratic fields with discriminants X_1 and X_2 respectively, with X_1 X_2 < X. We already know there’s approximately one quadratic field per discriminant, on average, so the number of such algebras is about the number of pairs (X_1, X_2) with X_1 X_2 < X, which is about X log X. So there’s no way around it: our method is only going to touch weak Malle. Note, by the way, that for quartic extensions, the strong Malle conjecture was proved by Bhargava, and he observes the same phenomenon:

…inherent in the zeta function is a sum over all etale extensions of Q, including the “reducible” extensions that correspond to direct sums of quadratic extensions. These reducible quartic extensions far outnumber the irreducible ones; indeed, the number of reducible quartic extensions of absolute discriminant at most X is asymptotic to X log X, while we show that the number of quartic field extensions of absolute discriminant at most X is only O(X).

- I think there is, on the other hand, a chance of getting rid of the “q sufficiently large relative to G” condition and proving something for a fixed F_q(t) and all finite groups G.
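The X log X count of pairs in the S_4 discussion above is easy to check numerically. Here's a sketch, pretending (as in the heuristic) that there's exactly one quadratic algebra per discriminant, so we're just counting lattice points under a hyperbola:

```python
from math import log

def reducible_count(X):
    # number of pairs (d1, d2) of positive integers with d1 * d2 <= X:
    # for each d1 there are floor(X / d1) choices of d2
    return sum(X // d1 for d1 in range(1, X + 1))

X = 10**5
print(reducible_count(X) / (X * log(X)))  # close to 1, as the X log X heuristic predicts
```

(The true asymptotic for this divisor-type sum is X log X plus a lower-order multiple of X, so the ratio drifts toward 1 slowly.)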

OK, so how did we prove this?

Here’s a sketch. The starting point, as always, is that the G-extensions of F_q(t) are in bijection with G-covers of P^1/F_q, which are in turn in bijection with the F_q-rational points of a moduli space of G-covers, the *Hurwitz space*. In fact there is a whole sequence of Hurwitz spaces Hur_1, Hur_2, … where Hur_n is an n-dimensional variety parametrizing G-covers with n branch points.

Then the game is to control the cohomology of these Hurwitz spaces and thereby control their numbers of F_q-rational points via the Grothendieck-Lefschetz fixed point theorem.

A lot of our work with Akshay has been in proving *stable cohomology* for these spaces: that is, showing that the cohomology group H^i(Hur_n) is eventually constant as n grows with i fixed.

But in this paper we do something softer! We show that the Betti numbers h^i(Hur_n) grow at most *polynomially* in n. This fact plus Grothendieck-Lefschetz allows us to show

|Hur_n(F_q)| ≤ C n^d q^n

for some d. The discriminant of these covers turns out to be on order q^{n/a}, so taking X = q^{n/a} makes q^n equal to X^a and n a constant multiple of log X. So we end up with

N(G, F_q(t), X) ≤ C' (log X)^d X^{a(G)},

which is what we wanted.
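Spelled out, the bookkeeping is just a change of variables. Here's a sketch, assuming a point-count bound of the shape |Hur_n(F_q)| ≤ C n^d q^n and discriminants of norm about q^{n/a}:

```latex
X = q^{n/a} \iff n = a \log_q X,
\qquad\text{so}\qquad
N(G, \mathbf{F}_q(t), X)
  \le C \,(a \log_q X)^d\, q^{a \log_q X}
  = C' (\log X)^d\, X^{a(G)}
  \le C'' X^{a(G)+\epsilon}.
```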

To get control of these numbers, we make use of a combinatorial description of the Hurwitz spaces. Let V be the vector space of dimension |G|-1 freely spanned by the nontrivial elements of G. Then it turns out V is not just any vector space; it’s a *braided vector space*, which means there’s a map

tau: V ⊗ V → V ⊗ V

satisfying a certain relation; oh, let’s not write it down, I’ll just tell you what tau is in this case. V ⊗ V is spanned by pairs of elements (g,h), and we have

tau(g, h) = (h, h^{-1}gh).

A braided vector space V is so-called because its nth tensor power V_n picks up an action of the n-strand braid group Br_n. (Note that Br_2 is Z, so on V_2 we just have an invertible endomorphism, namely tau.) In this case, the braid group acts on n-tuples

(g_1, …, g_n) in (G-0)^n

by the usual Hurwitz moves, where “pull strand i past strand i+1” sends the above tuple to

(g_1, …, g_{i-1}, g_{i+1}, g_{i+1}^{-1} g_i g_{i+1}, g_{i+2}, …, g_n).
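A minimal sketch in Python, with G = S_3 and permutations in one-line image form, checking that moves of this shape really do satisfy the braid relation and preserve the total product (the move convention is the one I use here; names like `hurwitz` are mine):

```python
from itertools import product, permutations

def mul(a, b):   # compose permutations in one-line form: (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(a)))

def inv(a):
    out = [0] * len(a)
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

def hurwitz(tup, i):   # pull strand i past strand i+1: (g, h) -> (h, h^{-1} g h)
    g, h = tup[i], tup[i + 1]
    return tup[:i] + (h, mul(inv(h), mul(g, h))) + tup[i + 2:]

nontrivial = [g for g in permutations(range(3)) if g != (0, 1, 2)]

# braid relation sigma_1 sigma_2 sigma_1 = sigma_2 sigma_1 sigma_2 on all triples
ok_braid = all(
    hurwitz(hurwitz(hurwitz(t, 0), 1), 0) == hurwitz(hurwitz(hurwitz(t, 1), 0), 1)
    for t in product(nontrivial, repeat=3)
)
# the ordered product of the tuple is unchanged by either move
ok_prod = all(
    mul(mul(s[0], s[1]), s[2]) == mul(mul(t[0], t[1]), t[2])
    for t in product(nontrivial, repeat=3)
    for s in (hurwitz(t, 0), hurwitz(t, 1))
)
print(ok_braid, ok_prod)  # True True
```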

It turns out that the Hurwitz space is a K(pi,1) corresponding to the action of the braid group on this finite set; in particular,

H^i(Hur_n) = H^i(Br_n, V_n),

and so our problem of bounding Betti numbers comes down to a combinatorial problem in group cohomology.

Let W_n be the coinvariant space of V_n under the braid group Br_n; you can think of this as being spanned by the orbits of the braid group on (G-0)^n. Then concatenation of tuples gives you a ring

R = ⊕_{n ≥ 0} W_n.

And as mentioned above, the Hurwitz space Hur_n is the finite cover of the configuration space Conf_n of n points in the plane corresponding to the action of Br_n = pi_1(Conf_n) on the finite set (G-0)^n.
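For small cases you can compute dim W_n by brute force: enumerate tuples and merge them along Hurwitz moves. A sketch in Python (G = S_3 in one-line permutation form; the move convention tau(g,h) = (h, h^{-1}gh) and the function names are mine):

```python
from itertools import product, permutations

def mul(a, b):
    return tuple(a[b[i]] for i in range(len(a)))

def inv(a):
    out = [0] * len(a)
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

def hurwitz(tup, i):   # (…, g, h, …) -> (…, h, h^{-1} g h, …)
    g, h = tup[i], tup[i + 1]
    return tup[:i] + (h, mul(inv(h), mul(g, h))) + tup[i + 2:]

def braid_orbits(group, n):
    """Orbits of the Hurwitz moves on (nontrivial elements)^n; dim W_n = number of orbits."""
    e = tuple(range(len(group[0])))
    todo = set(product([g for g in group if g != e], repeat=n))
    orbits = []
    while todo:
        orbit, frontier = set(), {todo.pop()}
        while frontier:   # flood-fill one orbit
            t = frontier.pop()
            orbit.add(t)
            for i in range(n - 1):
                s = hurwitz(t, i)
                if s in todo:
                    todo.remove(s)
                    frontier.add(s)
        orbits.append(orbit)
    return orbits

S3 = list(permutations(range(3)))
orbits = braid_orbits(S3, 3)
print(len(orbits))   # dim W_3 for G = S_3
# sanity check: the ordered product is constant on each orbit, as it must be
assert all(len({mul(mul(t[0], t[1]), t[2]) for t in o}) == 1 for o in orbits)
```

(Forward moves suffice here: each move is a permutation of a finite set, so its inverse is a positive power of it.)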

This much Craig, Akshay and I already understood, and indeed the driver of our first paper was an argument in the homological algebra of R-modules.

But in this paper we invoke much deeper algebra, from a world that was totally new to me. You see, we should have known that you *never* want to crush a group action down to its orbits, or a braid group representation down to its coinvariants. That is morally wrong! As a modern categorical type of person you want to keep track of the whole shebang.

Now things are gonna get vague because it’s a long paper! But here goes. It turns out that to any braided vector space V you can attach a *quantum shuffle algebra* A(V). The ring R keeps track of H_0 of Hurwitz space, but it turns out that the cohomology of A(V) keeps track of *all* the cohomology of Hurwitz space:

H^{n-i}(Hur_n) ≅ Ext^{i,n}_{A(V)}(Q, Q).

(OK, OK, I forgot to mention that A(V) is graded, so this Ext is bigraded, and that it’s not actually exactly A(V) but a twisted version of it, etc. etc.)

There’s still a way to go from here: for instance, we actually mostly don’t use A(V) itself but a natural subalgebra called the Nichols algebra, which is good enough for our purposes. But maybe the main idea is this. Because we’re not actually trying to compute the dimensions of these Ext groups, only bound them above, we can get away with some really soft arguments. In particular, we don’t ever actually compute cohomology; we basically write down a minimal resolution of A(V) and then show that the graded pieces of the terms have polynomially growing ranks. The cohomology groups are subquotients of the terms, and that’s where we get our upper bounds!

OK I can’t resist one more weird comment. Did you notice that in our description of A(V) we never really used the group law on G? We only used that you could conjugate one element of G by another. So we’re not actually using that G is a group, only that it’s a *quandle*. The whole setup of our paper works for any quandle! I don’t know if it has any interesting topological meaning, though. Also — is the adjectival form for quandle “quandelic” or “quandular”?
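For the record, here are the quandle axioms, checked in Python for the conjugation quandle of S_3 (the only structure on G the construction above actually uses; `tri` is my name for the quandle operation):

```python
from itertools import permutations

def mul(a, b):
    return tuple(a[b[i]] for i in range(len(a)))

def inv(a):
    out = [0] * len(a)
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

def tri(x, y):   # conjugation quandle operation: x <| y = y^{-1} x y
    return mul(inv(y), mul(x, y))

G = list(permutations(range(3)))   # S_3, viewed only as a quandle

# (1) idempotence: x <| x = x
assert all(tri(x, x) == x for x in G)
# (2) each right translation x -> x <| y is a bijection of G
assert all({tri(x, y) for x in G} == set(G) for y in G)
# (3) self-distributivity: (x <| y) <| z = (x <| z) <| (y <| z)
assert all(
    tri(tri(x, y), z) == tri(tri(x, z), tri(y, z))
    for x in G for y in G for z in G
)
print("S_3 is a quandle under conjugation")
```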


For instance:

These iterative failures are, at a very deep level, the essence of creating new knowledge, and are therefore inseparable from the job. If you can’t imagine going to bed at the end of nearly every day with a nagging feeling that you could have done better, academe is not for you.

The academic workplace is a really unusual one. For instance, it’s one of the few places to work where you’re nobody’s boss and nobody’s your boss. It really suits some people — I’m one. But lots of other people feel otherwise: it’s too slow, too lacking in immediate feedback, too content with the way “it’s always been done.” And a lot of those people have great things to contribute to mathematics and don’t fit in the system we’ve set up to pay people to do math.

Also, this:

So while the ideal career path leads from graduate school to a tenure-track position, the one you will more likely find yourself on leads from graduate school to a series of short-term positions that will require you to move — often.

is less true in math than in many other areas, but still kind of true. And it works badly not just for people who temperamentally hate moving, but for people who want to have kids and have a limited childbearing window.

McCormack is right: without catastrophizing, we should always be trying to give our Ph.D. students as real a picture as possible of what academic life is like, and not just the advisor’s life with tenure at an R1 university. Lots of people will still happily sign up. But other people will think more seriously about other great ways to do mathematics.


“the condition of being queer and disabled isn’t the sum of the condition of being queer and the condition of being disabled, or even some linear combination of those, it’s just its own thing, which draws input from each of those conditions in some more complicated way and which has features of its own particular to the intersection”

it’s something I think most mathematicians would find extremely natural and intuitive.


GOP fans will say: “How can this be such a big disaster, crying liberals? Ten years ago there was no Obamacare, and people did fine.”

Some people did fine! Some people didn’t do fine.

You’ll hear people say, in the same sad snappish tone of voice, “Parents today are obsessed with safety, in my day kids rode in the way back of the station wagon, they didn’t wear seatbelts, they crossed the street by themselves, and they were fine.”

Some kids were fine! But just so you know: in 1975, about 1600 kids 13 and under were killed by cars as pedestrians, and another 1400 were killed in crashes while riding in cars. In 2015, those numbers were 186 and 663. Throw in teenagers and that’s another 8700 dead passengers in 1975; down to 2715 in 2015.

People did fine, except for the thousands of kids who got killed back then who wouldn’t get killed now.

A while ago I was reading the reunion book for the Harvard class of 1893, the people who graduated exactly 100 years before me. You know what you notice in their bios? A lot of people’s children died. In 1920, about 8% of American babies died before the age of 1. It’s now 0.6%.

People were fine! They had a baby, the baby died, they got on with their life.

But I like it better when babies hardly ever die, when thousands of children don’t get killed in car crashes, and when Americans have access to affordable health insurance even if they’ve been sick before. The past was fine. But it was also bad.


- “We have always prioritized fast and cheap over safety and privacy — maybe this time we can make better choices.”
- He briefly showed a demo where, given values of a polynomial, a machine can put together a few lines of code that successfully computes the polynomial. But the code looks *weird* to a human eye. To compute some quadratic, it nests for-loops and adds things up in a funny way that ends up giving the right output. So has it really “learned” the polynomial? I think in computer science, you typically feel you’ve learned a function if you can accurately predict its value on a given input. For an algebraist like me, a function determines but isn’t determined by the values it takes; to me, there’s something about that quadratic polynomial the machine has failed to grasp. I don’t think there’s a right or wrong answer here, just a cultural difference to be aware of. Relevant: Norvig’s description of “the two cultures” at the end of this long post on natural language processing (which is interesting all the way through!)
- Norvig made the point that traditional computer programs are very modular, leading to a highly successful debugging tradition of zeroing in on the precise part of the program that is doing something wrong, then fixing that part. An algorithm or process developed by a machine, by contrast, may not have legible “parts”! If a neural net is screwing up when classifying something, there’s no meaningful way to say “this neuron is the problem, let’s fix it.” We’re dealing with highly non-modular complex systems which have evolved into a suboptimally functioning state, and you have to find a way to improve function which doesn’t involve taking the thing apart and replacing the broken component. Of course, we already have a large professional community that works on exactly this problem. They’re called therapists. And I wonder whether the future of debugging will look a lot more like clinical psychology than it does like contemporary software engineering.
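I didn't see the demo code, but a hypothetical program in the spirit described above — nesting loops and adding things up in a funny way that happens to compute a quadratic — might look like this (`weird_square` is my invention, not Norvig's actual example):

```python
def weird_square(n):
    """Computes f(n) = n**2 for nonnegative n without ever squaring anything."""
    total = 0
    for i in range(n):
        for j in range(n):
            total += 1   # add 1 once per pair (i, j): there are n * n such pairs
    return total

print([weird_square(n) for n in range(5)])  # [0, 1, 4, 9, 16]
```

It predicts the right values on every input it will ever see, and yet there's a sense in which it hasn't grasped "the polynomial n^2" at all.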


The message contained a huge amount of information about a side of my family I’ve never known well. I’m still going through it all. But I wanted to share some of it while it was on my mind.

Here’s the manifest for the voyage of the S.S. Polonia, which left Danzig on September 17, 1923 and arrived in New York on October 1.

Owadje Ellenberg (always known as Owadia in my family) was my great-grandfather. He came to New York with his wife Sura-Fejga (known to us as Sara), Markus (Max), Etia-Race (Ethel), Leon (Leonard), Samuel and Bernard. Sara was seven months pregnant with my uncle Morris Ellenberg, the youngest child.

Owadje gives his occupation as “mason”; his son Max, only 17, was listed as “tailor.” They came from Stanislawow, Poland, which is now the city of Ivano-Frankivsk in Ukraine. On the immigration form you had to list a relative in your country of origin; Owadje listed his brother, Zacharja, who lived on Zosina Wola 6 in Stanislawow. None of the old street names have survived to the present, but looking at this old map of Stanislawow

it seems pretty clear Zosina Wola is the present day Yevhena Konoval’tsya Street. I have no way of knowing whether the numbering changed, but #6 Yevhena Konoval’tsya St. seems to be the setback building here:

So this is the best guess I have as to where my ancestors lived in the old country. The name Zosina Wola lives on only in the name of a bar a few blocks down Yevhena Konoval’tsya:

Owadje, now Owadia, files a declaration of intention to naturalize in 1934:

His signature is almost as bad as mine! By 1934 he’s living in Borough Park, Brooklyn, a plasterer. 5 foot 7 and 160 lb; I think every subsequent Ellenberg man has been that size by the age of 15. Shtetl nutrition. There are two separate questions on this form, “color” and “race”: for color he puts white, for race he puts “Hebrew.” What did other Europeans put for race? He puts his hometown as Sopoff, which I think must be the modern Sopiv; my great-grandmother Sara was from Obertyn, quite close by. I guess they moved to the big city, Stanislawow, about 40 miles away, when they were pretty young; they got married there in 1902, when they were 21. The form says he previously filed a declaration of intention in 1926. What happened? Did he just not follow through, or was his naturalization rejected? Did he ever become a citizen? I don’t know.

Here’s what his house in Brooklyn looks like now:

Did you notice whose name was missing from the Polonia’s manifest? Owadje’s oldest son, my grandfather, Julius. Except one thing I’ve learned from all this is that *I don’t actually know what my grandfather’s name was.* Julius is what we called him. But my dad says his passport says “Israel Ellenberg.” And his naturalization papers

have him as “Juda Ellenberg” (Juda being the Anglicization of Yehuda, his and my Hebrew name.) So didn’t that have to be his legal name? But how could that not be on his passport?

**Update:** Cousin Phyllis came through for me! My grandfather legally changed his name to Julius on June 13, 1927, four months after he filed for naturalization.

My grandfather was the first to come to America, in December 1920, and he came alone. He was 16. He managed to make enough money to bring the whole rest of the family in late 1923, which was a good thing because in May 1924 Calvin Coolidge signed the Johnson-Reed Act which clamped down on immigration by people thought to be debasing the American racial stock: among these were Italians, Chinese, Czechs, Spaniards, and Jews, definitely Jews.

Another thing I didn’t know: my grandfather lists his port of entry as Vanceboro, Maine. That’s not a seaport; it’s a small town on the Canadian border. So Julius/Juda/Israel must have sailed to Canada; this I never knew. Where would he have landed? Sounds like most Canadian immigrants landed at Quebec or Halifax, and Halifax makes much more sense if he entered the US at Vanceboro. But why did he sail to Canada instead of the US? And why did he leave from France (the form says “Montrese, France,” a place I can’t find) instead of Poland? (**Update:** My cousin comes through again: another record shows that Julius arrived on Dec 7, 1920 in St. John, New Brunswick, conveyed in 3rd class by the S.S. Corsican. Looks like this ship would have been coming from England, not France; I don’t know how to reconcile that.)

In 1927, when he naturalized, Julius lived at 83 2nd Avenue, a building built in 1900 at the boundary of the Bowery and the East Village. Here’s what it looks like now:

Not a lot of new immigrants able to afford rent there these days, I’m betting. Later he’d move to Long Beach, Long Island, where my father and his sisters grew up.

My first-cousin-once-removed-in-law went farther back, too, all the way back to Mojżesz Ellenberg, who was born sometime in the middle of the 18th century. The Hapsburg Empire required Jews to adopt surnames only in 1787; so Mojżesz could very well have been the first Ellenberg. You may be thinking he’s Owadia’s father’s father’s father, but no — Ellenberg was Owadia’s *mother’s* name. I was puzzled by this but actually it was common. What it meant is that Mordko Kasirer, Owadia’s father, didn’t want to pay the fee for a civil marriage — why should he, when he was already married to Rivka Ellenberg in the synagogue? But if you weren’t legally married, your children weren’t allowed to take their father’s surname. So be it. Mordko wasn’t gonna get ripped off by the system. Definitely my relative.

**Update:** Cousin Phyllis Rosner sends me my grandfather’s birth record. At birth in Poland he’s Izrael Juda Ellenberg. This still doesn’t answer what his legal name in the US was, but it explains the passport!


- Thanks (8 times)
- thanks (6 times)
- Yep (6 times)
- Yes (5 times)
- yep (5 times)
- Thanks so much (5 times)
- RT (5 times)
- I know right (4 times)

More detailed tweet analysis later.

]]>