I have occasionally worked to lose weight, never too seriously because my weight problem has never been too serious. I used to sometimes do the Scarsdale diet in sync with my dad and once, a few years back, I went six weeks without carbs.

Anyway, a month without restaurant food has gone by and I’m 13 pounds lighter. Even though I’m eating all the cakes and cookies the kids are baking, snacking at night, going through enormous amounts of eggs, doing everything wrong. I looked up the records from doctor’s appointments and this is the least I’ve weighed since 2011. Who knew all it took was an order from the Governor to stay at home and make my own food?

When you plot the number of reported deaths from COVID on a log scale you get pictures that look like this one, by John Burn-Murdoch at the Financial Times:

A straight line represents exponential growth, which is what one might expect to see in the early days of a pandemic according to baby models. You’ll note that the straight line doesn’t last very long, thank goodness; in just about every country the line starts to bend. Why are COVID deaths concave? There are quite a few possible reasons.

Suppression is working. When a pandemic breaks out, countries take measures to suppress transmission, and people take their own measures over and above what their governments do. (An analysis of cellphone location data by Song Gao of our geography department shows that Wisconsinites’ median distance traveled from home decreased by 50% even before the governor issued a stay-at-home order.) That should slow the rate of exponential growth — hopefully, flip it to exponential decay.

Change in reporting. Maybe we’re getting better at detecting COVID deaths; if on day 1, only half of COVID deaths were reported as such, while now we’re accurately reporting them all, we’d see a spuriously high slope at the beginning of the outbreak. (The same reasoning applies to the curve for number of confirmed cases; at the beginning, the curve grows faster than the true number of infections as testing ramps up.)

COVID is getting less lethal. This is the whole point of “flattening the curve” — with each week that passes, hospitals are more ready, we have more treatment options and fuller knowledge of which of the existing treatments best suits which cases.

Infection has saturated the population. This is the most controversial one. The baby model (where by baby I mean SIR) tells you that the curve bends as the number of still-susceptible people starts to really drop. The consensus seems to be we’re nowhere near that yet, and almost everyone (in the United States, at least) is still susceptible. But I guess one should be open to the possibility that there are way more asymptomatic people than we think and half the population is already infected; or that for some reason a large proportion of the population carries natural immunity, so that 1% of the population infected would already be half the susceptible population.
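For concreteness, here’s a minimal sketch (mine, with made-up parameters, not fitted to any real outbreak) of the SIR baby model, showing the log-scale curve starting out straight and then bending as susceptibles are depleted:

```python
import math

def sir(beta=0.3, gamma=0.1, s=0.999, i=0.001, days=300):
    """Toy SIR model via Euler steps; returns cumulative fraction ever infected."""
    r = 0.0
    cum = []
    for _ in range(days):
        new_inf = beta * s * i   # new infections this day
        rec = gamma * i          # recoveries this day
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
        cum.append(i + r)
    return cum

cum = sir()
# Ten-day slope of log(cumulative infections) early in the outbreak...
early = math.log(cum[20]) - math.log(cum[10])
# ...versus the same ten-day slope late, after susceptibles are used up.
late = math.log(cum[200]) - math.log(cum[190])
assert late < early  # the log-scale curve bends
```

With these illustrative numbers (R0 = beta/gamma = 3) the epidemic burns out on its own: the early slope is close to beta − gamma per day, and the late slope is essentially zero.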

Heterogeneous growth rate. I came across this idea in a post by a physicist (yeah, I know, but it was a good post!) which I can’t find now — sorry, anonymous physicist! There’s not one true exponential growth rate; different places may have different slopes. Just for the sake of argument, suppose a bunch of different locales all start with the same number of deaths, and suppose the rate of exponential growth is uniformly distributed between 0 and 1; then the total deaths at time t is the integral of exp(rt) over r from 0 to 1, which is (exp(t) − 1)/t. The log of that function has positive second derivative; that is, it tends to make the curve bend up rather than down! That makes sense; with heterogeneous rates of exponential growth, you’ll start with some sort of average of the rates but before long the highest rate will dominate.
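As a quick numerical check of that claim (my own sketch, not the physicist’s): the log of (exp(t) − 1)/t really does have positive second derivative.

```python
import math

def total(t):
    # Integral of exp(r*t) over growth rates r uniform on [0, 1]: (e^t - 1)/t.
    return (math.exp(t) - 1) / t

# Discrete second derivative of log(total): positive means the log-scale
# curve of the mixed population bends up, toward the fastest growth rate.
h = 0.01
for t in [0.5, 1.0, 5.0, 10.0]:
    second = (math.log(total(t + h)) - 2 * math.log(total(t))
              + math.log(total(t - h))) / h**2
    assert second > 0
```

For large t the slope of log(total) approaches 1, the maximum growth rate in the mixture, which is the “highest rate dominates” effect in formulas.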

I’m sure I’ve skipped some curve-bending factors; propose more in comments!

Made a big, creamy, cheesy casserole with rotini and a million artichokes and peas, the vegetables out of the freezer of course. Times like this bring out the 60s housewife in me. Everyone is saying it’s good to get out of the house and see the sun from time to time, even just on your porch, but there hasn’t really been any sun here; it’s Wisconsin-technically-spring, in the 40s and kind of dreary. I go play basketball with the kids in the driveway each day in the chill. CJ can beat me almost all the time now.

AB and I listened to all the songs on Spotify called “Coronavirus.” There are already a ton; we didn’t actually listen to all of them, there were too many. A lot of them are in Spanish.

Daniel Litt organized a number theory conference, all held on Zoom with more than 130 people watching. To my surprise, this worked really well. People are starting to organize lists of online seminars and at this point there are more seminars I could be “going” to each day than there are when life is normal.

I’ve heard talk about starting baseball with the All-Star Game and having the World Series at Christmas.

Some people are hoping that maybe we’re drastically underestimating the prevalence of infection; maybe the reason the curves are starting to bend isn’t the effect of our social isolation measures but the fact that a substantial part of the population has already been infected and acquired temporary immunity without ever knowing they were sick, and so maybe we’re vastly overestimating the proportion of cases that turn into serious illness. Wouldn’t that be great?

At the moment I don’t know anyone who’s died but I know people who know people who’ve died. At this point, do most people in the United States know people who know people who’ve died?

Talking to AB about multiplying rational numbers. She understands the commutativity of multiplication of integers perfectly well. But I had forgotten that commutativity in the rational setting is actually conceptually harder! That four sixes is six fours you can conceptualize by thinking of a rectangular array, or something equivalent to that. But the fact that seven halves is the same thing as seven divided by two doesn’t seem as “natural” to her. (Is that even an instance of commutativity? I think of the first as 7 x 1/2 and the second as 1/2 x 7.)

G. Elliot Morris posted this embedding of the current Democratic presidential candidates in R^2 on Twitter:

where the edge weights (and thus the embedding) derive from YouGov data, which for each pair of candidates (i, j) tells you what proportion of the voters who report they’re considering candidate i also say they’re considering candidate j.

Of course, this matrix is non-symmetric, which makes me wonder exactly how he derived distances from it. I also think his picture looks a little weird; Sanders and Bloomberg are quite ideologically distinct, and their coconsiderers few in number, but they end up neighbors in his embedding.

Here was my thought about how one might try to produce an embedding using the matrix above. Model voter ideology as a standard Gaussian f in R^2 (I know, I know…) and suppose each candidate is a point y in R^2. You can model propensity to consider y as a standard Gaussian centered at y, so that the number of voters who are considering candidate y is proportional to the integral

∫_{R^2} exp(−|x|^2/2) exp(−|x−y|^2/2) dx

and the number of voters who are considering both candidates y and z to

∫_{R^2} exp(−|x|^2/2) exp(−|x−y|^2/2) exp(−|x−z|^2/2) dx.

So the proportions in Morris’s table can be estimated by the ratio of the second integral to the first (with z in place of y in the first), which, if I computed it right (be very unsure about the constants) is

(2/3) exp(−|y − z/2|^2 / 3).
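Here’s a numerical sanity check of that computation (a sketch under my reading of the model: voters a standard Gaussian, consideration of a candidate at p weighted by exp(−|x−p|^2/2)). As a function of y, the coconsideration ratio should peak at y = z/2:

```python
import math

def ratio(y, z, n=120, lim=6.0):
    """Midpoint-rule integration over a truncated grid in R^2.
    Returns N(consider both y and z) / N(consider z)."""
    h = 2 * lim / n
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            x0 = -lim + (i + 0.5) * h
            x1 = -lim + (j + 0.5) * h
            voters = math.exp(-(x0 * x0 + x1 * x1) / 2)      # ideology density
            wy = math.exp(-((x0 - y[0]) ** 2 + (x1 - y[1]) ** 2) / 2)
            wz = math.exp(-((x0 - z[0]) ** 2 + (x1 - z[1]) ** 2) / 2)
            num += voters * wy * wz
            den += voters * wz
    return num / den

z = (2.0, 0.0)
# Scan candidates y along the axis through z; the peak should land near z/2.
best = max((ratio((t / 10, 0.0), z), t / 10) for t in range(0, 21))
assert abs(best[1] - 1.0) <= 0.15  # maximized near half of z's extremity
```

At y = z/2 the ratio comes out numerically very close to 2/3, matching the constant in the closed form.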

(The reason this is doable in closed form is that the product of Gaussian probability density functions is just exp(-Q) for some other quadratic form, and we know how to integrate those.) In other words, the candidate y most likely to be considered by voters considering z is one who’s just like z but half as extreme. I think this is probably an artifact of the Gaussian I’m using, which doesn’t, for instance, really capture a scenario where there are multiple distinct clusters of voters; it posits a kind of center where ideological density is highest. Anyway, you can still try to find 8 points in R^2 making the function above approximate Morris’s numbers as closely as possible. I didn’t do this in a smart optimization way, I just initialized with random numbers and let it walk around randomly to improve the error until it stopped improving. I ended up here:

which agrees with Morris that Gabbard is way out there, that among the non-Gabbard candidates, Steyer and Klobuchar are hanging out there as vertices of the convex hull, and that Warren is reasonably central. But I think this picture more appropriately separates Bloomberg from Sanders.
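Here’s a sketch of the random-walk fitting procedure I described. Since Morris’s actual YouGov numbers aren’t reproduced here, the target matrix below is synthetic, built from made-up candidate positions, just to show the walk driving the error down:

```python
import math, random

def model_ratio(y, z):
    # Coconsideration proportion under the Gaussian model; the constants
    # are as I computed them, so treat them as approximate.
    d2 = (y[0] - z[0] / 2) ** 2 + (y[1] - z[1] / 2) ** 2
    return (2 / 3) * math.exp(-d2 / 3)

def fit(target, k, steps=20000, scale=0.1, seed=0):
    """Crude random-walk descent: nudge one point at random and keep the
    nudge whenever squared error against the target matrix drops."""
    rng = random.Random(seed)
    pts = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(k)]
    def err():
        return sum((model_ratio(pts[a], pts[b]) - target[a][b]) ** 2
                   for a in range(k) for b in range(k) if a != b)
    init = best = err()
    for _ in range(steps):
        a = rng.randrange(k)
        old = pts[a][:]
        pts[a][0] += rng.gauss(0, scale)
        pts[a][1] += rng.gauss(0, scale)
        e = err()
        if e < best:
            best = e
        else:
            pts[a] = old   # revert a nudge that didn't help
    return pts, init, best

# Hypothetical positions, used only to manufacture a target matrix.
true_pts = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (-1.0, -1.0)]
target = [[model_ratio(a, b) for b in true_pts] for a in true_pts]
pts, init, best = fit(target, 4)
assert best < init  # the walk improved the fit
```

Nothing clever here, exactly as in the text: random initialization, random local moves, keep what helps, stop caring when it stops improving.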

How would you turn the coconsideration numbers into an R^2 embedding?

A popular political quiz on the internet purports to place you on a Cartesian plane with “left-right” on one axis and “libertarian-communitarian” on the other, by presenting you with 36 assertions you’re supposed to agree or disagree with. One of them is

“There are too many wasteful government programs.”

Well, of course there are! For this not to be the case, the government would have to be uniquely unwasteful among all large institutions. The quiz does not ask whether you agree that

“There are too many wasteful private enterprises.”

I would like to agree with both, but the test only allows me to agree with the first while remaining silent about the second, which makes me seem more of a free-market purist than I really am. Which questions you choose to ask affects which answers you’re able to get.

A list of a hundred foods that went around the Internet a while back; the original source seems to be gone. Your score is how many you’ve eaten. Here’s how the family did:

Me: 68

Dr. Mrs. Q: 41

CJ: 38

AB: 36

I did pretty well except for the alcohol, which makes sense; I’m always inclined to try a food I haven’t eaten before, but the opportunity to try a new drink doesn’t move me at all. I wonder how many of those 68 I’ve eaten only once? Just glancing at the list again, I see: fugu, haggis, Jamaican Blue Mountain coffee, whole insects, horse. And I’ve eaten at a three-Michelin-star restaurant only once, where the menu was selected for me, so I counted that as “tasting menu at a three-star Michelin” too.

The format reminds me of the “Purity Test” that was a mainstay of Usenet groups and, I’m pretty sure, FIDONet before that. Wikipedia suggests the version of the test I saw, like everything else weird on the early Internet, originated at MIT.

Indeed this series diverges, just as the tweeter says: there’s a positive-density subset of n on which the summand exceeds a fixed positive constant, and a series whose terms are bounded away from 0 on a set of positive density can’t converge.

More subtle: what about the variant with summand |sin n|^n?

This should still diverge. Argument: the probability that a real number x chosen uniformly from a large interval has |sin x| > 1 − δ is on order δ^{1/2}, not δ, because sin is quadratic near its maximum; so there will be a subset of integers with density on order δ^{1/2} where the summand exceeds (1 − δ)^n, and summing over those integers along the interval [1, 1/δ] should give a sum on order δ^{−1/2}, which can be made as large as you like by bringing δ close to 0.
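Assuming the relevant event is |sin x| coming within δ of 1 (my reading of the argument above), here’s a quick empirical check that the probability scales like the square root of δ rather than like δ itself:

```python
import math, random

random.seed(1)

def prob(delta, trials=200000):
    """Monte Carlo estimate of P(|sin x| > 1 - delta), x uniform on a long interval."""
    hits = sum(1 for _ in range(trials)
               if abs(math.sin(random.uniform(0, 1e6))) > 1 - delta)
    return hits / trials

p1, p2 = prob(1e-2), prob(1e-4)
# Shrinking delta by a factor of 100 should shrink the probability by
# about a factor of 10 (square-root scaling), not about 100 (linear).
assert 3 < p1 / p2 < 30
```

(The exact value is (2/π) arccos(1 − δ), which is about (2/π) √(2δ) for small δ; the simulation is just there to make the square-root scaling visible.)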

“And during the time when the Hungarian or Polish Jewish youngster was brought to a level where he could understand the Prophets, and listen to rigorous biblical and legal studies, the American youngster is merely brought to the magnificent level of being able to stammer a few words of English-style Hebrew, to pronounce the blessing over the Torah, and to chant half the maftir from a text with vowels and notes on the day he turns thirteen — a day that is celebrated here as the greatest of holidays among our Jewish brethren. From that day onward a youngster considers his teacher to be an unwanted article.”

Moses Weinberger, Jews and Judaism in New York, 1887 (Jonathan D. Sarna, trans.)