
Pandemic blog 20: R_0, random graphs, and the L_2 norm

People are talking about R_0. It’s the number that you wish were below 1. Namely: it is the number of people, on average, that a carrier of SARS-CoV-2 infects during the course of their illness. All the interventions we’re undertaking are designed to shrink that number. Because if it’s bigger than 1, the epidemic grows exponentially until it infects a substantial chunk of the population, and if it’s smaller, the epidemic dies out.

But not everybody has the same R_0! According to some of the epidemiologists writing about COVID, this matters. It matters, for instance, to the question of how far into the population the infection gets before it starts to burn itself out for lack of new susceptible hosts (“herd immunity”) and to the question of how much of the population eventually falls ill.

Here’s an easy way to see that heterogeneity can make a difference. Suppose your population consists of two towns of the same size with no intermunicipal intercourse whatsoever. The first town has an R_0 of 1.5 and the second an R_0 of 0.3. Then the mean R_0 of the whole population is 0.9. But this epidemic doesn’t die out; it spreads to cover much of the population of Contagiousville.

I learned an interesting way to think about this heuristically from Tim Gowers on Twitter.

You can read Tim’s interesting thread yourself, but here’s the main idea. Say your population has size n. You make a graph out of the pandemic by placing an edge between vertices i and j if one of the corresponding people infects the other. (Probably better to set this up in a directed way, but I didn’t.) Or maybe slightly better to say: you place an edge if person i and person j interact in a manner such that, were either to enter the interaction infected, both would leave that way. If one person in this graph gets infected, the reach of the infection is the connected component of the corresponding vertex. So how big is that component?

The simplest way to set this up is to connect each pair of vertices with probability c/n, all such choices made independently. This is an Erdős–Rényi random graph. And the component structure of this graph has a beautiful well-known theory; if c > 1, there is a giant component which makes up a positive proportion of the vertices, and all other components are very small. The size of this component is nx, where x is the unique positive number such that

x = 1 - e^{-cx}.

If c < 1, on the other hand, there is no big component, so the pandemic is unlikely to reach much of the population. (Correspondingly, the equation above has no nonzero solution.)
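This is easy to check numerically. Here’s a quick sketch in Python using networkx (the values of n and c here are my own arbitrary choices):

```python
import networkx as nx
import numpy as np
from scipy.optimize import brentq

n, c = 100_000, 1.5

# Sample G(n, c/n) and measure its largest connected component.
G = nx.fast_gnp_random_graph(n, c / n, seed=0)
giant = max(nx.connected_components(G), key=len)

# Predicted fraction: the unique positive root of x = 1 - exp(-c x).
x = brentq(lambda t: t - (1 - np.exp(-c * t)), 1e-9, 1)

print(len(giant) / n, x)  # both close to 0.583 for c = 1.5
```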

It is fair to be skeptical of this model, which is completely static and doesn’t do anything fancy, but let me just say this — the most basic dynamic model of epidemic spread, the SIR model, has an endstate where the proportion of the population that’s been infected is the unique positive x such that

x = 1 - e^{-R_0 x}.

Which looks pretty familiar!

Now what happens if you want to take into account that the population isn’t actually an undifferentiated mass? Let’s say, for instance, that your county has a red town and a blue town, each with population n/2. Two people in the red town have a probability of 2/n of being connected, while two people in the blue town have a probability of just 1/n of being connected, and a red-blue pair is connected with probability 1/n. (This kind of random graph is called a stochastic block model, if you want to go look at papers about it.) So the typical red-town person is going to infect 1 fellow red-towner and 0.5 blue-towners, for an R_0 of 1.5, while the blue-towner is going to have an R_0 of 1.
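Here’s a sketch of how you might simulate this setup (networkx ships a stochastic block model generator; the population size is an arbitrary choice of mine):

```python
import networkx as nx

n = 50_000                    # total population, n/2 in each town
sizes = [n // 2, n // 2]      # [red town, blue town]
probs = [[2 / n, 1 / n],      # red-red 2/n, red-blue 1/n
         [1 / n, 1 / n]]      # blue-red 1/n, blue-blue 1/n

G = nx.stochastic_block_model(sizes, probs, seed=0)
giant = max(nx.connected_components(G), key=len)

# Nodes 0 .. n/2 - 1 are the red town, the rest the blue town.
red = sum(v < n // 2 for v in giant) / (n // 2)
blue = sum(v >= n // 2 for v in giant) / (n // 2)
print(red, blue)  # compare with the 47% and 33% worked out below
```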

Here’s the heuristic for figuring out the size of the big component. Suppose x is the proportion of the red town in the big component of the graph, and y is the proportion of the blue town in the big component. Take a random red person; what’s the chance they’re in the big component? Well, the chance they’re not connected to any of the xn/2 red-towners in the big component is

(1-2/n)^{xn/2} = e^{-x}

(oh yeah did I mention that n was infinity?) and the chance that they’re not connected to any of the blue-towners in the big component is

(1-1/n)^{yn/2} = e^{-(1/2)y}

so all in all you get

x = 1 - e^{-(x + (1/2)y)}

and by the same token you would get

y = 1-e^{-((1/2)x + (1/2)y)}

and now you have two equations that you can solve for x and y! In fact, you find x = 47% and y = 33%. So just as you might expect, the disease gets farther in the spreadier town.
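If you’d rather not do the algebra, fixed-point iteration converges quickly. A minimal sketch:

```python
import numpy as np

# Mean-contact matrix from the story above: a red person infects
# 1 red-towner and 0.5 blue-towners on average; a blue person, 0.5 of each.
A = np.array([[1.0, 0.5],
              [0.5, 0.5]])

v = np.ones(2)          # iterate downward from (1, 1)
for _ in range(200):
    v = 1 - np.exp(-A @ v)

print(v)  # -> approximately [0.47, 0.33]
```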

And you might notice that what we’re doing is just matrix algebra! If you think of (x,y) as a vector v, we are solving

v = \mathbf{1} - e^{-Av}

where “exponentiation” of a vector is interpreted coordinatewise. You can think of this as finding a fixed point of a nonlinear operator on vectors.

When does the outbreak spread to cover a positive proportion of the population? There’s a beautiful theorem of Bollobás, Janson, and Riordan that tells you: you get a big component exactly when the largest eigenvalue λ of A, the so-called Perron-Frobenius eigenvalue, is larger than 1. In the case of the matrix studied above, the two eigenvalues are about 1.31 and 0.19. You might also note that in the early stages of the epidemic, when almost everyone in the network is susceptible, the spread in each town will be governed by repeated multiplication of a small vector by A, and the exponential rate of growth is thus also going to be given by λ.
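Both claims are easy to check numerically; a quick sketch (the tiny seed vector standing in for “one early case” is my own device):

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.5, 0.5]])
print(np.linalg.eigvalsh(A))  # -> [0.19..., 1.30...]; lambda > 1

# Early growth: each generation multiplies a small seed by roughly lambda.
u = np.array([1e-6, 0.0])     # one early case in the red town
for _ in range(6):
    u_next = A @ u
    print(u_next.sum() / u.sum())  # ratios tend to lambda ~ 1.31
    u = u_next
```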

It would be cool if the big eigenvalue also told you what proportion of the vertices are in the giant component, but that’s too much to ask for. For instance, we could just replace A with a diagonal matrix with 1.31 and 0.19 on the diagonal; then the first town gets 43% infected while the second town, completely disconnected from the first, gets 0.

What is the relationship between the Perron-Frobenius eigenvalue and the usual “mean R_0” definition? The eigenvalue can be thought of as

\max_{v} v^T A v / v^T v

while the average R_0 is exactly

\mathbf{1}^T A \mathbf{1} / n

where 1 is the all-ones vector. So we see immediately that λ is bounded below by the average R_0, but it really can be bigger; indeed, this is just what we see in the two-separated-towns example we started with, where R_0 is smaller than 1 but λ is larger.
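The two-isolated-towns example from the top of the post shows the gap concretely; a two-line check:

```python
import numpy as np

A = np.array([[1.5, 0.0],   # the two towns with no contact at all
              [0.0, 0.3]])
ones = np.ones(2)

print(ones @ A @ ones / 2)        # mean R_0 = 0.9, below 1
print(np.linalg.eigvalsh(A)[-1])  # lambda = 1.5, above 1
```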

I don’t see how to work out any concise description of the size of the giant component in terms of the symmetric matrix, even in the simplest cases. As we’ve seen, it’s not just a function of λ. The very simplest case might be that where A has rank 1; in other words, you have some division of the population into equal sized boxes, and each box has its own R_0, and then the graph is constructed in a way that is “Erdős–Rényi but with a constraint on degrees” — I think there are various ways to do this but the upshot is that the matrix A is rank 1 and its (i,j) entry is R_0(i) R_0(j) / C, where C is the sum of the R_0 over all the boxes. The eigenvalues of A are all zero except for the big one λ, which is equal to the trace, which is

\mathbf{E} R_0^2 / \mathbf{E} R_0

or, if you like, mean(R_0) + variance(R_0)/mean(R_0); so if the average R_0 is held fixed, this gets bigger the more R_0 varies among the population.
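A quick numerical check of that formula (the R_0 values for the boxes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
R0 = rng.uniform(0.2, 2.5, size=6)  # one R_0 per box (made-up values)
A = np.outer(R0, R0) / R0.sum()     # rank 1; row i sums to exactly R0[i]

print(np.linalg.eigvalsh(A)[-1])         # the single nonzero eigenvalue
print(R0.mean() + R0.var() / R0.mean())  # mean + variance/mean: the same
```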

And if you look back at that Wikipedia page about the giant component, you’ll see that this is the exact threshold they give for random graphs with specified degree distribution, citing a 2000 paper of Molloy and Reed. Or if you look at Lauren Meyers’s 2005 paper on epidemic spread in networks, you will find the same threshold for epidemic outbreak in section 2. (The math here is descended from work of Molloy-Reed and this much-cited paper of Newman, Strogatz, and Watts.) Are typical models of “random graphs with specified degree distribution” built to have rank 1 in this sense? I think so — see e.g. this sentence in Newman-Strogatz-Watts: “Another quantity that will be important to us is the distribution of the degree of the vertices that we arrive at by following a randomly chosen edge. Such an edge arrives at a vertex with probability proportional to the degree of that vertex.”

At any rate, even in this rank 1 case, even for 2×2 matrices, it’s not clear to me how to express the size of the giant component except by saying it’s a nonzero solution of v = 1 - e^{-Av}. Does the vector v have anything to do with the Perron-Frobenius eigenvector? Challenge for the readers: work this out!

I tried a bunch of randomly chosen 6×6 matrices and plotted the overall size of the giant component against λ; this is what I got:

[Figure: scatterplot of giant-component size against λ for random 6×6 matrices, with the homogeneous case drawn as a blue line.]
The blue line shows the proportion of the vertices that get infected if the graph were homogeneous with parameter λ. Which makes me think that thinking of λ as a good proxy for R_0 is not a terrible idea; it seems like a summary statistic of A which is pretty informative about the giant component. (This graph suggests maybe that among graphs with a given λ, the homogeneous one actually has the biggest giant component? Worth checking.)
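Here’s the kind of computation involved, if you want to rerun it (a sketch; the random-matrix recipe here is an arbitrary stand-in, not necessarily the one behind the chart):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

def giant_fraction(A, iters=500):
    """Iterate v -> 1 - exp(-A v) down from all-ones; return mean of v."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = 1 - np.exp(-A @ v)
    return v.mean()

for _ in range(10):
    M = rng.uniform(0, 1, (6, 6))
    A = (M + M.T) / 4                  # random symmetric nonnegative matrix
    lam = np.linalg.eigvalsh(A)[-1]
    if lam <= 1:
        continue                       # subcritical: no giant component
    # Size for a homogeneous graph with the same lambda, for comparison.
    hom = brentq(lambda t: t - (1 - np.exp(-lam * t)), 1e-9, 1)
    print(f"{lam:.3f}  {giant_fraction(A):.3f}  {hom:.3f}")
```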

I should hasten to say that there’s a lot of interest in the superspreader phenomenon, where a very small (probability → 0) set of vertices has a very large (superlinear in n) number of contacts. Meyers works out a bunch of cases like this and I think they are not well modeled by what I’m talking about here. (Update: I wrote another post about this! Indeed, I think the tight connection shown in the chart between λ and size of giant component is not going to persist when there’s extreme heterogeneity of degree.)

A more technical note: the result of Bollobás et al is much more general; there’s no reason the vertices have to be drawn from finitely many towns of equal size; you can instead have the types of vertices drawn from whatever probability space M you like, and then have the probability of an edge between a vertex x and a vertex y be W(x,y) for some symmetric function W on M^2; nowadays this is called the “graphon” point of view. Now the matrix is replaced by an operator on functions:

A_W f(x) = \int_M f(y) W(x,y) \, d\mu(y),

the probability g(x) that a vertex of type x is in the giant component is a solution of the integral equation

g = 1 - e^{-A_W g}

and a giant component exists just when the operator norm ||A_W||_2 is greater than 1. This is the kind of analysis you’d want to use if you wanted to really start taking geography into account. For instance, take the vertices to be random points in a disc and let W(x,y) be a decreasing function of |x-y|, modeling a network where infection likelihood is a decreasing function of distance. What is the norm of the operator A_W in this case? I’ll bet my harmonic analyst friends know, if any of them are reading this. But I do not.

Update: Now I know, because my harmonic analyst friend Brian Street told me it’s just the integral of W(x,y) over y, which is the same for all x (well, at least it is if we’re on the whole of R^d.) Call that number V. He gave a very nice Fourier-theoretic argument but now that I know the answer I’m gonna explain it in terms of the only part of math I actually understand, 2×2 matrices. Here’s how it goes. In this model, each vertex has the same expected number of neighbors, namely that integral V above. But to say every vertex has the same expected number of neighbors is to say that 1 is an eigenvector for A. If 1 were an eigenvector other than the Perron-Frobenius one, it would be orthogonal to the Perron-Frobenius eigenvector, which it can’t be because both have positive entries; so it is the Perron-Frobenius eigenvector, and λ = V.

In fact I had noticed this in one of the papers I looked at while writing this (that if the matrix has all row sums the same, the long-term behavior depends only on that common row sum, not on the rest of the matrix) but didn’t understand why until just this minute. So this is kind of cool — if the kind of heterogeneity the network exhibits doesn’t cause different kinds of vertices to have different mean degree, you can just pretend the network is homogeneous with whatever mean R_0 it has. This is a generalization of the fact that two towns with no contact which have the same R_0 can be treated as one town with the same R_0 and you don’t get anything wrong.
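Here’s a finite toy version of that geographic story that makes the claim concrete (the circle of sites and the kernel values are made up):

```python
import numpy as np

# n sites on a circle; infection probability falls off with distance.
n = 12
kernel = [1.0, 0.6, 0.3, 0.1]   # weight at circular distance 0, 1, 2, 3
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        d = min(abs(i - j), n - abs(i - j))
        if d < len(kernel):
            A[i, j] = kernel[d]

# Every site has the same row sum V, so lambda is exactly V.
print(A.sum(axis=1))              # every row sums to 3.0
print(np.linalg.eigvalsh(A)[-1])  # -> 3.0
```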


What I learned at the Joint Math Meetings

Another Joint Meetings in the books!  My first time in San Antonio, until last weekend the largest US city I’d never been to.  (Next up:  Jacksonville.)  A few highlights:

  • Ngoc Tran, a postdoc at Austin, talked about zeroes of random tropical polynomials.  She’s proved that a random univariate tropical polynomial of degree n has about c log n roots; this is the tropical version of an old theorem of Kac, which says that a random real polynomial of degree n has about c log n real roots.  She raised interesting further questions, like:  what does the zero locus of a random tropical polynomial in more variables look like?  I wonder:  does it look anything like the zero set of a random band-limited function on the sphere, as discussed by Sarnak and Wigman?  If you take a random tropical polynomial in two variables, its zero set partitions the plane into polygons, which gives you a graph by adjacency:  what kind of random graph is this?
  • Speaking of random graphs, have you heard the good news about L^p graphons?  I missed the “limits of discrete structures” special session which had tons of talks about this, but I ran into the always awesome Henry Cohn, who gave me the 15-minute version.  Here’s the basic idea.  Large dense graphs can be modeled by graphons; you take a symmetric function W from [0,1]^2 to [0,1], and then your procedure for generating a random graph goes like this.  Sample n points x_1, …, x_n uniformly from [0,1] — these are your vertices.  Now put an edge between x_i and x_j with probability W(x_i,x_j) = W(x_j,x_i).  (A short code sketch of this sampling step appears at the end of this post.)  So if W is constant with value p, you get your usual Erdős–Rényi graphs, but if W varies some, you can get variants of E-R, like the much-beloved stochastic block model graphs, that have some variation of edge density.  But not too much!  These graphon graphs are always going to have almost all vertices with degree linear in n.  That’s not at all like the networks you encounter in real life, which are typically sparse (vertex degrees growing sublinearly in n, or even being constant on average) and typically highly variable in degree (e.g. degrees following a power law, not living in a band of constant multiplicative width.)  The new theory of L^p graphons is vastly more general.  I’ve only looked at this paper for a half hour but I feel like it’s the answer to a question that’s always bugged me: what are the right descriptors for the kinds of random graphs that actually occur in nature?  Very excited about this, will read it more, and will give a SILO seminar about it on February 4, for those around Madison.
  • Wait, I’ve got still one more thing about random graphs!  Russ Lyons gave a plenary about his work with Angel and Kechris about unique ergodicity of the action of the automorphism group of the random graph.  Wait, the random graph? I thought there were lots of random graphs!  Nope — when you try to define the Erdős–Rényi graph on countably many vertices, there’s a certain graph (called “the Rado graph”) to which your random graph is isomorphic with probability 1!  What’s more, this is true — and it’s the same graph — no matter what p is, as long as it’s not 0 or 1!  That’s very weird, but proving it’s true is actually pretty easy.  I leave it as an exercise.
  • Rick Kenyon gave a beautiful talk about his work with Aaron Abrams about “rectangulations” — decompositions of a rectangle into area-1 subrectangles.  Suppose you have a weighted directed graph, representing a circuit diagram, where the weights on the edges are the conductances of the corresponding wires.  It turns out that if you fix the energy along each edge (say, to 1) and an acyclic orientation of the edges, there’s a unique choice of edge conductances such that there exists a Dirichlet solution (i.e. an energy-minimizing assignment of a voltage to each node) with the given energies.  These are the fibers of a rational map defined over Q, so this actually gives you an object over a (totally real) algebraic number field for each acyclic orientation.  As Rick pointed out, this smells a little bit like dessins d’enfants!  (Though I don’t see any direct relation.)  Back to rectangulations:  it turns out there’s a gadget called the “Smith Diagram” which takes a solution to the Dirichlet problem on the graph and turns it into a rectangulation, where each edge corresponds to a rectangle, the area of the rectangle is the energy contributed by the current along that edge, the aspect ratio of the rectangle is the conductance, the bottom and top faces of the rectangle correspond to the source and target nodes, the height of a face is the voltage at that node, etc.  Very cool!  Even cooler when you see the pictures.  For a 40×40 grid, it looks like this:

[Figure: the Smith diagram rectangulation arising from a 40×40 grid.]
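As promised above, here’s what that graphon sampling procedure looks like in code (a sketch; the particular W is an arbitrary rank-1 example of mine):

```python
import numpy as np

def sample_from_graphon(W, n, rng):
    """Sample an n-vertex graph from a graphon W: [0,1]^2 -> [0,1]."""
    x = rng.uniform(size=n)            # a latent type for each vertex
    i, j = np.triu_indices(n, k=1)     # each unordered pair once
    edge = rng.uniform(size=len(i)) < W(x[i], x[j])
    adj = np.zeros((n, n))
    adj[i[edge], j[edge]] = 1
    return adj + adj.T, x

W = lambda s, t: s * t                 # a rank-1 graphon, for instance
rng = np.random.default_rng(0)
adj, x = sample_from_graphon(W, 2000, rng)
print(adj.sum(axis=1).mean())          # mean degree ~ n/4: dense, linear in n
```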
