The stochastic sandpile

At last week’s AIM conference on chip-firing I learned about a cool stochastic process which seems very simple, but about whose behavior it is apparently very hard to prove anything!

It goes like this.  You start with a graph (say a finite connected graph, for simplicity) and your states are functions from the vertices of the graph to the natural numbers, which you think of as specifying the height of a pile of chips (or sand grains, if you want to keep up that metaphor) sitting on top of that vertex.  At least one of the vertices is a sink, which means that any chip landing there disappears forever.

Now the nature of the process is that you never want to have more than one chip at a vertex.  So every time there are at least two chips at a vertex v, you can pop two of them off; each chip independently moves to a neighbor of v chosen uniformly at random (on a path, each chip flips a coin and topples left or right).  And you keep on doing this until there’s at most one chip at each spot.  (You might start with more chips than vertices, but the sink helps you slough off the surplus.)  This process was apparently first studied by Manna and is sometimes called the Manna sandpile.
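
For concreteness, here is a minimal Python sketch of the relaxation step on a path with sinks at both ends; the list-of-heights representation and the names are my own choices, not anything canonical.

```python
import random

def stabilize(heights):
    """Relax a chip configuration on a path.

    heights has length N + 2; positions 1..N are the interior vertices
    and positions 0 and N+1 are sinks (chips landing there vanish).
    While some vertex holds at least two chips, pop two off and send
    each to a uniformly random neighbor.  The abelian property says the
    distribution of the final configuration doesn't depend on the order
    in which unstable vertices are processed.
    """
    n = len(heights) - 2
    unstable = [i for i in range(1, n + 1) if heights[i] >= 2]
    while unstable:
        i = unstable.pop()
        while heights[i] >= 2:
            heights[i] -= 2
            for _ in range(2):
                j = random.choice((i - 1, i + 1))
                if 1 <= j <= n:  # otherwise the chip falls into a sink
                    heights[j] += 1
                    if heights[j] >= 2 and j not in unstable:
                        unstable.append(j)
    return heights
```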

More math below the fold!

The first thing you might ask is:  how do you decide which order to execute the topples in?  And the beauty is that it doesn’t matter; this is an example of an abelian process, where the end result (here, the distribution of the final configuration) is the same no matter how you order the moves.  (See Lionel Levine and Jim Propp’s excellent short survey about the abelian sandpile for more on this notion.)

The second thing you might ask is:  if you drop chips into the system at random for a long time, letting the system relax to stability after each drop, what is the stationary distribution on the configurations you get?

Let’s make things simple and take the graph to be a long path with a sink at either end.  In this case, just about everything I’m going to say below (and more!) can be found in the paper “Steady State of Stochastic Sandpile Models,” by Tridib Sadhu and Deepak Dhar.

Statistical physicists believe there’s a critical density for the stochastic sandpile, around 0.94.  If you start with a stable configuration which is below the critical density, and drop a chip, you may see a few local topples right near where you perturbed the system, but the topples will probably die out quickly, and the configuration will end up looking the same far away from where you dropped the chip.

But if you drop a chip on a configuration with supercritical density, what probably happens is that all hell breaks loose, topples propagate all over the place, and you end up slopping some chips into the sink.

So the configuration should “want” to stay pretty near the critical density.

None of this is proved, of course.

It’s pretty fun to play around with this!  One thing I found when running experiments was the following.  If you look at the configurations that typically arise, they are indeed pretty dense; so you can switch figure and ground and think of a typical configuration as a set of “holes” of density about 6% on the line.  And it looks in practice like the holes repel each other!  Is it easy to explain why this happens?  Can one prove it, even if one can’t compute the critical density?
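
For anyone who wants to play along, here is the sort of experiment I mean, reusing the stabilize sketch from above (the parameters and names are mine, and the ~0.94 and ~6% figures quoted above are what you would compare against, not something this sketch certifies):

```python
import random
from collections import Counter

def experiment(n=200, drops=100000, burn_in=20000):
    """Drop chips at uniformly random interior sites, relaxing after
    each drop; after a burn-in, record the density of occupied sites
    and the distances between consecutive holes."""
    heights = [0] * (n + 2)
    density_sum, samples = 0.0, 0
    gap_counts = Counter()
    for t in range(drops):
        heights[random.randint(1, n)] += 1
        stabilize(heights)
        if t >= burn_in:
            density_sum += sum(heights[1:n + 1]) / n
            samples += 1
            holes = [i for i in range(1, n + 1) if heights[i] == 0]
            for a, b in zip(holes, holes[1:]):
                gap_counts[b - a] += 1  # repelling holes -> few small gaps
    return density_sum / samples, gap_counts

density, gaps = experiment()
print(density)  # should hover near the conjectural critical density
```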

Here’s another thing I like about this problem.  Suppose the graph is a path of N+2 vertices, with the first and last being sinks.  Now you can think of the set of probability distributions on configurations as a simplex sitting inside the space V of real-valued functions on the 2^N stable configurations.  And dropping a chip at position i, then relaxing, is nothing other than an operator on V, which we call x_i.  And the different operators commute!  What’s more, they obey a lot of relations.  If you drop two chips on i, you know you can topple those two chips; there’s a 1/4 chance they both topple left, a 1/4 chance they both topple right, and a 1/2 chance they topple in opposite directions.  In other words,

x_i^2 = \frac{1}{4} x_{i-1}^2 + \frac{1}{2} x_{i-1} x_{i+1} + \frac{1}{4} x_{i+1}^2.

Or, more succinctly,

(x_i - \frac{1}{2}(x_{i-1} + x_{i+1}))(x_i + \frac{1}{2}(x_{i-1} + x_{i+1})) = 0.

Call the left-hand side f_i; then the operators sit in the ring

A = R[x_0,x_1, \ldots, x_N, x_{N+1}]/(f_1, \ldots, f_N, x_0 - 1, x_{N+1} - 1).
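
As a sanity check, here is a tiny sympy verification (with commuting stand-in symbols of my own naming) that f_i, the factored expression above, expands to exactly the toppling relation:

```python
import sympy as sp

xm, xi, xp = sp.symbols('xm xi xp')  # stand-ins for x_{i-1}, x_i, x_{i+1}

relation = xi**2 - (sp.Rational(1, 4) * xm**2
                    + sp.Rational(1, 2) * xm * xp
                    + sp.Rational(1, 4) * xp**2)
f_i = (xi - (xm + xp) / 2) * (xi + (xm + xp) / 2)

assert sp.expand(f_i - relation) == 0  # f_i is exactly the relation
```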

What does this ring look like?  You can make lots of homomorphisms to the real numbers:  you just need to find real numbers a_0, a_1, \ldots, a_N, a_{N+1} which, for all i in [1..N], satisfy either

  • a_{i-1}, a_i, a_{i+1} are in arithmetic progression, or
  • a_{i-1}, -a_i, a_{i+1} are in arithmetic progression,

and which have a_0 = a_{N+1} = 1.  It appears that when N is not congruent to 3 mod 4, all these homomorphisms g: A -> R are distinct, giving an isomorphism between A and R^{2^N}.  For instance, when N = 2 there should be 4 homomorphisms, and here they are:  they send x_0,x_1,x_2,x_3 to:

  • [1,1,1,1]
  • [1,1/5,-3/5,1]
  • [1,-3/5,1/5,1]
  • [1,-1/3,-1/3,1]
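
Here is a short exact-arithmetic sketch (all names mine) that enumerates the 2^N sign choices: each condition says a_{i+1} = 2 s_i a_i - a_{i-1} with s_i = ±1, so we can track every a_i as an affine function of a_1 and then solve a_{N+1} = 1.

```python
from fractions import Fraction
from itertools import product

def homomorphisms(N):
    """Enumerate the 2^N candidate homomorphisms A -> R, each given by
    its values [a_0, a_1, ..., a_{N+1}] on x_0, x_1, ..., x_{N+1}."""
    homs = []
    for signs in product((1, -1), repeat=N):
        # Track a_i = alpha[i] + beta[i] * a_1, starting from a_0 = 1.
        alpha, beta = [Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]
        for s in signs:
            alpha.append(2 * s * alpha[-1] - alpha[-2])
            beta.append(2 * s * beta[-1] - beta[-2])
        if beta[-1] == 0:
            continue  # degenerate: a_{N+1} doesn't involve a_1 at all
        a1 = (1 - alpha[-1]) / beta[-1]  # solve a_{N+1} = 1
        homs.append([al + be * a1 for al, be in zip(alpha, beta)])
    return homs

for h in homomorphisms(2):
    print([str(a) for a in h])
# ['1', '1', '1', '1']
# ['1', '1/5', '-3/5', '1']
# ['1', '-3/5', '1/5', '1']
# ['1', '-1/3', '-1/3', '1']
```

Sign vectors for which a_{N+1} fails to involve a_1 are skipped here; presumably it is exactly this sort of degeneracy that makes the case N ≡ 3 (mod 4) special.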

Moreover, the action of A on the 2^N-dimensional vector space V makes it a free rank-1 A-module; in fact, it is canonically identified with A by sending the empty configuration to 1.  So we can think of a probability distribution on configurations as an element of A; namely,

\sum p(i_1, \ldots, i_r) x_{i_1} \ldots x_{i_r}

where the coefficient is the probability assigned to the given configuration.  The stationary distribution is then the one which is unchanged when you drop a random chip, i.e. the element b of A such that

(1/N) (x_1 + \ldots + x_N) b = b.

Write a for this average; so

(a-1)b = 0.

But here’s the thing: all of those ring homomorphisms g: A -> R are going to send the average of the x_i to something other than 1.  (This follows from the fact that |g(x_i)| is a convex function of i: the defining relations make each g(x_i) equal to plus or minus the average of g(x_{i-1}) and g(x_{i+1}), so |g(x_i)| is at most the average of its neighbors; since |g(x_0)| = |g(x_{N+1})| = 1, we get |g(x_i)| \le 1 for all i, and the average of the g(x_i) can equal 1 only if every g(x_i) equals 1.)  And this means they send b to 0.  Of course, there’s one exception: the augmentation homomorphism from A to R, which sends every x_i to 1, does send their average to 1, and it sends the stationary distribution to 1 as well.

So if you can describe these other homomorphisms explicitly, you can really pin down the stationary distribution!
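
Here is what that pinning-down looks like for small N, as a sympy sketch; it assumes the homomorphisms function from the sketch above, and that the 2^N homomorphisms really are distinct.  We impose g(b) = 0 for every non-augmentation g and aug(b) = 1, and solve for the coefficients of b.

```python
import sympy as sp
from itertools import combinations

def stationary_distribution(N):
    """Solve for the coefficients p(S) of the stationary element b,
    indexed by subsets S of {1, ..., N} (the stable configurations)."""
    homs = homomorphisms(N)
    subsets = [S for r in range(N + 1)
               for S in combinations(range(1, N + 1), r)]
    # Row for each homomorphism g: the value g(x^S) for each subset S.
    M = sp.Matrix([[sp.Mul(*(sp.Rational(h[i].numerator, h[i].denominator)
                             for i in S)) for S in subsets]
                   for h in homs])
    # g(b) = 1 for the augmentation (all values 1), 0 for every other g.
    rhs = sp.Matrix([1 if all(a == 1 for a in h) else 0 for h in homs])
    return dict(zip(subsets, M.solve(rhs)))

print(stationary_distribution(2))
# {(): 7/48, (1,): 13/48, (2,): 13/48, (1, 2): 5/16}
```

For N = 2 this gives p(∅) = 7/48, p({1}) = p({2}) = 13/48, p({1,2}) = 5/16, which matches what you get by grinding out the four-state Markov chain by hand.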

Now the bad news is that this seems kind of complicated to me.  For instance, I do not see from this how to derive the (apparent) fact that the expected density under the stationary distribution converges to the critical density (or converges at all!) as N gets large.

I will add just one amusing fact.  One way to think of these 2^N homomorphisms is as follows.  The conditions on the a_i tell you that [a_i,a_{i+1}] is obtained from [a_{i-1},a_i] by one of two linear transformations in SL_2(Z):

T = \left(\begin{array}{cc}0&1\\ -1&2\end{array}\right)

T' = \left(\begin{array}{cc}0&1\\ -1&-2\end{array}\right)

So when you go from [a_0,a_1] to [a_N,a_{N+1}], the 2^N paths you take are governed by the 2^N possible paths in the random walk on SL_2(Z) attached to the two matrices T, T’!  In particular, for each length-N word in those generators, you can write a_{N+1} as an affine-linear function of a_1, and then the condition a_{N+1} = 1 determines a_1 (whence all the other a_i).
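
Concretely, here is a small self-contained sketch (names mine) of that transfer-matrix point of view: multiply out each word in T and T', read off a_{N+1} as an affine function of a_1 from the bottom row of the product, and solve a_{N+1} = 1.

```python
from fractions import Fraction
from itertools import product

# Transfer matrices acting on column vectors (a_{i-1}, a_i):
T  = ((0, 1), (-1,  2))   # the arithmetic-progression step
Tp = ((0, 1), (-1, -2))   # the sign-flipped step

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def a1_from_word(word):
    """(a_N, a_{N+1})^T = M_N ... M_1 (a_0, a_1)^T with a_0 = 1, so the
    bottom row of the product P gives a_{N+1} = P10 + P11 * a_1 = 1."""
    P = ((1, 0), (0, 1))
    for M in word:          # word lists M_1, ..., M_N in order
        P = mat_mul(M, P)
    return Fraction(1 - P[1][0], P[1][1])

for word in product((T, Tp), repeat=2):  # N = 2
    print(a1_from_word(word))
# 1, 1/5, -3/5, -1/3 -- the values of a_1 in the four homomorphisms above
```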

So there is some connection with very rich objects in number theory.  Can one use spectral properties of this walk to say anything equidistributional about the 2^N homomorphisms?  That’s the kind of thing that could in principle be informative about the stationary distribution.
