Still thinking about chromatic polynomials. Recall: if Γ is a graph, the chromatic polynomial χ_Γ(n) is the number of ways to color the vertices of Γ with n colors so that no two adjacent vertices have the same color.

**Fact:** χ_Γ(-1) is, up to sign, the number of acyclic orientations of Γ.

This is a theorem of Richard Stanley from 1973.
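For what it's worth, here's a brute-force sanity check of the Fact on a toy example (the triangle K3; my own code, not from the post):

```python
from itertools import product
from collections import defaultdict

# Toy check of Stanley's theorem for K3 = the triangle.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

def chromatic_count(k):
    """Proper colorings with k colors, by brute force."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n)
    )

def chi(k):
    """chi_{K3}(k) = k(k-1)(k-2); matches chromatic_count at positive integers."""
    return k * (k - 1) * (k - 2)

assert all(chromatic_count(k) == chi(k) for k in range(1, 6))

def is_acyclic(directed):
    """Kahn's algorithm: an orientation is acyclic iff we can peel off sources."""
    indeg = {v: 0 for v in range(n)}
    out = defaultdict(list)
    for u, v in directed:
        out[u].append(v)
        indeg[v] += 1
    stack = [v for v in indeg if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for w in out[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return seen == n

acyclic = sum(
    is_acyclic([(u, v) if bit else (v, u) for (u, v), bit in zip(edges, flips)])
    for flips in product([0, 1], repeat=len(edges))
)

# K3 has 8 orientations, 2 of them cyclic, so 6 acyclic; chi(-1) = -6.
assert acyclic == 6 and abs(chi(-1)) == 6
```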

Here’s a sketch of a weird proof of that fact, which I think can be made into an actual weird proof. Let U be the hyperplane complement: the complement, in affine space with one coordinate x_i per vertex of Γ, of the hyperplanes x_i = x_j, one for each edge ij of Γ.

Note that |U(F_q)| is just the number of colorings of Γ by elements of F_q; that is, χ_Γ(q). More importantly, the Poincaré polynomial of the manifold U(C) is (up to powers of -1 and t) χ_Γ(-1/t). The reason |U(F_q)| is χ_Γ(q) is that Frobenius acts on H^i(U) by q^{-i}. (OK, I switched to étale cohomology, but for hyperplane complements everything’s fine.) So what should χ_Γ(-1) mean? Well, the Lefschetz trace formula suggests you look for an operator on U(C) which acts as -1 on H^1, whence as (-1)^i on H^i. Hey, I can think of one: complex conjugation! Call that c.
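Concretely, the standard normalization for complements of complex hyperplane arrangements (Orlik–Solomon; assuming I have the signs right) is, with n = |V(Γ)|:

```latex
\sum_i \dim H^i(U(\mathbf{C}))\, t^i \;=\; (-t)^n \, \chi_\Gamma(-1/t).
```

Sanity check on a single edge between two vertices: χ_Γ(t) = t^2 − t, and U(C) ≅ C × C^×, whose Poincaré polynomial is 1 + t; the right side gives t^2(t^{-2} + t^{-1}) = 1 + t as well.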

Then Lefschetz says χ_Γ(-1) should be the number of fixed points of c, perhaps counted with some index. But careful: the fixed point locus of c isn’t a bunch of isolated points, as it would be for a generic diffeomorphism; it’s U(R), which has positive dimension! But that’s OK; in cases like this we can just replace cardinality with Euler characteristic. (This is the part that’s folkloric and sketchy.) So

χ(U(R)) = χ_Γ(-1)

at least up to sign. But U(R) is just a real hyperplane complement, which means all its components are contractible, so the Euler characteristic is just the number of components. What’s more: if (x_1, …, x_|Γ|) is a point of U(R), then x_i − x_j is nonzero for every edge ij; that means the sign of x_i − x_j is constant on each component of U(R). That sign amounts to an orientation of the edge! And this orientation is obviously acyclic. Furthermore, every acyclic orientation can evidently be realized by a point of U(R).

To sum up: acyclic orientations are in bijection with the connected components of U(R), which by Lefschetz are χ_Γ(-1) in number.
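Here's the sign-vector bookkeeping on a toy case (my own example, the path 0–1–2). Each component of U(R) is open, so it contains a point with distinct coordinates, and the permutations of (0, 1, 2) already realize every component's sign vector:

```python
from itertools import permutations

# Path graph 0-1-2: components of U(R) are determined by the sign vector
# (sign(x_0 - x_1), sign(x_1 - x_2)); enumerate those realized by permutations.
edges = [(0, 1), (1, 2)]

sign_vectors = {
    tuple(1 if p[u] > p[v] else -1 for u, v in edges)
    for p in permutations(range(3))
}

# chi_path(t) = t(t-1)^2, so |chi(-1)| = 4 components, matching the
# 4 orientations of the path's two edges (all of which are acyclic).
assert len(sign_vectors) == 4
```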


Your posts look like the content of this paper:

http://arxiv.org/abs/1601.00254v1 (Section 4)

though maybe that’s what you’ve been reading recently…


Indeed! No, total coincidence, I hadn’t seen that.

It seems you have re-discovered a result of Las Vergnas and Zaslavsky. A few references that come to mind are:

– Oriented matroids book by Björner, Las Vergnas, Sturmfels, White, Ziegler (Ch. 4, I think)

– Zaslavsky – Facing up to arrangements

– Greene, Zaslavsky – Whitney numbers, arrangements of hyperplanes, zonotopes, non-Radon partitions

Section 7 of the last paper explicitly discusses Stanley’s theorem for graphs. But the theory is now more general and applicable to oriented matroids etc.

Aha: this is in Zaslavsky’s thesis from 1974! And of course it’s even easier than I make it seem here; no Lefschetz involved, you just use the fact that Euler characteristic is additive, so you can compute χ(U(R)) by inclusion-exclusion, and it’s completely formal that you get χ_Γ(-1).
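Spelled out, the additivity computation goes like this (a sketch; χ_c is compactly supported Euler characteristic, which is additive and satisfies χ_c(R^d) = (−1)^d):

```latex
\chi_c(U(\mathbf{R}))
  = \sum_{S \subseteq E} (-1)^{|S|}\, \chi_c\!\Big(\bigcap_{e \in S} H_e\Big)
  = \sum_{S \subseteq E} (-1)^{|S|} (-1)^{c(S)}
  = \chi_\Gamma(-1),
```

where c(S) is the number of connected components of the subgraph (V, S), so that the intersection of the hyperplanes H_e for e in S is a copy of R^{c(S)}; the last equality is Whitney's expansion χ_Γ(t) = Σ_{S⊆E} (−1)^{|S|} t^{c(S)}. Since U(R) is an open n-manifold, χ_c = (−1)^n χ, which accounts for the sign ambiguity.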

Why is this folkloric or sketchy? Even if the fixed set of a self-map is positive-dimensional, the Lefschetz number should still be the Euler characteristic of the fixed set as long as you have “index 1” in the transverse direction — which should be guaranteed if it is locally modeled by complex conjugation on C^n. Is it just that this isn’t written down anywhere? (I’d suspect Bott-Tu might have it, they are great about proving various cohomological facts by explicitly writing down forms, e.g. the fact that wedge product is Poincaré dual to transverse intersection.)

“Sketchy” just means “I didn’t bother to check the index issues because I would never write any blog posts about math if I bothered to look stuff up.”

I believe that the fancy approach that you sketch is fully explained in this paper of Kisin and Lehrer:

The theorem of Stanley relating proper colorings and acyclic orientations and the theorem of Zaslavsky relating point-counts and regions of space are examples of reciprocity theorems. There is an upcoming book by Beck and Sanyal on this topic:

http://math.sfsu.edu/beck/crt.html

It’s curious that some combinatorial reciprocity theorems have distinct “motivic” origins: trace formulas in the case of Stanley/Zaslavsky reciprocity and Poincaré duality in the case of Ehrhart reciprocity. Are there any other fancy algebraic geometry theorems that lead to reciprocity theorems?

Given what Kisin and Lehrer write down carefully, the argument in the post is certainly a proof now!

Why are theorems of this kind called “reciprocity theorems”?

It’s not exactly reciprocity, but Adiprasito-Huh-Katz used hard Lefschetz and the Hodge-Riemann relations (I think that qualifies as fancy algebraic geometry) to prove log-concavity for the coefficients of characteristic polynomials of matroids; this includes the coefficients of chromatic polynomials. Check out:

http://arxiv.org/abs/1511.02888
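As a concrete toy instance (my own example, not from the comment), one can check log-concavity by hand for the chromatic polynomial of the 4-cycle, χ_{C4}(t) = (t−1)^4 + (t−1) = t^4 − 4t^3 + 6t^2 − 3t:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

t_minus_1 = [-1, 1]
p = [1]
for _ in range(4):
    p = poly_mul(p, t_minus_1)                       # (t-1)^4
chi_c4 = [a + b for a, b in zip(p, t_minus_1 + [0] * 3)]  # + (t-1)

# The coefficients alternate in sign; take absolute values, dropping the
# zero constant term.
a = [abs(c) for c in chi_c4 if c != 0]               # [3, 6, 4, 1]
assert all(a[i] ** 2 >= a[i - 1] * a[i + 1] for i in range(1, len(a) - 1))
```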

To JSE: I think reciprocity is Stanley’s name for the phenomenon. It usually involves a polynomial whose values at positive integers count one kind of combinatorial object and whose values at negative integers count quite another.

I suppose that both Stanley reciprocity and Ehrhart reciprocity involve inclusion/exclusion. There are also some duality results involving McMullen’s polytope algebra that have a similar flavor. Some of these results came out of Brion’s work on equivariant cohomology of toric varieties. So that might be another source for reciprocity.
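A minimal numerical illustration of Ehrhart reciprocity, L_P(−n) = (−1)^{dim P} L_{P°}(n), on a toy example of my own (the unit square, where L(n) = (n+1)^2 and the interior count is (n−1)^2):

```python
# P = [0,1]^2, so dim P = 2 and (-1)^{dim P} = 1.
def L(n):
    """Ehrhart polynomial of the unit square: lattice points in the n-th dilate."""
    return (n + 1) ** 2

for n in range(1, 10):
    # Interior lattice points of n*P: both coordinates strictly between 0 and n.
    interior = sum(1 for x in range(1, n) for y in range(1, n))
    assert L(-n) == interior == (n - 1) ** 2
```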

To Tom Church: I’m not sure if there’s any reciprocity hiding in AHK as we use algebras modelling the cohomology of a closed algebraic variety (the projectivization of a vector space blown up along the flats of a hyperplane arrangement). And we’re interested in the coefficients of the characteristic polynomial not its values. The analogous object modelling the cohomology of the hyperplane arrangement complement is the Orlik-Solomon algebra. Not to say that I don’t think AHK is fancy.

You might enjoy reading more about this, e.g., in Lecture 2 (and the other lectures) of Richard Stanley’s chapter “An Introduction to Hyperplane Arrangements” in the PCMI volume on Geometric Combinatorics. (This volume has several excellent articles.) The hyperplane arrangement you discuss above is usually called a “graphic hyperplane arrangement,” and one may show that the chromatic polynomial of a graph G equals the “characteristic polynomial” of the associated graphic hyperplane arrangement A_G. The point is that the chromatic polynomial evaluated at q counts proper q-colorings of G, and these are in bijection with the points of the complement of the graphic hyperplane arrangement when working over the finite field F_q. This sounds pretty similar to your perspective above, though it is étale-cohomology-free.
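To spell out that bijection on a toy case (my own example, the path 0–1–2, where χ(q) = q(q−1)^2):

```python
from itertools import product

# Proper q-colorings of the path 0-1-2 are exactly the F_q-points of the
# complement of the graphic arrangement x_0 = x_1, x_1 = x_2.
edges = [(0, 1), (1, 2)]

def point_count(q):
    """F_q-points of the complement: tuples with x_u != x_v for every edge."""
    return sum(
        all(x[u] != x[v] for u, v in edges)
        for x in product(range(q), repeat=3)
    )

for q in (2, 3, 5, 7):
    assert point_count(q) == q * (q - 1) ** 2
```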

Somehow it feels more like a “functional equation” than a “reciprocity law”!

To Eric Katz: sorry, I hadn’t seen the name of the person I was responding to — AHK probably isn’t news to you. ;)

Churchsplaining!

To Tom and JSE: :)


You might look at my paper “A combinatorial analysis of topological dissections” to see the valuation property of the Euler characteristic put to work to get general formulas along the lines of χ(-1). Any other valuation can be substituted for the Euler characteristic; hence, for instance, the number of lattice points, if your universal space has a finite number of them, e.g., if it’s an integral orthotope. It might also be of interest to see the paper of Richard Ehrenborg and Margaret A. Readdy, “On Valuations, the Characteristic Polynomial, and Complex Subspace Arrangements,” Advances in Mathematics 134 (1998), 32-42.