Tag Archives: combinatorics

Difference sets missing a Hamming sphere

I tend to think of the Croot-Lev-Pach method (as used, for instance, in the cap set problem) as having to do with n-tensors, where n is bigger than 2.  But you can actually also use it in the case of 2-tensors, i.e. matrices, to say things that (as far as I can see) are not totally trivial.

Write m_d for the number of squarefree monomials in x_1, .. x_n of degree at most d; that is,

m_d = 1 + {n \choose 1} + {n \choose 2} + \ldots + {n \choose d}.

Claim:  Let P be a polynomial of degree d in F_2[x_1, .. x_n] such that P(0) = 1.  Write S for the set of nonzero vectors x such that P(x) = 1.  Let A be a subset of F_2^n such that no two elements of A have difference lying in S.  Then |A| \leq 2m_{d/2}.

Proof:  Write M for the A x A matrix whose (a,b) entry is P(a-b).  By the Croot-Lev-Pach lemma, this matrix has rank at most 2m_{d/2}.  By hypothesis on A, the off-diagonal entries of M vanish while the diagonal entries are P(0) = 1; that is, M is the identity matrix, so its rank is |A|.  Thus |A| \leq 2m_{d/2}.
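Here's a quick numerical sanity check of the rank bound (my own sketch, not part of the proof): for any polynomial P of degree d on F_2^n, the matrix P(a-b), indexed by all of F_2^n, should have F_2-rank at most 2m_{d/2}.

```python
import random
from math import comb

def m(n, d):
    # number of squarefree monomials in n variables of degree at most d
    return sum(comb(n, i) for i in range(d + 1))

def eval_poly(mons, x):
    # P(x) over F_2: parity of the number of monomials (stored as bitmasks
    # of variables) all of whose variables equal 1 in x
    return sum(1 for mo in mons if mo & x == mo) % 2

def gf2_rank(rows):
    # Gaussian elimination over F_2 on rows stored as integer bitmasks
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

random.seed(1)
n, d = 4, 2
monomials = [mo for mo in range(1 << n) if bin(mo).count("1") <= d]
for _ in range(20):
    P = random.sample(monomials, k=random.randrange(1, len(monomials)))
    rows = []
    for a in range(1 << n):
        row = 0
        for b in range(1 << n):
            row |= eval_poly(P, a ^ b) << b   # a - b = a XOR b over F_2
        rows.append(row)
    assert gf2_rank(rows) <= 2 * m(n, d // 2)  # the Croot-Lev-Pach bound
```

Here 2m_{d/2} = 10 while the matrix is 16 x 16, so the bound really is cutting the rank down.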

Remark: I could have said “sum” instead of “difference” since we’re in F_2 but for larger finite fields you really want difference.

The most standard context in which you look for large subsets of F_2^n with restricted difference sets is that of error correcting codes, where you ask that no two distinct elements of A have difference with Hamming weight (that is, number of 1 entries) at most k.

It would be cool if the Croot-Lev-Pach lemma gave great new bounds on error-correcting codes, but I don’t think it’s to be.  You would need to find a polynomial P which vanishes on all nonzero vectors of weight larger than k, but which doesn’t vanish at 0. Moreover, you already know that the balls of size k/2 around the points of A are disjoint, which gives you the “volume bound”

|A| < 2^n / m_{k/2}.

I think that’ll be hard to beat.

If you just take a random polynomial P, the support of P will take up about half of F_2^n; so it’s not very surprising that a set whose difference misses that support has to be small!

Here’s something fun you can do, though.  Let s_i be the i-th symmetric function on x_1, … x_n.  Then

s_i(x) = {wt(x) \choose i}

where wt(x) denotes Hamming weight.  Recall also that the binomial coefficient

{k \choose 2^a}

is odd precisely when the a’th binary digit of k is 1.



Hence the product

\prod_{a=0}^{b-1} (1 + s_{2^a})

is a polynomial of degree 2^b - 1 which vanishes on x unless the last b binary digits of wt(x) are 0; that is, it vanishes unless wt(x) is a multiple of 2^b.  Thus we get:
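Both the parity fact and the vanishing claim (I read the polynomial in question as the product of (1 + s_{2^a}) over a < b) are easy to confirm numerically for small parameters:

```python
from math import comb

# {k choose 2^a} is odd exactly when the a-th binary digit of k is 1
for k in range(200):
    for a in range(6):
        assert comb(k, 2 ** a) % 2 == (k >> a) & 1

# hence the product of (1 + s_{2^a}) over a < b, evaluated on a vector
# of weight w, is 1 exactly when w is a multiple of 2^b
def product_poly(w, b):
    val = 1
    for a in range(b):
        val *= (1 + comb(w, 2 ** a)) % 2
    return val % 2

for b in range(1, 4):
    for w in range(100):
        assert product_poly(w, b) == (1 if w % 2 ** b == 0 else 0)
```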

Fact:  Let A be a subset of F_2^n such that the difference of two distinct elements of A never has weight a multiple of 2^b.  Then

|A| \leq 2m_{2^{b-1} - 1}.

Note that this is pretty close to sharp!  Because if we take A to be the set of vectors of  weight at most 2^{b-1} – 1, then A clearly has the desired property, and already that’s half as big as the upper bound above.  (What’s more, you can throw in all the vectors of weight 2^{b-1} whose first coordinate is 1; no two of these sum to something of weight 2^b.  The Erdös-Ko-Rado theorem says you can do no better with those weight 2^{b-1} vectors.)

Is there an easier way to prove this?

When b=1, this just says that a set with no differences of even Hamming weight has size at most 2; that’s clear, because two vectors whose Hamming weights have the same parity differ by a vector of even weight.  Even for b=2 this isn’t totally obvious to me.  The result says that a subset of F_2^n with no differences of weight divisible by 4 has size at most 2+2n.  On the other hand, you can get 2n by taking 0, all n weight-1 vectors, and all n-1 weight-2 vectors with first coordinate 1.  So what’s the real answer?  Is it 2n, or 2+2n, or something in between?
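Exhaustive search is hopeless in the interesting range, but it does settle the tiniest cases (my own quick experiment).  Note that n = 4 is degenerate for b = 2: the only forbidden difference weight divisible by 4 is 4 itself, i.e. the all-ones vector.

```python
def max_size(n, b):
    # largest subset of F_2^n with no two distinct elements differing by
    # a vector of weight a nonzero multiple of 2^b (brute force over all
    # subsets, so only feasible for very small n)
    N = 1 << n
    forbidden = [x for x in range(1, N) if bin(x).count("1") % (2 ** b) == 0]
    # bad[x] = bitmask of elements y forming a forbidden pair with x
    bad = [sum(1 << (x ^ f) for f in forbidden) for x in range(N)]
    best = 0
    for mask in range(1 << N):  # every subset of F_2^n, as a bitmask
        elts = [x for x in range(N) if (mask >> x) & 1]
        if all(mask & bad[x] == 0 for x in elts):
            best = max(best, len(elts))
    return best

print(max_size(3, 1), max_size(4, 2))  # → 2 8
```

So for n = 4 the b = 2 maximum is 8, safely below the upper bound 2+2n = 10; but since only the all-ones difference is forbidden at n = 4, this says little about large n.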

Write H(n,k) for the size of the largest subset of F_2^n having no two vectors differing by a vector of Hamming weight exactly k.  Then if 2^b is the largest power of 2 less than n, we have shown above that

m_{2^{b-1} - 1 } \leq H(n,2^b) \leq 2m_{2^{b-1} - 1}.

On the other hand, if k is odd, then H(n,k) = 2^{n-1}; we can just take A to be the set of all even-weight vectors!  So perhaps H(n,k) actually depends on k in some modestly interesting 2-adic way.

The sharpness argument above can be used to show that H(4m,2m) is at least

2(1 + 4m + {4m \choose 2} + \ldots + {4m \choose m-1} + {4m-1 \choose m-1}). (*)

I was talking to Nigel Boston about this — he did some computations which make it look like H(4m,2m) is exactly equal to (*) for m=1,2,3.  Could that be true for general m?
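For what it's worth, here are the values of (*) for small m, obtained just by evaluating the displayed formula (I don't have Boston's computations to compare against):

```python
from math import comb

def star(m):
    # the conjectural value (*) for H(4m, 2m):
    # 2 * (sum of {4m choose i} for i < m, plus {4m-1 choose m-1})
    return 2 * (sum(comb(4 * m, i) for i in range(m)) + comb(4 * m - 1, m - 1))

print([star(m) for m in (1, 2, 3)])  # → [4, 32, 268]
```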

(You could also ask about sets with no difference of weight a multiple of k; not sure which is the more interesting question…)

Update:  Gil Kalai points out to me that much of this is very close to and indeed in some parts a special case of the Frankl-Wilson theorem…  I will investigate further and report back!


Sumsets and sumsets of subsets

Say that ten times fast!
Now that you’re done, here’s an interesting fact.  I have been turning over this argument of Croot-Lev-Pach and mine and Gijswijt’s for a couple of weeks now, trying to understand what it’s really doing that leads to control of subsets of F_q^n without arithmetic progressions.

It turns out that there’s a nice refinement of what we prove, which somehow feels like it’s using more of the full strength of the Croot-Lev-Pach lemma.  The critical input is an old result of Roy Meshulam on linear spaces of low-rank matrices.

So here’s a statement.  Write M(q,n) for the CLP/EG upper bound on subsets of F_q^n with no three-term AP.

Theorem:  Every subset S of F_q^n contains a subset S’ of size at most M(q,n) such that S’+S = S+S.

(Exercise:   Show that this immediately implies the bound on subsets with no three-term AP.)
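Here's a little experiment of mine (not from the linked note) illustrating the statement: greedily thin a random subset S of F_3^n down to an S’ with S’+S = S+S.  The greedy S’ is not claimed to meet the bound M(q,n); it just shows such subsets tend to be much smaller than S.

```python
import random
from itertools import product

random.seed(0)
n = 3
ambient = list(product(range(3), repeat=n))

def sumset(X, Y):
    return {tuple((x[i] + y[i]) % 3 for i in range(n)) for x in X for y in Y}

S = set(random.sample(ambient, 15))
target = sumset(S, S)
Sp = set(S)
for s in sorted(S):                    # try deleting elements one at a time
    if sumset(Sp - {s}, S) == target:  # keep the deletion if S'+S = S+S survives
        Sp.remove(s)

assert sumset(Sp, S) == target and Sp <= S
print(len(S), len(Sp))
```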

I find this result very funny, so much so that I didn’t believe it at first, but I think the proof is right..!  Well, see for yourself, here it is.

Two natural questions:  is the bound on S’ sharp?  And is there any analogue of this phenomenon for the integers?

Update:  Of course right after I post this I realize that maybe this can be said more simply, without the invocation of Meshulam’s result (though I really like that result!)  Namely:  it’s equivalent to say that if |S| > M(q,n), you can remove ONE element from S and get an S’ with S’+S = S+S.  Why is this so?  Well, suppose not.  Choose some s_1.  We know it can’t be removed, so there must be some s_1 + s’_1 which is not expressible as a sum in S+S any other way.  The same applies to s_2, s_3, and so on.  So you end up with a set U of “unique sums” s_i + s’_i.  Now you can apply the CLP/EG argument directly to this situation:  let P be a polynomial vanishing off U; this makes the matrix P(s+t) on S x S have a single 1 in each row and each column, which is just as good as diagonal from the point of view of the argument in EG, so you can conclude just as there that |S| <= M(q,n).  Does that make sense?  This is the same spirit in which the polynomial method is used by Blasiak-Church-Cohn-Grochow-Umans to control multicolored sum-free sets, and the multicolored sum-free set of size (2^(4/3))^n constructed by Alon, Shpilka, and Umans also gives a lower bound for the problem under discussion here.

I still like the one-step argument in the linked .pdf better!  But I have to concede that you can prove this fact without doing any fancy linear algebra.

Update to Update (Jun 9):  Actually, I’m not so sure this argument above actually proves the theorem in the linked note.  So maybe you do need to (get to!) use this Meshulam paper after all!  What do you guys think?

Update:  The bound is sharp, at least over F_2!  I just saw this paper of Robert Kleinberg, which constructs a multicolored sum-free set in F_2^n of size just under M(2,n)!  That is, he gives you subsets S and T, both of size just under M(2,n), such that S’+T union S+T’ can’t be all of S+T if S’ and T’ are smaller than (1/2)|S| and (1/2)|T|, if I worked this out right.

The construction, which is based on one from 2014 by Fu and Kleinberg, actually uses a large subset of a cyclic group Z/MZ, where M is about M(2,n), and turns this into a multicolored sum-free set in (F_2)^n of (about) the same size.  So the difference between the upper bound and the lower bound in the (F_2)^n case is now roughly the same as the difference between the (trivial) upper bound and the lower bound in the case of no-three-term-AP sets in the interval.  Naturally you start to wonder:  a) Does the Fu-Kleinberg construction really have to do with characteristic 2 or is it general?  (I haven’t read it yet.)  b) Can similar ideas be used to construct large 3-AP-free subsets of (F_q)^n?  (Surely this has already been tried?) c) Is there a way to marry Meshulam’s Fourier-analytic argument with the polynomial method to get upper bounds of order (1/n)M(q,n)?  I wouldn’t have thought this worthwhile until I saw this Kleinberg paper, which makes me think maybe it’s not impossible to imagine we’re getting closer to the actual truth.



Bounds for cap sets

Briefly:  it seems to me that the idea of the Croot-Lev-Pach paper I posted about yesterday can indeed be used to give a new bound on the size of subsets of F_3^n with no three-term arithmetic progression! Such a set has size at most (2.756)^n. (There’s actually a closed form for the constant, I think, but I haven’t written it down yet.)

Here’s the preprint. It’s very short. I’ll post this to the arXiv in a day or two, assuming I (or you) don’t find anything wrong with it, so comment if you have comments! Note: I’ve removed the link, since the official version of this result is now the joint paper by me and Gijswijt, and the old version shouldn’t be cited.

Update:  Busy few days of administrative stuff and travel, sorry for not having updated the preprint yet, will try to finish it today.  One note, already observed below in the comments:  you get a similar bound for subsets of (F_q)^n free of solutions to (ax+by+cz=0) for any (a,b,c) with a+b+c=0; the cap set case is q=3, (a,b,c) = (1,1,1).

Update 2:  Dion Gijswijt and I will be submitting this result as a joint paper, which will amalgamate the presentations of our essentially identical arguments.  Dion carried out his work independently of mine at around the same time, and the idea should be credited to both of us.  Our joint paper is available on the arXiv.



Croot-Lev-Pach on AP-free sets in (Z/4Z)^n

As you know I love the affine cap problem:  how big can a subset of (Z/3Z)^n be that contains no three elements summing to 0 — or, in other words, that contains no 3-term arithmetic progression?  The best upper bounds, due to Bateman and Katz, are of order 3^n / n^(1+epsilon).  And I think it’s fair to say that all progress on this problem, since Meshulam’s initial results, has come from Fourier-analytic arguments.

So I’m charmed by this paper of Ernie Croot, Vsevolod Lev, and Peter Pach which proves a much stronger result for A = (Z/4Z)^n:  a subset with no 3-term arithmetic progression has size at most c^n for c strictly less than 4.  Better still (for an algebraic geometer) the argument has no harmonic analysis at all, but proceeds via the polynomial method!

This is surprising for two reasons.  First, it’s hard to make the polynomial method work well for rings, like Z/4Z, that aren’t fields; extending our knowledge about additive combinatorics to such settings is a long-standing interest of mine.  Second, the polynomial method over finite fields usually works in the “fixed dimension, large field” regime; problems like affine cap, where the base ring is fixed and the dimension is growing, have so far been mostly untouched.

As for the first issue, here’s the deal.  This looks like a problem over Z/4Z but is really a problem over F_2, because the condition for being a 3-term AP

a – 2b + c = 0

has a 2 in it.  In other words:  the two outer terms have to lie in the same coset of 2A, and the middle term is only determined up to 2A.

 So CLP recast the problem as follows.  Let S be a large subset of A with no 3-term AP.   Let V be 2A, which is an n-dimensional vector space over F_2.  For each v in V, there’s a coset of V consisting of the solutions to 2a = v, and we can let S_v be the intersection of S with this coset.

We want to make this a problem about V, not about A.  So write T_v for a translate of S_v by some element of the coset, so T_v now sits in V.  Which element?  Doesn’t matter!

We can now write the “no 3-term AP” condition strictly in terms of these subsets of V.  Write (T_v – T_v)^* for the set of differences between distinct elements of T_v.  Write U for the set of v in V such that T_v is nonempty.  Then the union over all v in U of

(T_v – T_v)^* + v

is disjoint from U.

I leave it as an exercise to check the equivalence.
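The equivalence is also easy to test by brute force in a tiny case; here is my own verification sketch over A = (Z/4Z)^2, comparing "no 3-term AP" directly with the condition on the translated slices T_v.

```python
import random
from itertools import product

random.seed(0)
n = 2
A = list(product(range(4), repeat=n))

def has_ap(S):
    # a - 2b + c = 0 with a != c, i.e. a 3-term AP in (Z/4Z)^n
    return any(all((a[i] - 2 * b[i] + c[i]) % 4 == 0 for i in range(n))
               for a in S for b in S for c in S if a != c)

def condition_violated(S):
    # slice S along the cosets of V = 2A and translate each slice into V
    slices = {}
    for s in S:
        v = tuple((2 * x) % 4 for x in s)  # v = 2s in V
        slices.setdefault(v, []).append(s)
    U = set(slices)
    T = {}
    for v, elts in slices.items():
        e = elts[0]  # any basepoint in the coset works
        T[v] = [tuple((x - y) % 4 for x, y in zip(s, e)) for s in elts]
    # violation: some (t1 - t2) + v, with t1 != t2 in T_v, lands back in U
    for v in U:
        for t1 in T[v]:
            for t2 in T[v]:
                if t1 != t2:
                    w = tuple((a + b + c) % 4 for a, b, c in zip(v, t1, t2))
                    if w in U:
                        return True
    return False

for _ in range(200):
    S = random.sample(A, random.randrange(1, len(A) + 1))
    assert has_ap(S) == condition_violated(S)
```

(In V every element has order 2, so t1 - t2 = t1 + t2, which is why the code adds.)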

Now we have a combinatorial question about vector spaces over F_2; we want to show that, under the condition above, the sum of |T_v| over all v in U can’t be too large.

This is where the polynomial method comes in!  CLP show that (over any field, not just F_2), a polynomial of low degree vanishing on (T_v – T_v)^* has to vanish at 0 as well; this is Lemma 1 in their paper.  So write down a polynomial P vanishing on V – U; by dimension considerations we can choose one which doesn’t vanish on all of V.  (This uses the fact that the squarefree monomials of degree up to d are linearly independent functions on F_2^n.)  If U is big, then V – U is small, so vanishing there imposes few linear conditions, and we can choose P to have lowish degree.

Since P vanishes on V-U, P has to vanish on (T_v – T_v)^* + v for all v in U.  Since P has low degree, it has to vanish on v too, for all v in U.  But then P vanishes on U as well as on V-U, hence everywhere, contrary to our assumption.

The magic of the paper is in Lemma 1, in my view, which is where you really see the polynomial method applied in this unusual fixed-field-large-dimension regime.  Let me say a vague word about how it works.  (The actual proof is less than a page, by the way, so I’m not hiding much!)  Let P be your polynomial and d its degree.  You send your vector space into a subvariety of a much larger vector space W via a degree-d Veronese embedding F_d. In fact you do this twice, writing

V x V -> W x W.

Now if P is your polynomial of degree d, you can think of P(v_1 – v_2) as a bilinear form <,> on W x W.  Suppose S is a subset of V such that P(s_1 – s_2) vanishes for all distinct s_1, s_2 in S.   That means

<F_d(s_1), F_d(s_2)> = 0

for all distinct s_1, s_2 in S.  On the other hand,

<F_d(s_1), F_d(s_1)>

doesn’t depend on s_1; it just takes the value P(0).  So if P(0) is not equal to 0, you have |S| vectors of nonzero norm which are mutually orthogonal under this bilinear form, and so there can be at most dim W of these, and that’s the bound on |S| you need.

This is very slick and I hope the idea is more generally applicable!





New bounds on curve tangencies and orthogonalities (with Solymosi and Zahl)

New paper up on the arXiv, with Jozsef Solymosi and Josh Zahl.  Suppose you have n plane curves of bounded degree.  There ought to be about n^2 intersections between them.  But there are intersections and there are intersections!  Generically, an intersection between two curves is a node.  But maybe the curves are mutually tangent at a point — that’s a more intense kind of singularity called a tacnode.  You might think, well, OK, a tacnode is just some singularity of bounded multiplicity, so maybe there could still be a constant multiple of n^2 mutual tangencies.

No!  In fact, we show there are O(n^{3/2}).  (Megyesi and Szabo had previously given an upper bound of the form n^{2-delta} in the case where the curves are all conics.)

Is n^{3/2} best possible?  Good question.  The best known lower bound is given by a configuration of n circles with about n^{4/3} mutual tangencies.

Here’s the main idea.  If a curve C starts life in A^2, you can lift it to a curve C’ in A^3 by sending each point (x,y) to (x,y,z) where z is the slope of C at (x,y); of course, if multiple branches of the curve go through (x,y), you are going to have multiple points in C’ over (x,y).  So C’ is isomorphic to C at the smooth points of C, but something’s happening at the singularities of C; basically, you’ve blown up!  And when you blow up a tacnode, you get a regular node — the two branches of C through (x,y) have the same slope there, so they remain in contact even in C’.
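A minimal symbolic example (a toy case of my own, not from the paper): the parabolas y = x^2 and y = -x^2 are mutually tangent at the origin, a tacnode; their lifts (x, x^2, 2x) and (x, -x^2, -2x) to A^3 still meet only over the origin, but now with distinct tangent directions there, i.e. an ordinary node.

```python
from fractions import Fraction

def lift_up(t):    # lift of y = x^2, recording the slope 2x
    return (t, t * t, 2 * t)

def lift_down(t):  # lift of y = -x^2, recording the slope -2x
    return (t, -t * t, -2 * t)

# sample both lifted curves over rational parameter values
ts = [Fraction(k, 10) for k in range(-30, 31)]
meet = [t for t in ts if lift_up(t) == lift_down(t)]
assert meet == [0]   # the lifted curves meet only over the tacnode

# tangent directions of the lifted branches at the origin are distinct:
# (1, 0, 2) for the first branch, (1, 0, -2) for the second
assert (1, 0, 2) != (1, 0, -2)
```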

Now you have a bunch of bounded degree curves in A^3 which have an unexpectedly large amount of intersection; at this point you’re right in the mainstream of incidence geometry, where incidences between points and curves in 3-space are exactly the kind of thing people are now pretty good at bounding.  And bound them we do.

Interesting to let one’s mind wander over this stuff.  Say you have n curves of bounded degree.  So yes, there are roughly n^2 intersection points — generically, these will be distinct nodes, but you can ask how non-generic can the intersection be?  You have a partition of const*n^2 coming from the multiplicity of intersection points, and you can ask what that partition is allowed to look like.  For instance, how much of the “mass” can come  from points where the multiplicity of intersection is at least r?  Things like that.



Counting acyclic orientations with topology

Still thinking about chromatic polynomials.   Recall: if Γ is a graph, the chromatic polynomial χ_Γ(n) is the number of ways to color the vertices of Γ with n colors so that no two adjacent vertices have the same color.

Fact:  χ_Γ(-1) is, up to sign, the number of acyclic orientations of Γ:  that number is (-1)^{|Γ|} χ_Γ(-1).

This is a theorem of Richard Stanley from 1973.
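Here's a brute-force check of the signed form of Stanley's fact on two tiny graphs, interpolating the chromatic polynomial from its values at q = 0, …, |V| (a verification sketch, nothing more):

```python
from fractions import Fraction
from itertools import product

def chromatic_at(vertices, edges, q):
    # count proper colorings with q colors by brute force
    return sum(all(col[u] != col[v] for u, v in edges)
               for col in product(range(q), repeat=len(vertices)))

def chromatic_poly_value(vertices, edges, x):
    # Lagrange-interpolate the degree-|V| polynomial through q = 0..|V|
    n = len(vertices)
    pts = list(range(n + 1))
    total = Fraction(0)
    for i in pts:
        term = Fraction(chromatic_at(vertices, edges, i))
        for j in pts:
            if j != i:
                term *= Fraction(x - j, i - j)
        total += term
    return total

def acyclic_orientations(vertices, edges):
    # brute force: orient each edge both ways, keep the acyclic ones
    count = 0
    for signs in product([0, 1], repeat=len(edges)):
        arcs = [(e if s == 0 else (e[1], e[0])) for e, s in zip(edges, signs)]
        verts, ok, cur = set(vertices), True, list(arcs)
        while verts:  # acyclic iff we can repeatedly peel off sinks
            sinks = [v for v in verts if not any(a == v for a, b in cur)]
            if not sinks:
                ok = False
                break
            verts -= set(sinks)
            cur = [(a, b) for a, b in cur if b not in sinks]
        count += ok
    return count

for V, E in [([0, 1, 2], [(0, 1), (1, 2), (0, 2)]),              # triangle
             ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])]:  # 4-cycle
    assert acyclic_orientations(V, E) == (-1) ** len(V) * chromatic_poly_value(V, E, -1)
```

For the triangle, χ(q) = q(q-1)(q-2), so χ(-1) = -6, and indeed 6 of the 8 orientations are acyclic.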

Here’s a sketch of a weird proof of that fact, which I think can be made into an actual weird proof.  Let U be the hyperplane complement

\mathbf{A}^{|\Gamma|} - \bigcup_{ij \in e(\Gamma)} \{z_i = z_j\}

Note that |U(F_q)| is just the number of colorings of Γ by elements of F_q; that is,  χ_Γ(q).  More importantly, the Poincare polynomial of the manifold U(C) is (up to powers of -1 and t) χ_Γ(-1/t).  The reason |U(F_q)| is  χ_Γ(q) is that Frobenius acts on H^i(U) by q^{-i}.  (OK, I switched to etale cohomology but for hyperplane complements everything’s fine.)  So what should  χ_Γ(-1) mean?  Well, the Lefschetz trace formula suggests you look for an operator on U(C) which acts as -1 on the H^1, whence as (-1)^i on the H^i.  Hey, I can think of one — complex conjugation!  Call that c.

Then Lefschetz says χ_Γ(-1) should be the number of fixed points of c, perhaps counted with some index.  But careful — the fixed point locus of c isn’t a bunch of isolated points, as it would be for a generic diffeo; it’s U(R), which has positive dimension!  But that’s OK; in cases like this we can just replace cardinality with Euler characteristic.  (This is the part that’s folkloric and sketchy.)  So

χ(U(R)) = χ_Γ(-1)

at least up to sign.  But U(R) is just a real hyperplane complement, which means all its components are contractible, so the Euler characteristic is just the number of components.  What’s more:  if (x_1, … x_|Γ|) is a point of U(R), then x_i – x_j is nonzero for every edge ij; that means that the sign of x_i – x_j is constant on every component of U(R).  That sign is equivalent to an orientation of the edge!  And this orientation is obviously acyclic.  Furthermore, every acyclic orientation can evidently be realized by a point of U(R).

To sum up:  acyclic orientations are in bijection with the connected components of U(R), which by Lefschetz are χ_Γ(-1) in number.





How many rational distances can there be between N points in the plane?

Terry has a nice post up about the Erdös-Ulam problem, which was unfamiliar to me.  Here’s the problem:

Let S be a subset of R^2 such that the distance between any two points in S is a rational number.  Can we conclude that S is not topologically dense?

S doesn’t have to be finite; one could have S be the set of rational points on a line, for instance.  But this appears to be almost the only screwy case.  One can ask, more ambitiously:

Is it the case that there exists a curve X of degree <= 2 containing all but 4 points of S?

Terry explains in his post how to show something like this conditional on the Bombieri-Lang conjecture.  The idea:  lay down 4 points in general position.  Then the condition that the 5th point has rational distances from x1, x2, x3, and x4 means that the point lifts to a rational point on a certain (Z/2Z)^4-cover Y of P^2 depending on x1, x2, x3, x4.  (It’s the one obtained by adjoining the 4 distances, each of which is a square root of a rational function.)

With some work you can show Y has general type, so under Lang its rational points are supported on a union of curves.  Then you use a result of Solymosi and de Zeeuw to show that each curve can only have finitely many points of S if it’s not a line or a circle.  (Same argument, except that instead of covers of P^2 you have covers of the curve, whose genus goes up and then you use Faltings.)

It already seems hard to turn this approach into a proof.  There are few algebraic surfaces for which we can prove Lang’s conjecture.  But why let that stop us from asking further questions?

Question:  Let S be a set of N points in R^2 such that no M are contained in any line or circle.  What is the maximal number of rational distances among the ~N^2 distances between points of S?

The Erdös-Ulam problem suggests the answer is smaller than N^2.  But surely it’s much smaller, right?  You can get on the order of NM rational distances just by taking S to be the union of N/M lines, each with M rational points.  Can you do better?



The existence of designs

The big news in combinatorics is this new preprint by Peter Keevash, which proves the existence of Steiner systems, or more generally combinatorial designs, for essentially every system of parameters where the existence of such a design isn’t ruled out on divisibility grounds.  Remarkable!

I’m not going to say anything about this paper except to point out that it has even more in it than is contained in the top-billed theorem; the paper rests on the probabilistic method, which in this case means, more or less, that Keevash shows that you can choose a “partial combinatorial design” in an essentially random way, and with very high probability it will still be “close enough” that by very careful modifications (or, as Keevash says, “various applications of the nibble” — I love the names combinatorists give their techniques) you can get all the way to the desired combinatorial design.

This kind of argument is very robust!  For instance, Keevash gets the following result, which in a way I find just as handsome as the result on designs.  Take a random graph on n vertices — that is, each edge is present with probability 1/2, all edges independent.  Does that graph have a decomposition into disjoint triangles?  Well, probably not, right?  Because a union of triangles has to have even degree at each vertex, while the random graph is going to have n/2 of its vertices with odd degree. (This is the kind of divisibility obstruction I mentioned in the first paragraph.)  In fact, this divisibility argument shows that if the graph can be decomposed as a union of triangles with M extra edges, M has to be at least n/4 with high probability, since that’s how many edges you would need just to dispose of the odd-degree vertices.  And what Keevash’s theorem shows is there really is (with high probability) a union of disjoint triangles that leaves only (1+o(1))(n/4) edges of the random graph uncovered!
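The divisibility obstruction is easy to watch in action.  Below, greedy triangle removal (a crude stand-in for the nibble, purely illustrative) can never beat the parity bound of (number of odd-degree vertices)/2 leftover edges, since stripping a triangle lowers three degrees by 2 and so preserves every degree's parity.

```python
import random
from itertools import combinations

random.seed(0)
n = 30
edges = {e for e in combinations(range(n), 2) if random.random() < 0.5}

def degrees(E):
    d = dict.fromkeys(range(n), 0)
    for u, v in E:
        d[u] += 1
        d[v] += 1
    return d

odd = sum(1 for deg in degrees(edges).values() if deg % 2 == 1)

# greedily strip edge-disjoint triangles until none remain
E = set(edges)
changed = True
while changed:
    changed = False
    for a, b, c in combinations(range(n), 3):
        tri = {(a, b), (a, c), (b, c)}
        if tri <= E:
            E -= tri
            changed = True

# parities are preserved, and a graph with k odd-degree vertices
# has at least k/2 edges, so the leftover can't beat the parity bound
assert len(E) >= odd / 2
print(len(edges), odd, len(E))
```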

More details elsewhere from Vuhavan and Gil Kalai.


Random simplicial complexes

This is a post about Matt Kahle’s cool paper “Sharp vanishing thresholds for cohomology of random flag complexes,” which has just been accepted in the Annals.

The simplest way to make a random graph is to start with n vertices and then, for each pair (i,j) independently, put an edge between vertices i and j with probability p.  That’s called the Erdös-Rényi graph G(n,p), after the two people who first really dug into its properties.  What’s famously true about Erdös-Rényi graphs is that there’s a sharp threshold for connectedness.  Imagine n being some fixed large number and p varying from 0 to 1 along a slider.  When p is very small relative to n, G(n,p) is very likely to be disconnected; in fact, if

p = (0.9999) \frac{\log n}{n}

there is very likely to be an isolated vertex, which makes G(n,p) disconnected all by itself.

On the other hand, if

p = (1.0001) \frac{\log n}{n}

then G(n,p) is almost surely connected!  In other words, the probability of connectedness “snaps” from 0 to 1 as you cross the barrier p = (log n)/n.  Of course, there are lots of other interesting questions you can ask — what exactly happens very near the “phase transition”?  For p < (log n)/n, what do the components look like?  (Answer:  for some range of p there is, with probability 1, a single “giant component” much larger than all others.  Right at the critical point p = 1/n, the largest component has size around n^{2/3}.)
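A quick simulation of the snap (my own illustration; the constants 0.2 and 3.0 below are convenient stand-ins for 0.9999 and 1.0001, which would need astronomically large n to separate):

```python
import random
from itertools import combinations
from math import log

random.seed(0)

def connected_gnp(n, p):
    # sample G(n,p) and test connectedness by depth-first search
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

n, trials = 300, 20
low, high = 0.2 * log(n) / n, 3.0 * log(n) / n
below = sum(connected_gnp(n, low) for _ in range(trials))
above = sum(connected_gnp(n, high) for _ in range(trials))
print(below, above)  # connectivity goes from (almost) never to (almost) always
```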

I think it’s safe to say that the Erdös-Rényi graph is the single most-studied object in probabilistic combinatorics.

But Kahle asked a very interesting question about it that was completely new to me.  Namely:  what if you consider the flag complex X(n,p), a simplicial complex whose k-simplices are precisely the (k+1)-cliques in G(n,p)?  X(n,p) is connected precisely when G(n,p) is, so there’s nothing new to say from that point of view.  But, unlike the graph, the complex has lots of interesting higher homology groups!  The connectedness threshold says that dim H_0(X(n,p)) is 1 above some sharp threshold and larger below it.  What Kahle proves is that a similar threshold exists for all the homology.  Namely, for each k there’s a range (bounded approximately by n^{-1/k} and ((log n)/n)^{1/(k+1)}) such that H_k(X(n,p)) vanishes when p is outside the range, but not when p is inside the range!  So there are two phase transitions; first, H_k appears, then it disappears.  (If I understand correctly, there’s a narrow window where two consecutive Betti numbers are nonzero, but most of the time there’s only one nonzero Betti number.)  The original post includes a graph showing the appearance and disappearance of the Betti numbers in different ranges of p.

This kind of “higher Erdös-Rényi theorem” is, to me, quite dramatic and unexpected.  (One consequence that I like a lot; if you condition on the complex having dimension d, i.e. d being the size of the largest clique in G(n,p), then with probability 1 the homology of the complex is supported in middle degree, just as you might want!)  And there’s other stuff there too — like a threshold for the fundamental group of X(n,p) to have property T.

For yet more about this area, see Kahle’s recent survey on the topology of random simplicial complexes.  The probability that a random graph has a spectral gap, the distribution of Betti numbers of X(n,p) in the regime where they’re nonzero, the behavior of torsion, etc., etc……


“Kakeya sets over non-archimedean local rings,” by Dummit and Hablicsek

A new paper posted on the arXiv this week by UW grad students Evan Dummit and Márton Hablicsek answers a question left open in a paper of mine with Richard Oberlin and Terry Tao.  Let me explain why I was interested in this question and why I like Evan and Marci’s answer so much!

Recall:  a Kakeya set in an n-dimensional vector space over a field k is a set containing a line (or, in the case k = R, a unit line segment) in every direction.  The “Kakeya problem,” phrased loosely, is to prove that Kakeya sets cannot be too small.

But what does “small” mean?  You might want it to mean “measure 0” but for the small but important fact that in this interpretation the problem has a negative answer:  as Besicovitch discovered in 1919, there are Kakeya sets in R^2 with measure 0!  So Kakeya’s conjecture concerns a stronger notion of “small”  — he conjectures that a Kakeya set in R^n cannot have Hausdorff or Minkowski dimension strictly smaller than n.

(At this point, if you haven’t thought about the Kakeya conjecture before, you might want to read Terry’s long expository post about the Kakeya conjecture and Dvir’s theorem; I cannot do it any better here.)

The big recent news in this area, of course, is Dvir’s theorem that that the Kakeya conjecture is true when k is a finite field.

Of course one hopes that Dvir’s argument will give some ideas for an attack on the original problem in R^n.  And that hasn’t happened yet; though the “polynomial method,” as the main idea of Dvir’s theorem is now called, has found lots of applications to other problems in real combinatorial geometry (e.g. Guth and Katz’s proof of the joints conjecture.)

Why not Kakeya?  Well, here’s one clue.  Dvir actually proves more than the Kakeya conjecture!  He proves that a Kakeya set in F_q^n has positive measure.

(Note:  F_q^n is a finite set, so of course any nonempty subset has positive measure; so “positive measure” here is shorthand for “there’s a lower bound for the measure which is bounded away from 0 as q grows with n fixed.”)
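For contrast, it's worth seeing how much room there is below measure 1.  The standard tangent-lines construction (due, I believe, to Mockenhaupt and Tao) gives a Kakeya set in F_q^2 of measure about 1/2: take the union of the tangent lines y = mx - m^2/4 to the parabola y = x^2, plus one vertical line.  A quick check for q = 7:

```python
q = 7
inv4 = pow(4, -1, q)  # 4^{-1} mod q; exists since q is odd

# union of the tangent lines y = m*x - m^2/4 to the parabola y = x^2,
# plus one vertical line to cover the vertical direction
K = {(x, (m * x - m * m * inv4) % q) for m in range(q) for x in range(q)}
K |= {(0, y) for y in range(q)}

# Kakeya property: a full line in every direction
for m in range(q):  # directions (1, m)
    assert any(all((x, (m * x + b) % q) in K for x in range(q))
               for b in range(q))
assert all((0, y) in K for y in range(q))  # the vertical direction

print(len(K), q * q)  # 31 points out of 49: about half the plane
```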

What this tells you is that R really is different from F_q with respect to this problem; if Dvir’s proof “worked” over R, it would prove that a Kakeya set in R^n had positive measure, which is false.

So what’s the difference between R and F_q?  In my view, it’s that R has multiple scales, while F_q only has one.  Two elements in F_q are either the same or distinct, but there is nothing else going on metrically, while distinct real lines can be very close together or very far apart.  The interaction between distances at different scales is your constant companion when working on these problems in the real setting; so maybe it’s not so shocking that a one-scale field like F_q is not a perfect model for the phenomena we’re trying to study.

Which leads us to the ring F_q[[t]] — the “non-archimedean local ring” which Dummit and Hablicsek write about.  This ring is somehow “in between” finite fields and real numbers.  On the one hand, it is “profinite,” which is to say it is approximated by a sequence of larger and larger finite rings F_q[[t]]/t^k.  On the other hand, it has infinitely many scales, like R.  From the point of view of Kakeya sets, is it more like a finite field, or more like the real numbers?  In particular, does it have Kakeya sets of measure 0, making it potentially a good model for the real Kakeya problem?

This is the question Richard, Terry, and I asked, and Evan and Marci show that the answer is yes; they construct explicitly a Kakeya set in F_q[[t]]^2 with measure 0.

Now when we asked this question in our paper, I thought maybe you could do this by imitating Besicovitch’s argument in a straightforward way.  I did not succeed in doing this.  Evan and Marci tried too, and they told me that this just plain doesn’t work.  The construction they came up with is (at least as far as I can see) completely different from anything that makes sense over R.  And the way they prove measure 0 is extremely charming; they define a Markov process for which the complement of their Kakeya set is the set of points that eventually hit 0, and then show by standard methods that their Markov process goes to 0 with probability 1!

Of course you ask:  does their Kakeya set have Minkowski dimension 2?  Yep — and indeed, they prove that any Kakeya set in F_q[[t]]^2 has Minkowski dimension 2, thus proving the Kakeya conjecture in this setting, up to the distinction between Hausdorff and Minkowski dimension.  (Experts should feel free to weigh in and tell me how much we should worry about this distinction.)  Note that dimension 2 is special:  the Kakeya conjecture in R^2 is known as well.  For every n > 2 we’re in the dark, over F_q[[t]] as well as over R.

To sum up:  what Dummit and Hablicsek prove makes me feel like the Kakeya problem over  F_q[[t]] is, at least potentially, a pretty good model for the Kakeya problem over R!  Not that we know how to solve the Kakeya problem over F_q[[t]]…..
