Sumsets and sumsets of subsets

Say that ten times fast!
Now that you’re done, here’s an interesting fact.  I have been turning over this argument of Croot-Lev-Pach and mine and Gijswijt’s for a couple of weeks now, trying to understand what it’s really doing that leads to control of subsets of F_q^n without arithmetic progressions.

It turns out that there’s a nice refinement of what we prove, which somehow feels like it’s using more of the full strength of the Croot-Lev-Pach lemma.  The critical input is an old result of Roy Meshulam on linear spaces of low-rank matrices.

So here’s a statement.  Write M(q,n) for the CLP/EG upper bound on subsets of F_q^n with no three-term AP.
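Concretely, one common normalization of this bound is M(q,n) = 3·m_d, where m_d counts monomials in n variables with each exponent in [0, q-1] and total degree at most d = (q-1)n/3. A quick dynamic-programming sketch (function names are mine, not from the papers):

```python
def monomial_count(q, n, max_deg):
    """Count exponent vectors in [0, q-1]^n with total degree <= max_deg."""
    dp = [0] * (max_deg + 1)   # dp[d] = # of vectors over the coords seen so far with degree d
    dp[0] = 1
    for _ in range(n):
        new = [0] * (max_deg + 1)
        for d in range(max_deg + 1):
            if dp[d]:
                for e in range(min(q - 1, max_deg - d) + 1):
                    new[d + e] += dp[d]
        dp = new
    return sum(dp)

def M(q, n):
    """One normalization of the CLP/EG bound: 3 x (# monomials of degree <= (q-1)n/3)."""
    return 3 * monomial_count(q, n, (q - 1) * n // 3)
```

For q = 3 this grows like 2.755^n, the constant that recurs throughout the comments below.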

Theorem:  every subset S of F_q^n contains a subset S’ of size at most M(q,n) such that S’+S = S+S.

(Exercise:   Show that this immediately implies the bound on subsets with no three-term AP.)

I find this result very funny, so much so that I didn’t believe it at first, but I think the proof is right..!  Well, see for yourself, here it is.

Two natural questions:  is the bound on S’ sharp?  And is there any analogue of this phenomenon for the integers?

Update:  Of course right after I post this I realize that maybe this can be said more simply, without the invocation of Meshulam’s result (though I really like that result!)  Namely:  it’s equivalent to say that if |S| > M(q,n), you can remove ONE element from S and get an S’ with S’+S = S+S.  Why is this so?  Well, suppose not.  Choose some s_1.  We know it can’t be removed, so there must be some s_1 + s’_1 which is not expressible as a sum in S+S any other way.  The same applies to s_2, s_3, and so on.  So you end up with a set U of “unique sums” s_i + s’_i.  Now you can apply the CLP/EG argument directly to this situation; let P be a polynomial vanishing off U; this makes the matrix P(s+t) on S have a single 1 in each row and each column, and this is just as good as diagonal from the point of view of the argument in EG, so you can conclude just as there that |S| <= M(q,n).  Does that make sense?  This is the same spirit in which the polynomial method is used by Blasiak-Church-Cohn-Grochow-Umans to control multicolored sum-free sets, and the multicolored sum-free set of size (2^(4/3))^n constructed by Alon, Shpilka, and Umans also gives a lower bound for the problem under discussion here.
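The removal statement is easy to experiment with at toy sizes. Here is a hedged sketch (my code, not part of the argument): greedily delete elements of S as long as the surviving S’ still satisfies S’+S = S+S, then check the invariant.

```python
import random
from itertools import product

def sumset(A, B, q):
    """A + B inside (Z/q)^n, computed coordinatewise."""
    return {tuple((a + b) % q for a, b in zip(x, y)) for x in A for y in B}

def greedy_core(S, q):
    """Greedily shrink S to an S' with S' + S == S + S (order-dependent, no optimality claim)."""
    target = sumset(S, S, q)
    core = list(S)
    for s in list(core):
        trial = [t for t in core if t != s]
        if trial and sumset(trial, S, q) == target:
            core = trial
    return set(core)

q, n = 3, 4
rng = random.Random(0)
S = set(rng.sample(list(product(range(q), repeat=n)), 30))
Sp = greedy_core(S, q)
```

This only exhibits some S’ with the sumset property; it says nothing about whether the greedy S’ achieves the M(q,n) bound.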

I still like the one-step argument in the linked .pdf better!  But I have to concede that you can prove this fact without doing any fancy linear algebra.

Update to Update (Jun 9):  Actually, I’m not so sure this argument above actually proves the theorem in the linked note.  So maybe you do need to (get to!) use this Meshulam paper after all!  What do you guys think?

Update:  The bound is sharp, at least over F_2!  I just saw this paper of Robert Kleinberg, which constructs a multicolored sum-free set in F_2^n of size just under M(2,n)!  That is, he gives you subsets S and T, both of size just under M(2,n), such that S’+T union S+T’ can’t be all of S+T if S’ and T’ are smaller than (1/2)|S| and (1/2)|T|, if I worked this out right.

The construction, which is based on one from 2014 by Fu and Kleinberg, actually uses a large subset of a cyclic group Z/MZ, where M is about M(2,n), and turns this into a multicolored sum-free set in (F_2)^n of (about) the same size.  So the difference between the upper bound and the lower bound in the (F_2)^n case is now roughly the same as the difference between the (trivial) upper bound and the lower bound in the case of no-three-term-AP sets in the interval.  Naturally you start to wonder:  a) Does the Fu-Kleinberg construction really have to do with characteristic 2 or is it general?  (I haven’t read it yet.)  b) Can similar ideas be used to construct large 3-AP-free subsets of (F_q)^n?  (Surely this has already been tried?) c) Is there a way to marry Meshulam’s Fourier-analytic argument with the polynomial method to get upper bounds on order (1/n)M(q,n)?  I wouldn’t have thought this worthwhile until I saw this Kleinberg paper, which makes me think maybe it’s not impossible to imagine we’re getting closer to the actual truth.



40 thoughts on “Sumsets and sumsets of subsets”

  1. […] updates (May 31): New applications are mentioned in a new post on quomodocumque: sumsets and sumsets of subsets including a link to a paper  by Robert Kleinberg: A nearly tight upper bound on tri-colored […]

  2. Jordan, I’m glad you found my note on multi-colored sum-free sets in characteristic 2 interesting! Since you didn’t mention Coppersmith and Winograd in summarizing my work, I feel compelled to point out that the Fu-Kleinberg construction you mentioned draws heavily upon their construction of “uniquely solvable puzzles” (to use a term coined by Cohn, Szegedy, Umans, and me in a 2005 paper on matrix multiplication).

    After realizing that the Fu-Kleinberg construction matches your upper bound, I had the same initial reaction you expressed in your post: I ought to try matching the upper bound for 3-AP-free subsets of (F_q)^n for larger q, or at least for q=3. All I can say is that so far, I haven’t met with success. (The coincidence that M(2,n) is approximately n-choose-(n/3) is peculiar to the characteristic 2 case, and lends itself exceptionally well to the application of the Coppersmith-Winograd uniquely solvable puzzle construction.) But it doesn’t yet strike me as a total dead end, and I hope that you or someone else will see a way to apply (a suitable extension of) the Coppersmith-Winograd idea to find an improved lower bound on the size of 3-AP-free sets in (F_q)^n.

    By the way, I found your new result about the existence of a (not too large) S’ such that S’+S = S+S really lovely!

  3. David Speyer says:

    It seems to me that Kleinberg’s method should work for multi-colored sum free sets. (I have no idea how to remove the coloring.) I’ve marked the only step I think is significant handwaving. Please let me know if I am doing something done.

    Let H be the set of lattice points in [0,q-1]^n with coordinate sum exactly n/3 and let H’ be the set with coordinate sum 2n/3. We should have |H| = |H'| \approx M(q,n)/\sqrt{n} by a central limit theorem computation. We note that, for a and b in H and c in H’, we have a+b=c in (Z/q)^n if and only if a+b=c in Z^n.

    Let X be the set of triples (a,b,c) with a and b in H, c in H’ and a+b=c.

    Choose M around 4 |X|/|H| and choose a 3-AP-free subset B of Z/M with |B| = M^{1-o(1)}.

    As in Kleinberg’s paper, consider h_0, h_1 and h_2 random affine linear maps from Z^n to Z/M converting sums into AP’s. Define Y, Y_0, Y_1, Y_2 and Z as before.

    The expected size of Y is |X| |B|/M^2 as before.

    The expected size of Y_0 is a little trickier. As before, we need to sum over (a,b,c,b’,c’) such that (a,b,c) and (a,b’,c’) are both in X. In algebraic geometer’s notation, we want X \times_H X. HANDWAVING STEP HERE: I would expect |X \times_H X| \approx |X|^2/|H|. In other words, I expect that, for a in H, the number of ways to extend a to a triple (a,b,c) in X doesn’t depend very much on a. If that is right, the same argument as before shows that the thing to sum is still |B|/M^3. So our expectation is roughly |X|^2 |B|/(|H| M^3).

    We have constructed a tri-colored sum set Z of expected size at least

    X B/M^2 – 3 X^2 B/(H M^3) = (B/M) (X/M) (1-3 X/(H M)) = (B/M) (X/M) (1/4)

    where we used our choice M = 4 X/H in the second equality.

    Now, X/M = H/4, so this is (B/M) H/16. By our construction of B, this is M^{-o(1)} H. I’m sure I can show that X, H and M all grow exponentially in n, so this is also H^{1-o(1)} as desired.
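The definitions in this comment can be brute-forced at small n. A sketch (mine) checking the observation that, for a and b in H and c in H’, a+b=c in (Z/q)^n forces a+b=c in Z^n:

```python
from itertools import product

q, n = 3, 6   # toy size with n divisible by 3
H  = [v for v in product(range(q), repeat=n) if sum(v) == n // 3]
Hp = {v for v in product(range(q), repeat=n) if sum(v) == 2 * n // 3}

# pairs (a, b) whose mod-q sum lands in H' ...
X_mod = [(a, b) for a in H for b in H
         if tuple((x + y) % q for x, y in zip(a, b)) in Hp]
# ... versus pairs whose integer sum stays inside [0, q-1]^n (hence lands in H',
# since the coordinate sum is automatically 2n/3)
X_int = [(a, b) for a in H for b in H
         if all(x + y <= q - 1 for x, y in zip(a, b))]
```

The two lists agree because a wrapped coordinate would drop the total coordinate sum by a multiple of q, pushing it below 2n/3.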

  4. David Speyer says:

    “something done” should read “something dumb”. And, of course, the point is “should work for multi-colored sum free sets for any q”.

  5. David Speyer says:

    As numerical evidence concerning my assertion that |X \times_H X| \approx |X|^2/|H|, I computed the sizes for q=3 when n = 3m. For X and H I took 5 \leq m \leq 20 (so 15 \leq n \leq 60.) The best fit lines are \log |H| = -1.63281 + 2.14829 n and \log |X| = -3.25481 + 4.28498 n and the data looks beautifully linear. For X \times_H X, I couldn’t get as high but I tried 3 \leq m \leq 7 and the data still looks linear with best fit line \log |X \times_H X| = -3.71869 + 6.25248 n. The key point is that 6.25248 \approx 2(4.28498) - 2.14829.

    To be honest, I don’t know enough about this sort of approximation to know whether this is evidence for or against.

    Also, it seems that |X| is very close to |H|^2 — meaning that, if a and b are in H, then a+b is very likely to be in H'. I find this surprising, and at first thought this was a bug, but it seems not to be. For example, when n=15, I compute that the number of degree 5 reduced monomials is 9828 and, out of the 9828^2 = 96589584 products, a full 92135589 of them are still reduced.
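The 9828 in this comment is the number of exponent vectors in {0,1,2}^15 with coordinate sum 5, which can be confirmed in a line or two (my sketch; choose which k coordinates equal 2, then which 5-2k equal 1):

```python
from math import comb

n, deg = 15, 5
# exponent vectors in {0,1,2}^n with coordinate sum deg:
# k coordinates equal to 2 and deg - 2k coordinates equal to 1
count = sum(comb(n, k) * comb(n - k, deg - 2 * k) for k in range(deg // 2 + 1))
```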

  6. David Speyer says:

    Sorry, dumb error in the numerical data. Trying to fix now.

  7. JSE says:

    When I get a minute I’ll read through this more carefully! But, David, regarding your last paragraph, I’ll bet part of what’s going on is that monomials in H, which have pretty small degree, are biased to have very few variables raised to the 2? With q=2 I’ll bet |X| would be farther away from |H|^2, because you’re asking for the probability that two subsets of size n/3 are disjoint, which is pretty low.

  8. JSE says:

    Ha, that would be funny if I were earnestly trying to explain something that’s not in fact true. Wouldn’t be the first time.

  9. David Speyer says:

    Corrected numbers: \log |H| \approx -1.43 + 1.06 n, \log |X| \approx -2.71 + 1.96 n and |X \times_H X| \approx -2.94 + 2.73 n. If we are to win, we want 2.73 = 2(1.96) - 1.06 (or rather, that equation between the true three growth rates). I was hoping to get rates of growth for which the relation was either clearly true or clearly false, but no such luck.

  10. David Speyer says:

    Okay, last post for today I promise. I switched to using Cramer’s theorem instead of counting points and doing a best fit line. I get that the rates of growth for q=3 are

    |H| \approx (2.76)^n

    |X| \approx (7.31)^n

    |X \times_H X| \approx (19.45)^n

    and (7.31)^2/(2.76) = 19.40. The numbers 2.76, 7.31 and 19.45 come from minimizing (1+u+u^2)/u^{2/3}, ((1 + v + v^2) + u (1 + v + v^2) + u^2 (1 + v))/(u v)^{2/3} and (  (1 + v_1 + v_1^2) (1 + v_2 + v_2^2) +   u (1 + v_1 + v_1^2) (1 + v_2 + v_2^2) + u^2 (1 + v_1) (1 + v_2)   )/(u v_1 v_2)^{2/3} respectively.

    I ran the numerical optimizations with Mathematica’s AccuracyGoal set to 20, meaning that we are aiming for 20 digits of accuracy. (Obviously, I didn’t report them all.) If Mathematica is telling the truth, the difference between 19.45 and 19.40 is meaningful, which means we lose.

    On the other hand, we should still get pretty close this way! It seems like I get a lower bound like |X|^2/|X \times_H X| \approx (2.748)^n, which beats Edel’s construction of 2.2^n. Or else I’ve made another dumb error…

  11. David, that was a great idea. It does produce large multi-colored sum-free sets, but I don’t think it matches the $M(q,n)$ bound. The issue is that the size of $X$ is easy to estimate, and (per Jordan’s comment above) it is not going to be equal to $|H|^2$.

    To bound the size of $X$ from above, let’s drop the requirement that the coordinate sum of $a$ and $b$ is exactly $n/3$ and the coordinate sum of $c$ is $2n/3$. Instead let $Y$ be the set of triples $a,b,c$ of elements of $[0,q-1]^n$ such that $a+b=c$ over the integers. (In other words, when you add $a$ and $b$ in $F_q^n$, the sum is equal to $c$ “without wraparound”.) Clearly, the size of $Y$ is ${q+1 \choose 2}^n$ since there are exactly ${q+1 \choose 2}$ triples $x,y,z$ in $[0,q-1]$ such that $x+y=z$. The size of $X$ is not much smaller, i.e. $|X| = |Y|^{1-o(1)}$. Here’s why. When we sample a random element $(a,b,c)$ of $Y$, the distribution of “digits” in $a$ and $b$ is easy to work out. The digit $i$ occurs with frequency $i/{q+1 \choose 2}$, because the triple of “first digits” (a_1,b_1,c_1) is a uniformly random sample from the set of ordered triples $x,y,z$ in $[0,q-1]$ such that $x+y=z$, and all the other triples of digits are i.i.d. draws from the same distribution.

    Now what about the size of $H$, and the distribution of digits in a typical element of $H$? Working out the distribution of digits is an entropy maximization problem, i.e., maximize the entropy of a distribution over $[0,q-1]$ subject to the constraint that the expected value of a random sample is $(q-1)/3$. I don’t know how to write the result in closed form, but for $q>2$ the optimum is typically (or maybe always) irrational; it certainly isn’t the distribution that assigns probability $i/{q+1 \choose 2}$ to the digit $i$.

    However, for $q=3$ the entropy of the sub-optimal distribution is very close to the entropy of the optimal one. If I’m calculating correctly, this gives a lower bound of $(2.749)^n$ on the size of tri-colored sum-free sets over $F_3$ as compared with Jordan’s upper bound of $(2.756)^n$.

  12. Ugh. I don’t know how to get pretty inline LaTeX as in David’s comments. Anyhow, David, as I was typing my comment it seems like you were reaching the same conclusion. (Except oddly, with 2.748 in place of 2.749. For the record, the way I got 2.749 was simply by calculating 2^{1/2}*3^{1/3}*6^{1/6} as dictated by the reasoning in my previous comment comparing the sets X and Y.)

  13. David Speyer says:

    I suspect your 2.749 above is the same as my 2.748, so I’m going to take it as confirmation. I didn’t need X \approx H^2 to win, but I did need |X \times_H X| \approx |X|^2/|H|. (To be clear, |U \times_W V|, where U and V are sets with a map to W is the set of pairs (u,v) with the same image in W, and the precise meaning of \approx is exponential growth to the same base.)

    Still, 2.748 is a lot better than 2.2 , so we are already beating the old record. And maybe if we think about it a bit more, we’ll see how to tweak the construction further.

  14. David Speyer says:

    Here is how to get LaTeX in a wordpress blog. An undocumented feature which I sometimes take advantage of is that it also works in comments.

    I’m not sure what to say about 2.748 versus 2.749, except that your method is simpler so it is more likely to be correct.

  15. Finally, here’s another application of the same idea that yields a non-trivial lower bound for 3AP-free sets of $(Z/qZ)^n$ for $q>4$. I don’t know if this is better than what was previously known, but a comment I saw on Gil Kalai’s blog (or maybe written by Gil on someone else’s blog) leads me to think it might be.

    Let $X$ be the set of all $n$-tuples in $[0,(q+1)/2]^n$. Choose a prime $p > 4*[(q+1)/4]^n$, and let $B$ be a set of $p^{1-o(1)}$ elements in $F_p$ with no three distinct elements in arithmetic progression. Now for any $w \in F_p^n$ let $Y(w)$ be the subset of $X$ consisting of all $n$-tuples $x \in X$ such that $w \cdot x$ belongs to $B$. Finally, construct a set $Z(w)$ by repeatedly deleting triples of distinct elements of $Y(w)$ that form a three-term arithmetic progression, until no such triples remain. (Call this the “pruning step”.)

    Clearly, the set $Z(w)$ thus defined is free of three-term arithmetic progressions. For a random $w \in F_p^n$, what is the expected size of $Z(w)$? We can put a lower bound on this quantity by the following reasoning.

    1. Every nonzero $x \in X$ has probability $|B|/p$ of belonging to $Y(w)$.

    2. For every $x \in Y(w)$, it is guaranteed to survive the pruning step if it belongs to no set of 3 distinct elements $\{x,y,z\} \subseteq Y(w)$ that form a 3-term arithmetic progression (in some order). The number of 3-element subsets of $X$ that contain $x$ and that form a 3-term arithmetic progression is bounded by $3*[(q+1)/4]^n$. (For each digit of $x$ and each position that $x$ could occupy in the arithmetic progression, there are at most (q+1)/4 three-term arithmetic progressions made up of digits in [0,(q+1)/2] that include that digit in that position.) For any $b \in B$ and any such set $\{x,y,z\}$, the probability that $w \cdot x = w \cdot y = w \cdot z = b$ is at most $p^{-2}$. (The case $b=0$ is a corner case; to exclude this case, choose B so that it doesn’t include zero.) So, the probability that $x$ is eliminated in the pruning step is at most $3*|B|*p^{-2}*[(q+1)/4]^n = (|B|/p) * (3*[(q+1)/4]^n*p^{-1})$. By our choice of $p$, this is less than $(3/4)*(|B|/p)$.

    This means that every nonzero $x \in X$ has probability at least $|B|/(4p)$ of belonging to $Z(w)$. Since $|B| = p^{1-o(1)}$, this means the expected fraction of elements of $X$ that are included in $Z(w)$ is $p^{-o(1)} = q^{-o(n)}$. Hence, the expected cardinality of our 3AP-free set $Z(w)$ is $[(q+1)/2]^{n-o(n)}$.

    The reason I think this bound might be better than what was previously known for $q>4$ is that I saw Gil writing (either in his blog or in a comment on someone else’s blog) that he wonders whether the maximum size of 3AP-free sets in $F_p^n$ is something like $r_3(p)^n$. Whereas this argument shows that it’s definitely larger. In fact, both the lower and upper bounds now have the form $(c*p)^n$ for some constant $c$ which is equal to $1/2$ in the lower bound I just presented, and less than or equal to $e^{-1/18}$ in the upper bound presented by Ellenberg and Gijswijt.
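A toy-scale version of this construction (parameters and code mine, purely illustrative): build a greedy 3AP-free set B in F_p avoiding 0, slice the box X by a random linear form, prune whole AP triples, and confirm the result is 3AP-free mod q.

```python
import random
from itertools import product, permutations

def scalar_ap_free(A, m):
    """No three distinct elements of A satisfy x + z = 2y (mod m)."""
    return not any((x + z - 2 * y) % m == 0 for x, y, z in permutations(A, 3))

def vec_ap(x, y, z, m):
    """x, y, z (tuples) form a 3-term AP mod m with y as the midpoint."""
    return all((xi + zi - 2 * yi) % m == 0 for xi, yi, zi in zip(x, y, z))

q, n, p = 5, 2, 17                 # toy parameters; p > 4 * floor((q+1)/4)**n
rng = random.Random(0)

# greedy 3AP-free B inside F_p, avoiding 0 (the "corner case" above)
B = []
for b in range(1, p):
    if scalar_ap_free(B + [b], p):
        B.append(b)

X = list(product(range((q + 1) // 2 + 1), repeat=n))   # the box [0, (q+1)/2]^n
w = [rng.randrange(p) for _ in range(n)]
Y = [x for x in X if sum(wi * xi for wi, xi in zip(w, x)) % p in B]

# pruning step: delete whole AP triples until none remain
Z = list(Y)
found = True
while found:
    found = False
    for x, y, z in permutations(Z, 3):
        if vec_ap(x, y, z, q):
            Z.remove(x); Z.remove(y); Z.remove(z)
            found = True
            break
```

At these sizes the construction says nothing quantitative; it only demonstrates the mechanics of the slicing-plus-pruning argument.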

  16. P.S. David, regarding your comment above that “still, 2.748 is a lot better than 2.2”, isn’t it an apples-to-oranges comparison? We now have tri-colored sum-free sets with exponent 2.748 but I don’t know any construction of 3AP-free sets in (F_3)^n that beats the old lower bound of roughly (2.2)^n. Did I miss something in an earlier comment that allows one to transform multi-colored sum-free set constructions into 3AP-free set constructions?

  17. David Speyer says:

    Let me see if I understand.

    Let Y be the set of “carry free triples”. As you say, Y has size precisely \binom{4}{2}^n = 6^n. For almost all (a,b,c) in Y, the digits of a are distributed as (0,1,2) with probability (1/2, 1/3, 1/6). There are (2^{1/2} 3^{1/3} 6^{1/6})^n = (2.749)^n points a with this digit distribution. Given an a with this digit distribution, there are 3^{n/2} 2^{n/3} ways to complete it to (b,c). As a sanity check, (2^{n/2} 3^{n/3} 6^{n/6}) (3^{n/2} 2^{n/3}) = 2^{5n/6} 3^{5n/6} 6^{n/6} = 6^n.
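The digit statistics in this paragraph are quick to confirm (my sketch): there are six carry-free digit additions over {0,1,2}, the digit of a is distributed (1/2, 1/3, 1/6), and exponentiating the entropy of that distribution recovers 2^{1/2} 3^{1/3} 6^{1/6} ≈ 2.749.

```python
from itertools import product
from fractions import Fraction
from math import exp, log

# single-digit carry-free additions over {0, 1, 2}: a + b = c with no mod-3 reduction
triples = [(a, b, c) for a, b, c in product(range(3), repeat=3) if a + b == c]

# marginal distribution of the digit of a among these triples
freq = {d: Fraction(sum(1 for a, _, _ in triples if a == d), len(triples))
        for d in range(3)}

# exponentiated entropy of that distribution: the growth rate 2^(1/2) 3^(1/3) 6^(1/6)
rate = exp(-sum(float(pr) * log(float(pr)) for pr in freq.values()))
```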

    Thus, the number of pairs of triples (a,b,c) and (a,b’,c’) both in Y is roughly (2^{n/2} 3^{n/3} 6^{n/6}) (3^{n/2} 2^{n/3})^2 = (13.09)^n. Call the set of such pairs Z.

    Now we run your argument again and get a lower bound of

    Y B/M^2 – 3 Z B / M^3

    The optimal choice is to make B and M grow like Z/Y = 2.18^n or, more precisely, (2^{1/3} 3^{1/2})^n. We get a bound like Y^2/Z which is, as you say, 2.749^n or, if we had carried the radicals around the whole time, (2^{1/2} 3^{1/3} 6^{1/6})^n.

    The reason we are not matching Jordan’s bound exactly is that the typical point a with digit sum 2n/3 has digits distributed with some probabilities other than (1/2, 1/3, 1/6), so this argument wants us to concentrate on a small fraction of the monomials that Jordan’s argument wants. When q=2 we get lucky and sampling a conditioned on a+b=c carry-free is very close to sampling a conditioned on digit sum n/3.

    Jordan, let me know if you’d like us to move this conversation to e-mail so we don’t tie up your blog.

  18. David Speyer says:

    You are right, I don’t know how to translate multi-colored to 3-AP free. Sorry for losing track of that! Now to read your longer comment…

  19. David Speyer says:

    The two bounds stay close. The upper bound is \min_{u>0}  \sum_{j=0}^{q-1} u^{j-(q-1)/3}. For q = 2, 3, 4, 5, 7, 8, 9, 11 I get

    1.88988, 2.7551, 3.61072, 4.46158, 6.1562, 7.00155, 7.84612, 9.53369 .

    The lower bound is \prod_{j=1}^{q} (S/j)^{j/S}, where S = 1+2+\cdots+q. For the same values of q, I get

    1.88988, 2.74946, 3.59612, 4.43599, 6.10506, 6.9365, 7.76666, 9.42434
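Both one-dimensional optimizations here are easy to reproduce; a sketch (mine), using ternary search since each objective is convex in log u:

```python
from math import exp, log

def upper(q):
    """min over u > 0 of sum_{j=0}^{q-1} u^(j - (q-1)/3), via ternary search in t = log u."""
    f = lambda t: sum(exp((j - (q - 1) / 3) * t) for j in range(q))
    lo, hi = -20.0, 20.0
    for _ in range(300):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return f((lo + hi) / 2)

def lower(q):
    """prod_{j=1}^{q} (S/j)^(j/S) with S = 1 + 2 + ... + q."""
    S = q * (q + 1) // 2
    return exp(sum((j / S) * log(S / j) for j in range(1, q + 1)))
```

Note the two formulas agree exactly at q = 2, matching the repeated 1.88988 in the two lists above.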

  20. Gil Kalai says:

    Noga Alon pointed out to me that a Behrend construction for Z_p^n does better than r_3(p)^n when n is above \log p.

  21. David Speyer says:

    If we put u = e^{a/q}, as q gets larger, we can approximate the sum by q \int_{-1/3}^{2/3} e^{a t} dt = q \frac{e^{2/3 a}-e^{-1/3 a}}{a}. The minimum occurs at a=-2.14913 with value 0.841434 q.

    On the other hand, the lower bound is S/H(q)^{1/S} where H(q) is the hyperfactorial. We have H(q) \approx q^{S + 1/12} e^{-q^2/4} so H(q)^{1/S} \approx q e^{-1/2} and S/H(q)^{1/S} \approx q e^{1/2}/2. Numerically, this is 0.824361 q. (I wonder if mathworld could have a typo though? Even for q=101, I am numerically getting 0.837 which seems pretty far from 0.824.)
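For what it's worth, the ratio is cheap to compute exactly (an editor's sketch, mine). The limit e^{1/2}/2 ≈ 0.824361 is approached slowly, roughly like a factor (1 + 1/(2q)) above the limit, which may explain part of the gap at q = 101.

```python
from math import exp, log

def ratio(q):
    """(S / H(q)^(1/S)) / q with S = 1 + ... + q and H(q) = prod_{j <= q} j^j (hyperfactorial)."""
    S = q * (q + 1) // 2
    logH = sum(j * log(j) for j in range(2, q + 1))   # exact log of the hyperfactorial
    return S * exp(-logH / S) / q
```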

  22. JSE says:

    David, are you kidding? This is the most action my blog has seen in months!

  23. David Speyer says:

    I’m going to start a new thread to throw out a wild speculation: Could 2.749 be the right bound? Here is some nonsense about how we might try to lower the upper bound to 2.749. I’ll use Terry’s symmetrical formulation of the proof.

    Let J be the set of lattice points in [0,2]^n with coordinate sum \leq 2n/3. Fix some small \epsilon and let K be the set of such lattice points where the numbers of ones and twos are respectively \leq n(1/3+\epsilon) and \leq n(1/6+\epsilon). The sizes of J and K are roughly 2.755^n and 2.749^n respectively.

    Let F(x,y,z) = \prod_j 1-(x_j+y_j+z_j)^2. The key to Terry’s formulation of the argument is that F (and indeed, any degree 2n polynomial) can be written in the form \sum_{a \in J} x^a f_a(y,z) +  \sum_{b \in J} y^b g_b(x,z) +  \sum_{c \in J} z^c h_c(x,y). If we could replace J by K, we’d prove 2.749^n instead.

    Now, of course, the statement isn’t true for K. But it is true that the overwhelming majority of the monomials of degree \leq 2n factor as x^a y^b z^c with a, b and c all in K. Maybe we can somehow get the rest of the stragglers organized into not too many terms?

  24. David Speyer says:

    I think I can raise Prof. Kleinberg’s construction to match the EG upper bound of 2.755! (In other words, ignore the previous post.) First, I will reprove the 2.749 lower bound in new language, then I’ll explain what I want to tweak.

    As before, let Y be the set of carry-free additions, and let Z be the set of pairs ((a,b,c), (a,b’,c’)) with (a,b,c) and (a,b’,c’) in Y. Choose a large prime M and a larger integer V with V/M near (1/4) Y^2/Z, so roughly 2.749^n. In our new framework, it won’t be as important how large we individually take V and M, it is the ratio that matters. But we will want V less than Y = 6^n and M more than 2n. Choose B an AP-free subset of the cyclic group of order M with B/M = e^{o(n)}.

    Randomly choose V distinct elements of Y (which is why we want V less than Y) and reject any whose image under h_0, h_1, h_2 does not lie in B. (Again, we are averaging over w, with this step held invisible.) The expected number of survivors is V B/M^2 — the only way to land in B^3 is for h_0 = h_1 = h_2 all in B. The expected number of pairs ((a,b,c), (a, b’, c’)) in Z for which both of (a,b,c) and (a,b’,c’) are chosen and survive is Z (V/Y)^2 B/M^3. Deleting one element from each such pair, and the same with overlaps in the second and third spot, we obtain a multi-colored sumfree set of expected size

    (V/M) (B/M) – 3 Z/Y^2 (V/M)^2 (B/M). Since V/M is roughly (1/4) Y^2/Z, this is
    (1/16) (Y^2/Z) (B/M). Since B/M is subexponential, we win.


    But this just proves the same bound as before. The benefit comes if we sample V not uniformly at random, but according to a cleverly chosen distribution. Set r = 0.18085763074788724. Among elements in (0,1,2)^n with sum 2n, the greatest number have 1/3+r zeroes, 2/3-2 r ones and r twos. This time, we will set V/M to be (1/4) H, where H is points of (0,1,2)^n with sum 2n. Choose B as before.

    Again draw V elements from Y, but this time draw each of the n coordinates of (a,b,c) with the distribution which is (0,0,0), (2,0,2) or (0,2,2) with probability r and (0,1,1), (1,0,1) or (1,1,2) with probability 1/3-r. Now the marginal distributions on a, b and c are the maximal entropy distributions! (I am going to ignore the possibility of drawing the same element of Y twice; we’ll get back to this.)

    Again, reject from V those triples whose image under (h_0, h_1, h_2) is not in B^3. The expected number of survivors is still V B/M^2. But the probability of colliding in an element of Z is now reduced. The probability that two elements of V have the same a is now 1/H, so the expected number of pairs in Z that we hit is V^2/H * B/M^3.

    The computation finishes as before :

    V B/M^2 – 3 V^2/H * B/M^3 = ( (V/M) – 3 (V/M)^2*(1/H) ) *(B/M) = 1/(16) H * (B/M) .

    We win!

  25. Tom Church says:

    David, just a comment: you’ve been comparing the lower bounds you’re getting with the 2.2^n from Edel’s examples of progression-free sets. But this isn’t the right comparison, since there are much larger constructions of tricolored sum-free sets! Quoting from the last page of my recent paper with Blasiak-Cohn-Grochow-Umans:

    “It is worth noting that attempts to improve the upper bound in [the Ellenberg-Gijswijt result] must contend with the fact that the known lower bound for multicolored sum-free sets in \mathbb{F}_3^n is (2^{4/3} - o(1))^n = (2.519...- o(1))^n from Alon-Shpilka-Umans, p. 238, which is substantially larger than the cap set construction of (2.217... - o(1))^n obtained by [Edel] or the refinement of Edel’s construction obtained by Naslund (personal communication).”

    However, the fact that the old record was 2.519^n will not come as a surprise to your interlocutor Kleinberg — the way Alon-Shpilka-Umans find that lower bound is by analyzing a construction from his earlier Cohn-Kleinberg-Szegedy-Umans paper.

  26. David Speyer says:

    Thanks Tom!

    On the other hand, I have good news as well. I believe I can extend the argument I just gave for q=3 to all primes. I’m writing up a document now.

  27. David, your construction of near-optimal multi-colored sum-free sets in F_3^n is nice! A very minor correction to what you wrote: in the paragraph that begins, “But this just proves the same bound as before…” you twice wrote “with sum 2n” when I think you meant “with sum 2n/3”.

    BTW, please feel free to call me “Bobby” as opposed to “Prof. Kleinberg”. (I use the more formal “Robert” on the by-line of my papers, but colleagues always refer to me as Bobby in person.)

  28. David Speyer says:

    Thanks, and likewise call me David!

  29. David Speyer says:

    I think this argument needs a little more tweaking. We could have (a,b,c), (a’, b’, c’) and (a”, b”, c”) in V whose image is in B^3, but where, in some coordinates, a_i+b’_i = c”_i+3, so this violates the sum free condition once we reduce modulo 3. This seems like a good time for me to stop; I hadn’t planned to work this late.

  30. David, I think one can rule out the case you wrote down by tweaking the construction a little bit. When sampling V we can impose a further constraint that we only keep the sampled triples (a,b,c) such that a and b have coordinate-sum 2n/3 and (consequently) c has coordinate-sum 4n/3. Throwing away all other triples reduces the size of V relative to your construction, but only by a poly(n) factor so it doesn’t affect the base of the exponent in any of the subsequent calculations. And it eliminates the possibility of finding three triples (a,b,c), (a’,b’,c’), and (a”,b”,c”) in V such that a_i + b’_i = c”_i + 3 in some coordinate.

  31. David Speyer says:

    Yup, I agree. There is also another thing I need to tweak: We need to consider the possibility that (a,b,c), (a’,b’,c’) and (a”, b”, c”) are in V and a+b’+c”=0 but (a,b’,c”) wasn’t selected for V. Fortunately, we still have the absolute sizes of V and M available for tweaking, and I get that we are still okay if we choose M so that H M^2 has the same growth rate as Y.

    By the way, I think it is probably prettier to work with a+b+c=0 than a+b=c. (I made that change in the draft I sent you last night.)

    I’m taking care of my kids today, so I won’t be on the internet again until evening.

  32. Gil Kalai says:

    Just to elaborate on this comment:

    We can consider all vectors (x_1,x_2,\dots,x_n) with 0\le x_i\le (p-1)/2 and consider them in \mathbb Z_p^n. Next we can take all these vectors where \sum x_i^2=C. (We can optimize on C or just use pigeonhole.) This will give us a set without 3-term AP of size behaving like \frac{p^n}{2^n\,np^2}. This can be compared to the upper bound we now have of the form \frac{p^n}{2^{cn}}. We see that unless p=o(\log n) this is larger than r_3(n). I am not sure what should be the precise guess for general p,n but it is reasonable that we should take an interpolation of the Behrend-type construction when n=\Omega(\log p) and the Behrend construction when n=1. This shows that the most naive thought about the r_3(p)^n bound is entirely incorrect. But the Behrend nature of constructions for large and small values of n is still interesting. It is not impossible but still far-fetched that we can say something interesting when n goes to infinity and p goes much quicker to infinity.
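A toy check (parameters mine) that a sum-of-squares level set in the box [0,(p-1)/2]^n really has no 3-term AP mod p: since no coordinate can wrap around, an AP mod p is an AP over Z, and the parallelogram law then forces the endpoints to coincide.

```python
from itertools import product, permutations
from collections import Counter

p, n = 7, 3                                       # toy parameters
half = (p - 1) // 2
box = list(product(range(half + 1), repeat=n))    # coordinates in [0, (p-1)/2]

# pigeonhole step: bucket by sum of squares and keep the most popular level set
levels = Counter(sum(x * x for x in v) for v in box)
C = levels.most_common(1)[0][0]
A = [v for v in box if sum(x * x for x in v) == C]

def is_3ap(x, y, z, m):
    """x, y, z form a 3-term AP mod m with y as the midpoint."""
    return all((xi + zi - 2 * yi) % m == 0 for xi, yi, zi in zip(x, y, z))
```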

  33. Gil Kalai says:

    (I meant: We see that unless p=o(\log n) this is larger than r_3(p)^n.)

  34. Thanks, Gil. The Behrend-type construction is so much simpler and more elegant than my [(p+1)/2]^{n-o(n)} that, in hindsight, I feel a bit silly for proposing mine!

  35. JSE says:

    What I find quite striking about this Behrend construction is that it emphasizes the relation with the real numbers. Over a finite field, high-dimensional conics contain lots of lines. Only over the reals do you have conics, like the sphere, which don’t contain lines, for reasons having fundamentally to do with positivity. (e.g. if you use an indefinite quadratic form in place of sum x^2, the whole thing falls apart!) The interval [0,q/2] is somehow a piece of the real line sitting inside F_q, allowing you to use facts about positivity in a context where they might at first seem alien.

  36. Gil Kalai says:

    We can try to turn the table around and use this Behrend type construction to relate the integral and cap set 3-AP free sets. If we have a subset A of {1,2,..,N} of density rho we can associate to it a subset of X = Z_p^n where N=p^n. Now, we want to consider subsets of X where in each coordinate we have a consecutive interval of elements in Z_p. If we could work with intervals that are consecutive mod p we could maintain the density. But we need really consecutive intervals. Then a 3AP in this subset will really be a 3-AP in the integers (I think).

    With really consecutive intervals we can lose a factor of 2^n, which is not good for us (because the bounds are higher than p^n/2^n). We lose this factor if we take the smallest and largest p/4 numbers in each coordinate.

    But perhaps the point is that we can choose p for the good purpose of getting a higher density. We cannot have such an example (it seems) for many ps simultaneously. So the question is how to maximize the density of a really consecutive box of size 2^p when we range over all p in a relevant interval.

  37. David Speyer says:

    A quick update: Robert and I are going to write something together. We would like our result to be that there are 3-colored sum free sets in F_q^n of size C(q)^(n-o(n)) where C(q) is the same as in the EG upper bound C(q)^(n+o(n)).

    Of course, we may find other issues while writing, but at the moment we only know of one obstacle. We have to construct a probability distribution on {(i,j,k) : i+j+k = q-1, i,j,k >= 0} with certain marginals. (For q=3, I wrote this down in my comment of 7:21 PM.) For any individual q, this is a matter of linear programming, and I have checked that the linear program is feasible for q \leq 9. In general we need to think harder; my belief that I could get it by a simple maximization argument last night did not pan out. If anyone wants to answer, we’d be happy.

    We currently have no good strategies for getting 3-AP free sets, rather than 3-colored sum free sets, but we aren’t giving up.

    Just writing this because I claimed a solution above, and I want to be clear about what we do and do not know.

  38. […] M(F_q^n).  In fact, G_3(F_q^n) is bounded above by M(F_q^n), too (see the note linked from this post) and the argument is only a  modest extension of the proof for G_1.  For all we know, G_1(F_q^n) […]

  39. Seva says:

    One problem with the argument in the “first update” is that the matrix $P(s+t)$, having one non-zero element in every row, may fail to have one non-zero element in every column. It may well happen that all non-zero elements are concentrated in a small number of columns, in which case the rank of the matrix will be small and we will not get any contradiction. Am I right?

  40. […] and Yves Edel [6] Zero-sum problems in finite abelian groups and affine caps, by Edel et al. [7] Sumsets and sumsets of subsets, by Jordan […]
