## Dissecting squares into equal-area triangles: idle questions

Love this post from Matt Baker in which he explains the tropical / 2-adic proof (in fact the only proof!) that you can’t dissect a square into an odd number of triangles of equal area.  In fact, his argument proves more, I think — you can’t dissect a square into triangles whose areas are all rational numbers with odd denominator!

• The space of quadrilaterals in R^2, up to the action of affine linear transformations, is basically just R^2, right?  Because you can move three vertices to (0,0), (0,1), (1,0) and then you’re basically out of affine transformations.   And the property “can be decomposed into n triangles of equal area” is invariant under those transformations.  OK, so — for which choices of the “fourth vertex” do you get a quadrilateral that has a decomposition into an odd number of equal-area triangles? (I think once you’re not a parallelogram you lose the easy decomposition into 2 equal-area triangles, so I suppose generically maybe there’s NO equal-area decomposition?)  When do you have a decomposition into triangles whose area has odd denominator?
• What if you replace the square with the torus R^2 / Z^2; for which n can you decompose the torus into equal-area triangles?  What about a Riemann surface with constant negative curvature?  (Now a “triangle” is understood to be a geodesic triangle.)  If I have this right, there are plenty of examples of such surfaces with equal-area triangulations — for instance, Voight gives lots of examples of Shimura curves corresponding to cocompact arithmetic subgroups which are finite index in triangle groups; I think that lets you decompose the Riemann surface into a union of fundamental domains, each of which is a geodesic triangle of the same area.
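A quick sketch of the normalization in the first bullet (code and function name mine): the affine map sending three of the vertices to (0,0), (1,0), (0,1) is unique, so where it sends the fourth vertex is a complete invariant of the quadrilateral up to affine transformations.

```python
import numpy as np

def normalize_quadrilateral(A, B, C, D):
    """Apply the unique affine map sending A -> (0,0), B -> (1,0), C -> (0,1)
    and return the image of the fourth vertex D; this point is a complete
    invariant of (A, B, C, D) under affine transformations of the plane."""
    A, B, C, D = (np.asarray(p, dtype=float) for p in (A, B, C, D))
    # The linear part M must send B - A to (1,0) and C - A to (0,1).
    M = np.linalg.inv(np.column_stack([B - A, C - A]))
    return M @ (D - A)

# The unit square: the fourth vertex lands at (1,1), the parallelogram point.
print(normalize_quadrilateral((0, 0), (1, 0), (0, 1), (1, 1)))  # -> [1. 1.]
# An affine image of that square gives the same invariant:
print(normalize_quadrilateral((2, 3), (5, 4), (1, 7), (4, 8)))
```

So “which quadrilaterals admit an odd equal-area dissection” really is a question about a single point in R^2, with (1,1) the parallelogram case.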

## Squares and Motzkins

Greg Smith gave an awesome colloquium here last week about his paper with Blekherman and Velasco on sums of squares.

Here’s how it goes.  You can ask:  if a homogeneous degree-d polynomial in n variables over R takes only non-negative values, is it necessarily a sum of squares?  Hilbert showed in 1888 that the answer is yes precisely when d=2 (the case of quadratic forms), n=2 (the case of binary forms) or (n,d) = (3,4) (the case of ternary quartics).  Beyond that, there are polynomials that take non-negative values but are not sums of squares, like the Motzkin polynomial

$X^4 Y^2 + X^2 Y^4 - 3X^2 Y^2 Z^2 + Z^6$.
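(The nonnegativity here follows from AM-GM applied to the three monomials $X^4Y^2$, $X^2Y^4$, $Z^6$, whose geometric mean is exactly $X^2Y^2Z^2$.  A quick numerical sanity check, code mine:)

```python
import random

def motzkin(x, y, z):
    """The Motzkin polynomial x^4 y^2 + x^2 y^4 - 3 x^2 y^2 z^2 + z^6."""
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 * z**2 + z**6

# AM-GM on (x^4 y^2, x^2 y^4, z^6): their arithmetic mean is at least their
# geometric mean x^2 y^2 z^2, which is exactly nonnegativity of Motzkin.
random.seed(0)
assert all(motzkin(random.uniform(-5, 5), random.uniform(-5, 5),
                   random.uniform(-5, 5)) >= -1e-9  # floating-point slack
           for _ in range(10_000))

# It vanishes exactly where AM-GM is an equality, i.e. |x| = |y| = |z|:
print(motzkin(1, 1, 1))   # -> 0
print(motzkin(1, -1, 1))  # -> 0
```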

So Greg points out that you can formulate this question for an arbitrary real projective variety X/R.  We say a global section f of O(2) on X is nonnegative if it takes nonnegative values on X(R); this is well-defined because 2 is even, so dilating a vector x leaves the sign of f(x) alone.

So we can ask:  is every nonnegative f a sum of squares of global sections of O(1)?  And Blekherman, Smith, and Velasco find there’s an unexpectedly clean criterion:  the answer is yes if and only if X is a variety of minimal degree, i.e. its degree is one more than its codimension.  So e.g. X could be P^n, which is the (n+1,2) case of Hilbert.  Or it could be a rational normal scroll, which is the (2,d) case.  But there’s one other nice case:  P^2 in its Veronese embedding in P^5, where it’s degree 4 and codimension 3.  The sections of O(2) are then just the plane quartics, and you get back Hilbert’s third case.  But now it doesn’t look like a weird outlier; it’s an inevitable consequence of a theorem both simpler and more general.  Not every day you get to out-Hilbert Hilbert.
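One can at least check the numerology of the Veronese case in a few lines (function name mine; that the d-uple Veronese embedding of P^n has degree d^n and lands in P^(C(n+d,n)-1) are standard facts):

```python
from math import comb

def veronese_invariants(n, d):
    """Degree and codimension of the d-uple Veronese embedding of P^n:
    degree d^n, ambient projective space of dimension C(n+d, n) - 1."""
    degree = d ** n
    codim = (comb(n + d, n) - 1) - n
    return degree, codim

# The Veronese surface: P^2 embedded by O(2) in P^5.
deg, codim = veronese_invariants(2, 2)
print(deg, codim, deg == codim + 1)  # -> 4 3 True
```

(The rational normal curve `veronese_invariants(1, d)` gives (d, d-1), so it is of minimal degree too, consistent with the binary-forms case.)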

Idle question follows:

One easy way to get nonnegative homogeneous forms is by adding up squares, all of which arise as pullbacks of the ur-nonnegative form x^2 by polynomial maps.

But we know, by Hilbert, that this isn’t enough to capture all nonnegative forms; for instance, it misses the Motzkin polynomial.

So what if you throw that in?  That is, we say a Motzkin is a degree-6d form expressible as

$P^4 Q^2 + P^2 Q^4 - 3P^2 Q^2 R^2 + R^6$

for degree-d forms P,Q,R.  A Motzkin is obviously nonnegative.

Is it possible that every nonnegative form of degree 6d is a sum of squares and Motzkins?  What if instead of just Motzkins we allow ourselves every nonnegative sextic?  Or every nonnegative homogeneous degree-d form in n variables for n and d less than 1,000,000?  Is it possible that the condition of nonnegativity is in this respect “finitely generated?”

## Inscribed squares: Denne speaks

I recently learned that Elizabeth Denne of Smith has made some progress on the inscribed square problem in joint work with Jason Cantarella (Georgia) and John McCleary (Vassar). Since she knows this problem, its history, its variants, and her own contributions much better than I do, I asked her if she’d be willing to make a guest post: and here it is!

The problem:
A) Given a simple closed curve in R^2 (Jordan curve) are there four points on the curve which are the vertices of a square?
B) Given a closed curve in R^2 are there four points on the curve which are the vertices of a square?

Currently I am collaborating with Jason Cantarella (UGA) and John McCleary (Vassar College) on this problem.
First off, both problems are still open for a continuous map of S^1 into R^2. Secondly, I know of no counterexamples and believe the answer to problem A is yes. It appears (as John Cowan commented on 7/25/07) that problem B is not interesting, since parts of the curve could be “cut off”; for that reason I’m not sure of the literature on B. But I do think problem B is interesting, for the following reason: don’t “cut off” parts of the loop, and still try to solve the problem. In the Jordan curve case (A), when the vertices ABCD are taken in order about the square, they will match their order along the curve. In problem B this won’t necessarily be the case. (Just think about the figure-8 example that John Cowan suggests.)

So from now on I’ll just consider problem A. All bloggers went straight to the heart of the issue. Namely, once you know that there is an inscribed square for some kind of Jordan curve (polygonal, smooth), you can’t necessarily conclude that every Jordan curve has an inscribed square. The reason is that the inscribed squares in the approximating curves may shrink to zero in the limit.

I’ll now describe what the known results are, repeating some of what others have written.

## Yet more inscribed squares: a word from Moscow, Idaho

Somehow while writing the first two posts I managed to miss this clear and thorough write-up about the inscribed square problem by Mark Nielsen, a math professor at the University of Idaho. From his page one learns that indeed every polygon has an inscribed square. In fact, every curve which is not somehow horrifically fractal does have an inscribed square — this is a theorem of Walter Stromquist from 1989. Nielsen also includes many related theorems, references, exercises, and a few toothsome unsolved problems at the bottom of the page. Go look!

## More inscribed squares

Another thought about the problem of showing that every curve has an inscribed square. My guess is that more is true: that in fact every closed curve in the plane has an odd number of inscribed squares. In particular, this number cannot be zero!

Why make this guess? Because this is often the kind of behavior that one sees when trying to solve problems concerning real numbers. The real numbers are great for certain things (like measuring quantities of real objects!) but from any healthy mathematical perspective they are a bit of a mess. The complex numbers are much nicer.

Here’s an analogy that might be convincing. Start with this fact: if P(x) is a polynomial of odd degree, then P(x) has a root: that is, there exists some real number x such that P(x) = 0. How do you know this? Well, write

P(x) = a_n x^n + … + a_1 x + a_0.

and suppose that a_n is positive. Then for x very, very large, the a_n x^n term will be much larger than all the others put together, and P(x) will be positive. But for x very, very small (that is, a negative number very far from 0), the a_n x^n term will be much _smaller_ than all the others put together, and P(x) will be negative. But if P(x) is negative for small x and positive for large x, it must be 0 somewhere in between!

But more is true: not only does there exist a root, the number of roots is odd. The argument is exactly the same. On the far left end of the number line, P(x) is negative. On the far right end, P(x) is positive. Thus, P(x) must switch signs an odd number of times as x moves from left to right. But each sign-switch is a point x where P(x) = 0, which is to say, a root. (Eagle-eyed readers will note that I’ve glossed over the issue of what happens when P is tangent to the x-axis at some point.) Anyway, I looked this up on MathSciNet to see if it was reasonable and found a 1961 paper of R.P. Jerrard which proves that, for plane curves defined by real-analytic functions (never mind the definition if you don’t know it, just take it to mean “in an interesting special case”) there are indeed an odd number of inscribed squares.
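The sign-change count in that argument is easy to simulate; here’s a rough grid-based sketch (code mine — and, like the prose, it glosses over tangencies, where the polynomial touches the axis without switching sign):

```python
import numpy as np

def count_sign_changes(coeffs, lo=-100.0, hi=100.0, steps=100_000):
    """Count sign changes of the polynomial with the given coefficients
    (highest degree first) on a grid; each change brackets a real root."""
    xs = np.linspace(lo, hi, steps)
    signs = np.sign(np.polyval(coeffs, xs))
    return int(np.sum(signs[:-1] * signs[1:] < 0))

# Odd degree forces an odd count: x^3 - 3x + 1 has three real roots,
# x^3 + x has just one, but both counts are odd.
print(count_sign_changes([1, 0, -3, 1]))  # -> 3
print(count_sign_changes([1, 0, 1, 0]))   # -> 1
```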

Question: Is there an easy proof that, say, a triangle has an odd number of inscribed squares? What about a quadrilateral? (One should be more careful and say “generic” triangle and “generic” quadrilateral; there ought to be certain special choices of polygon for which the count fails to be odd, or even finite. For instance, rectangles have infinitely many inscribed squares, so the question doesn’t even make sense.)
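For the triangle, here’s the classical picture as I understand it (a sketch, assuming the triangle is acute): each side carries one inscribed square — two vertices on that side, one on each of the others — and similar triangles give its side length as ah/(a+h), where a is the side and h the altitude to it. So an acute triangle should have exactly 3 inscribed squares: odd, as guessed.

```python
import math

def inscribed_square_sides(A, B, C):
    """For each side of triangle ABC, the side length s = a*h/(a+h) of the
    square resting on that side (a = side length, h = altitude to it).
    For an acute triangle all three squares genuinely fit, so the
    inscribed-square count is three -- odd."""
    pts = [A, B, C]
    # Twice the area, via the cross product of two edge vectors.
    area2 = abs((B[0]-A[0])*(C[1]-A[1]) - (C[0]-A[0])*(B[1]-A[1]))
    sides = []
    for i in range(3):
        P, Q = pts[i], pts[(i + 1) % 3]
        a = math.dist(P, Q)
        h = area2 / a                  # altitude to this side
        sides.append(a * h / (a + h))
    return sides

# An acute triangle: three inscribed squares, an odd count.
squares = inscribed_square_sides((0, 0), (1, 0), (0.45, 0.9))
print(len(squares) % 2)  # -> 1
```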

Update: WordPress seems to have eaten part of this post for reasons I don’t quite understand. Now fixed.

## Inscribed squares

Here’s a conjecture that’s been open for more than a hundred years. Prove that every closed curve in the plane has an inscribed square. To unpack that for non-math readers: suppose I draw some loop on a piece of paper, which might be really complicated, cross over itself, zig-zag back and forth a lot, doesn’t matter — as long as it eventually returns to its starting point. Then the problem is to show that, no matter how crazy the loop, you can always find four points on the loop which form a perfect square.

It’s a fun exercise to try to convince yourself that this is even plausible! Also, the version of the problem with “square” replaced by “equilateral triangle” is much easier.

## How I won the talent show

So I don’t want to brag or anything but I won the neighborhood talent show tonight.  Look, here’s my trophy:

My act was “The Great Squarerootio.”  I drank beer and computed square roots in my head.  So I thought it might be fun to explain how to do this!  Old hat for my mathematician readers, but this is elementary enough for everyone.

Here’s how it works.  Somebody in the audience said “752.”  First of all, you need to know your squares.  I think I know them all up to 36^2 = 1296.  In this case, I can see that 752 is between 27^2 = 729 and 28^2 = 784, a little closer to 729.  So my first estimate is 27.  This is not too bad:  27^2 = 729 is 23 away from 752, so my error is 23.

Now here’s the rule that makes things go:

new estimate = old estimate + error/(2*old estimate).

In this case, the error is 23 and my old estimate is 27, so I should calculate 23/(2*27) = 23/54; well, 23 is just a bit less than 27 (between 10 and 20% less), so 23/54 is a bit less than 1/2, but should be bigger than 0.4.

So our new estimate is “27.4 something.”

Actual value of 27 + 23/54: 27.4259…

so I probably would have gotten pretty close in the hundredths place if I’d taken more care with estimating the fraction.  Of course, another way to get even closer is to take 27.4 as your “old estimate” and repeat the process!  But then I’d have to know the square of 27.4, which I don’t know by heart, and computing it mentally would give me some trouble.
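The rule is trivial to script; here’s a small sketch (function name mine) reproducing the worked example, including the iteration I didn’t want to do in my head:

```python
import math

def squarerootio(n, estimate):
    """One step of the talent-show rule:
    new estimate = old estimate + error / (2 * old estimate),
    where error = n - estimate^2."""
    error = n - estimate * estimate
    return estimate + error / (2 * estimate)

# The worked example: sqrt(752), starting from 27 since 27^2 = 729.
step1 = squarerootio(752, 27)
print(step1)            # 27 + 23/54 = 27.4259...
print(math.sqrt(752))   # 27.4226... -- one step is already close
# Repeating the rule with the new estimate gets much closer still:
print(squarerootio(752, step1))
```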

Why does this work?  The two-word answer is “Newton’s method.”  But let me give a more elementary answer.

Suppose x is our original estimate, and n is the number we’re trying to find the square root of, and e is the error; that is, n = x^2 + e.

We want to know how much we should change x in order to get a better estimate.  So let’s say we modify x by d.  Then

$(x+d)^2 = x^2 + 2xd + d^2$

Now let’s say we’ve wisely chosen x to be the closest integer to the square root of n, so d should be pretty small, less than 1 at any rate; then d^2 is really small.  So small I will decide to ignore it, and estimate

$(x+d)^2 \approx x^2 + 2xd$.

What we want is

$(x+d)^2 = n = x^2 +e$.

For this to be the case, we need e = 2xd, which tells us we should take d = e/2x; that’s our rule above!

Here’s another way to think about it.  We’re trying to compute 752^(1/2), right?  And 752 is 729+23.  So what is (729+23)^(1/2)?

If you remember the binomial theorem from Algebra 2, you might remember that it tells you how to compute (x+y)^n for any n.  Well, any positive whole number n, right?  That’s the thing!  No!  Any n!  It works for fractions too!  Your Algebra 2 teacher may have concealed this awesome fact from you!  I will not be so tight-lipped.

The binomial theorem says

$(x+y)^n = x^n + {n \choose 1} x^{n-1} y + {n \choose 2} x^{n-2} y^2 + \ldots$

Plugging in x=729,y=23,n=1/2 we get

$(729+23)^{1/2} = 729^{1/2} + {1/2 \choose 1} 729^{-1/2} \cdot 23 + {1/2 \choose 2} 729^{-3/2} \cdot 23^2 + \ldots$

Now 729^{1/2} we know; that’s 27.  What is the binomial coefficient 1/2 choose 1?  Is it the number of ways to choose 1 item out of 1/2 choices?  No, because that makes no sense.  Rather:  by “n choose 1” we just mean n, by “n choose 2” we just mean (1/2)n(n-1), etc.

So we get

$(729+23)^{1/2} = 27 + (1/2) \cdot 23 / 27 + (-1/8) \cdot 23^2 / 27^3 + \ldots$

And look, that second term is 23/54 again!  So the Great Squarerootio algorithm is really just “use the first two terms in the binomial expansion.”
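The full series is just as easy to sum mechanically; a sketch (function name mine) using exact rational arithmetic, with the generalized binomial coefficients built up by the ratio C(1/2, k+1) = C(1/2, k) · (1/2 − k)/(k+1):

```python
import math
from fractions import Fraction

def binomial_series_sqrt(x, y, terms):
    """Partial sum of (x + y)^(1/2) via the generalized binomial theorem:
    sum over k of C(1/2, k) * x^(1/2 - k) * y^k.  Assumes x is a perfect
    square, so x^(1/2 - k) = root^(1 - 2k) is an exact rational."""
    root = math.isqrt(x)
    total = Fraction(0)
    coeff = Fraction(1)                       # C(1/2, 0) = 1
    for k in range(terms):
        total += coeff * Fraction(root) ** (1 - 2 * k) * y ** k
        coeff *= (Fraction(1, 2) - k) / (k + 1)   # -> C(1/2, k+1)
    return total

# Two terms recover the Great Squarerootio estimate 27 + 23/54:
print(float(binomial_series_sqrt(729, 23, 2)))  # -> 27.4259...
# More terms converge on the true sqrt(752) = 27.4226...:
print(float(binomial_series_sqrt(729, 23, 5)))
```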

To sum up:  if you know your squares up to 1000, and you can estimate fractions reasonably fast, you can get a pretty good mental approximation to the square root of any three-digit number.  Even while drinking beer!  You might even be able to beat out cute kids in your neighborhood and win a big honking cup!


## Galileo, Schrödinger, countability, and the golden age of newspaper math

I somehow never realized that the puzzling fact that infinite sets could be in bijection with proper subsets was as old as Galileo:

Simplicio: Here a difficulty presents itself which appears to me insoluble. Since it is clear that we may have one line greater than another, each containing an infinite number of points, we are forced to admit that, within one and the same class, we may have something greater than infinity, because the infinity of points in the long line is greater than the infinity of points in the short line. This assigning to an infinite quantity a value greater than infinity is quite beyond my comprehension.
Salviati: This is one of the difficulties which arise when we attempt, with our finite minds, to discuss the infinite, assigning to it those properties which we give to the finite and limited; but this I think is wrong, for we cannot speak of infinite quantities as being the one greater or less than or equal to another. To prove this I have in mind an argument which, for the sake of clearness, I shall put in the form of questions to Simplicio who raised this difficulty.
I take it for granted that you know which of the numbers are squares and which are not.
Simplicio: I am quite aware that a squared number is one which results from the multiplication of another number by itself; this 4, 9, etc., are squared numbers which come from multiplying 2, 3, etc., by themselves.
Salviati: Very well; and you also know that just as the products are called squares so the factors are called sides or roots; while on the other hand those numbers which do not consist of two equal factors are not squares. Therefore if I assert that all numbers, including both squares and non-squares, are more than the squares alone, I shall speak the truth, shall I not?
Simplicio: Most certainly.
Salviati: If I should ask further how many squares there are one might reply truly that there are as many as the corresponding number of roots, since every square has its own root and every root its own square, while no square has more than one root and no root more than one square.
Simplicio: Precisely so.
Salviati: But if I inquire how many roots there are, it cannot be denied that there are as many as the numbers because every number is the root of some square. This being granted, we must say that there are as many squares as there are numbers because they are just as numerous as their roots, and all the numbers are roots. Yet at the outset we said that there are many more numbers than squares, since the larger portion of them are not squares. Not only so, but the proportionate number of squares diminishes as we pass to larger numbers. Thus up to 100 we have 10 squares, that is, the squares constitute 1/10 part of all the numbers; up to 10000, we find only 1/100 part to be squares; and up to a million only 1/1000 part; on the other hand in an infinite number, if one could conceive of such a thing, he would be forced to admit that there are as many squares as there are numbers taken all together.
Sagredo: What then must one conclude under these circumstances?
Salviati: So far as I see we can only infer that the totality of all numbers is infinite, that the number of squares is infinite, and that the number of their roots is infinite; neither is the number of squares less than the totality of all the numbers, nor the latter greater than the former; and finally the attributes “equal,” “greater,” and “less,” are not applicable to infinite, but only to finite, quantities. When therefore Simplicio introduces several lines of different lengths and asks me how it is possible that the longer ones do not contain more points than the shorter, I answer him that one line does not contain more or less or just as many points as another, but that each line contains an infinite number.

This came to my attention because I’m procrastinating by looking at the August 1951 issue of The Times Review of the Progress of Science, a quarterly supplement to the British newspaper.  This issue features an article by Schrödinger which lays out the modern theory of transfinite cardinals, including Cantor’s diagonal argument, the bijection between the line segment and the square, and the hierarchy of infinities obtained by iterating “power set.”  Can you imagine a mathematical exposition of similar depth appearing in the Science Times today?

Schrödinger’s closing paragraph is striking:

While these higher infinities have not hitherto acquired half the importance of the two that we have been studying here, the physicist is keenly interested in the probable bearing of the startling properties of the continuous infinite on the theories of atoms and energy quanta.  I consider these theories a weapon of self-defence, contrived by the mind against the “mysterious continuum.”  This does not mean that these theories are pure invention, not founded on experiment.  It is, however, fairly obvious that in their historical development some part was played by the desire to replace the continuous by the countable infinite, which is easier to handle.  To disentangle the influence of this mental urge on the interpretation of experimental evidence is a task for the future.

Can someone with more knowledge of quantum theory than me explain what Schrödinger might have meant?

## Square pegs, square pegs. Square, square pegs.

Lately I’ve been thinking again about the “square pegs” problem:  proving that any simple closed plane curve has an inscribed square.  (I’ve blogged about this before: here, here, here, here, here.)  This post is just to collect some recent links that are relevant to the problem, some of which contain new results.

Jason Cantarella has a page on the problem with lots of nice pictures of inscribed squares, like the one at the bottom of this post.

Igor Pak wrote a preprint giving two elegant proofs that every simple closed piecewise-linear curve in the plane has an inscribed square.  What’s more, Igor tells me about a nice generalized conjecture:  if Q is a quadrilateral with a circumscribed circle, then every smooth simple closed plane curve has an inscribed quadrilateral similar to Q.  Apparently this is not always true for piecewise-linear curves!

I had a nice generalization of this problem in mind, which has the advantage of being invariant under the whole group of affine-linear transformations and not just the affine-orthogonal ones:  show that every simple closed plane curve has an inscribed hexagon which is an affine-linear transform of a regular hexagon.  This is carried out for smooth curves in a November 2008 preprint of Vrecica and Zivaljevic.  What’s more, the conjecture apparently dates back to 1972 and is due to Branko Grunbaum.  I wonder whether Pak’s methods supply a nice proof in the piecewise linear case.