What experimental math taught me about my intuition

The project I’m working on with David Brown and Bryden Cais is the first thing I’ve ever done involving computational experiments as a serious part of the development of the ideas.  We’re trying to formulate a reasonable conjecture about the limiting distribution of some arithmetic objects, and I thought we’d arrived at a pretty good heuristic — call it conjecture A — which fit nicely in a context with other conjectures and theorems about similar objects.

But the data we’d collected didn’t fit conjecture A very well.  In fact, when we looked at the data carefully, it appeared to be pointing strongly at conjecture B, which has a similarly clean formulation but for which we didn’t have any theoretical justification.

So I spent much of the weekend thinking about this, and by the end of it I felt pretty confident that conjecture B was reasonable — even, in a way, more reasonable than conjecture A — though we still didn’t have a strong justification for it.

Today it turned out that there was a mistake in the data collection, and conjecture A looks good after all. But it’s a sobering reminder that my intuitions about “what ought to be true,” which I think of as rather rigorous and principled, are in fact quite malleable.


5 thoughts on “What experimental math taught me about my intuition”

  1. In the work I did with Dylan Thurston, we started out with two models of a “random tunnel-number one 3-manifold”, one based on the mapping class group and the other on measured laminations. We were looking at the probability p that such a manifold fibers over the circle, and our intuition was that 0 < p < 1 because it’s easy to show that this is the case for the analogous question about two-generator one-relator groups. But when we did the experiments, as the manifolds became more and more complicated, the probability was clearly tending toward 1 in the first model and 0 in the second. This was even more confusing, since we expected that at the very least the answer should be the same in both. Indeed, it is the same, namely p = 0, but the issue was that our examples were too small in the first model for the asymptotics to kick in. Eventually, we found a much more efficient algorithm which made this clear experimentally, and that same algorithm was the basis of the proof we gave that p = 0 (our paper only does this for the second model, but Joseph Maher has shown it is also the case for the first one).

  2. Michael Rubinstein says:

    I could go on and on about this, but one point, which you learned the hard way, is always to doubt your program before you doubt yourself. In fact, that’s how bugs are usually found: when a program produces output that is not consistent with our expectations. A program is not bug-free because it has been rigorously checked, but because its output meets our expectations.

    All established programs have lists of bugs that are ‘known’ to the public, and then those that are kept private, either because companies don’t like to share or because developers have decided to keep it that way; perhaps they’re too busy to take the care to describe the bug the right way in the right forum (if there is such a place). Developers are always releasing new versions of their programs, not just to add features but to improve algorithms and to fix bugs. Magma, Mathematica, and Maple are all closed source, so you have no idea what’s going on inside, which makes it harder to gauge the correctness of a computation. Sage makes use of many people’s code, so the level of rigour and adherence to strict coding principles varies (in some respects, my own code is somewhat amateur: not too well documented, and I assume that functions will be called correctly, so there is not much error checking at the call level. Over time I’ve been improving that). There are also issues of hardware bugs, the most famous being the Intel division bug. Odlyzko found several system bugs (hardware too?) with the Cray machine he was using when he did his zeta computations.

    The question of bugs and rigour has a lot to do with the complexity and size of a program. Obviously, shorter programs (one-liners, for instance) that use well-tested routines in a traditional and non-extreme way are more trustworthy.

    When a mathematician writes a program to gather data to verify a conjecture, or as part of the solution to a problem (e.g., a finite computation), if it is more than a handful of lines it hardly ever works the first time. It has happened maybe twice in my life that I wrote a program more than a page long and had it work on the first try. Actually, most of the time I compile and try things a few lines at a time, so the opportunity for the macho ‘let’s get it all to work in one shot’ approach is limited, specifically because skipping the incremental testing makes it much harder to figure out what’s wrong.

    Typical bugs: typos (e.g., in variables, functions, or parentheses), mixing up variables (e.g., using the wrong loop index, or sending the wrong thing to a function), abusing memory, computing the right thing but outputting the wrong thing, using routines outside their assumed use, not accounting for loss of numerical precision (e.g., accumulated round-off error, or cancellation), and on and on.
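
    That last failure mode is easy to demonstrate. Here is a minimal Python sketch (mine, not Rubinstein’s) of catastrophic cancellation: the two expressions below are algebraically equal, but for small x the naive one loses every significant digit.

    ```python
    import math

    x = 1e-8

    # Naive form: cos(x) rounds to exactly 1.0 in double precision when x
    # is this small, so the subtraction cancels everything to 0.0.
    naive = (1 - math.cos(x)) / x**2

    # Stable form: the identity 1 - cos(x) = 2*sin(x/2)**2 avoids the
    # subtraction of nearly equal quantities.
    stable = 2 * math.sin(x / 2)**2 / x**2

    print(naive)   # 0.0 -- wildly wrong
    print(stable)  # ~0.5, the correct value of (1 - cos x)/x^2 as x -> 0
    ```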

    The hardest bug I ever had, which consumed three full days of my life and drove me nuts, had to do with my paper with Bjorn (counting intersection points of n-gons). But I’ll have to describe that later.

  3. Richard Séguin says:

    I once wrote some code in Fortran that did Gaussian elimination with iterative error correction. (This is actually remarkably difficult to write without making some kind of error, especially in a C-variant language.) As a test, I used it to fit trigonometric polynomials to sets of data. With the data I used, it always appeared to converge smoothly to a good solution. On reviewing the code, though, I stumbled on a serious error in the elimination code. After correcting the error, the code converged to the same solutions, only much faster. The iterative error correction was compensating for the error, at least with the particular data I was testing with, giving me false confidence in the correctness of the code.
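
    The mechanism here is easy to reproduce in a toy model. Below is a minimal Python sketch (my illustration, not Séguin’s Fortran): iterative refinement repeatedly corrects an approximate solution of Ax = b using the residual, and as long as the underlying solver is merely “close enough,” the loop still converges, so a bug in the solver can hide behind it.

    ```python
    import numpy as np

    def refine(A, b, solve, iters=10):
        # Solve A x = b once, then repeatedly correct x using the residual.
        x = solve(A, b)
        for _ in range(iters):
            r = b - A @ x        # residual of the current approximation
            x = x + solve(A, r)  # correction uses the same (buggy) solver
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])

    # A deliberately "buggy" solver: the exact answer, off by 10%.
    buggy = lambda M, v: np.linalg.solve(M, v) * 1.1

    print(refine(A, b, buggy))    # converges to the true solution anyway
    print(np.linalg.solve(A, b))  # reference answer
    ```

    With this solver the error shrinks by a factor of ten per refinement step, so the final output looks perfect even though every individual solve is off by 10%: exactly the kind of false confidence described above.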

  4. [...] saw quomodcumque reveal what he learned from experimental mathematics, Maxwell’s Demon remember Foyle’s Mathematics room, Algorithms, game theory … etc [...]

  5. [...] JSE: What experimental math taught me about my intuition [...]
