Briefly: the problem is to reconstruct an image (so let’s say a function f in L^2(R^3) measuring density, or potential, or whatever) from a bunch of linear 2d projections. This is what you get when you try to do cryo-EM on molecules of biological origin; you have no control over how the molecules are oriented, so when you pass a beam through them and record the shape of the “shadow” you get, you’re looking at a projection of the molecule in an unknown direction. But of course you may have 10,000 of the molecules, so you have 10,000 of these (noisy) projections at a bunch of different angles, and one may hope that this is enough information to reconstruct the original 3d shape!
Suppose f’ is one of these projections. If F is the Fourier transform of f, and F’ the Fourier transform of f’, then F’ is just the restriction of F to some plane through the origin, namely the plane perpendicular to the direction of projection. (This is the projection-slice theorem.)
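The projection-slice relationship is easy to check numerically one dimension down, with a 2d image and a 1d projection: the 1d Fourier transform of the projection onto the x-axis equals the slice of the 2d Fourier transform along the perpendicular frequency line k_y = 0. A minimal sketch (all names here are mine, and the “image” is just random data):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))  # a toy "density" on a 64x64 grid

proj = f.sum(axis=0)               # project out y: sum along columns
F_proj = np.fft.fft(proj)          # 1d Fourier transform of the projection

F = np.fft.fft2(f)                 # 2d Fourier transform of the image
slice_ = F[0, :]                   # restriction to the k_y = 0 line

# On a discrete grid the theorem holds exactly.
print("max error:", np.abs(F_proj - slice_).max())
```

On the discrete grid the two sides agree to machine precision, since summing over y kills exactly the k_y-dependence of the 2d transform.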
So we have the following problem: there’s a mystery function F on R^3, and we have a bunch of functions F’_1, …, F’_n on R^2 which are restrictions of F to planes through the origin, but we don’t know which planes. How can we reconstruct those planes, and F?
Let G = SO(3). We can set this problem up as an optimization problem over G^n as follows: we want to find F and g_1, …, g_n in G such that each F’_i matches the restriction of g_i F to the xy-plane.
But optimizing over G^n is hard, an essentially exponential problem. So what Singer and his collaborators do is discretize and convexify the problem in a very clever way. You put a net of L points on S^2; then every rotation induces (after some rounding) a permutation of these points, i.e. an LxL permutation matrix. What’s more, n rotations give you n such permutations, and stacking these side by side gives an L x Ln matrix.
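The rounding step can be sketched concretely: rotate each net point and send it to the nearest net point not yet used, which forces the result to be an honest bijection. This is my own greedy toy version, not the paper’s procedure (an optimal rounding would use an assignment solver), and the net here is just normalized Gaussian samples:

```python
import numpy as np

rng = np.random.default_rng(1)

# A crude net of L points on S^2: normalized Gaussian samples.
L = 50
net = rng.standard_normal((L, 3))
net /= np.linalg.norm(net, axis=1, keepdims=True)

# A rotation about the z-axis by a small angle theta.
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Round the rotation to a permutation: send net point i to the unused
# net point nearest to R(net[i]); greedy, so the result is a bijection.
rotated = net @ R.T
P = np.zeros((L, L))
free = np.ones(L, dtype=bool)
for i, y in enumerate(rotated):
    dists = np.linalg.norm(net - y, axis=1)
    dists[~free] = np.inf
    j = np.argmin(dists)
    P[j, i] = 1.0
    free[j] = False

# P is an LxL permutation matrix approximating the rotation's action.
print(P.sum(axis=0).min(), P.sum(axis=1).min())
```

The finer the net, the better the permutation tracks the rotation; for n rotations you get n such matrices P_1, …, P_n.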
But optimizing over permutation matrices has a nice convex relaxation; the convex hull of the permutation matrices is the polytope of doubly stochastic matrices (the Birkhoff polytope), which is very manageable. It gets better: a general permutation of the witness points on the sphere, of course, looks nothing at all like a rotation. But the fact that a rotation preserves distances means that the corresponding permutation approximately commutes with the LxL covariance (Gram) matrix of the points; this is a linear condition on the permutation matrices, so we end up optimizing over a linear slice of the polytope of doubly stochastic matrices. And the point is that the difficulty now scales polynomially in n instead of exponentially. Very nice! (Of course, you also have to show that the cost function you’re actually optimizing can be made linear in this setup.)
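The commutation condition can be seen in a toy example. If the net is chosen to be exactly invariant under some rotation (below, a few circles of latitude sampled at m equal azimuths, rotated by 2*pi/m about the z-axis), the induced permutation commutes with the Gram matrix of the points on the nose, while a random permutation badly fails to; for a generic rotation and a fine net the commutator would merely be small rather than zero. All the names and the specific net are mine:

```python
import numpy as np

rng = np.random.default_rng(2)

# A net exactly invariant under rotation by 2*pi/m about the z-axis:
# circles of latitude at heights zs, each sampled at m equal azimuths.
m, zs = 8, [-0.6, 0.0, 0.7]
phis = 2 * np.pi * np.arange(m) / m
net = np.array([[np.sqrt(1 - z**2) * np.cos(p),
                 np.sqrt(1 - z**2) * np.sin(p),
                 z]
                for z in zs for p in phis])
L = len(net)                       # L = 24 points

C = net @ net.T                    # Gram ("covariance") matrix of the net

# Rotation by 2*pi/m about z shifts each circle cyclically:
# point (b*m + i) goes to point (b*m + (i+1) mod m).
P = np.zeros((L, L))
for b in range(len(zs)):
    for i in range(m):
        P[b * m + (i + 1) % m, b * m + i] = 1.0

# An isometry of the net commutes with C; a random permutation does not.
comm_rot = np.abs(P @ C - C @ P).max()
Q = np.eye(L)[rng.permutation(L)]
comm_rand = np.abs(Q @ C - C @ Q).max()
print(comm_rot, comm_rand)         # comm_rot is ~0, comm_rand is O(1)
```

Since P C = C P is a system of linear equations in the entries of P, imposing it (approximately) really does cut out a linear slice of the Birkhoff polytope.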
Idle question: how essential is the discretization? In other words, is there a way to optimize directly over the convex hull of SO(3)^n, an object I know people like Bernd Sturmfels like to think about?