Compressed sensing in Wired

My new Wired piece, about compressed sensing, is now online. For a more technical but still gentle introduction to the subject, see Terry’s blog post.

Update: Igor at Nuit Blanche has a great post clarifying what kinds of imaging problems are, and aren’t, currently susceptible to CS methods.


9 thoughts on “Compressed sensing in Wired”

  1. Richard says:

    This is a very good popular press mathematics article, written in such a way that technical details are folded into links that technically minded people can dip into and others can easily ignore. Wrapping it in a real life story was icing on the cake. Excellent!

    The restoration of the picture of Obama was quite impressive, particularly in the way the folds of his skin and the pin on his jacket were recovered from almost total obscurity in the noise, and some fairly realistic hair appeared on his head. I think I can barely make out some artifacts in the final image, and I wish I could see much higher resolution versions of these pictures to examine the artifacts and what that hair really looks like.

    I’m still a little skeptical about applications of these ideas to serious photography, and I would love to see a “decompression” of a hugely under-sampled 24–40 megapixel image (high end of digital SLRs or scanned 35 mm film, respectively) of a landscape with lots of fine texture and wide dynamic range light, animal hair, and human skin. High quality lenses, digital sensors, and film can render images with impressive “optical depth” (a quality that is difficult to define but obvious when you see it). To what extent is that degraded in this process? Can extra hairs or spruce needles that were not there in reality find their way into the decompressed image? Will human skin look plastic-like? (Some current digital cameras actually have this tendency.) If I want to open a very high resolution image in my photo editor, how long would it take to render the image on the screen? It’s annoying that high resolution images take up so much disk space, but it would be even more annoying to wait a long time for an image to be decompressed. Do the types of artifacts that appear from decompression become magnified when you start manipulating contrast, saturation, and levels in individual color channels? And then what happens when an image is repeatedly decompressed (opened), edited, and re-compressed (saved)? Does anyone know any of this yet?

    Audiophiles have strong opinions about compressed music — I won’t even go there.

  2. Tom says:

    Incredible. This is going to have insane applications.

  3. Richard Wang says:

    I really enjoyed this article. A good explanation for the lay audience.

  4. Lon Davis says:

    This kind of just-coming-over-the-horizon conception is what I live for in Wired magazine. I may not be able to follow all the mathematical logic that the algorithm implies, but the implications of the procedure as an intellectual tool certainly are enticing from your article. I have sent the link to the on-line version and your blog to my son, who is more mathematically equipped to read the more technical material. Thanks for sending us the first iteration of the unfolding pattern this innovation will stir; it is for our minds to fill in the larger image of its innovative arc and, in so doing, to infer the sparsity trajectory as it unfolds into the noise of our culture, technology, and historical moment.

  5. Whoops!

    I sent my previous message while swilling beer and not previewing my message.

    Try this post instead:

    As an audiophile, I’m excited about compressed sensing’s potential. Music CD standards were set in granite during prehistoric computer times. Current programs that upsample digital music merely make educated guesses, adding finer dithering. Compressed sensing ought to make a CD’s sound surpass vinyl. Including algorithms responsive to individual instruments could place Django Reinhardt in your listening room from a scratchy 78.

    PLEASE KEEP ME POSTED ON AUDIO APPLICATIONS.

  6. Jon Hallbright says:

    I suspect that the picture of Barack Obama was artificially sparsified before reconstruction to achieve these results. This means that they took the original image, zeroed out many of the smaller transform coefficients, and then tried to reconstruct it. Of course, this is cheating (since you need to have the entire original image first in order to sparsify it), but the results shown do not match experiments with a true image. The problem is that in the Fourier domain (which is probably what they used here), images are not sparse (perhaps compressible but not sparse).

    If this is indeed not the case, maybe you can post more details here on how they produced that result, because it looks really good.
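    The “cheating” procedure described above is easy to reproduce. The sketch below (my own illustration, not anything from the Wired demo; the function name, the keep fraction, and the synthetic test image are all made up for the example) zeroes out all but the largest Fourier coefficients of an image and inverts the transform, which is artificial sparsification rather than true compressed-sensing recovery:

    ```python
    import numpy as np

    def sparsify(image, keep_fraction=0.05):
        """Zero out all but the largest-magnitude `keep_fraction` of
        the image's 2-D Fourier coefficients, then invert the transform.
        This requires the full image up front -- it is not CS recovery."""
        coeffs = np.fft.fft2(image)
        mags = np.abs(coeffs).ravel()
        k = max(1, int(keep_fraction * mags.size))
        threshold = np.sort(mags)[-k]  # magnitude of the k-th largest coefficient
        coeffs[np.abs(coeffs) < threshold] = 0
        return np.real(np.fft.ifft2(coeffs))

    # A synthetic "image": a smooth gradient plus a sharp-edged block.
    x = np.linspace(0, 1, 64)
    img = np.outer(x, x)
    img[20:40, 20:40] += 1.0

    sparse_img = sparsify(img, keep_fraction=0.05)
    err = np.linalg.norm(img - sparse_img) / np.linalg.norm(img)
    print(f"relative error after keeping 5% of coefficients: {err:.3f}")
    ```

    The nonzero error illustrates the commenter’s point: even this simple image is only approximately sparse in the Fourier basis, so hard-thresholding discards real signal, and the sharp edges of the block come back with ringing artifacts.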

  7. Stephen says:

    The images of the Wired article are now missing! Do you have a link to a version that still has correct images? It was a nice article.

  8. Stephen says:

    I found a good link to the article with images: http://archive.wired.com/magazine/2010/02/ff_algorithm/all/
