Quote:
It’s not often that you improve on a bit of math that has been around for 200 years. The Fourier transform was first proposed in 1811 by a Frenchman named Joseph Fourier, though it wasn’t until the middle of the 20th century that he was given the credit he deserved. His technique broke down a complex signal into a number of component signals, which could be transmitted or processed separately and then recombined to produce the original in a fairly nondestructive way.
In 1965 the Fourier transform got a boost when James Cooley and John Tukey published the fast Fourier transform, an algorithm efficient enough to apply the transform on the fly using a computer. And now, in 2012, another major improvement has been proposed.
Understanding the Fourier transform isn’t so hard: if you have a piece of music that needs to be transmitted, you can’t send each instrument or frequency separately. So instead, you stack the frequencies on top of each other, and what you get is a single signal, more complicated than any of the single frequencies, but interpretable on the other end. The process of breaking down the complex signal into its component frequencies is achieved by Fourier’s method, and recomposing the original signal from those component frequencies is an inverse Fourier transform. And it’s not just audio that can be encoded in this way: if you consider pixels to be simply numeric values for color and so on, you can express images and video using this method as well. It ends up being rather ubiquitous, actually.
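A minimal sketch of that round trip, using NumPy's built-in FFT routines (the two tones, the sample rate, and the code below are illustrative choices, not anything from the article): the forward transform pulls the component frequencies out of the mixed signal, and the inverse transform reassembles the original.

```python
import numpy as np

# A toy "piece of music": two pure tones (440 Hz and 880 Hz) mixed together
# and sampled for one second. These values are made up for illustration.
sample_rate = 4096                                  # samples per second
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Forward transform: break the stacked signal into its component frequencies.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The two tones show up as the two dominant frequency bins.
dominant = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(dominant))                             # [440.0, 880.0]

# Inverse transform: recombine the components to recover the original signal.
reconstructed = np.fft.irfft(spectrum, n=len(signal))
print(np.allclose(signal, reconstructed))           # True
```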
But despite its age and ubiquity, the algorithm is apparently due for another boost, according to researchers at MIT. The fast Fourier transform established in 1965 can be surprisingly inefficient when only a handful of frequencies actually matter, and the researchers note that for an 8×8 block of image values (64 in total), as many as 57 of the frequency coefficients can be discarded without visibly affecting image quality.
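That claim is essentially about sparsity: in a typical image block, most of the energy sits in a few frequency coefficients. Here is a rough sketch of the idea, with a made-up smoothly varying 8×8 block and NumPy's ordinary FFT rather than the MIT group's algorithm:

```python
import numpy as np

# A made-up 8x8 block of smoothly varying "pixel" values.
y, x = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
block = 128 + 40 * np.cos(2 * np.pi * x / 8) + 20 * np.cos(2 * np.pi * y / 8)

# 2-D discrete Fourier transform: 64 frequency coefficients for 64 pixels.
coeffs = np.fft.fft2(block)

# Discard the 57 smallest-magnitude coefficients, keeping only the largest 7.
keep = 7
order = np.argsort(np.abs(coeffs).ravel())          # smallest magnitude first
sparse = coeffs.ravel().copy()
sparse[order[:-keep]] = 0
sparse = sparse.reshape(coeffs.shape)

# Rebuild the block from the 7 surviving coefficients.
approx = np.fft.ifft2(sparse).real
print(np.max(np.abs(block - approx)))   # near-zero: the block's energy lives in a few frequencies
```

The point of the proposed improvement is roughly that, when a signal is sparse like this, the few large coefficients can be found without spending the time to compute all 64 of them first.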
awesome, one really smart guy, in 1811