The concept is deceptively simple: Slash power use by allowing processing components — like hardware for adding and multiplying numbers — to make a few mistakes. By cleverly managing the probability of errors and limiting which calculations produce errors, the designers have found they can simultaneously cut energy demands and dramatically boost performance.
Let me guess.
Digital devices like computers use electrical pulses to represent bits. These are typically square wave pulses. When the clock tick comes, if the voltage is beyond some threshold (WAG: 1.6 V), that's a "1"; otherwise it's a "0". There will always be a little distortion, and even a perfect square wave at one end of a bit of wire will be a bit blurry by the time it reaches the other end. Over a long enough wire the blurriness might drop the voltage at the clock tick below the threshold, and a "1" could turn into a "0". So you want the voltage comfortably above the threshold and the distances short enough to keep things clean and accurate. And that clock tick needs to be clean and sharp too.
But the more jumping up and down the voltage does, the more energy turns into heat.
You can cut down on the heat by lowering the voltage. If you lower it too much you start to get occasional errors. Tradeoff.
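To see the tradeoff in numbers, here's a toy simulation of that threshold detection. All the specifics are made up for illustration (a 0.8 V threshold, Gaussian noise on the wire, and the rough rule that switching energy scales with voltage squared); the point is just that shrinking the voltage swing saves energy quadratically while the error rate climbs:

```python
import random

def transmit(bit, swing, noise_sigma, threshold=0.8):
    """Send one bit as a voltage level and read it back through noise.

    swing: voltage used for a "1" (a "0" is sent as 0 V).
    noise_sigma: std. dev. of additive Gaussian noise on the wire.
    All numbers here are illustrative, not real circuit values.
    """
    volts = swing if bit else 0.0
    received = volts + random.gauss(0.0, noise_sigma)
    return 1 if received > threshold else 0

def error_rate(swing, noise_sigma, trials=100_000):
    """Fraction of transmitted "1" bits misread as "0"."""
    random.seed(42)
    errors = sum(transmit(1, swing, noise_sigma) != 1 for _ in range(trials))
    return errors / trials

# Switching energy goes roughly like C*V^2, so halving the swing
# quarters the energy -- but watch what happens to the errors:
for swing in (1.6, 1.0, 0.9):
    print(f"swing={swing:.1f} V  relative energy={swing**2 / 1.6**2:.2f}  "
          f"error rate={error_rate(swing, noise_sigma=0.1):.4f}")
```

At a 1.6 V swing the noise almost never reaches the threshold; at 0.9 V, with the noise parameters assumed here, a noticeable fraction of bits flip. That's the knob the inexact-design folks are deliberately turning.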
I think you can also cut down on the heat by giving up on square waves. A perfect square wave is the sum of a sine wave at the same fundamental frequency plus an infinite series of higher-frequency odd harmonics--and those higher frequency components have to contribute to the heat. So if you use a blurrier wave you could be more efficient too--but again at the cost of some accuracy.
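The numbers behind that claim are easy to check. The Fourier series of an ideal ±1 square wave is (4/π)·Σ sin(kt)/k over odd k, and the fundamental alone carries 8/π² ≈ 81% of the total power--so roughly a fifth of the power is in the harmonics that only exist to sharpen the corners:

```python
import math

def square_partial_sum(t, n_terms):
    """Fourier partial sum of an ideal +-1 square wave:
    (4/pi) * sum over the first n_terms odd k of sin(k*t)/k."""
    return (4 / math.pi) * sum(
        math.sin(k * t) / k for k in range(1, 2 * n_terms, 2))

# Fundamental amplitude is 4/pi, so its power is (4/pi)^2 / 2 = 8/pi^2
# of a unit-power square wave.
fundamental_power = 8 / math.pi**2
print(f"fundamental carries {fundamental_power:.1%} of the power")
print(f"the corner-sharpening harmonics carry the other "
      f"{1 - fundamental_power:.1%}")
print(f"100-term partial sum at the top of the wave: "
      f"{square_partial_sum(math.pi / 2, 100):.4f}")
```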
Sometimes accuracy isn't that critical--you can get "good enough" results, as the article shows. For those kinds of applications, this can be very useful--but don't try it for your orbit calculations.
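Why orbits in particular? A one-shot or averaged result can tolerate small random errors because they mostly cancel, while a long chain of dependent steps--like integrating an orbit--compounds them. A toy demonstration of my own (not anything from the article), using multiplies that are randomly off by up to 0.1%:

```python
import random

def noisy(x, rel_err=1e-3):
    """An 'inexact' arithmetic result: off by up to 0.1% either way.
    (Toy model of a low-voltage, occasionally-wrong multiplier.)"""
    return x * (1 + random.uniform(-rel_err, rel_err))

random.seed(0)
n = 100_000

# Averaging many independent noisy results: the errors mostly cancel.
avg = sum(noisy(1.0) for _ in range(n)) / n
print(f"average of {n} noisy 1.0s: {avg:.6f}")  # stays very close to 1.0

# Feeding each noisy result into the next step: errors compound
# instead of cancelling, and the drift grows with the step count.
x = 1.0
for _ in range(n):
    x = noisy(x)
print(f"after {n} compounded noisy steps: {x:.6f}")
```

The averaged result is "good enough" for something like image processing; the compounded one is exactly what an iterated orbit calculation looks like.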
So. Let me go to Rice and have a look...
One example of the inexact design approach is "pruning," or trimming away some of the rarely used portions of digital circuits on a microchip. Another innovation, "confined voltage scaling," trades some performance gains by taking advantage of improvements in processing speed to further cut power demands.
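The article doesn't say how the pruning actually works, so take this as flavor only: a software analogue of a "lower-part-OR" approximate adder, a known trick in the approximate-computing literature (and possibly nothing like the Rice design). You simply don't build the carry chain for the low-order bits--OR them instead--and accept a small, bounded underestimate:

```python
def approx_add(a, b, cut=8):
    """Toy 'pruned' adder: the low `cut` bits are OR-ed instead of
    added, so the carry logic below bit `cut` is never built.
    Illustrative only -- not the actual Rice circuit.
    The result undershoots a+b by at most 2**cut - 1."""
    mask = (1 << cut) - 1
    high = ((a >> cut) + (b >> cut)) << cut   # exact add on the high bits
    low = (a & mask) | (b & mask)             # cheap, sometimes-wrong low bits
    return high + low

# How bad is it on typical 24-bit operands?
import random
random.seed(1)
worst = max(abs(approx_add(a, b) - (a + b)) / (a + b)
            for a, b in ((random.randrange(1, 1 << 24),
                          random.randrange(1, 1 << 24))
                         for _ in range(10_000)))
print(f"worst relative error over 10,000 adds: {worst:.2e}")
```

For big operands the relative error is tiny--fine for audio or video samples, terrible for anything where the low bits matter.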
Ok, I'm not a computing guru. I completely forgot about error correction circuits. And I'm not sure if they mean the same thing by "confined voltage scaling" as I suggested above. I can't find their paper at the conference web site :-(
2 comments:
That's interesting, and sounds a lot like the trade-offs that our own brains use.
Exactly.