Power vs. Performance: A Win-Win Approach

Irvine, CA, July 19, 2012 -- Over the last several decades, computer engineers have packed more and more transistors onto the chips that form the backbone of electronic devices. This has led to faster data processing and greatly enhanced memory and computing capabilities.

It also has made the devices more power-hungry. Energy consumption and the resulting heat dissipation are a challenge for computer scientists and engineers, who must continually balance remedies against the demand for better, faster performance.

Computer science professor Alex Nicolau, the third speaker in the SURF-IT lunchtime summer seminar series, said this week that the excess heat dissipated by computers is a growing problem for two reasons. “If you’ve ever watched a movie on your laptop, you know they tend to heat up pretty badly,” he said, adding that the power dissipation per unit area of a chip approaches that of a nuclear reactor. “That’s mind-boggling and clearly not tenable in the long run,” he told a clearly surprised audience.

Alex Nicolau

The other problem affects large data centers that often contain thousands of computers. In addition to the cost of the electricity necessary to run the machines, cool air must be kept circulating properly so the machines don’t overheat. “You wind up with situations where some of those computers are working more intensely than others, so parts of the room are heating more than [other parts],” Nicolau said. “Getting extra air circulation just to those parts of the room is impossible.”

More power-efficient hardware is one solution, but hardware development can’t keep pace with the constant demand for improved performance.

Instead, computer scientists are turning to software. They can write code that senses when a processor is heating up and quickly moves a computing task from one machine, or one core, to another. Software can be developed faster than hardware and is less expensive. It is also independent of the machine it runs on, so it can improve any computer.
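
The article doesn’t spell out how such sensing and migration work; the sketch below is a minimal, hypothetical illustration of the idea, assuming a Linux machine with the psutil package and per-core "coretemp" sensors. The temperature threshold and the one-to-one mapping between sensor labels and logical CPUs are simplifying assumptions, not details from the talk.

# Minimal sketch of thermally aware task placement (illustrative only).
# Assumes Linux, the psutil package, and per-core "coretemp" sensors;
# a real scheduler would be far more sophisticated.
import os
import psutil

HOT_THRESHOLD_C = 85.0  # illustrative limit, not a value from the talk

def core_temperatures():
    """Return {core_index: temperature_C} from the coretemp driver, if present."""
    temps = {}
    for reading in psutil.sensors_temperatures().get("coretemp", []):
        if reading.label.startswith("Core"):
            temps[int(reading.label.split()[-1])] = reading.current
    return temps

def migrate_if_hot(pid=0):
    """Pin the process (default: this one) to the coolest core when any core runs hot."""
    temps = core_temperatures()
    if not temps:
        return  # no per-core sensors available
    coolest = min(temps, key=temps.get)
    if max(temps.values()) >= HOT_THRESHOLD_C:
        # Simplification: assumes sensor labels map directly to logical CPU numbers.
        os.sched_setaffinity(pid, {coolest})

if __name__ == "__main__":
    migrate_if_hot()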

The trick, according to Nicolau, is to optimize the tradeoff between performance and power, which happens to be the focus of the SURF-IT research project he is mentoring.

A lot of power is required to move data from memory into the processing unit, where computation takes place, and then back into memory. In a typical program, this process can occur many times. Nicolau’s project involves optimizing the computer code so that the data is moved less frequently, saving time and improving both performance and power usage.
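
The article doesn’t name the specific transformations involved, but loop fusion is one classic way to move data less often; the toy Python sketch below is purely illustrative.

# Illustrative example (not necessarily Nicolau's transformation): fusing two
# passes over the same data into one, so each element travels from memory to
# the processor once instead of twice.

def two_passes(data, scale, offset):
    # Pass 1: scale every element (the data streams through memory once).
    scaled = [x * scale for x in data]
    # Pass 2: shift every element (the data streams through memory again).
    return [x + offset for x in scaled]

def fused_pass(data, scale, offset):
    # Both operations applied while each element is already "in hand":
    # one traversal, roughly half the memory traffic, same result.
    return [x * scale + offset for x in data]

assert two_passes([1, 2, 3], 2, 5) == fused_pass([1, 2, 3], 2, 5)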

But because these code transformations affect programs differently (they may improve performance or power efficiency to a greater or lesser degree, depending on the program), computer scientists need tools that let them analyze the efficiency of the code in a consistent way.

By collecting measurements and identifying similarities between programs, Nicolau said, optimizations that work on one program can be applied to others like it.

It’s a process that’s more easily said than done, however.

Collecting measurements is just the beginning. “You have to know at what point in the program they are occurring, both in terms of power and performance. And performance is divided into a bunch of measurements,” he said.
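
As a concrete, purely illustrative example of what collecting such measurements can look like, the sketch below times one region of a program and reads the processor’s energy counter. It assumes a Linux machine with Intel’s RAPL interface exposed under /sys/class/powercap and permission to read it; the path and workload are illustrative.

# Sketch of measuring power and performance for one region of a program.
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # microjoules

def read_energy_uj():
    with open(RAPL_ENERGY) as f:
        return int(f.read())

def measure(region):
    """Run `region` (a no-argument function) and report runtime and energy used."""
    e0, t0 = read_energy_uj(), time.perf_counter()
    region()
    t1, e1 = time.perf_counter(), read_energy_uj()
    # Note: the RAPL counter wraps periodically; a robust tool would handle that.
    return t1 - t0, (e1 - e0) / 1e6  # seconds, joules

if __name__ == "__main__":
    runtime, energy = measure(lambda: sum(i * i for i in range(10**6)))
    print(f"runtime: {runtime:.3f} s, energy: {energy:.3f} J")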

All of those metrics must be aggregated in a meaningful way.

“As humans, we are pretty good at dealing with complex tradeoffs. In an automatic system, it’s very hard because you need a model that includes all the possible ways of trading off one against the other,” he told the audience.
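
One simple, widely used way to collapse power and performance into a single figure of merit is the energy-delay product; the snippet below, with made-up numbers, illustrates the idea, though the article doesn’t say which model Nicolau’s project actually uses.

# Energy-delay product (EDP): energy consumed times execution time.
# Generic illustration only; the numbers are invented.

def energy_delay_product(avg_power_watts, runtime_seconds):
    energy_joules = avg_power_watts * runtime_seconds
    return energy_joules * runtime_seconds  # lower is better

fast = energy_delay_product(avg_power_watts=40.0, runtime_seconds=10.0)  # 4000
slow = energy_delay_product(avg_power_watts=25.0, runtime_seconds=14.0)  # 4900
print("fast run wins on EDP" if fast < slow else "slow run wins on EDP")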

“But suppose you have some programs that you’ve optimized by hand. You know the sequence of steps you took to get to this.”

By establishing levels of similarity between those hand-optimized programs and new ones, the relevant transformations can be applied automatically to the new programs, increasing efficiency and reducing power usage.
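
As a rough illustration of that matching step (with hypothetical programs, metrics, and transformation names), one could describe each hand-tuned program by a small vector of measurements and let a new program inherit the recipe of its nearest neighbor:

# Sketch of similarity-based reuse of optimizations; all data is hypothetical.
import math

# Feature vectors: (cache miss rate, memory-traffic share, instructions per cycle)
HAND_TUNED = {
    "stencil_kernel": ((0.12, 0.70, 1.1), ["tile loops", "fuse loops"]),
    "graph_walk":     ((0.35, 0.55, 0.6), ["prefetch", "reorder data"]),
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def suggest_transformations(new_profile):
    """Return the transformation recipe of the most similar hand-tuned program."""
    best = min(HAND_TUNED, key=lambda name: distance(HAND_TUNED[name][0], new_profile))
    return best, HAND_TUNED[best][1]

print(suggest_transformations((0.15, 0.68, 1.0)))
# -> ('stencil_kernel', ['tile loops', 'fuse loops'])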

“This would be a big step forward,” Nicolau concluded.

Related Links

SURF-IT