I’ve been mucking about with a Linux desktop again, doing electrical power measurements to figure out how efficient it is. Most home users probably aren’t thinking about this, as the difference between 100W and 200W is inconsequential to them. But I’m curious about processing capacity per unit of power, or perhaps processing capacity per CPU core. When you consider that it takes about one pound of coal to produce a kilowatt-hour of electricity (equivalent to running a 100W computer for 10 hours), the difference is no longer inconsequential over even normal periods of operation.
At the moment, my usage pattern bounces between two systems: a Macbook Pro from 2009, and a Dell desktop from 2011.
The Macbook Pro has an Intel Core 2 Duo P8400 processor, which according to this performs at an abstract level of 1484. That works out to a performance level of 742 per processor core. The system does feel slower when I’m developing and compiling software, but then it uses about half the power of the bigger system (roughly 100W).
The Dell desktop has an AMD Phenom II X6 1055T processor, which according to this performs at an abstract level of 5059. This works out to a performance level of 843 per processor core. The system uses 250W overall to run everything.
But let’s say I’ve been thinking about buying a new Macbook Pro with Retina Display. The late-2013 model uses an Intel Core i5-4258U processor, which according to this performs at an abstract level of 4042, which works out to a performance level of 2021 per processor core. If its cores offer roughly 2.5 times the performance of my current Macbook Pro’s, and at least twice that of the Dell desktop’s, there’s a good chance that for many single-threaded apps the overall experience of using the device would be better anyway. And let’s face it, most of the time the user interface is running on a single thread. If the system also draws only 100W at idle (likely less, given the improvements in process technology), then it offers almost the same performance at half the energy consumption, which is a huge win.
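To make the comparison concrete, here’s a small Python sketch of the arithmetic above. The benchmark scores and core counts are the ones quoted in this post; the wattage figures are my rough wall measurements (and, for the Retina model, the hypothetical 100W idle figure), not vendor specs:

```python
# Scores are the abstract benchmark levels quoted above; watts are
# rough whole-system wall draw (the 100W for the rMBP is hypothetical).
systems = {
    "Core 2 Duo P8400 (MBP 2009)":    {"score": 1484, "cores": 2, "watts": 100},
    "Phenom II X6 1055T (Dell 2011)": {"score": 5059, "cores": 6, "watts": 250},
    "Core i5-4258U (rMBP late-2013)": {"score": 4042, "cores": 2, "watts": 100},
}

for name, s in systems.items():
    per_core = s["score"] / s["cores"]   # performance per processor core
    per_watt = s["score"] / s["watts"]   # performance per watt at the wall
    print(f"{name}: {per_core:.0f} per core, {per_watt:.1f} per watt")
```

Run that and the per-core numbers come out to 742, 843, and 2021, matching the figures above; the per-watt column is what makes the newer chip look like such a clear win.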
The trouble with all existing processors is that they can’t completely shut off cores that aren’t needed. If I’m idle at the computer 99% of the time, and it’s able to handily process everything I’m doing, then the power spent keeping the extra cores running, even at the lowest C-state, seems like a terrible waste.
Power Hungry GPUs
One other thing that struck me as a bit odd is that when I hook up a second monitor to the desktop, the power utilization measured at the wall jumps from 128W (idle) to 200W (idle). Powering each monitor uses about 20W, so I can only assume that the graphics card is chewing up the remaining 50W or so, but I don’t understand how the GPU architecture can be so power hungry, or the drivers so poor. It doesn’t make sense to me that the difference between driving one monitor and two is a nearly 60% increase in total power consumption.
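The arithmetic behind that suspicion, as a quick sketch (the 20W-per-monitor figure is my rough estimate, not a measurement of the panel itself):

```python
idle_one_monitor = 128   # watts at the wall, one monitor attached
idle_two_monitors = 200  # watts at the wall, two monitors attached
monitor_draw = 20        # rough estimate for the second monitor itself

increase = idle_two_monitors - idle_one_monitor  # 72W total jump
unexplained = increase - monitor_draw            # ~50W left for GPU/drivers
percent = 100 * increase / idle_one_monitor      # ~56% increase overall

print(f"+{increase}W total, ~{unexplained}W unexplained, {percent:.0f}% jump")
```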
In a nutshell, this desktop system is burning 2 pounds of coal every 10 hours, which seems a bit much given that it spends 99% of its time idling.
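That figure follows from the rule of thumb at the start of this post (about one pound of coal per kilowatt-hour generated), applied to the dual-monitor idle draw:

```python
watts = 200          # dual-monitor idle draw at the wall
hours = 10
lb_coal_per_kwh = 1  # rough rule of thumb from above

kwh = watts / 1000 * hours       # 2.0 kWh consumed
coal_lb = kwh * lb_coal_per_kwh  # ~2 lb of coal burned at the plant

print(f"{kwh:.1f} kWh = about {coal_lb:.0f} lb of coal per {hours} hours")
```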