By Jack Ganssle
The Grid Comes to Embedded
Published 12/18/2006
I spent two days at the Green Hills Technical Summit in Santa Barbara, CA last week, and learned a lot. For example, Oprah reportedly has a $40m house in Santa Barbara, which, looking at the town's unbelievable opulence, probably bought her a nice shack.
On the embedded front, Green Hills introduced version 5 of their "Multi" development environment. Of all of the new release's features, the one that most fascinated me is distributed builds, though their addition of a static analyzer ("DoubleCheck") is also compelling.
The company realized that compilations eat valuable programmer time. Press "build" and then, well, wait. Usually the build takes just enough time to frustrate but not enough to let you go off and do other, useful work. So the office is full of developers staring at the ceiling, waiting, and burning expensive payroll dollars.
That hardly seems productive.
Yet the computers in any engineering lab are idle most of the time. A 3 GHz CPU, a quarter terabyte of disk, and a gig or two of RAM are, well, doing little more than downloading the latest OS patches and waiting for the user to press a key.
Multi's distributed build identifies machines on the network that are more or less idle and parcels compilation tasks out to those processors. Each computer completely compiles a single module and returns the object file to the developer's machine. This is rather like SETI@home, LHC@home, and other grid networks where users offer up their computers' free time for the benefit of others. Of course, Green Hills' version does require that each machine have a Multi license.
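To make the idea concrete, here's a minimal sketch of that farm-out pattern in Python. It's purely conceptual, not Green Hills' implementation: the host names, the shared project directory, and the compiler invocation are all assumptions.

    # Conceptual sketch of a distributed build farm (not Multi's actual code).
    # Each "idle" host compiles one module; the source tree is assumed to be
    # shared over NFS, so the object file lands back in the project directory.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from itertools import cycle

    IDLE_HOSTS = ["lab-pc1", "lab-pc2", "lab-pc3", "lab-pc4"]  # hypothetical hosts
    SOURCES = ["uart.c", "spi.c", "sched.c", "filesys.c"]      # modules to rebuild

    def compile_remote(host: str, source: str) -> str:
        """Compile one module on a remote host; return the object file name."""
        obj = source.replace(".c", ".o")
        subprocess.run(
            ["ssh", host, f"cc -c ~/project/{source} -o ~/project/{obj}"],
            check=True)
        return obj

    def distributed_build(sources, hosts):
        """Farm one compilation per module out to the pool of idle machines."""
        with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
            futures = [pool.submit(compile_remote, host, src)
                       for host, src in zip(cycle(hosts), sources)]
            return [f.result() for f in futures]

    if __name__ == "__main__":
        print(distributed_build(SOURCES, IDLE_HOSTS))

The point of the pattern is that each module is an independent job, so the only serial work left on the developer's machine is dependency analysis and the final link.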
The company claims typical improvements in compilation speed of 30 to 80%. Yet when I watched a side-by-side Linux recompilation on single- and four-node clusters, the latter looked about three times faster than the single machine.
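Treating that demo as a rough Amdahl's-law exercise (my framing, not the company's analysis), a four-node cluster delivering about 3x implies that close to 90% of the build time parallelizes, while the quoted 30 to 80% typical gains correspond to a much smaller parallel fraction:

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n),
    # where p is the parallelizable fraction and n the number of nodes.
    def speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    print(round(speedup(0.89, 4), 2))                          # ~3.0, the demo case
    print(round(speedup(0.31, 4), 2), round(speedup(0.59, 4), 2))  # ~1.3 and ~1.8

Which is a reminder that the serial parts of a build, such as dependency scanning and the final link, cap how much a bigger farm can buy you.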
But how often does one recompile Linux?
During normal development we typically do some testing, uncover a problem, and then change one or two modules. In that case there's probably little advantage to distributed compiles. But in maintenance things change. According to research performed by Eugene Beuchele of Encirq, on average 43% of the code changes from version 1.0 to 2.0. A lot of modules are affected, so speeding up the compilation could have real benefits.
Waiting for a tool wastes time. We've pretty much maxed out the benefits of cranking up workstation clock rates, so it makes sense to spread the compute burden over many CPUs. Intel's multicore strategy appeals, but so does the idea of hijacking idle machines.