The Future of Embedded Programming

Here's a set of prognostications about programming in the future. Read 'em and weep.

Published in ESP, January 1991.

By Jack Ganssle

You'll probably receive this issue just as 1991 kicks in. Perhaps even now all commitment to New Year's resolutions has flown out the window (or should I say GUI?), except for the inevitable early-morning gasp on the 1st: "aaarrgg... no more New Year's parties!" I for one hope to have had a prediction party (with Embedded's three-month-ahead deadline, getting a column together is like writing future history), to be followed up in the year 2000 with a rev 2 shindig, where we'll see just how far off the mark our visions fell.

Looking backwards, in 1981 who would have dreamed that by now CP/M would be dead, that the PDP-11 would become ancient history, or that we would all have 5 MIPS color machines that cost only a few thousand dollars? January is a good time to step back, take a deep breath, and reflect on just where we're headed as an industry.

The embedded software industry was invented (more or less) in the 70s. By the end of the decade we had all of the basic tools needed for small projects, from assemblers and linkers to ICEs and ROM programmers. In the 80s, C and source-level debuggers became common, largely because of the sudden wide availability of a common development platform (the PC). What can we expect for the 90s?

Languages come and go, but the embedded world is still dominated by assembly and C. Ada seems doomed to DOD-only programs, and Forth's popularity is sustained only by a dedicated, die-hard fringe element. As one who manages software projects, I find both assembler and C inadequate in the long run. Both are ideal for writing complex real-time controllers, and both perfectly suit getting a specific project done. But neither language inherently solves a business's long-term problem: efficiently getting a lot of different projects out over the course of years.

Though C++ scares me (will the compilers be reliable? can we find programmers who really live and breathe it?), I see it as a step toward real software packaging. The mantra of the 90s will be "inheritance, encapsulation, polymorphism". Why rewrite the same routines time after time? C++, plus a strong-armed programming discipline, just might make the concept of reusable packages a reality.
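To make the mantra concrete, here's a minimal sketch of the sort of reusable package I have in mind, with encapsulation, inheritance, and polymorphism each doing real work. The RingBuffer and CountingBuffer names are invented for illustration, not drawn from any vendor's catalog:

    #include <cstddef>

    // Encapsulation: clients see only the interface, never the innards.
    class RingBuffer {
    public:
        explicit RingBuffer(std::size_t n) : buf_(new unsigned char[n]), n_(n) {}
        virtual ~RingBuffer() { delete[] buf_; }

        // Polymorphism: a derived class may change how bytes are accepted.
        virtual bool put(unsigned char c) {
            std::size_t next = (head_ + 1) % n_;
            if (next == tail_) return false;   // buffer full
            buf_[head_] = c;
            head_ = next;
            return true;
        }
        bool get(unsigned char &c) {
            if (head_ == tail_) return false;  // buffer empty
            c = buf_[tail_];
            tail_ = (tail_ + 1) % n_;
            return true;
        }
    private:
        unsigned char *buf_;
        std::size_t n_, head_ = 0, tail_ = 0;
    };

    // Inheritance: the next project extends the package without
    // rewriting, or even reading, the original source.
    class CountingBuffer : public RingBuffer {
    public:
        using RingBuffer::RingBuffer;
        bool put(unsigned char c) override {
            if (!RingBuffer::put(c)) return false;
            ++total_;                          // bytes ever queued
            return true;
        }
        unsigned long total_ = 0;
    };

Code written against RingBuffer works, unmodified, with either class; that's exactly the reuse the mantra promises.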

Reusable software is our only hope of dealing with increasing software complexity. Let's face it: at some point our own code will be simply too convoluted for anyone, including ourselves, to understand. It's naive to expect that a new language will, by itself, create some magic ability to handle bigger systems. The solution is to adopt new levels of abstraction.

Why is it so much easier to design an embedded system's hardware than its code? Conventional wisdom holds that the hardware is simpler, that designers just glue a lot of chips together while software engineers must solve much tougher problems. Balderdash! Sure, programming is hard. But look at relative levels of complexity. Your 30,000 lines of code might be running in a system with over 100 million transistors. Which is more complicated? Which would you expect to be harder to design?

In fact, the hardware community is much better than we are at defining, and living with, levels of abstraction. No one designs a computer for an embedded system; we use an off-the-shelf processor. Need some memory? Pick one of thousands of standard chips. I/O is just as simple - hundreds of different kinds of peripherals lie at our fingertips, with yet more hundreds of variations of each type. Any one chip might be supremely complex, but each is easy to use.

Programmers decompose a problem into its functional blocks and then laboriously code each block. The hardware engineer also partitions the problem into functional blocks; then he implements each one with a few standard parts.

Suppose no chip does exactly what's needed. In very simple cases the engineer will design a custom solution using a bit of glue logic. Otherwise, he'll modify the design until it fits some combination of standard parts.

Can you imagine designing even a simple PC-XT out of transistors? Sure, it could be done. But if your hardware counterpart started designing this way tomorrow, you'd be the first to call the men in the white coats. Yet, in effect, this is how we write software!

The problem of hardware complexity has been mastered with standard parts and (perhaps more importantly) a will to use them, even if the design must be altered to fit what's available. I predict that by the end of the century software will be in a similar position.

No matter what the level of complexity, a chip comes with a well-defined interface bounded by some number of pins. Voluminous data books describe the action of the device in gory detail. While the unit's internal operation might be glossed over, its interface is fully specified with timing diagrams, input and output levels, and the like. Doesn't this sound a bit like the OOP nirvana we're all seeking?

I have a vision of software counterparts to the huge data books supplied by the chip vendors. Need a fast Gaussian regression? Ah yes - ABC Software specializes in numerical methods. Pull down their data book and look at the choices. Of course they'll sell versions for different CPUs, ones that handle huge arrays, others that work in small address spaces. Perhaps graphs will show performance versus the size of the data matrix. Where chip manuals describe the package type, our software data books will list object module formats supplied: OMF, COFF, or the like. Perhaps function prototypes or object definitions will spell out each component's interface.

Above all, the innards will remain more or less hidden. We'll select a software component based only on its function.
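As a guess at what one page of such a data book might look like, here's a sketch. ABC Software, abc_numerics.h, and gauss_regress are all invented names, and the figures are made up:

    // abc_numerics.h : ABC Software numerical methods, module NM-102.
    // Object formats supplied: OMF, COFF.
    // Scratch RAM required: 4*n bytes; execution time vs. n: see graph.

    // Fit a Gaussian to n samples of (x, y). Writes the fitted mean,
    // sigma, and amplitude through the pointers; returns 0 on success,
    // -1 if the fit fails to converge. Only this interface and the
    // published timing figures are promised; the innards stay hidden.
    int gauss_regress(const double x[], const double y[], int n,
                      double *mean, double *sigma, double *amplitude);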

Just think of how much easier programming will be. You'll never again re-invent the wheel. We'll tie standard components together. Perhaps someday a sort of universal software bus will even become common, letting us simply plug modules together, rather as peripherals now drop onto more or less standard address, data, and control busses.
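Nobody knows what that bus will look like, but one guess is a single abstract interface that every purchased module presents, playing the role that standard address, data, and control lines play for a peripheral chip. The SoftwareBusModule class below is pure speculation:

    // Pure speculation: a common "connector" for off-the-shelf modules.
    class SoftwareBusModule {
    public:
        virtual ~SoftwareBusModule() {}
        virtual int init() = 0;                           // like a reset pin
        virtual int request(int service, void *args) = 0; // like a bus cycle
    };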

It seems like a dream, yet something of this sort simply must come to pass. The motivating factor will be money. Software will be priced like integrated circuits. You won't spend $595 for an unlimited runtime license to a window manager; you'll buy it a copy at a time, for $1.34 each. Maybe there will even be a grey market in code components. "Psst, buddy. Ya wanna floating point package cheap?"

As with hardware, faster versions will command a premium price. A 27-microsecond context switcher will cost half that of one rated at 18 microseconds.

This does imply that we'll need a solution to the piracy problem; vendors must be confident that a $1.34 routine will not be duplicated for free. After all, they'll be counting on volume sales, just like the chip vendors. Perhaps (gasp) we'll have some sort of software police. Perhaps an outbreak of honesty will sweep the nation. Perhaps world peace and the solution to hunger are at hand...

Software companies will take a cue from the history of marketing and develop distribution channels. A gaggle of software Willy Lomans will knock on your door, pushing their latest interrupt-driven UART routine ("it's 200 bytes smaller than the competition's!"). Second sources will give your company the assurance that even if FlyByNite Inc. folds, the code your product so desperately needs will still be available.

Help!

Today's on-line help systems are the pits. During the 90s we'll see dramatic improvements. Decent GUIs with tremendously improved help features might spell the end of conventional users' manuals (which no one reads, anyway). Microsoft is starting to probe the future with the Programmer's Bookshelf, a CD-ROM with all of their manuals digitally encoded. Windows 3 already includes hypertext help, which makes getting at information quick, painless, and fun.

It's time we saw a higher level of help integration in the development arena. Let's face it - current help systems only aid us in using the tools, a small part of the development battle. I believe we'll see a link between our own source code files and the integrated compiler/linker/editor's help system.

A real embedded system probably contains hundreds or thousands of functions. Who can remember what each one does, or how parameters are passed? Future environments will link each of your functions into a hypertext help window, with the function's comment block instantly accessible. Suppose you need a routine to write a byte to a UART: click on the help window, select the I/O category, and then click on output_byte_uart to see just how the function should be invoked. While debugging, the same level of integration will make it easy to see if you called the function correctly.

We'll have to change our coding styles a bit, embedding hypertext identifiers into the source modules to establish help links. This is all to the good, and could lead to programs that extract the links to automatically document the code.
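As an example, a help-linked comment block might look like the one below. The @help tag syntax is an invention of mine for illustration, and output_byte_uart's calling details are made up:

    /* @help: io/uart/output_byte_uart   (invented hypertext tag)
     *
     * output_byte_uart - write one byte to a serial channel.
     *
     * Call:    status = output_byte_uart(channel, c);
     *          channel   0 or 1, selects the UART
     *          c         the byte to transmit
     * Returns: 0 on success, -1 if the transmitter timed out.
     */
    int output_byte_uart(int channel, unsigned char c);

A link extractor could gather every @help block into the editor's hypertext window and, as a free byproduct, into the project's documentation.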

Recently I talked to a programmer linking 800 different source files into a single program. With that many modules, how can you give each file a meaningful name within DOS's 8 character limit? The help links might give the editor a path around conventional file names: you would just click on a function's description in the help window, and the editor would locate and load its source for you. Who really cares what file it's in?

Project Management

The boom times of the 80s are behind us. Layoffs are accelerating. In the 90s managers will want to get every last bit of productivity out of their programmers. Will programming output be measured electronically, much as computers monitor how many information requests a telephone operator handles per hour? It's a scary thought, though I wonder if a scientific application of this could teach us something about programming methods.

Managers hate to see workers thinking. It looks so unproductive, and is too hard to measure. Code generation looks like useful work and is easier to justify. I do think we will start to see more of a demand on the part of managers to measure output in a meaningful fashion. What form will this take?

Big programming teams don't seem particularly efficient. In The Mythical Man-Month, Fred Brooks proposed abolishing conventional team programming; rather, he'd partition problems to a scale where substantial portions can be handled by a single guru backed up by a handful of support people. After all, being a particularly adept programmer doesn't imply any sort of ability or interest in documentation, test, and the like.

These sorts of teams will become more common in the 90s. The complexity of software will be partially managed by removing clerical functions from the backs of the programming team. The previously mentioned hypertext help, which might be quite tedious to support, will be offloaded onto a para-programmer.

Other para-programmers will produce the documentation needed for software data books, and will answer customers' questions. Programming might become more of a profession where an apprenticeship is served, sort of like an intern in a teaching hospital. After all, "inheritance" does not apply only to the code. Expertise and skills can be taught only by working with a trained professional.

Finally, the tenets of Peopleware (by DeMarco and Lister) will be heeded: give programmers quiet time to think! In a decade of studying programmer productivity, the authors found that in an environment of quiet offices (no cubicles allowed, please) and few disturbances, programmers are almost three times more productive.

Speed

I hate waiting for the computer. My 33 MHz 386, with almost every conceivable bell and whistle, still eats up too much of my time during big compiles.

Compute power keeps getting cheaper. Unfortunately, each new increment in raw horsepower is eaten up by more complex software. The new compilers, for example, can optimize the last microsecond out of code. The price is exceedingly long compile and link times. I was somewhat appalled by the 10- to 20-minute compile times listed in Larry Mittag's November review of 68000 compilers. How many bugs can you fix per day if the tools eat up this much time?

Microsoft has two versions of their C compiler. One does few optimizations but is fast, speeding development. The other is agonizingly slow but generates the best possible code. They also have an incremental linker that wastes memory but greatly decreases link times.

Programmers need fast tools. I'd like to see compile/link/download cycles shortened to a few seconds. This sounds impossible, but for any one bug fix we usually change only one or two functions. If the modules are small, the compiler will really have little to do; the linker and downloader will be the bottlenecks.

If you're willing to give up some code size and speed, extremely fast compile/link/download cycles will be possible. If you need every last ounce of performance, though, the future holds little promise of instant compilation. Try to design code that will work without heavy optimization, unless you like spending weekends at the office.

Spaghetti Reducers

Someday artificial intelligence will finally come into its own. One place we could really use its help is in optimizing IF/THEN/ELSE constructs. Most programs start out fairly well structured. As bugs are found and features grafted on, IFs and ELSEs are added until no human really has a good idea how data flows through a function. Prettyprinting helps, but does not reduce the complexity of 15 nested IF statements.

Semantic analyzers that can extract the meaning of a program will be sold as LINT-like utilities. They'll be the counterpart of the boolean equation simplifiers now used by hardware engineers to design with PALs and PLDs. The semantic analyzer will run in either an optimizing mode, which reduces the code needed to implement a function (rather as a Karnaugh map does for logic equations), or a human-readable mode, which simplifies the logic to permit easy understanding of the code.
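To picture what such an analyzer might do, compare the two functions below. The check_valve example is invented; the second form is the same truth table as the first, minimized the way a Karnaugh map minimizes a boolean equation:

    // Before: three levels of IF bury a simple condition.
    int check_valve(int temp, int pressure, int manual_override) {
        if (manual_override) {
            return 1;
        } else {
            if (temp > 90) {
                if (pressure > 40)
                    return 1;
                else
                    return 0;
            } else {
                return 0;
            }
        }
    }

    // After: logically identical, and readable at a glance.
    int check_valve_reduced(int temp, int pressure, int manual_override) {
        return manual_override || (temp > 90 && pressure > 40);
    }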

Conclusion

Who can say what the 90s will bring? As consumers of development tools and environments, we are in a position to demand that our suppliers offer more than yet another 8051 command-line compiler or configurable editor. Project into the future and dream a little... then start a company to capitalize on your idea!