Open Source?
Published in ESP, December 2000
By Jack Ganssle
We make free software affordable.
There's a mountain of wisdom buried in this tag line from Cygnus (now part of Red Hat), with more than a dollop of cynicism to boot. It comes to mind every time a wildly enthusiastic open source advocate tells me how free code, or GPL'd code, or any other variant of the "no charge" software thrust will save the universe.
Unhappily, the open source phenomenon is clouded by too much passion. Richard Stallman - truly a passionate evangelist - is no doubt the visionary and prime mover behind the free software movement (www.fsf.org), which, in my opinion, confuses the issues behind open source. His view of "free software" is quite at odds with what most people mean when discussing open source code.
Quotes like "As a computer user today, you may find yourself using a proprietary program. If your friend asks to make a copy, it would be wrong to refuse. Cooperation is more important than copyright" (http://www.gnu.org/philosophy/why-free.html) are naive and simplistic. I do agree that copyright issues in the digital age are hugely problematic, and are something we've simply got to resolve. But advocating theft seems ethically wrong as well as self-defeating. Piracy in the name of freeing the software is akin to "liberating" TV sets in times of civil unrest.
In the last decade we've seen several entirely new business models predicated on free code. Cygnus and Red Hat, for example, created brilliant companies selling adjuncts like support and consulting to the freebies. Netscape used a free browser to gain tremendous market share. Stallman claims success in selling his consulting services to maintain free code. This movement has created a viable business model, but it's far from the only way of running a company.
There is a place for proprietary code, one that I can't see ever disappearing. Arguing that companies must adopt new business strategies to deal with a philosophy of free code makes for great beery debates, but is largely ineffective. Businesses do, and must, adopt models that allow for profits (in a capitalistic economy). The software is a means to that end.
Embedded applications will almost certainly be a bastion of proprietary software. Few CEOs, just having spent millions developing a product that gives a competitive advantage, will then release all of that hard-won intellectual capital to the public domain. When it's code very specific to a particular domain - say, a new kind of Coke machine - few people would even be interested in the firmware. If intellectual capital is truly an asset, it's one that managers will want to protect, often with good reason.
Value vs. Cost
Yet, while companies circle the wagons to protect their rights to proprietary code, somehow none manage to put a value on their intellectual capital. Even Microsoft, whose real net worth is clearly its software base, doesn't have an entry on its balance sheet for the value of that code. It does count desks and buildings, things of no material importance to making the company what it is.
Until the SEC and accounting groups find a way to recognize intellectual property as a component of shareholders' equity, too many CEOs will continue to feel software is a necessary evil. They won't understand the cost of quality, and will be too ready to say "it's just a software change."
The price of software is not its value. Quicken costs less than $100, but is worth far more to Intuit, and to me. It's the center of my personal financial planning, of day-to-day expense tracking, and of understanding my own financial position. Ditto for most applications I've bought. Millions of us amortize the real value of the product. Without Quicken, or the other apps we use daily, I'd suffer far more than that $100 of loss.
Mechanical designers balk at the thousands of dollars CAD programs cost, yet pay the price since they spend most of their lives working with AutoCAD or similar products. The cost is high, but the value higher. The business transactions between the designer and the CAD vendor, which range from collecting money and shipping a product to post-sale support, feature enhancements, and host ports, span a range of operations much richer than just "cranking code," offering a high value proposition for customers.
Software, especially firmware, is the most expensive thing in the universe. Ex-Lockheed CEO Norman Augustine, in his wonderful book "Augustine's Laws" (American Institute of Aeronautics; ISBN: 1563472406), talks about how defense contractors were in a bind in the late 70s. They found themselves unable to add anything to fighter aircraft, because increasing a plane's weight means decreasing performance. Business requirements - increasing profits - meant that regardless of the physics of flight they had to add features. Desperate for something hideously expensive yet weightless, they found firmware. Today the firmware cost of a high-performance aircraft runs about half the total plane price. A success by any standard, except perhaps the taxpayers'.
Software is the most expensive thing in the universe. This mantra teaches us several things. First, costly things have a value to their producer. Just as books are protected by copyright laws, it's reasonable to protect proprietary code. Why engage in the effort and cost of producing something of value when that value will be denied to the creator? Just as patents give a competitive edge to inventors, keeping some code proprietary will, in many business models, give a leg up to the creators.
The mantra also teaches us that writing code is the last thing we should ever do. It's cheaper to reuse code. It's smarter to buy, beg, or borrow a module than to code it ourselves. And here's where I'm of two minds about the philosophy of free software, or open source, or any other similar model. As stated, I have no objection to companies keeping their assets private. But we're tumbling into a software crisis whose dimensions few realize. Code costs too much, and is all too often of such low quality that it's a miracle anything works. As the old saying goes, if architects made buildings the way programmers create code, the first woodpecker that came along would destroy civilization.
Free software, or at least open source, holds some promise of reducing costs and improving quality. Noble goals indeed. In my opinion, reuse of some sort is the only hope we have to grapple with quality and cost issues. Unhappily, the dream of the software IC seems dead, doomed by lack of interest, pricing problems, and quality concerns.
In the embedded domain, quality is probably the major reason why developers don't reuse code. When I ask developers why, for instance, they continue to write their own RTOSes instead of buying one of the 100 or so commercial products, the first concern is "what if there's a bug? If I write my own I can debug it, even at 3 AM." Surely these are valid concerns, and clearly the vendors haven't adequately wrestled with this issue.
Several open source RTOSes are available. Still developers write their own; still I hear the same complaint, now modified to "well, if I write it myself at least I'll understand it when problems arise." Why is this? Is the old bugaboo, Not Invented Here, the root cause behind the failure of developers to institutionalize reuse?
Once, in a fit of despair while trying to reach some understanding of these issues, I downloaded Jean LaBrosse's uC/OS (www.micrium.com) and ran it through a line-counting program. 4000 lines of C. Not so much, really. 4000 lines. I could write that in, um, well, pretty quick, I bet.
Maybe not. Commercial firmware typically costs around $15 to $30 per line. Those 4000 lines might cost $100,000 to design, code, and test, a lot of money by any standard.
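The arithmetic is easy to check. Here's a minimal sketch in C of the kind of estimate I mean; the $15 and $30 figures are the per-line range quoted above, and the naive newline count is my own simplification (a real counter would skip blank lines and comments):

/* loc_cost.c - back-of-the-envelope firmware cost estimate.
 * Counts newlines on stdin and applies an assumed $15-$30
 * per-line development cost.
 * Build: cc -o loc_cost loc_cost.c    Use: ./loc_cost < ucos.c
 */
#include <stdio.h>

int main(void)
{
    long lines = 0;
    int c;

    while ((c = getchar()) != EOF)
        if (c == '\n')
            lines++;                 /* one newline = one line of code */

    printf("%ld lines of code\n", lines);
    printf("estimated cost: $%ld to $%ld\n", lines * 15L, lines * 30L);
    return 0;
}

Feed it 4000 lines and it reports $60,000 to $120,000, which is where that $100k figure comes from.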
So, I'm cynical about the prospects for reuse, since even for the tiny amount of purchasable code that exists (like RTOSes), 50-70% of users choose to write their own.
Perhaps, though, the relatively recent focus on open source can thwart people's determination to write as much code in-house as possible. One sign of hope: people ask where they can get a TCP/IP stack now, instead of immediately planning to write their own. I see folks ripping stacks out of Linux, for example. Perhaps this is a harbinger of future success.
Open Source Quality?
At September's Embedded Systems Conference panelists discussed the pros and cons of open source versus proprietary software. As always, each side had powerful arguments that leave me convinced there's plenty of room in the market for any sort of licensing scheme. But a question from the audience caught my attention: "How can I trust your open source code in my safety critical application when anyone can make changes?" This is quite an interesting twist on the paradigm that lots of eyes looking at the code equals high quality. The panelists returned at best ambiguous answers, nothing that gave me a feeling of security.
Having looked at a fair amount of the open source software out there, I'm struck by the wide diversity of coding styles, commenting standards (or lack thereof), and general approaches to writing software. Much of the source is frankly terrible, even if it works. Some is beautiful. Given such a range, how does one ensure quality? Testing, while always a vital part of any qualification process, is not by itself enough to guarantee the stuff works. Quality starts with a good spec and design, is enforced by rigid standards and inspections, and is confirmed by rigorous tests that have been as carefully designed as the software itself. Are we sure that the latest patch to, say, Linux meets all of those criteria?
Intrigued, I put the same question to a Linux vendor. His answer was as reasonable as it was surprising (surprising since the show floor was flooded with companies promoting Linux as the solution to all embedded needs, from internet-aware toasters to smart picture frames). "Forget Linux for safety critical applications," he said, "it's not certified to any standard, and is so complex it probably never will be." Not being a Linux guru I have no idea how accurate the statement is, but coming from a vendor it sounds a note of alarm for high-rel applications.
The obvious moral is that we need different tools for different needs. I do wonder, though, if this describes an innate problem in creating quality code using the open source model.
If there's one thing we need in this industry, it's a resolve to do things in a disciplined manner, all of the time. That means many things, including adopting standard ways to write code. Chaos stems from hundreds of independent developers, all conforming to their own ad hoc (or non-existent) styles, standards, procedures, and methods. Although it is possible to write understandable C, it is not common practice to do so; the language provides cryptic shortcuts that run counter to clarity, and they are widely used, as the snippet below illustrates.
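A contrived example, mine rather than anything pulled from the packages mentioned here: both routines below copy a string, but only one is likely to make sense to the maintainer paged in at 3 AM.

/* The terse idiom: perfectly legal C, perfectly opaque at a glance. */
void copy_terse(char *dst, const char *src)
{
    while ((*dst++ = *src++))
        ;
}

/* The same operation, written for the next reader. */
void copy_clear(char *dst, const char *src)
{
    int i = 0;

    while (src[i] != '\0') {        /* copy characters until the terminator */
        dst[i] = src[i];
        i++;
    }
    dst[i] = '\0';                  /* and copy the terminator too */
}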
The open source world will benefit greatly when there's a coding czar who demands that all submitted work products meet certain standards, and arrive accompanied by design docs and test plans/code. A respected "keeper of the code quality" could indeed ensure that submissions are as close to perfect as we know how to make them. Of course, someone has to pay this person's or the organization's wages.
To be fair, I've found that almost universally people building proprietary software suffer from all of the same problems. Virtually none use a disciplined development process. Firmware standards, when they do exist, are routinely ignored. Test plans tend to materialize in a last-ditch bit of heroics, not as a result of up-front, well-thought-out design.
I'm constantly astonished that so few firmware folks know of the various approaches the software community has invented to deal with building complex systems accurately. One example is the Capability Maturity Model (www.sei.cmu.edu). This model is notoriously difficult and expensive to implement. But it contains much wisdom that all of us should be studying.
Conclusion
One pro-proprietary panelist mentioned that he does indeed often use the GNU tools when building a new compiler or similar utility. He can use pieces of the open source code, say a code generator, to get his own product up quickly. Then he writes replacements for the chunks ported from GNU until the entire product is proprietary.
But does this mean that the GNU General Public License "infects" the proprietary code? If the application is 50% GNU and 50% private, I presume that means the app must follow the rules of the GPL. It must be released as an open source product. As coding progresses and the GNU code is eventually removed entirely, is the entire code base still open source?
Taking this question to an extreme, how can you be sure that code, ported from some other application, hasn't been infected? Do we need a clean room mentality? If so, reuse is doomed.
Open source does indeed allow a lot of smart folks to find and fix problems. It lets a user understand what's going on in much more detail than possible using a black box component, and gives us the confidence that in the worst case at least we can do our own support.
But just like proprietary software, embedded folks can find themselves trapped with an obsolete version of an open source product (since embedded systems seem to live forever, requiring maintenance for decades). There's no guarantee that the code is any good, nor that any smart eyes have taken the trouble to examine it.
Open source clearly has benefits for users, and for those companies that find a way to exploit it. Proprietary code makes sense for vendors, but in some cases also for users (keeping known good safety critical code secure comes to mind).
I can't help but wonder, though, whether the oligopolies couldn't get all of the benefits of open source with no downside to their business. Perhaps in the Microsoft anti-trust case the remedy should be to place Windows under one of the open source models. Microsoft can still make a ton of money from it, but users will benefit as thousands - millions - of eyes look for bugs.