By Jack Ganssle
An Interview With James Grenning
The agile community's alphabet SOUP of acronyms (whoops, SOUP stands for "Software of Unknown Pedigree") includes XP, TDD, FDD and many more. TDD, for Test-Driven Development, seems to be getting much more exposure in the embedded arena than most of the others. There's much to like about it, but I find some aspects of TDD unnerving.
TDD inverts the usual paradigm of software engineering. Its adherents advocate writing the tests first, followed by the application code that satisfies the tests. That's appealing; when testing is left to the end of the project, which is invariably running behind schedule, the validation gets dropped or shortchanged. Early testing also ensures the system is being built correctly: errors tend not to require massive restructuring of the system.
Further, TDD is implemented in very short cycles. Write a test, write some code, pass the test, and move on. As in eXtreme Programming, the assumption is that code changes are low-risk since the tests are so comprehensive and fast to run.
The tests must run automatically; programmer intervention slows the process down. The suite returns a series of pass/fail indicators.
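To make that concrete, here is a minimal sketch of what such a self-checking suite can look like in plain C. The CHECK macro and the median3() function under test are stand-ins of my own invention, not from any particular framework; real projects typically use a unit test framework such as Unity or CppUTest, but the pass/fail idea is the same.

```c
#include <stdio.h>

/* Hypothetical function under test: returns the median of three values. */
static int median3(int a, int b, int c)
{
    if ((a >= b && a <= c) || (a <= b && a >= c)) return a;
    if ((b >= a && b <= c) || (b <= a && b >= c)) return b;
    return c;
}

static int failures = 0;

/* Record and report any expectation that does not hold. */
#define CHECK(expr) \
    do { if (!(expr)) { printf("FAIL: %s\n", #expr); failures++; } } while (0)

int main(void)
{
    CHECK(median3(1, 2, 3) == 2);
    CHECK(median3(3, 1, 2) == 2);
    CHECK(median3(2, 2, 9) == 2);

    if (failures)
        printf("FAILED: %d check(s)\n", failures);
    else
        printf("PASSED\n");
    return failures ? 1 : 0;
}
```

Because nothing here needs a person watching, the whole suite can run after every small change.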
Kent Beck extolled the virtues of TDD in his 2003 tome "Test-Driven Development: By Example" (Pearson Education, Boston, MA). I found the book shocking: the examples paint TDD as rampant hacking. Need a function that converts between dollars and francs? Hard-code the conversion rate. That will pass the tests. Then, over the course of far too many iterations, generalize the code into the sort of stuff that, well, most of us would write from the outset.
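Here is a rough sketch of the style of progression I'm describing; the function names and the 2:1 exchange rate are my own invention, not code from the book.

```c
/* First test (paraphrasing the style, not the book's actual code):
 *   CHECK(dollars_to_francs(5) == 10);
 *
 * Step 1 -- "fake it": hard-code just enough to make that test pass. */
int dollars_to_francs(int dollars)
{
    (void)dollars;              /* the argument isn't even used yet */
    return 10;
}

/* Many small iterations later, after more tests force generalization,
 * the code becomes what most of us would have written at the outset: */
int dollars_to_francs_generalized(int dollars)
{
    const int rate = 2;         /* illustrative exchange rate */
    return dollars * rate;
}
```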
Is TDD just hacking?
In my travels I often see companies use "agile" as a cover for a completely undisciplined development process. Agile methods, like all processes, fail in the absence of rigor. So, no, TDD is not about hacking.
But I take issue with some of the ideas behind TDD, and think that it needs to be modified for firmware projects. I invited James Grenning (https://www.wingman-sw.com/, formerly at http://www.renaissancesoftware.net), a signer of the Agile Manifesto and well-known speaker and trainer on agile methods, to a discussion in print. He kindly agreed to be interviewed about TDD.
Jack: James, I have three reservations about TDD: in my view it deprecates the importance of design (and requirements gathering), and, though the focus on testing is fantastic, it seems test is used to the exclusion of everything else. Finally, I think TDD raises some important business concerns. Let's look at design first.
TDD newbies usually think that test-driven development is all about using tests to crank code, while the experts claim it's primarily about design. Yet the activities are all coding. How can TDD be about design?
James: Jack, thanks for the opportunity to have this dialog. I'll see if I can straighten you out on some of your misunderstandings.
I'll make several points about how TDD is really about design. Modularity and loose coupling are hallmarks of good design, and they are needed for TDD. There should be an appropriate amount of up-front work. The code is the design. Design is not a phase, but a continuous process. And TDD reveals design problems before code rot becomes irrecoverable.
Let's start with modularity and loose coupling: You cannot unit test a chunk of code if it cannot be isolated and placed in a test harness. To do this you need modularity and loose coupling, and TDD leads you to them. Because testability is confronted daily, the code stays modular and loosely coupled (a.k.a. well designed) throughout the life of the product.
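As a rough sketch of what that isolation can look like in C (the names are purely illustrative, not from a real code base): the alarm logic below calls an abstract heater-off action instead of writing a hardware register, so a test harness can inject a spy and exercise the logic on the host.

```c
#include <assert.h>

/* The alarm logic depends on an injected "heater off" action rather than a
 * register write, so it can be compiled and exercised off-target. */
typedef void (*heater_off_fn)(void);

typedef struct {
    int           limit_c;      /* over-temperature limit, degrees C */
    heater_off_fn heater_off;   /* injected dependency */
} OverTempAlarm;

void OverTempAlarm_Check(const OverTempAlarm *alarm, int temperature_c)
{
    if (temperature_c > alarm->limit_c)
        alarm->heater_off();
}

/* The test build injects a spy instead of the real heater driver. */
static int heater_off_calls;
static void spy_heater_off(void) { heater_off_calls++; }

static void over_temperature_shuts_the_heater_off(void)
{
    OverTempAlarm alarm = { 100, spy_heater_off };
    heater_off_calls = 0;

    OverTempAlarm_Check(&alarm, 101);

    assert(heater_off_calls == 1);
}

int main(void)
{
    over_temperature_shuts_the_heater_off();
    return 0;
}
```

The production build wires in the real heater driver; the test build wires in the spy. That substitution point is the loose coupling I'm talking about.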
Then there's doing appropriate up-front work: Different situations and people require differing amounts of up-front design. Before a development effort commences, a team should spend some time exploring design alternatives. In smaller teams this tends not to be formally documented. Also, any documentation is usually kept high level, so that it is not mired in the detail that causes a lot of churn.
I am not saying you cannot do documentation. I am just saying we don't have to do it up front while things are in flux. If formal documents are needed, we'd create them after getting the ideas to work.
An important part of being successful with TDD is having an architectural vision that guides the team in partitioning the software. The vision also evolves. Like many who practice TDD, I am schooled in Robert Martin's SOLID design principles. Essentially, the SOLID principles provide guidance that helps produce modular and loosely coupled code.
In general, test-driven developers apply these principles to their design vision and daily work. One key idea is that modularity at a small scale supports good architecture.
We need to give coding its due respect. The code specifies the behavior of the executable program in all its necessary detail. Take a look at Mr. Reeves' paper. It may change how you think about code. http://www.developerdotstar.com/mag/articles/PDF/DevDotStar_Reeves_CodeAsDesign.pdf
We believe in continuous design: TDD is a continuous design activity. Design is never done; it is not a phase. As the design evolves, new modules are identified. The initial test cases for a new module are used to explore the module's interface. The test is the module's first user. This really helps develop clean, easy-to-use interfaces.
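Here is a hedged illustration of what "the test is the module's first user" can look like; the CircularBuffer names are hypothetical, and the module deliberately does not exist yet. The first tests are written against the interface we wish we had, and the implementation is then written to make them pass.

```c
#include <assert.h>
#include <stdbool.h>

/* The interface we wish we had, sketched the way the tests want to use it.
 * These declarations are illustrative; no implementation exists yet. */
typedef struct CircularBuffer CircularBuffer;
CircularBuffer *CircularBuffer_Create(int capacity);
void            CircularBuffer_Destroy(CircularBuffer *self);
bool            CircularBuffer_IsEmpty(const CircularBuffer *self);
bool            CircularBuffer_Put(CircularBuffer *self, int value);
int             CircularBuffer_Get(CircularBuffer *self);

/* The first tests read like a specification of the interface. */
static void a_new_buffer_is_empty(void)
{
    CircularBuffer *b = CircularBuffer_Create(8);
    assert(CircularBuffer_IsEmpty(b));
    CircularBuffer_Destroy(b);
}

static void values_come_out_in_the_order_they_went_in(void)
{
    CircularBuffer *b = CircularBuffer_Create(8);
    CircularBuffer_Put(b, 1);
    CircularBuffer_Put(b, 2);
    assert(CircularBuffer_Get(b) == 1);
    assert(CircularBuffer_Get(b) == 2);
    CircularBuffer_Destroy(b);
}
```

At this point the tests don't even link; making them compile, fail and then pass is what drives out the implementation, and the descriptive test names double as documentation.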
The alternative, publishing an interface spec, often results in awkward interfaces. TDD, with its code-centric, interface-first approach, excels here. As we concurrently develop code and tests, we might find that the design does not cleanly handle some new requirement. In TDD we have a complete set of automated tests, and those tests weave a safety net that takes a lot of the risk out of refactoring the code.
In a situation where there are no automated tests, the developer might shoehorn in the new functionality. It's just a small change after all, but it starts the code rot process that has led to many of your readers' legacy code messes. How does a design rot? One line at a time.
Tests are a design spec: As TDD proceeds, the detailed design requirements are captured in the test cases. This is a very powerful form of documentation. It does not degrade into lies over time, as many other forms of documentation do. Once developers become familiar with tests as documentation, they find it much more useful than prose. It is also the first place to look when considering using a module, or when trying to understand it well enough to make a change.
Are you aware of code rot radar? TDD acts as an early warning system for design problems. Imagine you can see in the production code where a change is needed for a new feature or bug fix. But you also find that you cannot devise a test that covers this change. This is an early warning of a design problem. The code is telling you, before it is too late, that code rot is starting.
TDD gives warnings of functions that are too long, too deeply nested, or are taking on too many responsibilities. It is no surprise that if you listen to what the code is saying, code rot can be prevented by making needed design improvements as they become necessary.
Like I said a bit ago, the tests form a safety net to reduce the risk of restructuring working code. Refactoring is easy when only a small part of the code is rotten. TDD in the hands of a knowledgeable engineer can really help keep code clean.
Jack: The agile community says we often don't know what we want, so we wind up building the wrong things. That observation drives a lot of the agile practices. My experience with embedded systems is that we generally have a reasonably clear idea of the end-product, though some features and UI aspects might be fuzzy at first. But most embedded apps have no real UI, or a UI that is fixed as it consists of some LEDs and switches or other clearly-defined I/O.
I have observed that a large part of the fuzziness stems from poor requirements elicitation. Often that's because doing requirements analysis is no fun. TDD seems to favor downplaying the requirements phase. Instead, why don't we devote more effort to it?
James: The situation you mention is well suited for incremental product development; some things are clear, some are fuzzy. In the last section I mentioned that design is a continuous process. Should requirements gathering and elaboration be continuous as well? When are the requirements really known? When are they done? The common answer I get to that question is "the day the product ships". Statements like that tell me that teams following a waterfall approach fall back to an evolutionary approach when things get tough.
In Agile we consider that product development engineering and requirements engineering can proceed in parallel. Why delay development when there are important requirements, already known, that we could start developing? I have not seen a good reason for delay.
Waiting has a definite downside. Serializing requirements and development adds unnecessary delay to the overall product delivery timeline. You won't be able to get the time back that was wasted in delaying the start. Also, consider the time spent in a serial process, detailing out requirements only to cut them when you discover the date is in jeopardy. It's another waste you can avoid.
In agile, we strive to get good at evolving requirements and code concurrently. We want to get the product to market sooner. TDD says nothing about requirements collection. It is an incremental approach to development, which happens to work very well with incremental requirements gathering.
We also accept that it is likely that we will get some of the requirements wrong. This is the case for up-front requirements too. Development is invention; there is uncertainty. Building the product helps us figure out what we know and don't know, far better than more noodling over the elusive complete requirements.
One common concern I often hear is that some missing requirement will invalidate all our work. If we are completely off target with our product vision, sure, this can happen. But I think it is unlikely unless the gap in vision is as wide as building a cell phone when we needed a washing machine.
Requirements surprises are probably a greater risk for non-TDD projects. The TDD code base will be better able to handle a requirements mistake because of its resulting well-structured, modular code with automated tests.
One more thing on requirements: if you plan on staying in business, you'd better hope for new requirements; they are the opportunities that drive business. How many engineers out there are starting brand new products? I bet more of you are working on ten-year-old code bases than on fresh greenfield products.
Ask yourself, could you have defined the requirements that drive today's work ten years ago? Of course not. Today's requirements might not have even been known one year ago for some evolving products. TDD addresses the world that most developers find themselves in, a world where requirements and design must evolve.
Jack: Sure, but with embedded systems we have to be pretty clear about requirements from the outset, as these impact the hardware design. That can't change much during the project since changes can drive the recurring manufacturing costs to the point where the product is no longer viable.
James: Are requirements really that clear? I bet some might argue with you. In the early stages the hardware-software boundary must be considered. Committing too early is a problem too. Try to make boundary decisions reversible or finalize them when there is enough information.
My main interest is the software side of this discussion, but there are people being more agile with hardware design. The X-15 rocket plane, entirely a hardware project, was developed using incremental design techniques. There are some very agile hardware architectures evolving as well, like the Cypress PSoC (Programmable System-on-Chip).
Jack: Let's talk about the heart of TDD: testing. TDD's automated testing is brilliant. I wish we all did more of it. But that runs into some problems in the embedded world. How do you automate tests when someone has to press buttons and watch displays?
James: At some level, a button press resolves to a function call. A test case can make the same call as the button press event handler. On the output side, a simple test fixture can intercept messages destined for the display. I bet most if not all of your readers have written a test mainly to exercise a new piece of code before integrating it. Likely that test fixture was discarded once integration was complete.
We do the same thing, but we write the test code so that it plugs into the test harness and continues to verify the correct behavior of the code long after integration. Never throw out the test fixture. We wire it into our suite of all tests.
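A rough sketch of that idea, with hypothetical names: the test calls the same handler the button event would dispatch to, and a spy version of the display driver, linked in place of the real one in the test build, records what would have been shown.

```c
#include <assert.h>
#include <string.h>

/* Spy display driver: in the test build this definition replaces the real
 * LCD driver's Display_Write(). Names are illustrative. */
static char last_message[32];
void Display_Write(const char *text)
{
    strncpy(last_message, text, sizeof last_message - 1);
    last_message[sizeof last_message - 1] = '\0';
}

/* Production code under test (normally in its own source file): the same
 * function the button ISR or event loop dispatches to. */
void StartButton_Pressed(void)
{
    Display_Write("RUNNING");
}

/* The automated test: no button to press, no LCD to watch, no target needed. */
static void pressing_start_shows_running_on_the_display(void)
{
    last_message[0] = '\0';
    StartButton_Pressed();      /* the same call the event handler makes */
    assert(strcmp(last_message, "RUNNING") == 0);
}

int main(void)
{
    pressing_start_shows_running_on_the_display();
    return 0;
}
```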
TDD requires learning new skills and honing existing ones. Creating automated tests and testable designs is a challenge; developing a testable design is a skill in itself. Dependencies on the hardware and operating system must be managed. Functionality must be divided into modules, and those modules must be tested independently.
I never worked at a car manufacturer, but I bet that the engine goes through some test before being bolted into the car. It's modular; it's testable. The engine is run in a test fixture before bolting it into the car frame.
Imagine if car manufacturers welded the engine into the car, only running the engine after the weld cooled. We're doing that when we create code that can only be tested on the target. A weld is not a very flexible interface. I'm afraid I see too much of that kind of embedded software, how about you?
It's kind of interesting; this is the same kind of objection some desktop and business application software developers express. The answer turns out to be the same in principle: build modular, testable code.
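One common way to avoid the weld, sketched here with hypothetical names, is to hide register access behind a small seam function and let the host test build link a fake in place of the target version.

```c
#include <assert.h>

/* gpio.h -- the seam: all the production code ever sees. */
int Gpio_Read(int pin);

/* gpio_fake.c -- linked into the host test build in place of gpio_target.c,
 * which would do the real register access, e.g. (PORTA >> pin) & 1. */
static int fake_pin_levels[32];
int  Gpio_Read(int pin)                    { return fake_pin_levels[pin]; }
void GpioFake_SetLevel(int pin, int level) { fake_pin_levels[pin] = level; }

/* door_monitor.c -- the logic under test never touches hardware directly. */
#define DOOR_SWITCH_PIN 5
int Door_IsOpen(void) { return Gpio_Read(DOOR_SWITCH_PIN) == 0; }

/* test_door_monitor.c -- runs on the development machine, not the target. */
static void door_reports_open_when_switch_reads_low(void)
{
    GpioFake_SetLevel(DOOR_SWITCH_PIN, 0);
    assert(Door_IsOpen());
}

int main(void)
{
    door_reports_open_when_switch_reads_low();
    return 0;
}
```

Swap gpio_fake.c for gpio_target.c in the makefile and the same logic runs on the target; nothing is welded.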
Jack: I have seen some way cool stuff being done with LabVIEW's vision module; people use a TV camera to watch the embedded device and see that it's doing the right things. But if we're doing tests on a mock, then we're really not testing the system; we're testing some simulation of it that may not have perfect fidelity. And that suggests that the very hardest tests - integration - are left to the end, or are not part of the central thesis of TDD, which is continuous automated testing.
James: That LabVIEW approach sounds pretty cool. Do they have robot arms pushing buttons too? Teams should definitely invest in automated system tests as well. But while testing the integrated product is necessary, it is not sufficient; the pieces must be tested too. If the pieces are not solid, the system won't be solid.
In complex systems the number of code paths and error cases explodes combinatorially, so it is impossible to test them all when the code is exercised only as a fully integrated product. At the unit level, though, it is possible to exercise all the paths. If we only test at the system level, many code paths will go untried until the product bumps into that one-in-a-million event in the hands of a customer. I think you have a story like that from when you were out on the Atlantic with no land in sight.
Testing at the product level should be more about making sure the pieces play well together, along with some ad hoc exploratory testing. The ideal of full 100% coverage is only practical at the unit level. If you had two interacting components that each required 10 tests, covering their combined behavior could take as many as 100 tests (10 × 10). That's not practical, but 20 unit tests plus a handful of integration tests is.
Here's part 2.
Published January 5, 2010