On Testing

Summary: Testing is crucial. And flawed.

Bruno Soko's recent article (http://www.embedded.com/electronics-blogs/say-what-/4439669/Test-Automation-vs-Manual-Testing-in-Software-Development-) "Test Automation vs Manual Testing in Software Development" is worth a read. In it he lays out the tradeoffs between the two approaches.

Personally, I think that for most products we should automate as much testing as possible. The old adage still holds: If you have to do something once, do it. If you have to do it twice, automate it. But some things are tough to delegate to a machine. Watching and interacting with a UI is an example.

But some people are doing very clever things. Some use LabVIEW with its vision module: they aim a camera at a control panel or even a screen and have LabVIEW dissect the visual field, returning the elements as text items. Sometimes it's possible to close the testing loop this way.

Bruno didn't mention holding tests to objective standards. We have no idea how to determine how many test cases are needed in total, but we can compute the minimum. Cyclomatic complexity gives us a hard number for each function: if you don't have at least that many test cases, then, for sure, some paths through the code are never exercised.
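
To make that arithmetic concrete, here's a sketch with a made-up function (mine, not Bruno's): count the decision points, add one, and that's the floor on the number of test cases.

    /* Hypothetical example: three simple "if" decisions give a
     * cyclomatic complexity of 3 + 1 = 4, so no suite with fewer than
     * four test cases can exercise every independent path through it.
     */
    int classify_sample(int value, int low, int high)
    {
        int code = 0;

        if (value < low)        /* decision 1 */
            code = -1;
        if (value > high)       /* decision 2 */
            code = 1;
        if (low > high)         /* decision 3: limits reversed */
            code = 2;
        return code;
    }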

Testing is a hugely important activity. But it suffers from three flaws:

 Outside of agile shops, testing takes place at the end of the project. The project is late, of course, so what gets cut?

 It doesn't work. Plenty of studies have shown that, absent code coverage metrics, the typical test regime exercises only about half the code. Exception handlers and special cases are notoriously difficult to test, and deeply nested ifs lead to mind-boggling numbers of testing permutations (see the sketch after this list).

 Test is the wrong way to find bugs.
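
The path explosion from nesting is easy to see in a sketch. This fragment is contrived (the helper functions are hypothetical), but with just three nested conditions there are already 2 x 2 x 2 = 8 input combinations, and only one of them reaches the innermost special case:

    #include <stdbool.h>

    extern void purge_oldest_entry(void);     /* hypothetical helpers */
    extern void record_reading(int reading);

    /* Only the combination sensor_ok && reading > threshold && log_full
     * reaches purge_oldest_entry() -- exactly the kind of path the
     * average test regime never exercises.
     */
    void log_reading(bool sensor_ok, int reading, int threshold, bool log_full)
    {
        if (sensor_ok) {
            if (reading > threshold) {
                if (log_full) {
                    purge_oldest_entry();
                }
                record_reading(reading);
            }
        }
    }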

To elaborate on the last point, we have to first "man up" and admit that we are flawed firmware developers who will produce code with all sorts of defects. That's just the nature of this activity. Given that we know we will inject bugs into the code (and maybe a lot of them), and that testing is flawed, wise developers will employ a wide range of strategies to beat back the bugs.

I like to think of generating high-quality software in terms of a series of filters. No one filter will have 100% efficacy; each will screen different percentages of problems. Used together the net effect is to remove essentially all of the bad gunk while letting only the purest and most pristine code out the door.

This is not a new idea. Capers Jones lists over 100 filtering steps (and analyzes the effectiveness of each) in The Economics of Software Quality. He makes it clear that no team needs to use all of them, of course; some, for instance, are for dealing with databases; others are for web page design.

The compiler is a filter we all use. It won't generate an object file when there are bugs that manifest as syntax errors. Unfortunately, it will happily pass a binary to the linker even if there are hundreds of warning messages. It's up to us to be disciplined enough to have a near-zero tolerance for warnings.
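
One cheap way to enforce that discipline with gcc or clang is to promote warnings to errors, so this filter can't be waved through. The function below is contrived; build it with something like "gcc -Wall -Wextra -Werror -c" and the mistake stops the build instead of shipping:

    /* Contrived example: the condition assigns instead of compares, so
     * the "fault" branch can never run. With -Wall the compiler warns
     * about an assignment used as a truth value; with -Werror that
     * warning becomes a hard stop.
     */
    int motor_fault(int status)
    {
        if (status = 0)     /* should be: status == 0 */
            return 1;
        return 0;
    }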

(I remember using the RALPH Fortran compiler in 1971 on a Univac 1108. If there were more than 50 errors or warnings in your card deck it would abort, print a picture of Alfred E. Neuman with the caption "This man never worries, but from the look of your program, you should.")

Do you use Lint? You should. Though annoying to set up, Lint will catch big classes of errors. It's another filter stage.
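
As a rough sketch of the class of problem a lint pass catches (the exact diagnostics vary from tool to tool, and the code is contrived): a variable that is initialized on only one path will sail through many compiler runs but not through Lint.

    int read_adc(int channel);            /* hypothetical driver call */

    /* Contrived example: 'first' is set only when ch >= 0, so the
     * return statement may use an uninitialized value. Lint flags the
     * suspect path even when the compiler stays quiet.
     */
    int average_two(int ch)
    {
        int first, second;

        if (ch >= 0)
            first = read_adc(ch);
        second = read_adc(ch + 1);
        return (first + second) / 2;
    }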

Static analysis is becoming more important. The tools cost too much, but they're still cheaper than engineering labor. Another filter.
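
The defects these tools chase tend to be path-dependent, the kind that only bite when a particular branch is taken. A contrived example of what a path-sensitive analyzer will trace across branches:

    #include <stdlib.h>
    #include <string.h>

    /* Contrived example: when malloc fails, 'buf' is NULL, yet memcpy
     * is still handed the NULL pointer. A static analyzer follows the
     * failure path and reports it; testing only finds it if an
     * allocation actually fails during a test run.
     */
    void save_packet(const char *data, size_t len)
    {
        char *buf = malloc(len);

        if (buf == NULL)
            len = 0;            /* "handled" the error... */

        memcpy(buf, data, len); /* ...but buf may still be NULL here */
        free(buf);
    }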

Test, of course, is yet another filter. Or rather, another series of filters: unit test, integration test, black-box test, regression test. Each serves to clean up the code.
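
A unit-test filter doesn't have to be elaborate. Here's a minimal sketch that exercises the classify_sample() function from the cyclomatic-complexity example above, one case per independent path (link the two files together and run the result from a build script or CI job):

    #include <assert.h>
    #include <stdio.h>

    int classify_sample(int value, int low, int high);  /* from the earlier sketch */

    int main(void)
    {
        /* One test case per independent path -- the cyclomatic minimum. */
        assert(classify_sample( 5, 0, 10) ==  0);   /* in range          */
        assert(classify_sample(-3, 0, 10) == -1);   /* below lower limit */
        assert(classify_sample(42, 0, 10) ==  1);   /* above upper limit */
        assert(classify_sample( 5, 9,  1) ==  2);   /* limits reversed   */

        puts("classify_sample: all tests passed");
        return 0;
    }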

But these filters won't work well unless you monitor their effectiveness. What percentage of the bugs sneaks through each stage? If test is finding 90% of them, then there's something seriously wrong with the earlier filters, and corrective action must be taken.
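
Putting a number on a filter is simple arithmetic: bugs the stage caught divided by all the bugs that reached it (caught plus those that surfaced downstream), roughly the per-stage defect removal efficiency Capers Jones writes about. The counts below are invented, purely to show the calculation:

    #include <stdio.h>

    struct stage { const char *name; int caught; int found_later; };

    int main(void)
    {
        /* Invented counts; 'found_later' means found by any later stage. */
        struct stage stages[] = {
            { "code inspection",  45, 55 },
            { "unit test",        30, 25 },
            { "integration test", 15, 10 },
        };

        for (size_t i = 0; i < sizeof stages / sizeof stages[0]; i++) {
            double pct = 100.0 * stages[i].caught
                       / (stages[i].caught + stages[i].found_later);
            printf("%-17s removed %.0f%% of the bugs that reached it\n",
                   stages[i].name, pct);
        }
        return 0;
    }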

Engineering is, after all, about the numbers.

What filters do you use?

Published June 12, 2015