Extending Debugging Resources
Summary: Are we using the debugging resources available in so many MCUs in the most effective manner?
In the olden days, processors just didn't have any on-board debugging features. This was a boon for tool makers, and it kept me fed for many years as my company made in-circuit emulators. Early ICEs offered little functionality beyond a handful of instruction breakpoints, but over time they sprouted a wealth of features: data breakpoints, trace, profiling and much more. The 80s and 90s were halcyon days for the ICE industry.
That business is all but gone. Sure, pockets still exist. Microchip's REAL ICE and a handful of other products still keep a flicker of life in the emulator business. But very high bus speeds, tiny all-but-unprobeable packages, and the staggering array of on-chip debugging features hollowed out that industry.
Multiple hardware break- and watch-points are now common on-chip, as is trace and much more. In the ARM market, of course, vendors are free to pick and choose (for a fee) from a variety of debug modules, or to have none at all.
How many of these resources do you typically use at a time when debugging? I bet the answer is generally no more than a few.
I'd like the IDE vendors to offer a mode that automatically enables these resources to capture common problems. For instance, wouldn't it be nice if the tools always monitored stacks? Or watched large data structures for buffer overruns? Or captured null pointer dereferences by watching for accesses through location zero?
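Even before the tool vendors act, a crude version of the stack monitoring idea is easy to roll by hand. Here's a minimal sketch in C, assuming a descending stack whose lowest legal address is marked by a hypothetical linker symbol __stack_limit__; the symbol name, fill pattern, and guard size are all illustrative:

```c
#include <stdint.h>

/* Hypothetical linker symbol marking the lowest legal stack address. */
extern uint32_t __stack_limit__;

#define STACK_FILL  0xDEADBEEFu   /* arbitrary watermark pattern */
#define GUARD_WORDS 16            /* words at the bottom of the stack to check */

/* Call once at startup, before the stack has grown deep. */
void stack_guard_init(void)
{
    uint32_t *p = &__stack_limit__;
    for (int i = 0; i < GUARD_WORDS; i++)
        p[i] = STACK_FILL;
}

/* Call periodically (e.g., from the idle loop). Returns nonzero on overflow. */
int stack_guard_check(void)
{
    const uint32_t *p = &__stack_limit__;
    for (int i = 0; i < GUARD_WORDS; i++)
        if (p[i] != STACK_FILL)
            return 1;   /* something has written into the guard region */
    return 0;
}
```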
A user could provide the debugger with a script that configures each debugging resource whenever it isn't explicitly in use. If the IDE sees that several data breakpoints aren't needed, it would run the script to put them to work in some useful fashion. The script would also supply a message to display when a breakpoint is taken, reminding the engineer what violation occurred; he or she may have set this up months before. RAM is not the only kind of memory that leaks.
Going a bit further, at ship time the script could be integrated into the code so production systems that experience a bug fail in a safe and predictable manner.
Of course, there's no reason we have to wait for the IDE folks to enable the last feature. We often have this wealth of debugging transistors that almost always is left unused, just wasting power, once the system is "done." Why not set them up to look for common problems that we know will never happen, but all too often do? In many cases it's possible to program the control registers to toss an exception, rather than stop execution, when a breakpoint fires. Write a handler to take whatever action is appropriate, which could be to shut down, restart cleanly, or simply report the condition.
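As one concrete sketch of that idea: on a Cortex-M3 or M4 the DWT comparators, which debuggers normally claim for data watchpoints, can be pointed at address zero by the application itself, and with the debug monitor exception enabled a write through a null pointer lands in DebugMon_Handler instead of silently corrupting memory. The register encodings below are specific to those cores and the device header include is a placeholder; treat this as a starting point, not a drop-in solution:

```c
#include "device.h"     /* placeholder: your vendor's CMSIS device header */

/* Arm DWT comparator 0 to trap writes through a null pointer. */
void trap_null_writes(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk     /* power up the DWT */
                      | CoreDebug_DEMCR_MON_EN_Msk;    /* enable the debug monitor exception */

    DWT->COMP0     = 0;     /* watch address 0... */
    DWT->MASK0     = 2;     /* ...covering the whole first 4-byte word */
    DWT->FUNCTION0 = 6;     /* Cortex-M3/M4 encoding: watchpoint on data write */
}

/* CMSIS name for the debug monitor exception handler. */
void DebugMon_Handler(void)
{
    /* Take whatever action is appropriate: report, shut down, or restart cleanly. */
    for (;;)
        ;   /* placeholder: hang so the failure is obvious during test */
}
```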
I call this "proactive debugging," and have been advocating it for years. Alas, the message has gone largely unheard. One example is seeding all unused memory with software interrupt instructions, since when a program crashes it generally just wanders to a random location. Another is arranging (when possible) the code and fixed data adjacent to each other, with the stack and other variables above, so a single data breakpoint can capture any write to addresses that should never be written. There are a lot of ways to debug proactively, but they all require a mindset that recognizes that there will be bugs, so we need to seed the code with constructs that will automatically catch them.
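For the first of those examples, a few lines at startup are enough. This sketch assumes a Cortex-M target, where a halfword of the form 0xDExx is a permanently undefined Thumb instruction (executing it faults immediately), and it uses hypothetical linker symbols to bracket the unused RAM; substitute whatever trap or software interrupt opcode your architecture provides:

```c
#include <stdint.h>

/* Hypothetical linker symbols bracketing RAM the application never uses. */
extern uint32_t __unused_ram_start__;
extern uint32_t __unused_ram_end__;

/* 0xDEDE is a permanently undefined Thumb instruction, so a crashed program
   that wanders into this region faults instead of running on. */
#define TRAP_FILL 0xDEDEDEDEu

void seed_unused_memory(void)
{
    for (uint32_t *p = &__unused_ram_start__; p < &__unused_ram_end__; p++)
        *p = TRAP_FILL;
}

/* With UsageFault left disabled, the undefined instruction escalates to HardFault;
   the handler then reports or recovers, just as with the watchpoint above. */
void HardFault_Handler(void)
{
    for (;;)
        ;   /* placeholder: log the crash, restart cleanly, or shut down safely */
}
```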
When the 386 came out three decades ago, almost everyone complained that its MMU took the memory segmentation we hated on the 8086 from a manageable four segments to thousands. Most programmed in a flat 4 GB address space instead (as did Windows). They missed the point that the MMU made exquisite control of rogue programs easy. A tiny bit of code could watch for problems and take appropriate action.
Too many of us are still letting these debugging resources lie fallow.
What sort of proactive debugging do you employ?
Published February 25, 2014