A ton of people replied to this article in the last Muse. Most said they write little Assembler code but read a fair amount. And there was some discussion about whether "Assembler" should be capitalized. I bow to the general consensus that it should be... and wonder where the word came from. Here are a handful of reader replies.
Dan Daly wrote:
|
Rarely these days do I ever write assembly, but I still read quite a bit of it from time to time. I liken an understanding of assembly language (or at least an active awareness thereof) to a similar understanding of the schematic diagram of the circuit of which the microcontroller or SoC is a part. Both are important parts of the working of the system. Is that GPIO line your code's controlling pulled up, or down? How long does that switch bounce before it settles? Is unrolling that loop really saving you processor time? Did the compiler perform the optimization you were expecting?
The basics still matter and always will. It's tempting to gloss over the paint-a-fence, sand-a-floor, wax-on-wax-off basics, and some may get away with doing so for a while, but down that path lie shallow knowledge, fragile designs, and poor decisions. Good, curious practitioners will latch onto one processor architecture and get to understand it, and then probably move on to another. It's kind of addictive; at least it was for me.
There's another, rather intangible benefit to knowing the basics: you shed the belief that it's turtles all the way down. When one understands the system all the way down to the metal, no problem becomes inaccessible. Visibility into the fault or failure may not come as easily as laying a cursor on a variable in an IDE to get its value, but the answer is there, nonetheless, if you know how to go low enough.
And yes, I'm very fond of working right down on the metal. |
From Chris van Niekerk:
|
Your article on Assembly (yes, with the capital it deserves) programming brought back lots of memories.
Back in the day you counted the number of clock cycles each instruction took to execute, trying several loop options to see where you could shave off a few cycles to speed things up. Knowing the hex opcode of every Z80/8080/8085 instruction, worrying about propagation delays and rise times through the address multiplexers for I/O and memory, and about the rise/fall times of signals: that was fun!
Nowadays your CPU is an MCU, and all the fun parts are included, and more!
I do agree that, from a productivity perspective, high-level languages must be used. Having said that, understanding the processor architecture and the underlying operations (how a call works, how the stack is used, how parameters are passed to a function) lets you write much more efficient code than someone without that understanding.
Colleges and universities teach all the fancy features of high-level languages, which is absolutely great when working on a PC, or on faster and bigger machines where processor speed and memory are not in short supply. But when some programmers are let loose on embedded systems, the code is neither compact nor efficient.
I use Arm processors nowadays and must confess that I am not fully aware of all the nuances of the hardware, but a basic understanding of the nuts and bolts of a processor certainly helps in writing efficient code. I still look at the assembly code produced by the compiler in the report files from time to time, just for old times' sake. |
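Chris's point about knowing how a call and the stack work shows up clearly in those listing files. Here is a trivial sketch of the sort of thing he means; the assembly in the comment is typical of what a 32-bit Arm compiler emits under the standard AAPCS calling convention, not output from his toolchain.

/* A trivial function, and the kind of code a 32-bit Arm compiler typically
   emits for it. Under AAPCS the first four integer arguments arrive in
   r0-r3 and the result returns in r0, so the stack isn't touched at all. */
int sum3(int a, int b, int c)
{
    return a + b + c;
}

/* Typical optimized listing (illustrative, not from any specific toolchain):
 *
 *   sum3:
 *       add r0, r0, r1    ; a + b (both already in registers)
 *       add r0, r0, r2    ; + c
 *       bx  lr            ; return, result in r0
 *
 * Add a fifth argument, or take the address of a local variable, and the
 * compiler has to start using the stack; that cost is exactly what a quick
 * look at the listing makes visible. */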
James Thayer, like your editor, goes back to the early days of this field:
|
As a child of the early days of programming computers by banging rocks together, I've really been enjoying your trips down memory lane.
My degree was in "Electrical and Computer Engineering" with a specialization in Computer Architecture. I found (and still find) computer architecture to be a fascinating subject, but I never did actually work professionally as a computer architect. The closest I came was a senior class project where we designed and implemented a CPU with 74-series TTL. We were allowed (encouraged) to use the 74181 for the ALU, which vastly simplified things, but even so, the very essence of creating this CPU was implementing an instruction set (an assembly language of our own creation) in hardware.
As I went into my professional career, it was only a couple of years before I was writing as much code (8080 and Z80 assembly) as I was designing electrical circuits. Fun stuff like controlling servo motors and actuators. There's nothing quite like being able to visualize your code by watching mechanical objects in motion (and where a bug in the wrong spot could damage thousands of dollars' worth of precisely machined metal components... and/or send someone to the hospital).
After a few years in the land of Forth, an interesting language in that it can be -- and has been -- implemented directly as a computer architecture, I came back to the "mainstream" of C/C++. I wrote software for Windows for a few years before coming back to embedded software. The company I worked for was developing in C++ for 68040 and 68060 embedded systems. By then, although the software was embedded, it didn't feel like an embedded system. There was an OS, a file system, and drivers for the hardware. One might as well have been writing software for a PC. The only thing missing was a keyboard and a display (and if you plugged in a terminal -- which by then meant a PC running XTERM or similar -- you had a rudimentary keyboard and display).
In regard to the "new distance" between engineers and the underlying hardware: one day, a colleague asked me for help on a problem he was having. His code would run up to a particular line and then go out into the weeds. Looking at his code, I could see nothing wrong, so I said, "Show me in the debugger." I watched as he stepped through the code line by line. Sure enough, there was a point where everything looked fine, but if you stepped one more line, boom, you were in never-never land with no way to find your way back. So I said, "Let's see what's really happening," as I flipped the debugger into assembly mode. To my colleague, we had just taken a portal into a strange and wonderful universe he had never seen before. We ran the software again and watched as the processor methodically set up the conditions for a one-way trip into the void. From this perspective it was blatantly obvious what was wrong with the code, and it was easily fixed. But with no background in looking at the machine, my colleague could have scratched his head for weeks trying to figure it out.
These days I don't use much assembly anymore (if I do, it's a few lines to configure hardware for an Arduino project). Programming on top of bare metal was the most fun I ever had in my career, but the opportunities to do that these days are few and far between. I am mostly retired now, and while I still work with bare metal, most of the time I use fire and a hammer to "program" it. (I have regressed hundreds of years from the cutting edge of 21st-century telecommunications technology to forging iron.) |
Tom Mazowiesky often writes and sent this:
|
Wow, Assembly! I did the exact same thing you did when I was in HS: I designed a Fortran-based computer too. All of my early embedded development was in assembler (6800, 8085, Z-80 & 80x86). I also combined a TRS-80 and some custom hardware into a board tester for a company that built high-speed dot matrix printers. Most of the code was in BASIC, but I had to write a couple of routines in assembly to improve the speed.
In those days memory size and speed were so limited compared to today's processors that it forced us to stay with assembler. I do agree that we all had to have a good understanding of the CPU itself in order to write good code. However, back then the micros were just that: CPUs only. Very few built-in peripherals were available in that generation. You had maybe a 200-page manual, half of which was dedicated to the assembler instructions, but when you read it, everything was clear. I use the Atmel SAM4S these days: 1400 pages for the CPU alone, and while it's OK, it can be pretty sketchy sometimes about what the part is really doing and how to program it. There's an incredible number of registers for all the various peripherals. I think you could spend 20 years programming it and still not understand it 100%. Atmel provides example code modules, but I think Silicon Labs' examples are much better: each peripheral on the chip has a simple example that explains all of the operation of the unit. They are well documented and are easy to use as models for development work.
C and similar languages have definitely made it easier to develop code, but it's disheartening to see how little thought sometimes goes into development. I've made a decent career out of consulting for companies that developed poorly performing or non-working designs, and fixing them. While the engineers are well learned to some degree, they seem to lack understanding of basic concepts such as data hiding, modularization, etc. I've waded into too many projects where the code is all in one file, or in multiple files with every variable public and extrn'ed, with identical variables duplicated across files - ugh. Copy and paste is handy, but coming across the same function repeated over and over in multiple files is annoying.
It's definitely a different world now than 40 years ago when I started working. |
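For readers who haven't bumped into the term, here is a small sketch of the data hiding Tom mentions: the module owns its state, and the rest of the system goes through a pair of functions. The file and function names (and the scaling) are purely illustrative.

/* temperature.c -- the only file allowed to touch this state.
   'static' keeps the variables invisible to every other translation unit,
   the opposite of making everything public and extern'ing it everywhere. */

#include <stdbool.h>
#include <stdint.h>

static int32_t latest_tenths_c;   /* last reading, 0.1 degC units */
static bool    reading_valid;

/* Called from the ADC driver when a new sample arrives. */
void temperature_update(int32_t raw_adc)
{
    latest_tenths_c = (raw_adc * 1250) / 4096;   /* made-up scaling */
    reading_valid   = true;
}

/* The only way the rest of the system sees the reading. */
bool temperature_get(int32_t *out_tenths_c)
{
    if (!reading_valid)
        return false;
    *out_tenths_c = latest_tenths_c;
    return true;
}

The matching header exposes only the two prototypes; change the sensor or the scaling and nothing outside this file needs to know.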
Steve Paik's insights are worthwhile:
|
This Muse hit a couple of chords with me. My quick thoughts:
I started my career in the late '90s, and have benefited from the plethora of great C compilers available. I have only written a few lines of assembly in that time. I find that assembly has two major strong points for me:
1) Bootloaders. You can't write a bootloader purely in C, because the C runtime environment doesn't exist yet. I had to write a couple of these from scratch and I learned a lot about what happens from the moment power is applied to the first line of main(). This is one of my favorite interview questions for embedded developers: asking applicants to walk me through that process. I find many folks claim "bootloader" on their resume, but what they really mean is "I configured the bootloader that XYZ company supplies with their chip," and very, VERY few people can describe the steps involved in setting up a C runtime. This also touches on the role of the linker, and I find too many embedded folks who have no idea how the linker works or how powerful it is.
2) Subtle algorithm inefficiencies. Compilers these days are extremely powerful and amazing, but the old adage "garbage in equals garbage out" still applies. I've learned that humans make a lot of implicit assumptions when programming, and these assumptions are NOT properly coded and fed to the compiler. Thus the compiler isn't aware of them and cannot optimize for them. In the worst cases the compiler violates the human's assumptions (i.e., the problem is under-constrained by the human) and this creates subtle bugs. This usually shows up as some form of memory aliasing, where the compiler cannot assume a memory location won't get clobbered as a side effect of a loop operation, and thus has to re-read values through pointers instead of caching them in registers. I find hotspots during profiling, and inspect the assembly to infer what the compiler was thinking and how that differs from what the human intended. |
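For readers curious about Steve's interview question, here is a rough sketch of the work that has to happen between the reset vector and main() on a hypothetical Cortex-M part. The _sidata/_sdata/_edata/_sbss/_ebss symbols are common linker-script names rather than universal ones, and real startup code also deals with clocks, the FPU, C++ constructors and so on.

#include <stdint.h>

extern uint32_t _sidata;  /* start of the initialized-data image in flash */
extern uint32_t _sdata;   /* start of .data in RAM                        */
extern uint32_t _edata;   /* end of .data in RAM                          */
extern uint32_t _sbss;    /* start of .bss in RAM                         */
extern uint32_t _ebss;    /* end of .bss in RAM                           */

int main(void);

/* On Cortex-M the hardware loads the stack pointer from the vector table
   before calling this, so the reset handler itself can be written in C. */
void Reset_Handler(void)
{
    /* 1. Copy initialized globals from flash to their RAM addresses. */
    uint32_t *src = &_sidata;
    for (uint32_t *dst = &_sdata; dst < &_edata; )
        *dst++ = *src++;

    /* 2. Zero the uninitialized (.bss) globals, as the C standard requires. */
    for (uint32_t *dst = &_sbss; dst < &_ebss; )
        *dst++ = 0;

    /* 3. (Clock setup, FPU enable, C++ constructors, etc. would go here.) */

    /* 4. Only now is it safe to run ordinary application C code. */
    main();

    for (;;) { }  /* main() should never return on bare metal */
}

Until steps 1 and 2 finish, globals don't hold the values C promises, which is why the bootstrap has to be written with such care (and often starts in assembly).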
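And here is a tiny illustration of the aliasing trap he describes (my code, not his). Unless the programmer says otherwise, the compiler has to assume a store through dst might clobber *gain, so it re-reads it on every pass.

#include <stddef.h>

/* The compiler must assume dst[i] = ... could overwrite *gain, so *gain is
   reloaded from memory on every trip through the loop. */
void scale(int *dst, const int *src, const int *gain, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * *gain;
}

/* 'restrict' states the human's assumption explicitly: these pointers don't
   alias. Now *gain can be hoisted into a register and the loop optimized or
   vectorized far more aggressively. */
void scale_restrict(int *restrict dst, const int *restrict src,
                    const int *restrict gain, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * *gain;
}

A glance at the compiled loop shows immediately whether the compiler believed the promise.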
Tony Garland dives into Assembler when needed:
|
One more comment on this . . . Yes, I recently had to roll up my sleeves and wade into the assembler waters--for diagnosing/capturing errors in an industrial product for a client. The client contacted me out of concern for the low reliability of a product. It turned out there were various issues, but also that none of the processor fault vectors were being routed where they could be diagnosed. Instead, a default (SDK) implementation left each fault routed to an independent "loop to self forever" assembler snippet. The system had a WDT, which would then fire and restart the system. But there was no way to tell how or why the WDT tripped--just that it did and the product "recovered." An improvement I made was to route the fault vectors to a handler that diagnosed the cause and provided information about both the cause and the location in the code where the fault originated. Tough to do something like that entirely in C--there's a need to look at the status register, the link register, and the like. This code can then route to a routine written in a high-level language, but interfacing the faults and sleuthing their meaning is still a job for assembler.
Once we had more visibility, it didn't take long to find some egregious culprits (related to the use of non-thread-safe dynamic memory in an environment with callbacks at multiple priorities). The WDT earned its keep too: another problem was that users of malloc/free (yes, in an embedded system) were doing illegal things: allocating a large piece of memory, splitting it into pieces, handing out the pieces, and then the handed-out pieces were individually handed back to free() :-( There was no concept on the part of the original author that "you can't do that," because some (most?) flavors of malloc prepend housekeeping information to the piece they hand out (which is slightly larger than the caller requested), and returning pieces that didn't match the originally allocated piece left the underlying allocation accounting in a shambles. All sorts of problems will occur--even situations where a call to malloc() or free() never returns--which is where the WDT earned its keep!
The callback/priority use of malloc/free was worked around by decoupling the events that triggered operations leading to memory allocation from the allocation itself: an atomic buffer was interposed so that all the malloc/free operations now take place at a single priority level ("single-threaded" in the main loop).
What this also points out: there are more and more folks doing "embedded development" who lack any exposure to the very considerations that make embedded development different from generic software. They bring their generic SW environment down to the embedded world and quickly step into doo-doo. The ease with which vendors have made it possible to take an SDK, run a GUI configuration tool, tweak some example code, and *think* you are 80% done also contributes to the problem. |
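A minimal sketch of the kind of fault trap Tony describes, assuming a Cortex-M3/M4-class part (his letter mentions the link register, but I don't know his actual target). The record_crash() routine is hypothetical; the point is that the first few instructions have to be assembler, since C has no way to ask which stack pointer was active when the fault hit.

#include <stdint.h>

/* Hypothetical logging routine: flash, UART, a noinit RAM region, etc. */
extern void record_crash(uint32_t pc, uint32_t lr, uint32_t psr,
                         uint32_t r0, uint32_t r12);

/* Called from the assembly shim with a pointer to the stacked register frame
   the hardware pushed when the fault occurred. */
void fault_c_handler(uint32_t *stacked)
{
    uint32_t r0  = stacked[0];
    uint32_t r12 = stacked[4];
    uint32_t lr  = stacked[5];   /* return address of the faulting call */
    uint32_t pc  = stacked[6];   /* instruction that faulted            */
    uint32_t psr = stacked[7];

    record_crash(pc, lr, psr, r0, r12);
    for (;;) { }                 /* let the WDT restart us, cause recorded */
}

/* The shim itself must be assembler: bit 2 of EXC_RETURN (in LR) tells us
   whether the main or process stack was in use when the fault hit. */
__attribute__((naked)) void HardFault_Handler(void)
{
    __asm volatile(
        "tst lr, #4        \n"   /* 0 = MSP, 1 = PSP          */
        "ite eq            \n"
        "mrseq r0, msp     \n"
        "mrsne r0, psp     \n"
        "b fault_c_handler \n");
}

Everything after the shim can be plain C, which matches Tony's point: the assembler is only needed to get at the processor state, after which a high-level routine does the sleuthing.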
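And for anyone who hasn't met this particular foot-gun, a reconstruction of the malloc misuse Tony found (my sketch, not his client's code). Most allocators stash a housekeeping header just in front of the pointer they return, so free() only works on pointers that came straight from malloc().

#include <stdlib.h>

#define PIECE  64
#define PIECES 8

void broken_pool(void)
{
    /* One big allocation... */
    char *block = malloc(PIECE * PIECES);
    if (block == NULL)
        return;

    /* ...carved up and handed out in pieces. So far, so good. */
    char *piece[PIECES];
    for (int i = 0; i < PIECES; i++)
        piece[i] = block + i * PIECE;

    /* The fatal step: freeing the individual pieces. Only piece[0] is a
       pointer malloc() ever returned; the rest point into the middle of the
       block, where the allocator expects to find its own housekeeping
       header. This is undefined behavior and wrecks the heap accounting. */
    for (int i = 0; i < PIECES; i++)
        free(piece[i]);    /* BUG: free() must get the original pointer */
}

The legal alternatives are one free() of the whole block after every piece is retired, or a purpose-built pool allocator.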
Daniel McBreaty bridges comments on Assembler with another Muse article on taking charge/continuous learning:
|
At my last job I worked for a guy who did very well thought out, hard real time, projects in 68k assembler. Stepper motor control, reading various sensors, driving an LCD and so on. He had a kind of structure (obviously) which he followed very strictly and his systems worked very reliably for years. His main motivation was that this was what he already knew, and he felt he didn't have time to get into C (I tried to persuade him several times). A shame, as he would have been excellent, with his deep understanding of the way a processor works. (He only hit his limit when he tried to code a TCP/IP stack like this - no joke.)
At that same company, I found myself doing my first FPGA project, under some time pressure. I picked up Verilog quickly enough, and was able to simulate and build the pure hardware aspects of the project well enough. But for communication with the control board, I wanted a softcore processor, and at that time I found the Xilinx tools and licensing requirements for SDK quite confusing. I ended up doing the control side on the picoblaze core, which is very well described in UG129. Here, assembler (with a few macros) is about all you have, so that's what happened. It turned out to be a lightweight and stable solution. It was also (for me) an excellent entry point into this world (these days I'm busy with microblaze/zynq SOC, for another company).
Regarding bare metal: a simple task scheduler controlling state machines, with all tasks required to run to completion and all errors considered fatal, is still my "go to" approach for hard real-time control. (In C, perhaps in tandem with custom hardware on an FPGA.) Scalable and "just simple enough".
By the way - I think I can hold this up as an example of "continuous learning" (the other part of your newsletter). I needed solutions to problems I was asked to solve, and researching them led me down the track. There was some risk and pain at the start, but the approach paid off. Of course I made some dumb mistakes early on, but that's how we learn. I got a lot of help (and the odd flame!) from people on StackExchange, online courses, various Xilinx forums and so on. These days the info is out there; there is no excuse not to be curious about how to use it. |
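Daniel's "go to" structure is worth sketching for readers who haven't built one. Below is a minimal run-to-completion task loop in C; the task names and the millisecond tick are my assumptions, not his code.

#include <stdint.h>

static volatile uint32_t g_tick_ms;   /* advanced by a 1 ms timer interrupt */

/* Assumed to be wired to a 1 ms hardware timer interrupt. */
void SysTick_Handler(void)
{
    g_tick_ms++;
}

typedef struct {
    void     (*run)(void);  /* a state machine: does a little work, returns */
    uint32_t period_ms;     /* how often it should run                      */
    uint32_t last_ms;       /* when it last ran                             */
} task_t;

static void motor_task(void)  { /* advance the stepper state machine  */ }
static void sensor_task(void) { /* poll an input, update shared state */ }

static task_t tasks[] = {
    { motor_task,   1, 0 },
    { sensor_task, 10, 0 },
};

int main(void)
{
    for (;;) {
        for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
            uint32_t now = g_tick_ms;
            if ((uint32_t)(now - tasks[i].last_ms) >= tasks[i].period_ms) {
                tasks[i].last_ms = now;
                tasks[i].run();  /* runs to completion: no blocking, no preemption */
            }
        }
    }
}

Because every task runs to completion there is no context to save, and anything that truly cannot wait goes in an interrupt or, as Daniel says, into custom hardware on the FPGA.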
Plenty of email also came in regarding the article in the last Muse about Taking Charge. Some highlights follow.
Neil Cherry wrote:
|
> In one of the made-for-TV pseudo-histories of the Apollo program, an
> astronaut figured landing on the moon would be something like flying
> helicopters. So he learned to fly choppers. He didn't ask permission.
> His boss wasn't involved. No one told him to acquire this new skill.
> In the every-man-for-himself astronaut office astronauts took /action/ to fill holes in their experience.
Oh wow, another of my pet peeves!
When I was in college in the early '80s I learned that if you wanted to learn interesting things you had to do it on your own. College electronics taught the basics of digital and analog but not a lot about what you could do with them. CompSci courses taught you to program, but one embedded-computer course taught us to link the electronics with the programming (8085 Assembly language, btw ;-) ). Any research I did was in a search language called Dewey Decimal. ;-)
So my friends went off and learned audio electronics but I learned about computers.
Languages, processors, protocols, electronics: everything I could find, I devoured by the bookload. A few years later, and after a few jobs, I was learning network protocols just before networking took off (PUP, XNS, NetBEUI and many more). I had accidentally stumbled into a learning pattern that worked well until the mid-2000s. I had worked in a place where we could pick our training, but our employer dropped paying for it, or even giving us guidance on what they thought was valuable. I was adrift in an ocean of subjects with no guidance. I was fortunate that my hobbies aligned with IoT (my home automation interests - thanks Steve Ciarcia & Byte). Today I work for another company in Quality Assurance and I'm again inundated with new learning that will probably change in about 18 months. We're now doing Agile, CI (continuous integration), CD (continuous delivery), test automation and a lot of other topics, just trying to keep ourselves up to speed. All of this learning is on my own time. The Agile schedule doesn't leave time for breathing (or thinking) when features need to be in sync across domains. I don't think I've ever had to learn so much in such a short period of time in my life.
If I had had to learn my college material at the pace I need to keep now, I would have had my two degrees in one year. At least I'm not shelling out the $$$ for the courses this time.
But I've also had to learn to do a lot of time and information management. There appear to be no courses on this kind of information management. At the moment it's Emacs/Org-mode plus electronics notes, drawings and videos.
I've gone the route of sharing my notes with others, but I don't seem to get a lot reciprocated. I work with people from other cultures, and I wonder if it's a cultural thing. It won't stop me, as I've learned that by sharing I can learn more. I have tons of open source projects that help with experience. Knowledge without experience is half learned. |
Steve Paik sent this:
|
When I graduated college, I had a lot of anxiety about finding my first job. I spent a lot of effort avoiding going into the "real world" and had even applied (and been accepted!) to medical school. Upon deep reflection, I realized that my entire life had been "scripted" by school requirements; i.e., I knew what I had to do to graduate high school, college, etc. Life doesn't have that script, and the unknown was stressing me out. I decided from then on that I would commit to learning something new every year, to build my skills and feed my curiosity. I try to take a course, get a certification, etc. These aren't always immediately applicable to my current job, but I find having a breadth of experiences helps prepare for the next job (you never know what it will be!), and it also helps me find common ground with the people around me. Some things I've done in the past:
- EMT basic certification
- motorcycle license
- studied Japanese and Chinese languages
- private pilot license + extra ratings and training
- ham radio license
I have a long bucket list of things that I still want to do, so there will be no shortage of things for me to do in retirement! |
Finally, a reader who wishes to remain anonymous poses an interesting question:
|
What would you (or other readers) recommend for someone who is actively trying to expand their knowledge of all things, but ends up pigeonholed into a single discipline? I dislike the idea of specializing; I've strived to learn enough about ME, EE, and CS to see the full picture of the solution to any given problem. I've learned what to put in hardware, what to put in software, and where to place it all on the PCB so that it can fit neatly in the enclosure and not violate the principle of least surprise in any of those domains. But I get bored when I'm not able to really exercise all three of those major groups on a project. I'm finding that my current employer, as well as any embedded engineering job openings I've come across, really only ever wants specialization in a single discipline (unless it's some kind of startup and they are looking to hire someone who can do everything and answer the phones). As I get older, I'm finding it more difficult to put in the personal time to do all of that (no more 24+ hour weekend binges if I need to get to work on time on Monday and be productive). This has caused me to be more "critical" of my personal projects. While I do them for fun and learning, I've noticed a trend of chopping the fun out in favor of more learning, to stay on top of new skills. Thus my personal projects become almost like unpaid work, but still end up doing nothing to advance my career, because my employer (and other potential employers) really only want me to do one thing and do it well.
Do I stagnate at work in favor of more fun personal projects? Do I keep on this route and burn myself out by not taking on fun projects but getting better at the specialization that's wanted of me? Do I spend time looking for a new job? Do you know of any job descriptions that would actually want one person to be actively involved in more than one discipline? (Not asking for a job opening there; I mean that more as an "am I job hunting with the wrong terms" question.) Many of my coworkers do have cross-discipline knowledge, but enjoy and actively specialize in one specific aspect, and that seems to be the norm in embedded systems. |