Taming Analog Noise

Here are a few ways to use smart software to reduce the amount of noise in your system.

Published in Embedded Systems Programming, October 1992

By Jack Ganssle

Recently I wrote about smoothing digital data, particularly inputs from things like bouncing switches. It's just as important to smooth some analog inputs, particularly when working with tiny voltage levels in high noise environments.

One of the most important characteristics of analog noise is that it is generally distributed over a wide frequency spectrum. Switch on your PC near a shortwave radio and tune around - you'll hear pops and squeals virtually everywhere you listen.

A powerful noise reduction technique is to simply filter out all of the frequencies you don't need. In the case of a 60 Hz sine wave, feed the input through a narrow 60 Hz filter, and all of the noise not exactly on frequency will disappear. If the noise is evenly distributed over, say, 10 kHz, and the filter passes only a 10 Hz bandwidth, then - voila! - the noise power drops in proportion to the bandwidth ratio: a reduction of three orders of magnitude.

This is how radios extract microvolts of signal from millivolts of noise spread across the spectrum. The radio's mixer combines the RF signal, which is a horrible jumble of thousands of real signals and gobs of atmospheric noise, with a sine wave at a frequency offset from the one of interest. The result is fed through a narrow filter that passes only the frequency you've tuned to, removing nearly all of the noise.

DC Signals

What if you need to read a thermistor and compute very high accuracy temperature readings? Perhaps your system measures dissolved oxygen in a chemical bath, or light absorption at a specific wavelength in pineapples.

In each of these cases you are trying to measure what amounts to a static DC input - one that changes slowly with time. Noise might swamp the signal, but the signal itself stays stable for sizeable fractions of a second. How can we create the same sort of narrow bandpass filter for these DC signals?

I've been collecting algorithms for years (uh, decades), yet have found very little about how to smooth this sort of data. It seems the most common approach is to simply average a number of readings and then produce a result, but this simplistic method has a few problems.

The first is response time. If the software needs a data point NOW, it will have to wait for some large number of samples to be read and averaged before proceeding.

The second problem is the "smearing" of data implicit to any averaging scheme. The input presumably does change, albeit slowly. Averaging gives a perhaps incorrect value based on present and past readings.

Finally, averaging yields a declining return. For uncorrelated noise the improvement grows only as the square root of the number of samples: increase the number of readings by an order of magnitude and the noise drops by only about a factor of three. For example, if 100 reads is still too noisy, going to 1000 reads will increase the collection time tenfold yet cut the noise by only another factor of three or so.

To effectively remove noise from DC signals your code must balance averaging time (you can't average forever because an answer is needed eventually, and averaging over a changing signal distorts the result) versus noise reduction. Increasing the number of samples reduces noise but slows things down. Reduce sampling to get more accurate signal shapes and faster response, but expect more noise.

Remember averaging's three problems:

  • response time
  • signal smearing
  • declining return

Response Time

Thankfully, in an embedded system the firmware has full control over the hardware's interrupts. We can immediately improve the apparent response time of an averaging algorithm by programming an ISR to constantly read the A/D in the background. When the mainline code needs a value, the data has already been accumulated in a buffer in memory.

The ISR can't read and average the input data, because it runs (presumably) asynchronously with respect to the code that needs the results. It's best to just have the ISR read raw data into a memory buffer, and let the mainline code or some other task take care of applying the averaging algorithm to the data.

To avoid accumulating old data, the ISR should gather N samples in a FIFO buffer. These represent the most recent N readings from the A/D. Whenever the ISR takes a reading it drops the oldest sample from the buffer and adds the newest.

Whoever queries the buffer to get a reading then simply averages the N sample points. Once the buffer is initially filled, then the response time to a request for data is just the time taken to do the averaging.

It's important to clear the buffer when significant events occur. If the sensor assembly is active only when a lamp is on, for example, then be sure to reset the FIFO's pointers at that time. Use a simple semaphore to make a routine requesting data wait until the N samples are taken. Or, return the average of however many samples have accumulated until that count reaches N. The first few readings will be noisy, but they will be more or less immediate.
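Here's a minimal sketch of the kind of ring buffer and averaging routine I have in mind. The names (read_adc, adc_isr, N_SAMPLES) are placeholders, and the interrupt vectoring and any locking around the shared buffer are left out; treat it as a starting point rather than a finished driver.

#define N_SAMPLES 16                       /* depth of the boxcar FIFO        */

volatile unsigned int samples[N_SAMPLES];  /* most recent A/D readings        */
volatile unsigned int head;                /* next slot to overwrite          */
volatile unsigned int count;               /* valid samples, up to N_SAMPLES  */

extern unsigned int read_adc(void);        /* hypothetical A/D driver call    */

/* Runs from the A/D (or timer) interrupt: drop the oldest sample and add    */
/* the newest.                                                                */
void adc_isr(void)
{
    samples[head] = read_adc();
    head = (head + 1) % N_SAMPLES;
    if (count < N_SAMPLES)
        count++;
}

/* Called whenever the mainline code wants a reading.  Once the buffer has   */
/* filled, the only delay is the time taken to do the averaging; before      */
/* that, it returns the average of whatever has accumulated so far.          */
unsigned int boxcar_average(void)
{
    unsigned long sum = 0;
    unsigned int  i, n = count;

    if (n == 0)
        return 0;                          /* nothing collected yet           */
    for (i = 0; i < n; i++)
        sum += samples[i];
    return (unsigned int)(sum / n);
}

/* Call when a significant event occurs (the lamp turns on, say) to discard  */
/* stale data.                                                                */
void boxcar_reset(void)
{
    head  = 0;
    count = 0;
}

In a real system you'd briefly disable the A/D interrupt (or double-buffer the data) while the average runs, so the ISR can't change the readings out from under you.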

The literature sometimes refers to the buffer as a "Boxcar". Boxcars are a special case of the more general averaging technique.

Signal Smearing

If the input does indeed move slowly with time, then any sort of averaging will smear the result. The output at time T is composed equally of the signal's behavior now and when the average started. In some cases the averaging distorts results excessively.

Having just almost sailed transatlantic (jeez... I hate abandoning ship in mid-ocean), one example that immediately comes to mind is an autopilot. The input to the computer is a quite noisy flux-gate compass. The firmware must smooth the noisy compass input, but must respond to changes in the boat's heading. Too much averaging will slow the autopilot's response down, making it ignore fast yawing in heavy seas.

One solution is to adjust the averaging algorithm so the current reading counts more heavily than a reading taken a few tenths of a second ago. Using the data collected by the Boxcar FIFO, multiply each point by a coefficient that declines with the point's age. For example:

	point N		1.00
	point N-1	0.90
	point N-2	0.80
	etc.

Sum the results. If you really need a true mathematical average of the input data, then compute a divisor that properly reflects the scaling of the input data. Frequently in embedded systems we're measuring some oddball thing like a voltage and scaling it to match a physical parameter, so the averaging divisor can often be folded into that scaling computation rather than applied separately.

Unlike a Boxcar average, where all of the data is treated the same regardless of its age, this does imply that the averaging routine must track the FIFO's insertion pointer to know what the most recent data is.

If all of the coefficients are exactly 1.0, then this technique is identical to Boxcaring.
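As a sketch of the weighted version, here's the same buffer run through a table of declining coefficients. The coefficient values and the scale-by-256 fixed-point trick are only examples; pick whatever matches your smearing budget.

#define N_SAMPLES 16

/* Coefficients scaled by 256 (so 256 means 1.00).  The newest sample gets   */
/* full weight, older ones progressively less.  Set every entry to 256 and   */
/* this degenerates into the plain boxcar.                                    */
static const unsigned int coeff[N_SAMPLES] = {
    256, 230, 205, 180, 155, 130, 105,  80,
     60,  45,  35,  25,  18,  12,   8,   4
};

extern volatile unsigned int samples[N_SAMPLES];   /* filled by the ISR       */
extern volatile unsigned int head;                 /* next slot to overwrite  */

unsigned int weighted_average(void)
{
    unsigned long sum = 0, divisor = 0;
    unsigned int  i, idx;

    /* Walk backward from the most recent sample; unlike the boxcar, this    */
    /* routine has to know where the newest data sits in the FIFO.           */
    idx = (head + N_SAMPLES - 1) % N_SAMPLES;
    for (i = 0; i < N_SAMPLES; i++) {
        sum     += (unsigned long)samples[idx] * coeff[i];
        divisor += coeff[i];
        idx = (idx + N_SAMPLES - 1) % N_SAMPLES;
    }

    /* The divide restores a true average; often it can be folded into the   */
    /* engineering-unit scaling done later instead of being applied here.    */
    return (unsigned int)(sum / divisor);
}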

Those of you familiar with image processing may recognize this technique as a convolution. Convolutions are a powerful addition to the smoothing repertoire, and are often used even in smoothing AC signals.

Unfortunately, convolutions are no noise reduction panacea. The averaging balancing act once more comes into play. By using an ISR that accumulates data we've reduced response time, and by using the convolution we've reduced smearing. However, noise on the more recent signals contributes more to the result and "averages out" less. Well, as the saying goes, life is hard and then you die.

How should one pick the coefficients for a convolution? Look at the response time of the analog electronics, the speed of the A/D, and the possible sampling rate of the ISR. Then compute the worst case smearing the system can tolerate. For example: how much can the autopilot's firmware lag the boat's real position? Pick coefficients that will give this much smearing or less. If the system is still too noisy, you may have to speed up the A/D and ISR to get more samples to average. Brute force is ugly, but sometimes one has no choice.

I usually model the coefficients in a spreadsheet. Feed sample data in and plot the results to see just how bad the smearing effect will be. It's a quick way to check a formal analysis.
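If a spreadsheet isn't handy, a few lines of throwaway code do the same job. This little test program (the step input and the eight trial coefficients are just examples) pushes a step through the weights and prints the response, so you can see exactly how long the output takes to catch up.

/* Throwaway test harness: feed a step input through a trial set of          */
/* coefficients and print the response, so the smearing can be eyeballed     */
/* (or plotted) before committing the weights to firmware.                   */
#include <stdio.h>

#define N 8

int main(void)
{
    static const double coeff[N] = { 1.00, 0.90, 0.80, 0.70,
                                     0.60, 0.50, 0.40, 0.30 };
    double history[N] = { 0.0 };               /* newest sample first         */
    int t, i;

    for (t = 0; t < 24; t++) {
        double input = (t < 8) ? 0.0 : 1.0;    /* step applied at t = 8       */
        double sum = 0.0, div = 0.0;

        /* shift the history and insert the new sample */
        for (i = N - 1; i > 0; i--)
            history[i] = history[i - 1];
        history[0] = input;

        for (i = 0; i < N; i++) {
            sum += coeff[i] * history[i];
            div += coeff[i];
        }
        printf("t=%2d  in=%4.1f  out=%6.3f\n", t, input, sum / div);
    }
    return 0;
}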

With all of the discussion in magazines now about neural networks I wonder if we couldn't use a learning algorithm to adjust these coefficients. If the system were "rewarded" for correct results, smart software could dither the coefficients until optimal results occurred.

Declining Return

We've looked at smearing and response time. The last problem with averaging is that of declining return; really noisy signals need lots of averaging, so acquisition time goes up much, much faster than smoothing results.

Frankly... I don't have a clue how to handle this directly. Is there a more effective smoothing technique? Please contact me if you know of one, because I know many engineers who are saddled with simple averaging and who are screaming for better results.

You can pre-filter the data stream. If the data is more or less DC, then noise may show up as point-to-point dithering. Sometimes you can reject points that stray too far from the current baseline. The problem lies in establishing what the baseline should be.

If you have some a priori knowledge that no input should vary more than some percentage from the baseline (a not unreasonable assumption when working with a slowly changing signal), then you can compute an average and reject the outliers. I recommend using a sum-of-squares (standard deviation) computation, since most electronic noise is more or less Gaussian.
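Here's one way that rejection might look. The two-sigma threshold is an arbitrary choice, and falling back on the plain mean when everything gets rejected is just one way to keep the routine from returning garbage.

#include <math.h>

/* Reject points more than two standard deviations from the mean, then       */
/* average what's left.  The 2-sigma threshold is an arbitrary example.      */
double trimmed_average(const double *x, int n)
{
    double sum = 0.0, sumsq = 0.0, mean, var, sigma;
    double kept_sum = 0.0;
    int    i, kept = 0;

    for (i = 0; i < n; i++) {
        sum   += x[i];
        sumsq += x[i] * x[i];
    }
    mean = sum / n;
    var  = sumsq / n - mean * mean;        /* sum-of-squares variance         */
    if (var < 0.0)                         /* guard against rounding error    */
        var = 0.0;
    sigma = sqrt(var);

    for (i = 0; i < n; i++) {
        if (fabs(x[i] - mean) <= 2.0 * sigma) {
            kept_sum += x[i];
            kept++;
        }
    }

    /* If everything was rejected, fall back on the plain mean rather than   */
    /* return garbage.                                                        */
    return (kept > 0) ? kept_sum / kept : mean;
}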

It takes a lot of compute time to perform such a rejection. Worse, sometimes the wrong data gets rejected! If you are forced to average over only a few points due to time or smearing problems, then any one really bad sample will throw off the whole average.

Another alternative is to try to fit a curve to the "noise". Instead of trying to model the input data (which, after all, is more or less just a DC signal and really hard to do anything with), model the noise and remove it.

Given their random behavior, normal electronic sources of noise cannot be modelled effectively. However, often a lot of what we call noise is really some physical aspect of the system. In the case of the autopilot the biggest source of "noise" is the yawing of the boat induced by wave action. Waves are periodic. Some of the better autopilots remove this error by finding the Fourier series that represents the periodic yawing, modelling the yaw, and then subtracting this effect from the compass's output. Changing sea conditions will alter the autopilot's yaw model, so the firmware must constantly recompute Fourier coefficients in a background task.
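As a very rough sketch of the idea, the routine below fits a single sine/cosine pair at an assumed wave period to the recent heading history and subtracts the fitted yaw from the newest reading. A real autopilot would have to estimate the wave period itself, handle the 0/360 degree wrap in the compass data, and refit continuously; none of that is shown here.

#include <math.h>

#define M 64                   /* heading samples in the fitting window       */

/* Fit one sine/cosine pair at the wave frequency to the recent heading      */
/* history, then return the newest heading with that periodic yaw removed.   */
/* Assumes wave_period is known and ignores the 0/360 degree compass wrap.   */
double remove_yaw(const double heading[M], double sample_period,
                  double wave_period)
{
    double a = 0.0, b = 0.0, mean = 0.0;
    double w = 2.0 * 3.14159265358979 * sample_period / wave_period;
    int    i;

    for (i = 0; i < M; i++)
        mean += heading[i];
    mean /= M;

    /* Correlate the history against cosine and sine at the wave frequency.  */
    for (i = 0; i < M; i++) {
        a += (heading[i] - mean) * cos(w * i);
        b += (heading[i] - mean) * sin(w * i);
    }
    a *= 2.0 / M;
    b *= 2.0 / M;

    /* Subtract the modelled yaw at the newest sample (index M-1).           */
    return heading[M - 1] - (a * cos(w * (M - 1)) + b * sin(w * (M - 1)));
}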

Conclusion

Noise reduction eats up a lot of compute power on many embedded systems. We're pushing the state of the art on some of the analog sensors, so we can expect noise issues to become even more important. Please feel free to contact me via CompuServe if you have algorithms or just war stories.