Settling time is the time necessary to converge to the final value of a step input, isn’t it?

Yes. For example, you might say you need 90% settling in 100 ms.

I want to remove the step inputs, and I can tolerate a big settling time

Words like "big" are useless in specs. How am I supposed to know what you consider "big"?

Anyway, let's take a stab at your filter. As Oznog mentioned, there are two broad classes of filters: FIR (finite impulse response) and IIR (infinite impulse response). From a practical application point of view, especially for beginners, think of FIR as table-driven filters (the fancy mathematical word is convolution) and IIR as equation-implemented filters. I won't go into the pros and cons of each type here. I'll just say that I disagree with Oznog: in your case a simple IIR (equation type) filter will be fine for your purposes, easier to implement, and I think easier for you to understand.

A simple single pole low pass filter can be realized by:

FILT <-- FILT + FF(NEW - FILT)

Where NEW is the new input sample each iteration, FILT is the running filtered value, and FF is the "filter fraction". FF = 0 is an infinitely "heavy" filter in that the output never changes. FF = 1 is no filter at all since the input is just copied to the output. Useful values of FF are obviously somewhere in between.
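In C, one iteration of this filter is a one-liner. Floating point is used here for clarity, and the function name is just for illustration:

```c
/* One iteration of the single-pole low pass filter above.  The
   running value filt moves toward the new input sample by the
   filter fraction ff (0 = output never changes, 1 = pass through). */
static float filter_step(float filt, float new_sample, float ff)
{
    return filt + ff * (new_sample - filt);
}
```

You call this once per input sample, feeding the previous output back in as filt.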

Look at this equation a bit and try to understand it. Each iteration, NEW - FILT is the amount that would have to be added to FILT to bring it all the way to the input value. If you added this whole amount to FILT for each new value of NEW, then FILT would just follow NEW. This is why FF = 1 results in a pass-through filter.

Now think about what happens when FF = 1/2, as an example. Each iteration the output is brought only half way towards the new input. If everything starts out at 0 and the input suddenly changes to 1, then the first iteration FILT would be 1/2 (half way from its old value of 0 to the NEW value of 1). If NEW then stayed at 1 for subsequent iterations, FILT would next be 3/4, then 7/8, then 15/16, etc. If you plotted these values as a function of input sample iterations, you would see the output approaching the input value exponentially. In this case the 50% settling time is one iteration, and random input noise is attenuated by 2. Think of everything being 0, then a single blip comes in with the value 1, then back to 0. This single input has an amplitude of 1, but the filter output will only go to 1/2, then decay back down to 0.
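That step response is easy to verify in a few lines (again floating point, and the function name is mine):

```c
/* Step response of the FF = 1/2 filter: everything starts at 0, then
   the input jumps to 1 and stays there.  Returns the filter output
   after n iterations; the sequence is 1/2, 3/4, 7/8, 15/16, ... */
static float step_response_half(int n)
{
    float filt = 0.0f;
    for (int i = 0; i < n; i++)
        filt = filt + 0.5f * (1.0f - filt);   /* half way to the input */
    return filt;
}
```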

Now go back and look at the equation again. Note how it only requires a subtract, a multiply, and an add. The dsPIC can perform each of these in a single cycle. On other processors without such a nice multiply capability, a common trick is to choose a value of FF that is 1/2**N, like 1/4, 1/8, etc. The multiply by FF can then be accomplished by a right shift of N bits. On a dsPIC, however, you don't need this trick.
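As a sketch of the shift trick, here it is with N = 3 (FF = 1/8). Note that a real fixed-point implementation would keep extra fraction bits in the filtered value so the shift doesn't truncate away small differences, and that right-shifting negative values is only arithmetic on typical two's complement compilers:

```c
/* Single pole filter with FF = 1/8 using the shift trick: the
   multiply by FF becomes a right shift by 3 bits.  Plain integers
   here; small differences are lost to truncation, which extra
   fraction bits would fix in a real implementation. */
static int filter_step_shift3(int filt, int new_sample)
{
    return filt + ((new_sample - filt) >> 3);
}
```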

Back to your problem. It looks like you are really just interested in reducing random noise and are willing to put up with some finite settling time in return, although your value of 10 ms makes no sense when you already said you don't care about frequencies above 10 Hz, so I'll just ignore that.

To pick something as an example, let's see what happens when FF = 1/16. That gives you a 50% settling time of 11 iterations and a 95% settling time of about 47 iterations, which is 65 ms at your 721 Hz sample rate. FF = 1/8 would yield a 50% settling time of 6 iterations (8.3 ms) and a 95% settling time of 23 iterations (32 ms), but with only half the random noise attenuation of the FF = 1/16 case.

There is a fixed tradeoff between settling time and FF value for the filter shown above. However, you can get faster settling at the same random noise attenuation (or more noise attenuation at the same settling time) by using more computation. This is done by cascading several of the filters shown above in series: the output of the first filter becomes the input of the next. In fancy filter lingo, each individual filter according to the equation is called a "pole", and stringing several of them together makes a multi-pole filter. The random noise attenuation of the whole multi-pole filter is all the individual FF values multiplied together. Because each filter settles gradually, the settling times of poles in series overlap somewhat, which allows a faster overall settling time for the same combined random noise attenuation.

For example, a two pole filter with each FF = 1/8 yields a total random noise attenuation of 64x, a 50% settling time of 12 iterations (17 ms), and a 95% settling time of 35 iterations (49 ms).
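A sketch of that two pole cascade follows. Exact iteration counts depend slightly on the update order within an iteration and on how the crossing is defined, so treat the quoted counts as approximate:

```c
/* Two cascaded poles, each with FF = 1/8: the output of the first
   filter feeds the second.  Returns the second pole's output after
   n iterations of a unit step (input jumps from 0 to 1 and stays).
   After 12 iterations the output is close to the 50% point. */
static double two_pole_step(int n)
{
    double p1 = 0.0, p2 = 0.0;
    for (int i = 0; i < n; i++) {
        p1 = p1 + 0.125 * (1.0 - p1);   /* first pole  */
        p2 = p2 + 0.125 * (p1 - p2);    /* second pole */
    }
    return p2;
}
```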

I use these types of simple IIR low pass filters routinely on periodic A/D readings. I often set up a periodic interrupt to read the A/D faster than the value is needed, then apply some low pass filtering to it. The final low pass filtered value is left in a global variable that the foreground code can read whenever it wants the current input value.

To make it easier to see the tradeoffs between the number of poles, FF values, and step response time, I have created the FILTBITS program and the PLOTFILT wrapper for it. FILTBITS calculates the unit step response given the FF value (actually expressed as the number of bits to shift, in other words N instead of FF in the equation FF = 1 / 2**N), and PLOTFILT shows a plot of the data. Both these programs are part of the PIC development tools release available at

http://www.embedinc.com/pic/dload.htm. See their documentation files for details. For example, here is the PLOTFILT output for the 2 pole filter with each FF = 1/8 as described above:

Edit: Removed thumbnail and substituted real plot image once it was uploaded to the forum server.

post edited by Olin Lathrop - 2007/05/15 07:24:46

#### Attached Image(s)

[PLOTFILT step response plot for the two pole, FF = 1/8 filter described above]