Greg Stanley and Associates Performity LLC

When to Avoid Filtering - Don’t Always Filter!

This page describes when to avoid filters for diagnosis, and how to recognize the unexpected presence of unwanted filtering. It is part of the section on Filtering in A Guide to Fault Detection and Diagnosis.

Some failure modes must be detected with unfiltered variables

While filtering is usually desirable to reduce the effects of noise, it is important to realize that some diagnosis depends on time series analysis: on recognizing the presence or absence of noise, or unusual dynamic behavior, as a symptom of a fault. In many of these cases, the unfiltered data must be used.

For instance, to detect a frozen sensor value, we look for an unchanging value using a time series calculation such as standard deviation, flagging extremely low standard deviation values. But filtering will also reduce the standard deviation, and it introduces serial correlation that invalidates typical statistical test criteria. At the opposite extreme, we might detect certain sensor failures based on excessive high frequency noise, or detect pump cavitation by looking for excessive high frequency, high amplitude variations in flow and pressure. Filtering masks these effects to an extent that depends on filter tuning. At best, this makes the task more difficult by confounding filter tuning with thresholds for event detection. Significant filtering may make that kind of analysis essentially impossible.
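The frozen-sensor case can be sketched in a few lines of Python (using NumPy). The noise level, filter constant, and detection threshold below are illustrative assumptions only, not recommended values:

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_filter(x, alpha):
    """First-order exponential filter: y[k] = alpha*x[k] + (1-alpha)*y[k-1]."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for k in range(1, len(x)):
        y[k] = alpha * x[k] + (1 - alpha) * y[k - 1]
    return y

# Healthy sensor: constant level plus measurement noise.
healthy = 50.0 + rng.normal(0.0, 0.5, 500)
# Frozen sensor: value stuck at its last reading.
frozen = np.full(500, 50.3)

# Frozen-sensor test: flag a window whose standard deviation is far below
# the noise level expected from a healthy sensor (threshold is illustrative).
threshold = 0.05
print(np.std(healthy) < threshold)  # False: a healthy sensor shows noise
print(np.std(frozen) < threshold)   # True: a stuck value has zero deviation

# Heavy filtering shrinks the standard deviation of the *healthy* signal too,
# pushing it toward the frozen-sensor threshold and weakening the test.
filtered = exp_filter(healthy, alpha=0.02)
print(np.std(filtered) < np.std(healthy))  # True: filtering reduces std
```

Note also that the filtered samples are serially correlated, so even a threshold retuned for the filtered signal would no longer satisfy the independence assumptions behind standard statistical tests.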

In other diagnostic applications, we need to draw conclusions based on the dynamic behavior of the system. An example is detecting bad controller tuning or faults affecting the process gain or lags. One symptom of interest is the presence of cycles while in closed loop control. While spectrum analysis (Fourier transform or autocorrelation function) could be used, even a simple standard deviation calculation will often be good enough to recognize this symptom, especially for variables like temperature that don’t tend to be noisy. But heavy filtering will make this harder to see.
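The cycling symptom can be sketched the same way. Here a first-order exponential filter is assumed, with an illustrative 200-sample cycle period and 2-degree amplitude; the filter constant is deliberately heavy to show the masking effect:

```python
import numpy as np

def exp_filter(x, alpha):
    """First-order exponential filter: y[k] = alpha*x[k] + (1-alpha)*y[k-1]."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for k in range(1, len(x)):
        y[k] = alpha * x[k] + (1 - alpha) * y[k - 1]
    return y

t = np.arange(2000.0)  # sample index (one sample per time unit, illustrative)
# Temperature cycling with a 200-sample period, as from a badly tuned loop.
cycling = 80.0 + 2.0 * np.sin(2 * np.pi * t / 200.0)

# A plain standard-deviation check is enough to flag the cycle...
print(np.std(cycling) > 1.0)  # True: std is ~1.41 for a 2-degree amplitude

# ...but a heavy filter attenuates the cycle and hides the symptom.
filtered = exp_filter(cycling, alpha=0.01)
print(np.std(filtered) < 0.5 * np.std(cycling))  # True: cycle is masked
```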

As another example, a partly plugged sample line to a process analyzer must be detected based on dynamic behavior. This can be done when there is correlation between the analyzer result (composition) and process variables such as pressure and temperature. The analyzer is used in preference to values estimated from pressure and temperature because of its improved accuracy. But there are delays introduced by the time to move material through the sample line. If the sample line is partly plugged, the material moves more slowly, increasing the delay between the analyzer reading and the estimate based on pressure and temperature. (A fully plugged line is detected more easily, based on an unchanging analyzer signal.) Similar effects might be used to detect excessive buildup of fouling material around a temperature sensor, if there are other sensors for comparison. (The insulating material introduces thermal lag as well as changing the steady state value.) In theory, changes in autocorrelation functions might also be used for these sorts of analyses. But regardless of the method used, filtering changes the results of analyzing dynamic behavior by introducing additional lags.
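One possible way to estimate the growing delay is to scan cross-correlations over candidate lags. The sketch below uses synthetic signals: the "pressure/temperature estimate" is just smoothed noise, the analyzer reading is modeled as a pure transport delay of it, and the delay values are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_filter(x, alpha):
    """First-order exponential filter, used here only to give the synthetic
    signal some realistic serial correlation."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for k in range(1, len(x)):
        y[k] = alpha * x[k] + (1 - alpha) * y[k - 1]
    return y

# Hypothetical composition estimate from pressure/temperature (no sample line).
pt_estimate = exp_filter(rng.normal(0.0, 1.0, 3000), alpha=0.3)

def delayed(x, d):
    """Analyzer reading: the same signal after d samples of transport delay."""
    y = np.empty_like(x)
    y[:d] = x[0]
    y[d:] = x[:len(x) - d]
    return y

def best_lag(reference, analyzer, max_lag):
    """Delay (in samples) that maximizes correlation between the signals."""
    n = len(reference)
    scores = [np.corrcoef(reference[:n - d], analyzer[d:])[0, 1]
              for d in range(max_lag + 1)]
    return int(np.argmax(scores))

normal = delayed(pt_estimate, 10)   # healthy sample line: 10-sample delay
plugged = delayed(pt_estimate, 40)  # partly plugged line: the delay grows

print(best_lag(pt_estimate, normal, 80))   # 10
print(best_lag(pt_estimate, plugged, 80))  # 40
```

An increase in the estimated lag over time would be the symptom of plugging. Any filtering applied to either signal adds its own lag to this estimate, biasing the comparison.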

Beware of filtering introduced by process data historians or process control system interfaces

For effective diagnosis, it is also important to recognize the “hidden” filtering present in process data historians and some process control system interfaces.  Data from process data historians is already heavily filtered through averaging of values and other data compression techniques, introducing serial correlation. Diagnosis based on time series analysis (e.g., standard deviation, rate of change, autocorrelation, and dynamic response) will probably not be possible. For this reason, avoid basing diagnostic systems on data that is already averaged and compressed in a process data historian whenever possible.
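The effect of historian averaging can be sketched with block averages of noise. The averaging interval below is an illustrative assumption; real historians use a mix of averaging, exception reporting, and compression, but the qualitative effect on standard deviation and serial correlation is the same:

```python
import numpy as np

rng = np.random.default_rng(2)
raw = rng.normal(0.0, 1.0, 6000)  # raw scans: zero-mean, uncorrelated noise

m = 10  # historian averaging interval, in raw samples (illustrative)
block_avg = raw.reshape(-1, m).mean(axis=1)
# Reading the historian back at the original scan rate effectively repeats
# each stored average m times.
replayed = np.repeat(block_avg, m)

def lag1_autocorr(x):
    """Lag-1 serial correlation of a series."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# Averaging shrinks the apparent variation (by about 1/sqrt(m))...
print(np.std(replayed) < 0.5 * np.std(raw))  # True
# ...and turns nearly uncorrelated noise into serially correlated data.
print(abs(lag1_autocorr(raw)) < 0.1)   # True: raw scans nearly uncorrelated
print(lag1_autocorr(replayed) > 0.8)   # True: replayed data is correlated
```

Any statistical test that assumes independent samples, or any standard-deviation threshold tuned on raw data, will misbehave on the replayed series.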

Similar problems occur when using data interfaces from process control systems that only transmit variables that have changed significantly (discussed later under changeband filtering). That kind of filtering should always be turned off for diagnostic use.

Copyright 2010 - 2020, Greg Stanley
