
Compiled vs. First Principles Models


This page examines compiled vs. first principles models, as part of the Model Based Reasoning section of the white paper A Guide to Fault Detection and Diagnosis.

“First principles” models are often engineering design models, reflecting physical laws such as mass balance, energy balance, heat transfer relations, and so on. Even qualitative models, such as causal fault propagation models or state transition diagrams, can be considered “first principles” models if they are based on physical laws or device implementation knowledge rather than primarily on data. These models are also often described as using “deep knowledge”.

Compiled models are simplified and more compact. Examples include empirical models derived directly from data, and simplified versions of more complex models.

Empirical models

Empirical models are developed using data derived from tests or from the output of more detailed and exact simulation models. Simple curve fits and regression models are the examples of empirical models most familiar to everyone. But other model-building techniques, such as neural nets or the technique at the heart of Smartsignal products, are also empirical, involving “training” with data. These types of models are also referred to as using “compiled knowledge” or “shallow knowledge”. While they may represent some of the same knowledge as first principles models, the models are generally not explicit, and hence cannot easily be inspected for accuracy or completeness.

Empirical models have the advantage that standard procedures for data collection and model construction can often be automated, so that less application development time and engineering analysis are needed. But training of the application developers is still needed, so that they understand the pitfalls and limitations of the particular methods used and can recognize when they occur.

Use of empirical models should be limited to the range of the data used in their development, unlike first principles models, which may extrapolate well beyond the range of test data. For instance, a first principles model may state that the total flow into a unit equals the total flow out of that unit. When plotting flow in vs. flow out for measurements taken at different times, that model is a straight line. If a second-order or higher regression curve fit were developed from actual noisy measurements to “learn” this relationship, it would probably stay close to the straight line within the range of the data. But the measurement errors captured in the empirical model will lead to huge errors when that curve is extrapolated outside the range of the training data. Neural nets used as function approximators are especially vulnerable to extrapolation errors, because (with the exception of RBFN - Radial Basis Function Networks) they don’t provide a warning when extrapolation is occurring. RBFN can provide these warnings.
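The extrapolation risk can be seen in a minimal Python sketch, using hypothetical flow values and noise levels. A second-order fit “learns” the straight-line mass balance from noisy data over a narrow operating range, then drifts increasingly off the true line when evaluated far outside that range:

import numpy as np

# First principles: total flow in equals total flow out (flow_out = flow_in).
# Hypothetical noisy "measurements" over a limited operating range.
rng = np.random.default_rng(0)
flow_in = np.linspace(40.0, 60.0, 50)                     # training range only
flow_out = flow_in + rng.normal(0.0, 0.5, flow_in.size)   # measurement noise

# Empirical model: a second-order curve fit "learns" the relationship.
empirical = np.poly1d(np.polyfit(flow_in, flow_out, deg=2))

# Inside the training range, the fit tracks the true straight line closely...
print(empirical(50.0), "vs. first principles:", 50.0)

# ...but far outside the range, the noise-driven curvature is amplified.
print(empirical(150.0), "vs. first principles:", 150.0)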

Empirical models typically have the disadvantage that they need to be rebuilt when there are changes in the monitored system configuration or operating modes. They also typically require more data analysis than first principles models for re-use on instances of similar equipment. But this can be partly ameliorated by “normalizing” variables like flow, volume, and power into dimensionless form (dividing each variable by its maximum value, for instance). Process equipment of different sizes may have very similar behavior when translated to dimensionless form, as sketched below. Selection of variables for the empirical models is often best done by people with a good understanding of the first principles models.
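A sketch of this normalization, assuming hypothetical pump flows and capacities: dividing each variable by its maximum value puts equipment of different sizes on the same dimensionless scale, so one empirical model may serve both.

import numpy as np

def to_dimensionless(x, x_max):
    # Normalize a variable by its maximum value (dimensionless, 0 to 1).
    return np.asarray(x, dtype=float) / x_max

# Hypothetical pumps of different capacities but similar behavior:
small_pump_flow = [10.0, 25.0, 40.0]     # capacity 50
large_pump_flow = [100.0, 250.0, 400.0]  # capacity 500

print(to_dimensionless(small_pump_flow, 50.0))    # [0.2 0.5 0.8]
print(to_dimensionless(large_pump_flow, 500.0))   # [0.2 0.5 0.8]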

In reality, most “first principles” models also involve a large number of parameters that have been fit to data, such as those used in calculating physical property values. But at least the ranges of those values are usually well known and defined. To help empirical models extrapolate better, data can be preprocessed through first principles models placed in front of the empirical models. For instance, the empirical models can model the deviations from the first principles models, creating a hybrid approach that extrapolates better.
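A minimal sketch of this hybrid approach, continuing the hypothetical flow example above: the empirical model fits only the residual between the measurements and the first principles prediction, so extrapolation falls back toward the physics rather than an unconstrained curve fit.

import numpy as np

def first_principles(flow_in):
    # Mass balance: flow out equals flow in.
    return flow_in

# Hypothetical measurements with a small systematic bias plus noise.
rng = np.random.default_rng(1)
flow_in = np.linspace(40.0, 60.0, 50)
measured_out = flow_in + 0.8 + rng.normal(0.0, 0.3, flow_in.size)

# Fit a low-order empirical correction to the residuals only.
residual = measured_out - first_principles(flow_in)
correction = np.poly1d(np.polyfit(flow_in, residual, deg=1))

def hybrid(flow_in):
    # First principles prediction plus the learned deviation.
    return first_principles(flow_in) + correction(flow_in)

print(hybrid(50.0))   # near 50.8 inside the training range
print(hybrid(150.0))  # extrapolation stays anchored near the physics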

Simplified versions of more complex models

Compiled models may also be generated as simpler versions of first-principles models (or any more complex models). This is done to create a model that is more compact or easier to use at run time. This generally implies some irreversible loss of information contained in the model, which can affect the results.

As an example, with the SMARTS software, a causal model is developed, and the system compiles the causal models into a set of fault signatures for use at run time. In that compilation, causal models in which a single root cause feeds a downstream set of nodes, whether in series or in parallel, result in the same signatures. The ability to draw conclusions with missing data, multiple faults, or time delays is compromised by the compilation in this case. However, the ability to handle conflicting data easily is improved, by finding the fault signature that is “closest” to the observed symptoms and also “close enough”. There is a natural measure of the distance between a fault signature and the observed symptoms: the “Hamming distance”, which is simply a count of the number of symptoms that have different (binary) values.
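A minimal sketch of run-time matching by Hamming distance, using hypothetical fault names and symptom patterns: the diagnosis is the signature closest to the observed symptoms, but only if it is also “close enough”.

# Hypothetical fault signatures: tuples of binary symptom values.
SIGNATURES = {
    "pump_failure": (1, 1, 0, 0),
    "valve_stuck":  (0, 1, 1, 0),
    "sensor_drift": (0, 0, 1, 1),
}

def hamming(a, b):
    # Count of symptom positions where two binary patterns differ.
    return sum(x != y for x, y in zip(a, b))

def diagnose(observed, max_distance=1):
    # Return the closest signature, but only if it is also "close enough".
    fault, distance = min(
        ((name, hamming(sig, observed)) for name, sig in SIGNATURES.items()),
        key=lambda pair: pair[1],
    )
    return fault if distance <= max_distance else None

print(diagnose((1, 1, 0, 1)))  # pump_failure (distance 1, despite conflicting data)
print(diagnose((1, 0, 0, 1)))  # None (no signature within max_distance)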

Copyright 2010 - 2013, Greg Stanley


 
