When Models Fail: Bridging the Gap Between Analytical Models and Empirical Data
You have built a robust analytical model for a new medical device, constructed a prototype guided by the model outputs, and tested it. But you find the experimental data does not match the model-predicted response. Now what?
We are often tempted to ditch the model entirely and shift to a purely experimental, trial-and-error approach. However, this path can lead to wasted time and resources, with no guarantee of success. Instead, the better strategy is a dual-path process: revisit and refine both the model and the experiment to uncover the root of the discrepancy and move forward with confidence. We do not want to lose sight of the objective – a robust, reliable, manufacturable, scalable, commercial device that fulfills needs. The analytical and empirical models are a means to that end – they are tools that help us get the job done. So, let us fix the tools and get on with the real job!
In this context, an analytical model is typically a fairly simple algebraic equation or small system of equations – for example, a simple model for insulin flow rate, aerosol sublimation, or cannula insertion speed. It is usually based on fundamental physics and engineering principles that we all learned in a BS program. A side benefit is that the discipline of abstracting a real-world thing into a simple mathematical expression forces us to think about what matters and what does not.
We use tools like Excel, Mathcad, and Matlab. For example, in problems of structural mechanics or relative motion, we always start with a notebook sketch and a free-body diagram. This compels us to think about and identify all the interactions (i.e., points of contact) and forces (like gravity). For heat transfer or fluid statics/dynamics, we start with a sketch and identify the relevant potential, flux, and resistance terms. We make assumptions explicit and prefer to let the model tell us whether a term is significant or not. Once we have written the relevant equations, we code them using the tool of choice. We manipulate the independent variables and examine the dependent variables – do they make sense? Are the trends and sensitivities reasonable?
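To make this concrete, here is a minimal sketch (in Python rather than Excel or Matlab) of what such a model might look like for a pressure-driven insulin flow rate, using the Hagen-Poiseuille relation. Every parameter value below is an assumption chosen only to illustrate the parametric sweep and the strong sensitivity to cannula radius; none of it comes from an actual device.

```python
import numpy as np

def insulin_flow_rate(delta_p, radius, length, viscosity):
    """Hagen-Poiseuille volumetric flow rate (m^3/s) for laminar flow
    through a cannula: Q = pi * dP * r^4 / (8 * mu * L)."""
    return np.pi * delta_p * radius**4 / (8.0 * viscosity * length)

# Illustrative (assumed) parameters, not taken from any specific device:
delta_p = 5e3        # driving pressure, Pa
length = 8e-3        # cannula length, m
viscosity = 1.3e-3   # formulation viscosity, Pa*s (assumed near water)

# Sweep one independent variable (inner radius) and examine the trend;
# the r^4 dependence dominates the sensitivity.
for radius_um in (50, 75, 100, 125, 150):
    q = insulin_flow_rate(delta_p, radius_um * 1e-6, length, viscosity)
    print(f"r = {radius_um:3d} um -> Q = {q * 1e9 * 60:9.1f} uL/min")
```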
When we understand the model response and are comfortable that it aligns with the desired response, we like to build a breadboard test system – typically something relatively simple that allows us to get more insight into a key function or attribute. Sometimes, the analytical and empirical models line up quite nicely – almost never exactly (that would be suspicious), but well enough to be pretty confident in both models. In the plot below, we see good alignment, and with a result like this, we can continue to use both models to evolve the design to meet requirements. A great attribute of the analytical model is that it is easy to make parametric updates and run again – typically much easier and quicker than updating an empirical model.
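However the comparison is presented, it can stay just as lightweight as the model itself. The sketch below, with purely hypothetical numbers, shows one way to line up model predictions against breadboard measurements at the same setpoints and summarize the disagreement.

```python
import numpy as np

# Hypothetical model predictions and breadboard measurements at the same setpoints.
setpoints = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # independent variable
predicted = np.array([0.98, 2.05, 3.10, 4.02, 5.20])
measured  = np.array([1.02, 2.00, 3.25, 3.90, 5.45])

rel_error = (measured - predicted) / predicted
for x, p, m, e in zip(setpoints, predicted, measured, rel_error):
    print(f"x = {x:.1f}: model {p:.2f}, measured {m:.2f}, error {e:+.1%}")

# A simple summary statistic helps decide whether the two models "line up".
print(f"max |relative error| = {np.max(np.abs(rel_error)):.1%}")
```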

Diagnosing the Discrepancy
When faced with conflicting results, it is crucial to systematically investigate the potential causes of the disagreement. These may include:
- Faulty Data Collection: The data itself could be flawed due to errors in measurement or recording.
- Inaccurate Model: The analytical model may be oversimplified, missing key variables, or based on poor assumptions.
- Prototype or Experimental Design Issues: The physical implementation may not accurately reflect the system the model was designed to simulate.
- Measuring the Wrong Variable: Sometimes, the problem lies in focusing on metrics that do not fully capture the system’s behavior.
By addressing these possibilities methodically, you can avoid rash conclusions and prevent unnecessary abandonment of either model. (As George Box observed, "All models are wrong, but some are useful.")
Revisiting the Analytical Model
Start by examining the assumptions behind your analytical model. Explicit assumptions are easier to spot, but implicit assumptions – those you make without realizing – can often be the source of error. Ask yourself: Are there factors missing from the model? Are interactions or dependencies oversimplified?
A bounding case analysis can also help. Define a conservative range of expected results based on fundamental principles, such as energy in versus energy out. If experimental or analytical results fall outside this range, it might indicate a fundamental flaw in your model or your understanding of the system. For example, with a load-bearing beam that has a somewhat complicated cross-section – an I-beam, perhaps – we can be confident that a solid rectangular beam with the same outside dimensions will deflect less than the I-beam under the same external load. If a fully featured analytical model gives an output inconsistent with the bounding case, that is an immediate red flag.
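As a quick sanity check of that intuition, the sketch below computes cantilever tip deflection for both cross-sections. The dimensions, load, and modulus are assumed for illustration only; the point is that the solid rectangle has the larger second moment of area and therefore bounds the deflection from below.

```python
# Bounding-case check: a solid rectangle with the same outside dimensions
# is stiffer than the I-beam it encloses, so the I-beam must deflect more.
# Cantilever tip deflection: d = F * L^3 / (3 * E * I).
# All dimensions and loads below are assumed, purely for illustration.

def rect_inertia(b, h):
    """Second moment of area of a solid rectangle about its centroid."""
    return b * h**3 / 12.0

def ibeam_inertia(b, h, tw, tf):
    """I-beam treated as a solid rectangle minus the two removed side blocks
    (both centered on the neutral axis, so no parallel-axis terms needed)."""
    return rect_inertia(b, h) - 2.0 * rect_inertia((b - tw) / 2.0, h - 2.0 * tf)

E = 200e9              # steel, Pa
F, L = 500.0, 0.5      # tip load (N) and cantilever length (m)
b, h = 0.03, 0.06      # outside width and height, m
tw, tf = 0.004, 0.005  # web and flange thickness, m

for name, I in (("solid rectangle", rect_inertia(b, h)),
                ("I-beam", ibeam_inertia(b, h, tw, tf))):
    d = F * L**3 / (3.0 * E * I)
    print(f"{name:15s}: I = {I:.3e} m^4, tip deflection = {d * 1e6:.0f} um")
```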
Finally, consider testing alternative models. Sometimes, driving the system with a simple input (such as a step or ramp) can reveal weaknesses in the current model’s ability to handle dynamic behavior.
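For instance, a step input makes it obvious whether a candidate model can even represent overshoot or ringing. The sketch below, with assumed parameters, compares a first-order lag and an underdamped second-order model responding to the same unit step.

```python
import numpy as np

# Driving two candidate models with the same step input often exposes
# which dynamic effects the simpler one ignores.
t = np.linspace(0.0, 2.0, 9)

tau = 0.3                      # first-order time constant, s (assumed)
first_order = 1.0 - np.exp(-t / tau)

wn, zeta = 8.0, 0.2            # natural frequency (rad/s) and damping ratio (assumed)
wd = wn * np.sqrt(1.0 - zeta**2)
second_order = 1.0 - np.exp(-zeta * wn * t) * (
    np.cos(wd * t) + zeta / np.sqrt(1.0 - zeta**2) * np.sin(wd * t))

for ti, y1, y2 in zip(t, first_order, second_order):
    print(f"t = {ti:4.2f} s: first-order {y1:5.3f}, second-order {y2:5.3f}")
```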
Reassessing the Experimental Design
Once the model has been revisited, focus on the physical setup and experimental procedures. Debug systematically, subsystem by subsystem, identifying potential environmental factors like temperature or humidity that might skew results.
Let me describe an example of a situation where we realized we had a disagreement between the analytical and empirical models. It involved a very sensitive electrical resistance measurement, for which we needed reliable standards in the range of 15 to 25 milliohms. We had a simple series/parallel circuit model and were convinced we could construct a finely spaced set of standards from reasonably priced, low-TCR precision resistors that were coarsely spaced. We had a good measurement tool (Hioki) and probes that were right for the application. We had convinced ourselves that the busbar resistance did not matter (in fact, it was extremely low based on the trace width, thickness, and resistivity of copper) and that we should be immune to the effect of ambient temperature (and self-heating) on the busbars and the resistors.

However, our model and measurements did not agree. We reduced the system complexity and got good agreement at the single-resistor level, so we had confidence in the measurement method, but we could not get any combination of series and parallel resistors to measure consistently. We built an LTSpice circuit model, which helped us understand what we observed empirically and guided us toward a circuit and PCB that used 4-terminal shunt resistors with distinct load and sense busbars, and included pads we could solder-bridge to compare sense and load signals easily. We ended up with a set of nicely spread standards that repeated to within about one microohm.
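The original series/parallel idea itself is easy to sketch. The snippet below enumerates two-resistor combinations of an assumed set of coarse values (not the actual bill of materials) and keeps those landing in the 15-25 milliohm window. Notably, a model at this level says nothing about busbar, contact, or thermal effects, which is exactly where the empirical disagreement originated.

```python
from itertools import combinations_with_replacement

# Assumed coarse precision-resistor values, in milliohms (illustrative only).
coarse_mohm = [10.0, 15.0, 22.0, 33.0, 47.0]

candidates = {}
for r1, r2 in combinations_with_replacement(coarse_mohm, 2):
    candidates[f"{r1} + {r2} (series)"] = r1 + r2
    candidates[f"{r1} || {r2} (parallel)"] = r1 * r2 / (r1 + r2)

# Keep combinations that land in the target 15-25 milliohm window.
in_range = {k: v for k, v in candidates.items() if 15.0 <= v <= 25.0}
for name, value in sorted(in_range.items(), key=lambda kv: kv[1]):
    print(f"{value:6.2f} mOhm  <-  {name}")
```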
Collaborative Troubleshooting
When progress stalls, reach out to colleagues. Explaining your observations to others often reveals blind spots in your thinking. Experts from other disciplines can bring fresh perspectives and novel solutions to the problem.
Lessons from Failure
Sometimes, even after exhaustive troubleshooting, the model does not reasonably match the experimental data. In these cases, it may be necessary to refine the model, redesign the experiment, or start fresh.
For example, we can sometimes think about components as generalized "transducers" – a spring is a displacement-to-force transducer, and a DC motor is a current-to-torque transducer. But the DC motor is an imperfect transducer once we recognize that it has internal friction, so a small current may not produce any output torque at all. Common torsion springs are angular-displacement-to-torque transducers, but they typically have high losses between adjacent coils.
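A tiny sketch makes the point: with assumed motor constants, the ideal current-to-torque transducer and one with internal friction diverge exactly where the simplification matters most, at small currents.

```python
def ideal_motor_torque(current_a, kt=0.05):
    """Ideal transducer view: torque (N*m) proportional to current (A)."""
    return kt * current_a

def real_motor_torque(current_a, kt=0.05, friction_nm=0.01):
    """Same motor with internal (Coulomb) friction: small currents produce
    no net output torque. Torque constant and friction are assumed values."""
    tau = kt * abs(current_a) - friction_nm
    return max(tau, 0.0) * (1 if current_a >= 0 else -1)

for i in (0.05, 0.1, 0.2, 0.5, 1.0):
    print(f"i = {i:4.2f} A: ideal {ideal_motor_torque(i):.4f} N*m, "
          f"with friction {real_motor_torque(i):.4f} N*m")
```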
Moving Forward
The key to resolving these challenges lies in iterative, informed, and carefully considered refinement. We always strive to have a mental – and preferably a more formal – model so we can set an expectation for what we measure. When we see something that does not align with expectations, it is an opportunity for an "aha" moment – a chance to revisit and reconsider both the analytical and empirical models. We can adjust analytical model assumptions based on new data and redesign experiments to isolate and test hypotheses. Above all, adhere to the scientific method: systematically address uncertainties and avoid jumping to conclusions.
Conclusion
In the end, bridging the gap between theory and experiment requires balance. Neither analytical models nor physical experiments are sufficient on their own. Progress comes from their interplay, each informing and helping to refine the other. By embracing an iterative, informed, systematic approach, we can resolve discrepancies, deepen our understanding of complex systems, and get on with the job of designing and developing medical devices that fill needs and improve lives.
Source: MPO