Two metrics quantifying model and simulator predictive capability are formulated and evaluated; both exploit results from conducted validation experiments, in which simulation results are compared to the corresponding measured quantities. The first metric is inspired by the modified nearest neighbor coverage metric and the second by the Kullback-Leibler divergence. The two metrics are implemented in Python and in a general metamodel, developed here, designed to be applicable to most physics-based simulation models. Together, these implementations facilitate both offline and online metric evaluation. Additionally, a connection between the two concepts of predictive capability and credibility, treated separately here, is established and realized in the metamodel. Finally, the two implementations are evaluated in an aeronautical domain context.
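As an illustration of the second kind of metric, the sketch below compares measured and simulated samples of a quantity through a discrete Kullback-Leibler divergence over a shared histogram. This is not the implementation described above; the binning, the example data, and the small smoothing constant `eps` are illustrative assumptions, and a real validation metric would handle empty bins and sample sizes more carefully.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D(P || Q) between two probability vectors.

    Bins where q is zero are floored at eps to keep the sum finite;
    terms where p is zero contribute nothing by convention.
    """
    return sum(pi * math.log(pi / max(qi, eps)) for pi, qi in zip(p, q) if pi > 0)

def histogram(samples, edges):
    """Normalized histogram of samples over the given bin edges."""
    counts = [0] * (len(edges) - 1)
    for x in samples:
        for i in range(len(edges) - 1):
            # Last bin is closed on the right so the top edge is included.
            if edges[i] <= x < edges[i + 1] or (i == len(edges) - 2 and x == edges[-1]):
                counts[i] += 1
                break
    total = len(samples) or 1
    return [c / total for c in counts]

# Hypothetical measured vs. simulated samples of one validation quantity.
measured = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02]
simulated = [1.2, 1.15, 1.1, 1.25, 1.05, 1.18]
edges = [0.8, 0.95, 1.1, 1.25, 1.4]

p = histogram(measured, edges)
q = histogram(simulated, edges)
divergence = kl_divergence(p, q)  # small value = simulated distribution close to measured
print(divergence)
```

A lower divergence indicates that the simulated distribution better reproduces the measured one; mapping such a value onto a predictive-capability statement is exactly the role of the metric formulations evaluated in this work.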
The research was funded by Vinnova and Saab Aeronautics through two research projects: EMBrACE and the NFFP7 project Digital Twin for Automated Model Validation and Flight Test Evaluation.