Abstract:
Mathematical models have the potential to provide a cost-effective, objective, and flexible approach to assessing management decisions, particularly when these decisions involve strategic alternatives. In some instances, a mathematical model is the only means available for evaluating and testing alternatives. However, for this potential to be realized, models must be valid for the application and must provide results that are credible and reliable. The process of ensuring validity, credibility, and reliability typically consists of three elements: verification, validation, and calibration. Verification, validation, and calibration are essential tasks in the development of models that can be used to make predictions with quantified confidence. Quantifying the confidence and predictive accuracy of a model provides the decision-maker with the information necessary for making high-consequence decisions. There appears to be little uniformity in the definition of each of these three process elements. There also appears to be a lack of consensus among model developers and model users regarding the actions required to carry out each process element and the division of responsibilities between the two groups. This paper attempts to provide mathematical model developers and users with a framework for the verification, validation, and calibration of these models. Furthermore, each process element is clearly defined, as are the roles of model developers and model users. In view of the increasingly important role that models play in the evaluation of alternatives, and of the significant effort required to conduct these evaluations, it is important that a systematic procedure for the verification, validation, and calibration of mathematical models be clearly defined and understood by both model developers and model users.
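As a concrete illustration of the calibration and validation elements discussed above (not drawn from the paper's framework; the linear model form, the synthetic data, and the RMSE metric are assumptions chosen for simplicity), the following Python sketch fits model parameters to a calibration data set and then quantifies predictive accuracy on a held-out validation set.

```python
import numpy as np

# Illustrative sketch only: calibrate a simple model y = a*x + b against
# observed data, then validate it by quantifying predictive error on a
# held-out data set. The model form and data here are hypothetical.

rng = np.random.default_rng(seed=0)

# Synthetic "field observations" (stand-in for real measurements).
x_all = np.linspace(0.0, 10.0, 40)
y_all = 2.5 * x_all + 1.0 + rng.normal(scale=0.5, size=x_all.size)

# Split into calibration and validation subsets.
x_cal, y_cal = x_all[:30], y_all[:30]
x_val, y_val = x_all[30:], y_all[30:]

# Calibration: estimate parameters (a, b) by least squares on the calibration set.
a, b = np.polyfit(x_cal, y_cal, deg=1)

# Validation: compare model predictions against data not used for calibration.
y_pred = a * x_val + b
rmse = np.sqrt(np.mean((y_pred - y_val) ** 2))

print(f"Calibrated parameters: a = {a:.3f}, b = {b:.3f}")
print(f"Validation RMSE: {rmse:.3f}")
```

Reporting the validation error alongside the calibrated parameters is one simple way of giving the decision-maker a quantified measure of predictive accuracy, in the spirit of the framework described in the abstract.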
Received: 2011/10/04 | Accepted: 2013/04/15 | Published: 2014/02/04