
    The Validation of an Infrared Simulation System

    A commonly used term in the simulation domain is ‘validation, verification and accreditation’ (VVA). When analysing simulation predictions for the purpose of system solution development and decision-making, one key question persists: “What confidence can I have in the simulation and its results?” Knowing the validation status of a simulation system is critical for expressing confidence in the simulation. A practical validation procedure must be simple and carried out in the regular course of work. A well-known and acknowledged validation model by Schlesinger depicts the interaction between three entities: Reality, Conceptual Model and Computer Model, and three processes: Analysis & Modelling, Programming and Verification, and Evaluation and Validation. We developed a systematic procedure in which each of these six elements is evaluated, investigated and then quantified in terms of a set of criteria (or model properties). Many techniques exist to perform the validation procedure, including comparison with other models, face validity, extreme-condition testing, historical data validation and predictive validation, to mention a few. The result is a two-dimensional matrix representing the confidence in validation of each of the criteria (model properties) along each of the verification and validation elements. Depending on the nature of the element, the quantification of each cell in this matrix is done numerically or heuristically. Most often, the literature on validation of simulation systems provides guidance only by means of a theoretical validation framework. This paper briefly describes the procedure used to validate software models in an infrared system simulation, and provides application examples of this process. The discussion includes practical validation techniques, quantification, visualisation, summary reports, and lessons learned during the course of a validation process. The framework presented in this paper is sufficiently general that the concepts could be applied to other simulation environments as well.
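The two-dimensional confidence matrix described in this abstract can be sketched in a few lines. This is a minimal illustration only: the criteria names, scores, and the mean-based aggregate are hypothetical assumptions, not data or formulas from the paper.

```python
# Sketch of a confidence matrix: rows are model properties (criteria),
# columns are the six elements of Schlesinger's model; each cell holds a
# confidence score in [0, 1]. All names and numbers are illustrative.

elements = ["Reality", "Conceptual Model", "Computer Model",
            "Analysis & Modelling", "Programming & Verification",
            "Evaluation & Validation"]
criteria = ["spectral radiance", "geometry", "atmospheric transmittance"]

# Example scores: 1.0 = fully validated, 0.0 = not yet assessed.
scores = {
    ("spectral radiance", "Computer Model"): 0.9,
    ("geometry", "Computer Model"): 0.7,
    ("atmospheric transmittance", "Evaluation & Validation"): 0.5,
}

def confidence_matrix(criteria, elements, scores):
    """Build the full matrix, defaulting unassessed cells to 0.0."""
    return [[scores.get((c, e), 0.0) for e in elements] for c in criteria]

def overall_confidence(matrix):
    """A crude aggregate: the mean over all assessed (non-zero) cells."""
    cells = [v for row in matrix for v in row if v > 0.0]
    return sum(cells) / len(cells) if cells else 0.0

m = confidence_matrix(criteria, elements, scores)
print(round(overall_confidence(m), 2))  # 0.7
```

In practice each cell would be filled numerically or heuristically depending on the element, as the abstract notes; a summary report could then flag rows or columns whose confidence falls below a project threshold.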

    Computer modeling and simulation: towards epistemic distinction between verification and validation

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance, and they have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model’s relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model’s general inadequacy to its target should be blamed when the model fails. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

    Pedestrian Flow Simulation Validation and Verification Techniques

    For the verification and validation of microscopic simulation models of pedestrian flow, we have performed experiments for different kinds of facilities and sites where most conflicts and congestion happen, e.g. corridors, narrow passages, and crosswalks. To validate the model, the experimental conditions and simulation results should be compared with video recordings carried out under the same conditions as in real life, e.g. pedestrian flux and density distributions. The strategy of this technique is to achieve the required accuracy in the simulation model, and the method is good at detecting critical points in the pedestrians’ walking areas. For the calibration of suitable models we use the results obtained from analysing the video recordings of Hajj 2009; these results can also be used to check the design of sections of pedestrian facilities and exits. As a practical example, we present the simulation of pilgrim streams on the Jamarat Bridge. The objectives of this study are twofold: first, to show through verification and validation that simulation tools can be used to reproduce realistic scenarios, and second, to gather data for accurate predictions for designers and decision makers. Comment: 19 pages, 10 figures
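The comparison step this abstract describes, checking simulated flux against values measured from video at each location and flagging locations that miss the accuracy target as critical points, can be sketched as below. This is not the authors' code; the tolerance, location names and flux values are illustrative assumptions.

```python
# Validate simulated pedestrian flux against video measurements: accept
# the model only if the relative error at every measurement point stays
# within a required accuracy bound; points that exceed it are "critical".

def relative_error(simulated, observed):
    return abs(simulated - observed) / observed

def validate_flux(sim_flux, video_flux, tolerance=0.1):
    """sim_flux, video_flux: flux (persons/m/s) keyed by location."""
    errors = {loc: relative_error(sim_flux[loc], video_flux[loc])
              for loc in video_flux}
    # Locations exceeding the tolerance mark critical points in the layout.
    critical = {loc for loc, e in errors.items() if e > tolerance}
    return len(critical) == 0, critical

# Hypothetical measurements at two cross-sections:
video = {"corridor": 1.2, "crosswalk": 0.8}
sim = {"corridor": 1.25, "crosswalk": 0.95}
ok, critical = validate_flux(sim, video, tolerance=0.1)
print(ok, critical)  # False {'crosswalk'}
```

The same pattern applies to density distributions; in that case each "location" would be a cell of a density map rather than a single cross-section.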

    A generic testing framework for agent-based simulation models

    Agent-based modelling and simulation (ABMS) has received increasing attention during the last decade. However, the weak validation and verification of agent-based simulation models makes ABMS hard to trust. There is no comprehensive tool set for verification and validation of agent-based simulation models that demonstrates where inaccuracies exist and/or reveals the errors in the model. Moreover, on the practical side, many ABMS frameworks are in use. We therefore designed and developed a generic testing framework for agent-based simulation models to conduct validation and verification of models. This paper presents our testing framework in detail and demonstrates its effectiveness by showing its applicability to a realistic agent-based simulation case study.
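One common shape for such a testing framework is a harness that registers model-level invariants as predicates over a completed simulation run and reports which ones fail. The sketch below is a guess at that general pattern, not the framework from the paper; all names and the example state are hypothetical.

```python
# Minimal model-testing harness: invariants are registered as named
# predicates over the final state of an agent-based simulation run,
# and run() reports a pass/fail verdict per invariant.

class TestHarness:
    def __init__(self):
        self.checks = []

    def invariant(self, name):
        def register(pred):
            self.checks.append((name, pred))
            return pred
        return register

    def run(self, model_state):
        return {name: pred(model_state) for name, pred in self.checks}

harness = TestHarness()

@harness.invariant("population conserved")
def _(state):
    # Agents may die but must not appear or vanish unaccounted.
    return state["alive"] + state["dead"] == state["initial"]

@harness.invariant("no negative wealth")
def _(state):
    return all(w >= 0 for w in state["wealth"])

# Hypothetical end-of-run state from an agent-based model:
state = {"initial": 100, "alive": 97, "dead": 3, "wealth": [5, 0, 12]}
print(harness.run(state))
```

Because the predicates only see a state dictionary, the same harness can be pointed at models built in different ABMS frameworks, which is the genericity the abstract emphasises.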

    A Petri Net Approach to Verify and Debug Simulation Models

    Verification and simulation share many issues; one is that simulation models require validation and verification. In the context of simulation, verification is understood as the task of ensuring that an executable simulation model matches its conceptual counterpart, while validation is the task of ensuring that a simulation model represents the system under study well enough with respect to the goals of the simulation study. Both validation and verification are treated in the literature at a rather high level and seem to be more an art than engineering. This paper considers discrete-event simulation of stochastic models that are formulated in a process-oriented language. The ProC/B paradigm is used as a particular example of a class of simulation languages which follow the common process-interaction approach and exhibit concepts common in performance modeling, namely (a) layered systems of virtual machines that contain resources and provide services, and (b) concurrent processes that interact by message passing and shared memory. We describe how Petri net analysis techniques help to verify and debug a large and detailed simulation model of airport logistics. We automatically derive a Petri net that models the control flow of a ProC/B model, and we make use of invariant analysis and model checking to shed light on the allocation of resources, constraints among entities, and causes of deadlocks.
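The kind of analysis this abstract applies, deriving a place/transition net from a model's control flow and searching its reachable markings for deadlocks, can be illustrated on a toy net. This sketch has nothing to do with the ProC/B toolchain itself; the net below is the classic two-process, two-resource deadlock pattern, chosen purely for illustration.

```python
# Explore the reachability graph of a small Petri net and report
# reachable markings with no enabled transition (deadlocks), excluding
# the intended terminal marking where both processes have finished.

# Transitions as (consume, produce) maps over places. Two processes each
# need resources r1 and r2, acquired in opposite order; resources are
# released on completion.
transitions = {
    "p1_take_r1": ({"p1_idle": 1, "r1": 1}, {"p1_has_r1": 1}),
    "p1_take_r2": ({"p1_has_r1": 1, "r2": 1},
                   {"p1_done": 1, "r1": 1, "r2": 1}),
    "p2_take_r2": ({"p2_idle": 1, "r2": 1}, {"p2_has_r2": 1}),
    "p2_take_r1": ({"p2_has_r2": 1, "r1": 1},
                   {"p2_done": 1, "r1": 1, "r2": 1}),
}
initial = {"p1_idle": 1, "p2_idle": 1, "r1": 1, "r2": 1}

def enabled(marking, consume):
    return all(marking.get(p, 0) >= n for p, n in consume.items())

def fire(marking, consume, produce):
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

def deadlocks(initial):
    """Return reachable dead markings, excluding successful termination."""
    seen, frontier, dead = set(), [initial], []
    while frontier:
        m = frontier.pop()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        succs = [fire(m, c, p)
                 for c, p in transitions.values() if enabled(m, c)]
        if not succs and not (m.get("p1_done") and m.get("p2_done")):
            dead.append(m)
        frontier.extend(succs)
    return dead

for m in deadlocks(initial):
    print({p: n for p, n in m.items() if n})  # each process holds one resource
```

On a model of realistic size this brute-force search would be replaced by the invariant analysis and model checking the paper uses, but the verdict it produces is the same: a reachable marking in which every process waits on a resource held by another.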

    Some Issues in the Testing of Computer Simulation Models

    The testing of simulation models has much in common with testing processes in other types of application involving software development. However, there are also important differences, associated with the fact that simulation model testing involves two distinct aspects, known as verification and validation. Model validation is concerned with the investigation of modelling errors and model limitations, while verification involves checking that the simulation program is an accurate representation of the mathematical and logical structure of the underlying model. Success in model validation depends upon the availability of detailed information about all aspects of the system being modelled. It may also depend on the availability of high-quality data from the system, which can be used to compare its behaviour with that of the corresponding simulation model. Transparency, high standards of documentation and good management of simulation models and data sets are basic requirements in simulation model testing. Unlike most other areas of software testing, model validation often has subjective elements, with potentially important contributions from face-validation procedures in which experts give a subjective assessment of the fidelity of the model. Verification and validation processes are not simply applied once but must be used repeatedly throughout the model development process, with regression-testing principles being applied. Decisions about when a model is acceptable for the intended application inevitably involve some form of risk assessment. A case study concerned with the development and application of a simulation model of a hydro-turbine and electrical generator system is used to illustrate some of the issues arising in a typical control engineering application. Results from the case study suggest that it is important to bring together the objective aspects of simulation model testing and the more subjective face-validation aspects in a coherent fashion. Suggestions are also made about the need for changes of approach in the teaching of simulation techniques to engineering students, giving more emphasis to issues of model quality, testing and validation.
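One way to bring objective test results and subjective face-validation scores together coherently, as this abstract recommends, is a weighted acceptance score. The scheme below is a hedged sketch only: the weighting, tolerance, threshold and example numbers are all illustrative assumptions, not taken from the case study.

```python
# Combine objective checks (fraction of quantitative comparisons within
# tolerance) with subjective expert ratings into one acceptance decision.

def objective_score(errors, tolerance):
    """Fraction of quantitative checks (e.g. simulated vs measured
    responses) whose relative error is within tolerance."""
    passed = sum(1 for e in errors if abs(e) <= tolerance)
    return passed / len(errors)

def face_validation_score(expert_ratings):
    """Mean of expert fidelity ratings on a 0-1 scale."""
    return sum(expert_ratings) / len(expert_ratings)

def accept_model(errors, expert_ratings, tolerance=0.05,
                 w_objective=0.7, threshold=0.8):
    combined = (w_objective * objective_score(errors, tolerance)
                + (1 - w_objective) * face_validation_score(expert_ratings))
    return combined >= threshold, round(combined, 3)

# Hypothetical results: relative errors of four simulated-vs-measured
# comparisons, plus three experts' fidelity ratings.
errors = [0.02, 0.04, 0.07, 0.01]
ratings = [0.9, 0.8, 0.85]
print(accept_model(errors, ratings))  # (False, 0.78)
```

The weight on the objective component and the acceptance threshold are exactly the kind of risk-assessment judgments the abstract says any acceptance decision inevitably involves.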

    Model Verification and Validation Strategies and Methods: An Application Case Study

    Model verification and validation is an essential part of any modeling and simulation process. Much of the literature reports model verification and validation strategies and methods within a theoretical framework, but there is little on the application of such strategies and methods to real-world simulation problems such as manufacturing operation system simulation. This research aims to bridge that gap through a simulation case study from a large UK-based manufacturing company. This paper demonstrates model design and validation processes using a manufacturing case study, explains model verification and validation concepts, and demonstrates the application of a model verification and validation architecture. The emphasis of this paper is on model verification and validation strategies and methods. An example application of such strategies and methods to the verification and validation of a manufacturing operation system simulation case study is presented.
