Observations for Assertion-based Scenarios in the context of Model Validation
Certain approaches to Model-Based Testing focus on test case generation
from assertions and invariants, e.g., written in the Object Constraint Language (OCL).
In such a setting, the assertions and invariants themselves must be validated. Validation
can be carried out by executing scenarios in which system operations are applied to detect
unsatisfied invariants or failed assertions. This paper aims to improve our understanding of
how to write useful validation scenarios for assertions in OCL. To that end, we report
on our experiences in creating and executing 237 scenarios for validating
assertions of the Mondex Smart Card application. We also describe key factors that
must be considered when transforming scenarios into test cases.
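
To make the workflow concrete, here is a minimal Python sketch (an illustration only, not the authors' OCL tooling; the purse model, operations, and scenario are invented): a validation scenario applies system operations step by step to a toy Mondex-style purse model and re-checks an OCL-style invariant after each step, so a violated invariant surfaces at a specific step of the scenario.

```python
# Minimal sketch (not the authors' tooling): a validation scenario
# applies system operations to a toy model and checks an OCL-style
# invariant after every step.

class Purse:
    def __init__(self, balance: int):
        self.balance = balance

def transfer(src: Purse, dst: Purse, amount: int) -> None:
    """System operation: move `amount` from src to dst."""
    src.balance -= amount
    dst.balance += amount

def invariant(purses: list[Purse]) -> bool:
    """OCL-style invariant: no purse may go negative."""
    return all(p.balance >= 0 for p in purses)

def run_scenario(purses, steps):
    """Execute operations; report the first step whose post-state
    violates the invariant (a failed assertion in the scenario)."""
    for i, (src, dst, amount) in enumerate(steps):
        transfer(purses[src], purses[dst], amount)
        if not invariant(purses):
            return f"invariant violated after step {i}"
    return "scenario passed"

purses = [Purse(50), Purse(10)]
print(run_scenario(purses, [(0, 1, 30), (0, 1, 30)]))  # second step overdraws
```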
Validating Predictions of Unobserved Quantities
The ultimate purpose of most computational models is to make predictions,
commonly in support of some decision-making process (e.g., for design or
operation of some system). The quantities that need to be predicted (the
quantities of interest or QoIs) are generally not experimentally observable
before the prediction, since otherwise no prediction would be needed. Assessing
the validity of such extrapolative predictions, which is critical to informed
decision-making, is challenging. In classical approaches to validation, model
outputs for observed quantities are compared to observations to determine if
they are consistent. By itself, this consistency only ensures that the model
can predict the observed quantities under the conditions of the observations.
This limitation dramatically reduces the utility of the validation effort for
decision making because it implies nothing about predictions of unobserved QoIs
or for scenarios outside of the range of observations. However, there is no
agreement in the scientific community today regarding best practices for
validation of extrapolative predictions made using computational models. The
purpose of this paper is to propose and explore a validation and predictive
assessment process that supports extrapolative predictions for models with
known sources of error. The process includes stochastic modeling, calibration,
validation, and predictive assessment phases where representations of known
sources of uncertainty and error are built, informed, and tested. The proposed
methodology is applied to an illustrative extrapolation problem involving a
misspecified nonlinear oscillator.
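
As a schematic illustration of such a process (not the authors' formulation; the toy model, data, and 2-sigma tolerance below are invented for exposition), the following Python sketch fits a deliberately misspecified model, calibrates an explicit model-error term from residuals, validates the predictive spread against held-out observations, and only then assesses the extrapolated QoI:

```python
# Schematic calibrate -> validate -> assess loop on an invented toy
# problem: a misspecified model plus an explicit model-error term,
# tested on held-out data before any extrapolation is trusted.
import numpy as np

rng = np.random.default_rng(0)

def truth(x):                        # "reality"; unknown in practice
    return np.sin(1.3 * x)

# Calibration: least-squares fit of a misspecified model y = a*x.
x_cal = np.linspace(0.0, 1.0, 20)
y_cal = truth(x_cal) + rng.normal(0.0, 0.02, x_cal.size)
a = np.sum(x_cal * y_cal) / np.sum(x_cal ** 2)

# Stochastic model-error term, informed by calibration residuals and
# carried along with every prediction instead of being ignored.
sigma = np.std(y_cal - a * x_cal)

# Validation: held-out observations must fall inside the predictive
# spread (a crude 2-sigma consistency check).
x_val = np.linspace(1.0, 1.5, 10)
y_val = truth(x_val) + rng.normal(0.0, 0.02, x_val.size)
consistent = np.all(np.abs(y_val - a * x_val) < 2.0 * sigma)

# Predictive assessment: only a validated model earns an extrapolated QoI.
x_qoi = 2.0
if consistent:
    print(f"QoI ~ {a * x_qoi:.2f} +/- {2.0 * sigma:.2f}")
else:
    print("validation failed: extrapolative prediction not credible here")
```

On this toy problem the misspecified model fails the consistency check, which is precisely the behaviour the process is meant to deliver: the validation phase blocks an untrustworthy extrapolation before it reaches the decision maker.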
E-infrastructures fostering multi-centre collaborative research into the intensive care management of patients with brain injury
Clinical research is becoming ever more collaborative, with multi-centre trials now common practice. With this in mind, it has never been more important to provide secure access to data and, in doing so, to tackle the challenges of inter-organisational data access and usage. This is especially the case for research conducted within the brain injury domain, owing to the complicated multi-trauma nature of the condition and the associated collation of complex time-series data of varying resolution and quality. It is now widely accepted that advances in the treatment of this group of patients will only be delivered if the technical infrastructures underpinning the collection and validation of multi-centre research data for clinical trials are improved. In recognition of this need, IT-based multi-centre e-Infrastructures such as the Brain Monitoring with Information Technology group (BrainIT - www.brainit.org) and the Cooperative Study on Brain Injury Depolarisations (COSBID - www.cosbid.de) have been formed. A serious impediment to the effective implementation of these networks is access to the know-how and experience needed to install, deploy and manage the security-oriented middleware systems that provide secure access to distributed hospital-based datasets, and especially the linkage of these datasets across sites. The recently funded EU Framework VII ICT project Advanced Arterial Hypotension Adverse Event prediction through a Novel Bayesian Neural Network (AVERT-IT) is focused on tackling these challenges. This chapter describes the problems inherent to data collection within the brain injury medical domain, the current IT-based solutions designed to address these problems, and how those solutions perform in practice. We outline how the authors have collaborated on developing Grid solutions to address the major technical issues, and we describe a prototype solution that ultimately formed the basis for the AVERT-IT project. Finally, we describe the design of the underlying Grid infrastructure for AVERT-IT and how it will be used to produce novel approaches to data collection, data validation and clinical trial design.
Sharper and Simpler Nonlinear Interpolants for Program Verification
Interpolation of jointly infeasible predicates plays an important role in
various program verification techniques such as invariant synthesis and CEGAR.
Intrigued by a recent result by Dai et al. that combines real algebraic
geometry and SDP optimization in the synthesis of polynomial interpolants,
this paper contributes an enhancement that yields sharper and simpler
interpolants. The enhancement is made possible by theoretical observations in
real algebraic geometry and by our continued-fraction-based algorithm for
rounding off (potentially erroneous) numerical solutions returned by SDP
solvers. Experimental results support our tool's effectiveness; we also
demonstrate the benefit of sharp and simple interpolants in program
verification examples.
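
To illustrate the rounding idea only (a hedged sketch, not the authors' algorithm or tool; the "solver output" values are invented), truncating the continued-fraction expansion of a solver's slightly-off floating-point coefficient recovers a nearby simple rational, which can then be checked exactly against the interpolation constraints. Python's standard library exposes exactly this truncation via Fraction.limit_denominator:

```python
# Hedged sketch of continued-fraction rounding: SDP solvers return
# slightly-off floats; truncating the continued-fraction expansion
# recovers a nearby simple rational for exact re-verification.
from fractions import Fraction

def round_cf(x: float, max_den: int = 1000) -> Fraction:
    """Best rational approximation to x with denominator <= max_den;
    limit_denominator walks the continued-fraction convergents of x."""
    return Fraction(x).limit_denominator(max_den)

noisy = [0.333333341, -1.9999998, 0.49999991]  # hypothetical solver output
print([round_cf(c) for c in noisy])
# [Fraction(1, 3), Fraction(-2, 1), Fraction(1, 2)]
```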