Distributed Verification of Rare Properties using Importance Splitting Observers
Rare properties remain a challenge for statistical model checking (SMC) due
to the quadratic scaling of variance with rarity. We address this with a
variance reduction framework based on lightweight importance splitting
observers. These observers expose the model-property automaton, allowing the
construction of score functions for high-performance algorithms.
The confidence intervals defined for importance splitting make it appealing
for SMC, but optimising its performance in the standard way makes distribution
inefficient. We show how it is possible to achieve equivalently good results in
less time by distributing simpler algorithms. We first explore the challenges
posed by importance splitting and present an algorithm optimised for
distribution. We then define a specific bounded time logic that is compiled
into memory-efficient observers to monitor executions. Finally, we demonstrate
our framework on a number of challenging case studies.
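The multilevel idea behind importance splitting can be illustrated with a minimal sketch. This is not the paper's algorithm or observers; it is a generic fixed-level splitting estimator for a toy biased random walk, where the score function is simply the highest level reached, and all names are hypothetical:

```python
import random

def simulate_to_level(state, level, horizon):
    """Run a downward-biased random walk from `state` for up to `horizon`
    steps; return the first state reaching `level`, or None on failure."""
    for _ in range(horizon):
        state += 1 if random.random() < 0.4 else -1
        if state >= level:
            return state
    return None

def splitting_estimate(levels, n, horizon=200):
    """Fixed-level importance splitting: the rare-event probability is the
    product of the estimated conditional level-crossing probabilities."""
    states, p = [0] * n, 1.0
    for level in levels:
        reached = [s for s in (simulate_to_level(st, level, horizon)
                               for st in states) if s is not None]
        if not reached:
            return 0.0          # no trace crossed this level
        p *= len(reached) / len(states)
        # resample successful traces so the population size stays at n
        states = [random.choice(reached) for _ in range(n)]
    return p
```

Because each conditional probability is moderate, every level contributes many successful traces, which is what keeps the variance low compared with crude Monte Carlo on the same rare event.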
Verification of interlocking systems using statistical model checking
In the railway domain, an interlocking is the system ensuring safe train
traffic inside a station by controlling its active elements such as the signals
or points. Modern interlockings are configured using particular data, called
application data, reflecting the track layout and defining the actions that the
interlocking can take. The safety of train traffic thereby relies on the
correctness of the application data: errors in it can cause safety issues such
as derailments or collisions. Given the high level of safety required by such a
system, its verification is a critical concern. In addition to safety, an
interlocking must also satisfy availability properties, stating that no
train is ever stopped forever in a station. Most of the
research dealing with this verification relies on model checking. However, due
to the state space explosion problem, this approach does not scale for large
stations. More recently, a discrete-event simulation approach, limiting the
verification to a set of likely scenarios, was proposed. The simulation enables
the verification of larger stations, but with no proof that all the interesting
scenarios are covered by the simulation. In this paper, we apply an
intermediate statistical model checking approach, offering both the advantages
of model checking and simulation. Even if exhaustiveness is not obtained,
statistical model checking evaluates with a parametrizable confidence the
reliability and the availability of the entire system.

Comment: 12 pages, 3 figures, 2 tables
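The "parametrizable confidence" of statistical model checking can be made concrete with a small sketch. This is not the paper's tool; it is a generic crude Monte Carlo estimator whose sample size comes from the Okamoto/Chernoff-Hoeffding bound, and the toy model and all names are hypothetical:

```python
import math
import random

def smc_estimate(run_once, eps, delta, seed=0):
    """Estimate the probability of a property with absolute error at most
    `eps` and confidence 1 - `delta`, using the Chernoff-Hoeffding sample
    size n >= ln(2/delta) / (2 * eps^2)."""
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    rng = random.Random(seed)
    hits = sum(run_once(rng) for _ in range(n))
    return hits / n, n

def unsafe_run(rng):
    """Hypothetical stand-in for one simulated station scenario: returns 1
    when the run violates the availability property (here, 5% of the time)."""
    return 1 if rng.random() < 0.05 else 0
```

For example, `smc_estimate(unsafe_run, eps=0.01, delta=0.05)` draws about 18,445 runs, trading exhaustiveness for a cost that depends only on the requested confidence, not on the station's state-space size.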
A Statistical Model Checker for Nondeterminism and Rare Events
Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study
Many automated system analysis techniques (e.g., model checking, model-based
testing) rely on first obtaining a model of the system under analysis. System
modeling is often done manually, which is often considered a hindrance to
adopting model-based system analysis and development techniques. To overcome this
problem, researchers have proposed to automatically "learn" models based on
sample system executions and shown that the learned models can be useful
sometimes. There are however many questions to be answered. For instance, how
much shall we generalize from the observed samples and how fast would learning
converge? Or, would the analysis result based on the learned model be more
accurate than the estimation we could have obtained by sampling many system
executions within the same amount of time? In this work, we investigate
existing algorithms for learning probabilistic models for model checking,
propose an evolution-based approach for better controlling the degree of
generalization and conduct an empirical study in order to answer the questions.
One of our findings is that the effectiveness of learning may sometimes be
limited.

Comment: 15 pages plus 2 reference pages; accepted by FASE 2017 (part of ETAPS)
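The baseline that the learning approaches compete with can be sketched in a few lines. This is not the paper's evolution-based method; it is the standard maximum-likelihood estimation of a discrete-time Markov chain from sample executions (relative transition frequencies), with hypothetical names:

```python
from collections import defaultdict

def learn_dtmc(traces):
    """Learn DTMC transition probabilities by maximum likelihood:
    P(s -> t) is the fraction of observed transitions out of s that go to t."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}
```

With no smoothing, this model assigns probability zero to any transition absent from the samples, which is one reason generalization (how much to smooth or merge states) matters for the downstream model-checking result.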
Automated Classification of Airborne Laser Scanning Point Clouds
Making sense of the physical world has always been at the core of mapping.
Until recently, this has always depended on the human eye. Using
airborne lasers, it has become possible to quickly "see" more of the world in
many more dimensions. The resulting enormous point clouds serve as data sources
for applications far beyond the original mapping purposes, ranging from flood
protection and forestry to threat mitigation. To process these large
quantities of data, novel methods are required. In this contribution, we
develop models to automatically classify ground cover and soil types. Using the
logic of machine learning, we critically review the advantages of supervised
and unsupervised methods. Focusing on decision trees, we improve accuracy by
including beam vector components and using a genetic algorithm. We find that
our approach delivers consistently high quality classifications, surpassing
classical methods.
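The building block of the decision trees used above can be illustrated with a minimal sketch. This is not the paper's classifier; it is a one-level decision tree (a decision stump) that exhaustively searches for the single feature/threshold split minimizing misclassification on binary labels, with hypothetical names and toy features standing in for point-cloud attributes:

```python
def best_stump(points, labels):
    """Find the (error, feature, threshold) of the best axis-aligned split.
    `points` is a list of equal-length feature tuples; `labels` are 0/1."""
    best = None
    n_features = len(points[0])
    for f in range(n_features):
        for thr in sorted({p[f] for p in points}):
            pred = [1 if p[f] >= thr else 0 for p in points]
            err = sum(pr != lb for pr, lb in zip(pred, labels))
            err = min(err, len(labels) - err)  # allow flipped polarity
            if best is None or err < best[0]:
                best = (err, f, thr)
    return best
```

A full tree applies this search recursively to each side of the split; adding extra attributes such as beam vector components simply widens the feature tuples the search runs over.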