Cross-entropy optimisation of importance sampling parameters for statistical model checking
Statistical model checking avoids the exponential growth of states associated
with probabilistic model checking by estimating properties from multiple
executions of a system and by giving results within confidence bounds. Rare
properties are often very important but pose a particular challenge for
simulation-based approaches, hence a key objective under these circumstances is
to reduce the number and length of simulations necessary to produce a given
level of confidence. Importance sampling is a well-established technique that
achieves this, however to maintain the advantages of statistical model checking
it is necessary to find good importance sampling distributions without
considering the entire state space.
Motivated by the above, we present a simple algorithm that uses the notion of
cross-entropy to find the optimal parameters for an importance sampling
distribution. In contrast to previous work, our algorithm uses a low
dimensional vector of parameters to define this distribution and thus avoids
the often intractable explicit representation of a transition matrix. We show
that our parametrisation leads to a unique optimum and can produce many orders
of magnitude improvement in simulation efficiency. We demonstrate the efficacy
of our methodology by applying it to models from reliability engineering and
biochemistry. Comment: 16 pages, 8 figures, LNCS style
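As a rough illustration of the cross-entropy idea (a textbook toy, not the authors' parametrisation), the sketch below tunes the single parameter v of an Exp(mean v) importance sampling family to estimate the rare probability P(X > gamma) for X ~ Exp(mean 1); the family, the property and all constants are illustrative stand-ins:

```python
import math
import random

random.seed(0)

def cross_entropy_is(gamma, n=5000, rho=0.1, iters=20):
    """Cross-entropy search for the parameter v of an Exp(mean v)
    importance sampling family, used to estimate the rare probability
    P(X > gamma) for X ~ Exp(mean 1)."""
    v = 1.0  # start from the nominal (unbiased) parameter
    for _ in range(iters):
        xs = sorted(random.expovariate(1.0 / v) for _ in range(n))
        # elite threshold: the (1 - rho) sample quantile, capped at gamma
        level = min(xs[int((1 - rho) * n)], gamma)
        elite = [x for x in xs if x >= level]
        # likelihood ratio w(x) = f(x) / f_v(x) = v * exp(-x * (1 - 1/v))
        w = [v * math.exp(-x * (1 - 1.0 / v)) for x in elite]
        # weighted cross-entropy update for the exponential family
        v = sum(wi * x for wi, x in zip(w, elite)) / sum(w)
        if level >= gamma:
            break  # the elite samples already satisfy the rare property
    # final importance sampling estimate under Exp(mean v)
    xs = [random.expovariate(1.0 / v) for _ in range(n)]
    est = sum(v * math.exp(-x * (1 - 1.0 / v)) for x in xs if x > gamma) / n
    return v, est
```

For this toy family the optimum is known to sit near gamma + 1, and the tilted estimator attains a few percent relative error on a probability (around e^-12) for which crude Monte Carlo with the same budget would typically see no successful traces at all.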
Scalable Verification of Markov Decision Processes
Markov decision processes (MDPs) are useful for modelling concurrent process
optimisation problems, but verifying them with numerical methods is often
intractable. Existing approximate approaches do not scale well and are
limited to memoryless schedulers. Here we present the basis of scalable
verification for MDPs, using an O(1) memory representation of
history-dependent schedulers. We thus facilitate scalable learning techniques
and the use of massively parallel verification. Comment: V4: FMDS version, 12 pages, 4 figures
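A minimal sketch of the general idea of an O(1) memory history-dependent scheduler, under the assumption that nondeterminism is resolved by hashing the pair (scheduler id, running trace hash); the constants and helper names are hypothetical, not the paper's:

```python
import random

MOD = 2**31 - 1

def step_hash(h, state, base=1_000_003):
    """Fold the next visited state into a running trace hash; storing
    only this single integer gives an O(1)-memory stand-in for the full
    history. (Python string hashes vary between runs, but are stable
    within one run, which is all a sampled scheduler needs.)"""
    return (h * base + hash(state)) % MOD

def scheduled_action(sigma, trace_hash, actions):
    """A deterministic history-dependent scheduler identified by the
    integer sigma: the chosen action is a pure function of
    (scheduler id, trace hash), so no scheduler table is ever stored."""
    rng = random.Random(sigma * MOD + trace_hash)
    return actions[rng.randrange(len(actions))]
```

Sampling scheduler space then reduces to sampling integers sigma: each sigma induces one fixed scheduler, and independent simulations under it can be distributed trivially.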
Time-Staging Enhancement of Hybrid System Falsification
Optimization-based falsification employs stochastic optimization algorithms
to search for error inputs of hybrid systems. In this paper we introduce a
simple idea to enhance falsification, namely time staging, which allows the
time-causal structure of time-dependent signals to be exploited by the
optimizers. Time staging consists of running a falsification solver multiple
times, from one interval to another, incrementally constructing an input signal
candidate. Our experiments show that time staging can dramatically increase
performance in some realistic examples. We also present theoretical results
that suggest the kinds of models and specifications for which time staging is
likely to be effective.
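The staging loop itself can be sketched in a few lines. Everything below is an illustrative stand-in: a toy leaky-integrator "system", a trivial safety spec, and random search in place of a real stochastic optimiser; only the incremental, interval-by-interval structure reflects the idea in the abstract:

```python
import random

random.seed(1)

def rollout(us, x0=0.0):
    """Toy system: a leaky integrator driven by a bounded input signal."""
    x = x0
    for u in us:
        x = 0.9 * x + max(-1.0, min(1.0, u))
    return x

def robustness(us, threshold=4.0):
    """Robustness of the spec 'the output stays below threshold';
    a negative value means the spec has been falsified."""
    return threshold - rollout(us)

def staged_falsify(stages=10, stage_len=1, samples=30):
    """Time staging: run the (here: random-search) optimiser once per
    time interval, committing to the best prefix found so far and
    incrementally extending the input signal candidate."""
    prefix = []
    for _ in range(stages):
        candidates = [[random.uniform(-1, 1) for _ in range(stage_len)]
                      for _ in range(samples)]
        # extend the committed prefix with the candidate that most
        # reduces robustness over the whole signal so far
        best = min(candidates, key=lambda c: robustness(prefix + c))
        prefix += best
        if robustness(prefix) < 0:
            break  # falsifying input found
    return prefix
```

Because each stage searches only one interval, the optimiser's search space per call stays small, which is the source of the speed-ups the abstract reports.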
WiseMove: A Framework for Safe Deep Reinforcement Learning for Autonomous Driving
Machine learning can provide efficient solutions to the complex problems
encountered in autonomous driving, but ensuring their safety remains a
challenge. A number of authors have attempted to address this issue, but there
are few publicly available tools to adequately explore the trade-offs between
functionality, scalability, and safety.
We thus present WiseMove, a software framework to investigate safe deep
reinforcement learning in the context of motion planning for autonomous
driving. WiseMove adopts a modular learning architecture that suits our current
research questions and can be adapted to new technologies and new questions. We
present the details of WiseMove, demonstrate its use on a common traffic
scenario, and describe how we use it in our ongoing safe learning research.
Smart Sampling for Lightweight Verification of Markov Decision Processes
Markov decision processes (MDPs) are useful for modelling optimisation problems
in concurrent systems. Verifying MDPs with efficient Monte Carlo techniques
requires that their nondeterminism be resolved by a scheduler. Recent work has
introduced the elements of lightweight techniques to sample directly from
scheduler space, but finding optimal schedulers by simple sampling may be
inefficient. Here we describe "smart" sampling algorithms that can make
substantial improvements in performance. Comment: IEEE conference style, 11 pages, 5 algorithms, 11 figures, 1 table
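One way to make scheduler sampling "smart" is to refine estimates only for promising candidates. The sketch below swaps in successive halving as a stand-in for the paper's algorithms: a fixed simulation budget is spread over the candidates, the worse half is discarded each round, and survivors get proportionally more simulations (all names and constants are illustrative):

```python
import random

random.seed(3)

def smart_sample(schedulers, simulate, budget=2000):
    """Spread a fixed simulation budget over candidate schedulers, then
    repeatedly keep the better-performing half, so later rounds estimate
    the surviving schedulers' success probabilities more accurately."""
    current = list(schedulers)
    while len(current) > 1:
        per = max(1, budget // len(current))  # simulations per candidate
        scores = {s: sum(simulate(s) for _ in range(per)) / per
                  for s in current}
        current.sort(key=scores.get, reverse=True)
        current = current[:len(current) // 2]  # discard the worse half
    return current[0]

# Toy stand-in for simulation: each scheduler id has a fixed success
# probability, and one simulation is a single Bernoulli trial.
probs = {s: s / 20 for s in range(16)}
best = smart_sample(probs, lambda s: random.random() < probs[s])
```

Compared with simple uniform sampling of schedulers, the same budget concentrates on near-optimal candidates instead of being wasted on clearly poor ones.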
Distributed Verification of Rare Properties using Importance Splitting Observers
Rare properties remain a challenge for statistical model checking (SMC) due
to the quadratic scaling of variance with rarity. We address this with a
variance reduction framework based on lightweight importance splitting
observers. These expose the model-property automaton, allowing the construction
of score functions for high-performance algorithms.
The confidence intervals defined for importance splitting make it appealing
for SMC, but optimising its performance in the standard way makes distribution
inefficient. We show how it is possible to achieve equivalently good results in
less time by distributing simpler algorithms. We first explore the challenges
posed by importance splitting and present an algorithm optimised for
distribution. We then define a specific bounded time logic that is compiled
into memory-efficient observers to monitor executions. Finally, we demonstrate
our framework on a number of challenging case studies.
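The fixed-level importance splitting at the heart of such a framework can be sketched on a toy model. Here the rare property is that a downward-biased random walk climbs to height 10 within a time bound, the score function is simply the maximum height reached (a stand-in for a score derived from a model-property observer), and the rare probability is estimated as a product of per-level conditional probabilities; all parameters are illustrative:

```python
import random

random.seed(5)

UP, STEPS, TARGET = 0.3, 100, 10  # up-step probability, bound, rare level

def extend(state, level):
    """Continue a biased random walk until its score (max height) first
    reaches `level` or the time bound runs out; the (position, time)
    pair is all that is needed to restart the trace later."""
    pos, t = state
    while t < STEPS:
        pos += 1 if random.random() < UP else -1
        t += 1
        if pos >= level:
            return (pos, t)  # level reached: keep as a restart point
    return None              # time bound hit below the level

def splitting_estimate(n=200):
    """Fixed-level importance splitting: at each level, restart n traces
    from the successes of the previous level; the rare probability is
    the product of the per-level conditional probability estimates."""
    states, p = [(0, 0)] * n, 1.0
    for level in range(1, TARGET + 1):
        succ = [s for s in (extend(random.choice(states), level)
                            for _ in range(n)) if s is not None]
        if not succ:
            return 0.0  # all traces died below this level
        p *= len(succ) / n
        states = succ
    return p
```

Each per-level probability is moderate (here roughly 0.4), so its estimate has low variance, and their product recovers a probability of order 1e-4 that direct simulation with the same effort would struggle to observe; this is the variance reduction that motivates splitting for rare properties.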
Communicating oscillatory networks: frequency domain analysis.
BACKGROUND: Constructing predictive dynamic models of interacting signalling networks remains one of the great challenges facing systems biology. While detailed dynamical data exist about individual pathways, the task of combining such data without further lengthy experimentation is highly nontrivial. The communicating links between pathways, implicitly assumed to be unimportant and thus excluded, are precisely what become important in the larger system and must be reinstated. To maintain the delicate phase relationships between signals, signalling networks demand accurate dynamical parameters, but parameters optimised in isolation and under varying conditions are unlikely to remain optimal when combined. The computational burden of estimating parameters increases exponentially with system size, so it is crucial to find precise and efficient ways of measuring the behaviour of systems, in order to re-use existing work.

RESULTS: Motivated by the above, we present a new frequency domain-based systematic analysis technique that attempts to address the challenge of network assembly by defining a rigorous means to quantify the behaviour of stochastic systems. As our focus we construct a novel coupled oscillatory model of p53, NF-kB and the mammalian cell cycle, based on recent experimentally verified mathematical models. Informed by online databases of protein networks and interactions, we distilled their key elements into simplified models containing the most significant parts.
Having coupled these systems, we constructed stochastic models for use in our frequency domain analysis. We used our new technique to investigate the crosstalk between the components of our model and to measure the efficacy of certain network-based heuristic measures.

CONCLUSIONS: We find that the interactions between the networks we study are highly complex and not intuitive: (i) points of maximum perturbation do not necessarily correspond to points of maximum proximity to influence; (ii) increased coupling strength does not necessarily increase perturbation; (iii) different perturbations do not necessarily sum; and (iv) overall, susceptibility to perturbation is amplitude and frequency dependent and cannot easily be predicted by heuristic measures. Our methodology is particularly relevant for oscillatory systems, though not limited to these, and is most revealing when applied to the results of stochastic simulation. The technique is able to characterise precisely the distance in behaviour between different models, different systems and different parts within the same system. It can also measure the difference between different simulation algorithms used on the same system and can be used to inform the choice of dynamic parameters. By measuring crosstalk between subsystems it can also indicate mechanisms by which such systems may be controlled in experiments and therapeutics. We have thus found our technique of frequency domain analysis to be a valuable benchmark systems-biological tool. Peer Reviewed
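The core measurement behind such a frequency domain comparison can be sketched as follows. This is a toy stand-in, not the paper's pipeline: a sinusoid plus Gaussian noise stands in for a stochastic simulation trace, and behavioural distance is taken as the normalised Euclidean distance between discrete power spectra (all signal parameters are illustrative):

```python
import cmath
import math
import random

random.seed(7)

def power_spectrum(xs):
    """Discrete Fourier power spectrum of a mean-removed trace
    (plain O(n^2) DFT, adequate for short illustrative signals)."""
    n = len(xs)
    mu = sum(xs) / n
    xs = [x - mu for x in xs]
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(xs))) ** 2 / n
            for k in range(n // 2)]

def noisy_oscillation(freq, n=256, noise=0.3):
    """Stand-in for a stochastic simulation trace: a sinusoid with
    `freq` cycles over the window, plus Gaussian noise."""
    return [math.sin(2 * math.pi * freq * t / n) + random.gauss(0, noise)
            for t in range(n)]

def spectral_distance(a, b):
    """A simple behavioural distance between two traces: Euclidean
    distance between their unit-normalised power spectra."""
    pa, pb = power_spectrum(a), power_spectrum(b)
    na = math.sqrt(sum(p * p for p in pa))
    nb = math.sqrt(sum(p * p for p in pb))
    return math.sqrt(sum((x / na - y / nb) ** 2 for x, y in zip(pa, pb)))
```

Two noisy realisations of the same oscillator then come out close in this metric, while oscillators at different frequencies come out far apart, which is the kind of quantitative behavioural comparison the abstract describes.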