Tasks Makyth Models: Machine Learning Assisted Surrogates for Tipping Points
We present a machine learning (ML)-assisted framework bridging manifold
learning, neural networks, Gaussian processes, and Equation-Free multiscale
modeling, for (a) detecting tipping points in the emergent behavior of complex
systems, and (b) characterizing probabilities of rare events (here,
catastrophic shifts) near them. Our illustrative example is an event-driven,
stochastic agent-based model (ABM) describing the mimetic behavior of traders
in a simple financial market. Given high-dimensional spatiotemporal data --
generated by the stochastic ABM -- we construct reduced-order models for the
emergent dynamics at different scales: (a) mesoscopic Integro-Partial
Differential Equations (IPDEs); and (b) mean-field-type Stochastic Differential
Equations (SDEs) embedded in a low-dimensional latent space, targeted to the
neighborhood of the tipping point. We contrast the uses of the different models
and the effort involved in learning them. (Comment: 29 pages, 8 figures, 6 tables)
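The latent-space SDE view can be illustrated with a generic normal form: near a saddle-node tipping point the reduced dynamics behave like dX = (mu - X^2) dt + sigma dW, and the rare event is a noise-induced escape past the unstable branch. A minimal Euler-Maruyama sketch (the normal form and every parameter value are illustrative assumptions, not the paper's learned surrogate):

```python
import math
import random

def escape_fraction(sigma, mu=0.25, T=20.0, dt=0.01, runs=300, seed=0):
    """Fraction of Euler-Maruyama paths of dX = (mu - X^2) dt + sigma dW
    that escape past the unstable state -sqrt(mu) before time T.

    Illustrative saddle-node normal form, not the ABM-derived model.
    """
    rng = random.Random(seed)
    stable, unstable = math.sqrt(mu), -math.sqrt(mu)
    escapes = 0
    for _ in range(runs):
        x, t = stable, 0.0
        while t < T:
            x += (mu - x * x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if x < unstable - 0.2:  # safely past the barrier: catastrophic shift
                escapes += 1
                break
            t += dt
    return escapes / runs
```

Escapes become exponentially more likely as sigma grows, which is exactly why plain Monte Carlo struggles far from the tipping point and rare-event machinery is needed there.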
Techniques for the Fast Simulation of Models of Highly Dependable Systems
With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
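The core importance-sampling idea for dependability, sampling failures under a biased (more failure-prone) measure and reweighting by the likelihood ratio, can be sketched on a toy static model: n independent components each fail with small probability p, and we estimate the probability of at least k failures. The model and the boosted probability q are illustrative assumptions, not a scheme from any specific paper above.

```python
import random

def failure_prob_is(n=10, p=1e-4, k=2, q=0.2, runs=20000, seed=0):
    """Importance-sampling estimate of P(at least k of n components fail).

    Components fail independently with small probability p. Sampling
    failures under a boosted probability q ("failure biasing") makes the
    rare event frequent; the likelihood ratio
    (p/q)^f * ((1-p)/(1-q))^(n-f) corrects the estimate.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        f = sum(rng.random() < q for _ in range(n))
        if f >= k:
            total += (p / q) ** f * ((1 - p) / (1 - q)) ** (n - f)
    return total / runs
```

With p = 1e-4 the target probability is about 4.5e-7; crude Monte Carlo would need hundreds of millions of runs to see it even a few times, while the biased sampler observes the event in most runs.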
Linear Stochastic Fluid Networks: Rare-Event Simulation and Markov Modulation
We consider a linear stochastic fluid network under Markov modulation, with a
focus on the probability that the joint storage level attains a value in a rare
set at a given point in time. The main objective is to develop efficient
importance sampling algorithms with provable performance guarantees. For linear
stochastic fluid networks without modulation, we prove that the number of runs
needed (so as to obtain an estimate with a given precision) increases
polynomially (whereas the probability under consideration decays essentially
exponentially); for networks operating in the slow modulation regime, our
algorithm is asymptotically efficient. Our techniques are in the tradition of
the rare-event simulation procedures that were developed for the sample-mean of
i.i.d. one-dimensional light-tailed random variables, and intensively use the
idea of exponential twisting. In passing, we also point out how to set up a
recursion to evaluate the (transient and stationary) moments of the joint
storage level in Markov-modulated linear stochastic fluid networks.
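Exponential twisting in its textbook form: to estimate P(S_n/n >= a) for i.i.d. light-tailed X_i, sample from the tilted density f_theta(x) proportional to e^{theta x} f(x), with theta chosen so the tilted mean equals a, and reweight by exp(-theta S_n + n Lambda(theta)). A sketch for standard normal increments, where Lambda(theta) = theta^2/2 and the optimal twist is theta* = a (the distribution and parameters are illustrative, not from the paper):

```python
import math
import random

def mean_exceedance_is(n=20, a=1.0, runs=50000, seed=1):
    """Estimate P(mean of n i.i.d. N(0,1) variables >= a) by exponential
    twisting.

    Tilting N(0,1) by theta gives N(theta, 1), so we sample X ~ N(a, 1)
    and reweight by exp(-theta * S_n + n * theta^2 / 2) with theta = a.
    """
    rng = random.Random(seed)
    theta = a
    total = 0.0
    for _ in range(runs):
        s = sum(rng.gauss(theta, 1.0) for _ in range(n))
        if s / n >= a:
            total += math.exp(-theta * s + n * theta * theta / 2.0)
    return total / runs
```

Under the twisted measure the "rare" set {S_n/n >= a} has probability about 1/2, and the reweighted estimator is asymptotically efficient in n; this is the one-dimensional template the paper extends to modulated fluid networks.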
Learning-Based Importance Sampling via Stochastic Optimal Control for Stochastic Reaction Networks
We explore efficient estimation of statistical quantities, particularly rare event probabilities, for stochastic reaction networks. To this end, we propose an importance sampling (IS) approach to improve the Monte Carlo (MC) estimator efficiency based on an approximate tau-leap scheme. The crucial step in the IS framework is choosing an appropriate change of probability measure to achieve substantial variance reduction. This task is typically challenging and often requires insights into the underlying problem. Therefore, we propose an automated approach to obtain a highly efficient path-dependent measure change based on an original connection in the stochastic reaction network context between finding optimal IS parameters within a class of probability measures and a stochastic optimal control formulation. Optimal IS parameters are obtained by solving a variance minimization problem. First, we derive an associated dynamic programming equation. Analytically solving this backward equation is challenging, hence we propose an approximate dynamic programming formulation to find near-optimal control parameters. To mitigate the curse of dimensionality, we propose a learning-based method to approximate the value function using a neural network, where the parameters are determined via a stochastic optimization algorithm. Our analysis and numerical experiments verify that the proposed learning-based IS approach substantially reduces MC estimator variance, resulting in a lower computational complexity in the rare event regime, compared with standard tau-leap MC estimators.
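The tau-leap scheme that underlies the estimator advances the state by Poisson-distributed reaction counts over a fixed step tau, with propensities frozen at the start of each leap. A minimal sketch for a single decay reaction X -> 0 with propensity c*X (the reaction and all parameters are illustrative; the paper's learned change of measure is not included):

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def tau_leap_decay(x0=1000, c=1.0, T=1.0, tau=0.01, seed=0):
    """Tau-leap simulation of the decay reaction X -> 0 with rate c * X.

    Each leap fires Poisson(c * X * tau) reactions, with the propensity
    frozen at the start of the leap; returns the copy number at time T.
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(int(round(T / tau))):
        fired = _poisson(rng, c * x * tau)  # reactions fired this leap
        x = max(x - fired, 0)
    return x
```

Averaged over runs, the result tracks the exact mean x0 * exp(-c * T) up to an O(tau) bias; for rare-event probabilities this plain estimator is exactly the baseline whose variance the learned IS measure change is designed to reduce.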
Probabilistic Reachability Analysis for Large Scale Stochastic Hybrid Systems
This paper studies probabilistic reachability analysis for large scale stochastic hybrid systems (SHS) as a problem of rare event estimation. In the literature, advanced rare event estimation theory has recently been embedded within a stochastic analysis framework, and this has led to significant novel results in rare event estimation for a diffusion process using sequential MC simulation. This paper presents this rare event estimation theory directly in terms of probabilistic reachability analysis of an SHS, and develops theory which allows these results to be extended to a large scale SHS where a very large number of rare discrete modes may contribute significantly to the reach probability. Essentially, the approach taken is to introduce an aggregation of the discrete modes, and to develop importance sampling relative to the rare switching between the aggregation modes. The practical working of this approach is demonstrated for the safety verification of an advanced air traffic control example.
Analysis of a Splitting Estimator for Rare Event Probabilities in Jackson Networks
We consider a standard splitting algorithm for the rare-event simulation of
overflow probabilities in any subset of stations in a Jackson network at level
n, starting at a fixed initial position. It was shown in DeanDup09 that a
subsolution to the Isaacs equation guarantees that a subexponential number of
function evaluations (in n) suffice to estimate such overflow probabilities
within a given relative accuracy. Our analysis here shows that in fact
O(n^{2β+1}) function evaluations suffice to achieve a given relative
precision, where β is the number of bottleneck stations in the network.
This is the first rigorous analysis that allows one to favorably compare
splitting against directly computing the overflow probability of interest,
which can be evaluated by solving a linear system of equations with O(n^d) variables. (Comment: 23 pages)
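The splitting idea can be sketched on the simplest queueing analogue, a single birth-death chain: every trajectory that climbs one level before emptying is cloned R times, so overflow paths are enriched geometrically, and dividing the final count by the total cloning factor undoes the bias. A fixed-splitting sketch with illustrative parameters (not the subsolution-guided scheme analyzed in the paper):

```python
import random

def split_overflow(n=10, rho=0.5, n0=1000, R=2, seed=0):
    """Fixed-splitting estimate of P(hit level n before 0 | start at 1)
    for a birth-death chain with up-step probability rho / (1 + rho).

    Each walker that climbs a level is cloned R times; the final success
    count is divided by n0 * R^(n-2) to undo the cloning. The exact
    answer (gambler's ruin) is (1 - 1/rho) / (1 - (1/rho)**n).
    """
    rng = random.Random(seed)
    p_up = rho / (1.0 + rho)

    def reaches(level):
        """Run from `level` until hitting level+1 (True) or 0 (False)."""
        x = level
        while True:
            x += 1 if rng.random() < p_up else -1
            if x == level + 1:
                return True
            if x == 0:
                return False

    count = n0
    for level in range(1, n):          # stage: climb from `level` to `level + 1`
        survivors = sum(reaches(level) for _ in range(count))
        if survivors == 0:
            return 0.0
        count = survivors * R if level < n - 1 else survivors
    return count / (n0 * R ** (n - 2))
```

With rho = 0.5 the per-level success probability is roughly 1/2, so cloning with R = 2 keeps the population near n0 at every level; that balance is the informal version of the subexponential-work guarantee discussed above.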
Rare events in networks with internal and external noise
We study rare events in networks with both internal and external noise, and
develop a general formalism for analyzing rare events that combines
pair-quenched techniques and large-deviation theory. The probability
distribution, shape, and time scale of rare events are considered in detail for
extinction in the Susceptible-Infected-Susceptible model as an illustration. We
find that when both types of noise are present, there is a crossover region as
the network size is increased, where the probability exponent for large
deviations no longer increases linearly with the network size. We demonstrate
that the form of the crossover depends on whether the endemic state is
localized near the epidemic threshold or not.
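In the well-mixed (complete-graph) limit of the SIS model, extinction risk can be probed directly through the embedded jump chain: an infection occurs with probability λI(N−I)/N over [λI(N−I)/N + μI], a recovery otherwise. A crude Monte Carlo sketch (parameters illustrative; for λ > μ the branching-process approximation predicts an extinction probability near μ/λ from a single seed):

```python
import random

def extinction_fraction(N=200, lam=2.0, mu=1.0, i0=1, cap=30,
                        runs=4000, seed=0):
    """Fraction of well-mixed SIS runs (embedded jump chain) that hit
    I = 0 before the outbreak establishes itself at I = cap.

    For lam > mu and N large this should be close to the branching
    approximation mu / lam.
    """
    rng = random.Random(seed)
    extinct = 0
    for _ in range(runs):
        i = i0
        while 0 < i < cap:
            up = lam * i * (N - i) / N   # infection rate
            down = mu * i                # recovery rate
            i += 1 if rng.random() < up / (up + down) else -1
        extinct += (i == 0)
    return extinct / runs
```

Once the endemic state is established, extinction from it is exponentially rare in N, which is why the paper's combination of pair-quenched techniques and large-deviation theory, rather than this kind of crude sampling, is needed there.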