7,698 research outputs found
A primer on noise-induced transitions in applied dynamical systems
Noise plays a fundamental role in a wide variety of physical and biological
dynamical systems. It can arise from an external forcing or due to random
dynamics internal to the system. It is well established that even weak noise
can result in large behavioral changes such as transitions between or escapes
from quasi-stable states. These transitions can correspond to critical events
such as failures or extinctions that make them essential phenomena to
understand and quantify, despite the fact that their occurrence is rare. This
article will provide an overview of the theory underlying the dynamics of rare
events for stochastic models, along with some example applications.
Techniques for the Fast Simulation of Models of Highly Dependable Systems
With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
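The core trick this survey is built around, drawing samples from a failure-biased distribution and reweighting each observation by its likelihood ratio, can be sketched on the simplest possible rare event. The distribution, threshold, and sample size below are illustrative choices, not any specific algorithm from the paper:

```python
import math
import random

# Minimal importance-sampling sketch: estimate p = P(X > a) for X ~ Exp(1),
# where p = exp(-a) is far too small for plain Monte Carlo to observe.
random.seed(0)
a, n = 20.0, 100_000            # exact answer: exp(-20) ~ 2.06e-9

# Plain Monte Carlo: at this sample size the event is essentially never hit.
naive_est = sum(random.expovariate(1.0) > a for _ in range(n)) / n

# Importance sampling: draw from the tilted density Exp(lam) with lam = 1/a,
# so the proposal mean sits at the threshold and the event becomes common,
# then reweight each hit by the likelihood ratio f(y)/g(y).
lam = 1.0 / a
total = 0.0
for _ in range(n):
    y = random.expovariate(lam)
    if y > a:
        total += math.exp(-(1.0 - lam) * y) / lam
is_est = total / n
print(f"naive: {naive_est:.3g}  importance sampling: {is_est:.3g}")
```

The tilt lam = 1/a is the textbook exponential change of measure for this toy problem; the surveyed techniques are about choosing such a change of measure well in Markov dependability models, where a poor tilt can make the variance worse than plain simulation.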
Dynamic importance sampling for queueing networks
Importance sampling is a technique that is commonly used to speed up Monte
Carlo simulation of rare events. However, little is known regarding the design
of efficient importance sampling algorithms in the context of queueing
networks. The standard approach, which simulates the system using an a priori
fixed change of measure suggested by large deviation analysis, has been shown
to fail in even the simplest network setting (e.g., a two-node tandem network).
Exploiting connections between importance sampling, differential games, and
classical subsolutions of the corresponding Isaacs equation, we show how to
design and analyze simple and efficient dynamic importance sampling schemes for
general classes of networks. The models used to illustrate the approach include
d-node tandem Jackson networks and a two-node network with feedback, and the
rare events studied are those of large queueing backlogs, including total
population overflow and the overflow of individual buffers.
Published in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics; DOI: http://dx.doi.org/10.1214/105051607000000122
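The "a priori fixed change of measure" that the abstract contrasts with its dynamic schemes can be illustrated on the one setting where it does work, a single M/M/1 queue: swap the arrival and service rates and correct by the likelihood ratio. Rates, buffer level, and sample count below are illustrative and have no connection to the paper's examples:

```python
import random

# Fixed (state-independent) change of measure for the embedded jump chain of
# an M/M/1 queue: estimate the probability that the queue, started with one
# customer, reaches level B before emptying.  The proposal simply swaps the
# arrival rate lam and service rate mu, making the rare overflow likely.
random.seed(1)
lam, mu, B, n = 0.3, 0.7, 15, 20_000

hits = 0
for _ in range(n):
    q = 1
    while 0 < q < B:
        # Tilted jump chain: an arrival now occurs with probability mu/(lam+mu).
        q += 1 if random.random() < mu / (lam + mu) else -1
    hits += (q == B)

# On every success path, arrivals - departures = B - 1, so the likelihood
# ratio is the constant (lam/mu)**(B-1) and the estimator has bounded variance.
is_est = (lam / mu) ** (B - 1) * hits / n

r = mu / lam                                  # reciprocal traffic intensity
exact = (r - 1.0) / (r ** B - 1.0)            # gambler's-ruin formula
print(f"IS estimate: {is_est:.3g}, exact: {exact:.3g}")
```

For this one-dimensional walk the rate swap is the asymptotically optimal tilt; the paper's point is that no such fixed tilt works even for a two-node tandem network, which is what the subsolution-based dynamic schemes address.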
Numerical computation of rare events via large deviation theory
An overview of rare events algorithms based on large deviation theory (LDT)
is presented. It covers a range of numerical schemes to compute the large
deviation minimizer in various setups, and discusses best practices, common
pitfalls, and implementation trade-offs. Generalizations, extensions, and
improvements of the minimum action methods are proposed. These algorithms are
tested on example problems which illustrate several common difficulties that
arise, e.g., when the forcing is degenerate or multiplicative, or the systems are
infinite-dimensional. Generalizations to processes driven by non-Gaussian
noises or random initial data and parameters are also discussed, along with the
connection between the LDT-based approach reviewed here and other methods, such
as stochastic field theory and optimal control. Finally, the integration of
this approach in importance sampling methods using e.g. genealogical algorithms
is explored.
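In its simplest form, a minimum action method of the kind reviewed here discretizes the Freidlin-Wentzell action and minimizes it over paths with fixed endpoints. The sketch below uses an Ornstein-Uhlenbeck drift b(x) = -x, chosen because the continuum minimizer phi(t) = sinh(t)/sinh(T) and minimal action S = 1/(1 - exp(-2T)) are known in closed form and serve as a correctness check; plain gradient descent on a uniform grid is a toy baseline, not one of the improved schemes the article proposes:

```python
import numpy as np

# Minimum action method, toy version: minimize the discretized
# Freidlin-Wentzell action  S[phi] = 1/2 int_0^T (phi' - b(phi))^2 dt
# with b(x) = -x, for a path forced from x = 0 to x = 1 in time T.
T, N = 2.0, 50
dt = T / N
phi = np.linspace(0.0, 1.0, N + 1)     # initial guess: straight line

def residual(phi):
    # Midpoint discretization of phi' - b(phi) on each interval.
    return (phi[1:] - phi[:-1]) / dt + 0.5 * (phi[1:] + phi[:-1])

for _ in range(5000):
    r = residual(phi)
    # Analytic gradient of S with respect to the interior path points;
    # the endpoints phi[0] and phi[N] stay fixed.
    grad = dt * (r[:-1] * (1 / dt + 0.5) + r[1:] * (-1 / dt + 0.5))
    phi[1:-1] -= 0.01 * grad

action = 0.5 * dt * np.sum(residual(phi) ** 2)
print(f"minimal action ~ {action:.4f}, exact {1.0 / (1.0 - np.exp(-2 * T)):.4f}")
```

Practical schemes replace the fixed-step descent with better optimizers and adaptive time grids, and must cope with the degenerate, multiplicative, and infinite-dimensional settings the abstract mentions.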
The instanton method and its numerical implementation in fluid mechanics
A precise characterization of structures occurring in turbulent fluid flows
at high Reynolds numbers is one of the last open problems of classical physics.
In this review we discuss recent developments related to the application of
instanton methods to turbulence. Instantons are saddle point configurations of
the underlying path integrals. They are equivalent to minimizers of the related
Freidlin-Wentzell action and known to be able to characterize rare events in
such systems. While there is an impressive body of work concerning their
analytical description, this review focuses on the question of how to compute
these minimizers numerically. In a short introduction we present the relevant
mathematical and physical background before we discuss the stochastic Burgers
equation in detail. We present algorithms to compute instantons numerically by
an efficient solution of the corresponding Euler-Lagrange equations. A second
focus is the discussion of a recently developed numerical filtering technique
that allows one to extract instantons from direct numerical simulations. In the
following we present modifications of the algorithms to make them efficient
when applied to two- or three-dimensional fluid dynamical problems. We
illustrate these ideas using the two-dimensional Burgers equation and the
three-dimensional Navier-Stokes equations.
Constrained Approximation of Effective Generators for Multiscale Stochastic Reaction Networks and Application to Conditioned Path Sampling
Efficient analysis and simulation of multiscale stochastic systems of
chemical kinetics is an ongoing area for research, and is the source of many
theoretical and computational challenges. In this paper, we present a
significant improvement to the constrained approach, which is a method for
computing effective dynamics of slowly changing quantities in these systems,
but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA
can cause errors in the estimation of effective dynamics for systems where the
difference in timescales between the "fast" and "slow" variables is not so
pronounced.
This new application of the constrained approach allows us to compute the
effective generator of the slow variables, without the need for expensive
stochastic simulations. This is achieved by finding the null space of the
generator of the constrained system. For complex systems where this is not
possible, or where the constrained subsystem is itself multiscale, the
constrained approach can then be applied iteratively. This results in breaking
the problem down into finding the solutions to many small eigenvalue problems,
which can be efficiently solved using standard methods.
Since this methodology does not rely on the quasi-steady-state assumption,
the effective dynamics that are approximated are highly accurate, and in the
case of systems with only monomolecular reactions, are exact. We will
demonstrate this with some numerics, and also use the effective generators to
sample paths of the slow variables which are conditioned on their endpoints, a
task which would be computationally intractable for the generator of the full
system. (31 pages, 7 figures)
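The central computation described above, extracting effective slow dynamics from the null space of the generator of a constrained fast subsystem, can be shown in miniature. The model below (a gene switching between OFF and ON, with protein produced only in the ON state) and all its rates are illustrative, not taken from the paper:

```python
import numpy as np

# Toy instance of the constrained-null-space idea: the fast subsystem is a
# two-state switch with rates k_on, k_off; the slow variable is protein copy
# number, produced at rate beta only while the switch is ON.
k_on, k_off, beta = 5.0, 2.0, 1.5

# Generator of the fast two-state chain (rows = from-state, columns = to-state).
Q = np.array([[-k_on,  k_on],
              [k_off, -k_off]])

# Stationary law of the fast chain: the null space of Q^T, normalized to sum 1.
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()

# Effective (averaged) production rate of the slow variable.
eff_rate = beta * pi[1]            # pi[1] = P(ON) = k_on / (k_on + k_off)
print(f"P(ON) = {pi[1]:.4f}, effective production rate = {eff_rate:.4f}")
```

For this two-state switch the null space is trivial to find; the paper's contribution concerns systems where the constrained subsystem is large or itself multiscale, so the computation must be broken into many small eigenvalue problems applied iteratively.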
Asymptotic optimality of the cross-entropy method for Markov chain problems
The correspondence between the cross-entropy method and the zero-variance
approximation to simulate a rare event problem in Markov chains is shown. This
leads to a sufficient condition for the cross-entropy estimator to be
asymptotically optimal. (13 pages, 3 figures)
A stochastic spectral analysis of transcriptional regulatory cascades
The past decade has seen great advances in our understanding of the role of
noise in gene regulation and the physical limits to signaling in biological
networks. Here we introduce the spectral method for computation of the joint
probability distribution over all species in a biological network. The spectral
method exploits the natural eigenfunctions of the master equation of
birth-death processes to solve for the joint distribution of modules within the
network, which then inform each other and facilitate calculation of the entire
joint distribution. We illustrate the method on a ubiquitous case in nature:
linear regulatory cascades. The efficiency of the method makes possible
numerical optimization of the input and regulatory parameters, revealing design
properties of, e.g., the most informative cascades. We find, for threshold
regulation, that a cascade of strong regulations converts a unimodal input to a
bimodal output, that multimodal inputs are no more informative than bimodal
inputs, and that a chain of up-regulations outperforms a chain of
down-regulations. We anticipate that this numerical approach may be useful for
modeling noise in a variety of small network topologies in biology.
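The birth-death master equation that the spectral method builds on can be set up explicitly for the smallest module, a single species with constant production and linear degradation. The sketch below truncates the state space, extracts the stationary distribution from the zero eigenvector of the generator, and checks it against the known Poisson law; the rates are illustrative, and the full spectral method (expanding modules on these eigenfunctions and coupling them) is not shown:

```python
import numpy as np
from math import exp, factorial

# Birth-death master equation: production at rate k, degradation at rate g*n,
# truncated at N copies.  Columns of the generator A sum to zero, so the
# probability vector P(t) evolves as dP/dt = A @ P.
k, g, N = 8.0, 1.0, 60

A = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        A[n + 1, n] += k                # birth n -> n+1
        A[n, n] -= k
    if n > 0:
        A[n - 1, n] += g * n            # death n -> n-1
        A[n, n] -= g * n

# Stationary distribution: eigenvector of the (numerically) zero eigenvalue.
w, v = np.linalg.eig(A)
p_stat = np.real(v[:, np.argmin(np.abs(w))])
p_stat /= p_stat.sum()

# For this module the exact stationary law is Poisson with mean k/g.
poisson = np.array([exp(-k / g) * (k / g) ** n / factorial(n)
                    for n in range(N + 1)])
print(f"max deviation from Poisson({k / g:g}): "
      f"{np.max(np.abs(p_stat - poisson)):.2e}")
```

The remaining eigenfunctions of this same operator carry the relaxation dynamics, and it is those that the spectral method exploits to propagate joint distributions through a cascade.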