Techniques for the Fast Simulation of Models of Highly Dependable Systems
With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
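As an illustration of the underlying idea (not taken from the paper): importance sampling draws from a biased distribution under which the rare failure event is common, then reweights each observation by the likelihood ratio so the estimator stays unbiased. A minimal sketch in Python, estimating P(X > 20) for X ~ Exp(1) (about 2·10⁻⁹, far too rare for ordinary simulation to observe) by sampling from a tilted Exp(0.05) proposal whose mean matches the threshold; all names and parameters are illustrative:

```python
import math
import random

def is_estimate(threshold=20.0, n=100_000, tilt_rate=0.05, seed=1):
    """Importance-sampling estimate of P(X > threshold) for X ~ Exp(1).

    Draws from a tilted Exp(tilt_rate) proposal, under which the rare
    event is common, and reweights each draw by the likelihood ratio
    f(x)/g(x) = exp(-(1 - tilt_rate) * x) / tilt_rate.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(tilt_rate)      # draw from the proposal g
        if x > threshold:                   # indicator of the rare event
            total += math.exp(-(1.0 - tilt_rate) * x) / tilt_rate
    return total / n

exact = math.exp(-20.0)                     # true probability, about 2.1e-9
est = is_estimate()
```

With the proposal mean matched to the threshold, a hundred thousand runs give roughly one-percent relative error, whereas ordinary simulation would need on the order of 10¹¹ runs just to observe the event a handful of times.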
Rare event simulation for highly dependable systems with fast repairs
Stochastic model checking has recently been used to assess, among other things, dependability measures for a variety of systems. However, the employed numerical methods, as implemented in model-checking tools such as PRISM and MRMC, suffer from the state-space explosion problem. The main alternative is statistical model checking, which uses standard simulation, but this performs poorly when small probabilities need to be estimated. Therefore, we propose a method based on importance sampling to speed up the simulation process in cases where the failure probabilities are small due to the high speed of the system's repair units. This setting arises naturally in Markovian models of highly dependable systems. We show that our method compares favourably to standard simulation, to existing importance sampling techniques, and to the numerical techniques of PRISM.
Scalable Approach to Uncertainty Quantification and Robust Design of Interconnected Dynamical Systems
Development of robust dynamical systems and networks, such as autonomous aircraft systems capable of accomplishing complex missions, faces challenges due to dynamically evolving uncertainties arising from model uncertainties, the necessity to operate in a hostile, cluttered urban environment, and the distributed and dynamic nature of communication and computation resources. Model-based robust design is difficult because of the complexity of the hybrid dynamic models, which include continuous vehicle dynamics and discrete models of computation and communication, and because of the size of the problem. We overview recent advances in methodology and tools to model, analyze, and design robust autonomous aerospace systems operating in uncertain environments, with an emphasis on efficient uncertainty quantification and robust design, using case studies of missions that include model-based target tracking and search, and trajectory planning in an uncertain urban environment. To show that the methodology is generally applicable to uncertain dynamical systems, we also present examples of applying the new methods to efficient uncertainty quantification of energy usage in buildings and to stability assessment of interconnected power networks.
Quantifying the Influence of Component Failure Probability on Cascading Blackout Risk
The risk of cascading blackouts greatly depends on the failure probabilities of individual components in power grids. To quantify how component failure probabilities (CFP) influence blackout risk (BR), this paper proposes a sample-induced semi-analytic approach to characterize the relationship between CFP and BR. To this end, we first give a generic component failure probability function (CoFPF) to describe CFP with varying parameters or forms. Then the exact relationship between BR and CoFPFs is built on the abstract Markov-sequence model of cascading outages. Leveraging a set of samples generated by blackout simulations, we further establish a sample-induced semi-analytic mapping between the unbiased estimation of BR and CoFPFs. Finally, we derive an efficient algorithm that can directly calculate the unbiased estimation of BR when the CoFPFs change. Since no additional simulations are required, the algorithm is computationally scalable and efficient. Numerical experiments confirm the theory and the algorithm.
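The paper's exact construction is not reproduced here, but the core mechanism it relies on, re-estimating risk under changed failure probabilities by reweighting already-simulated cascades with likelihood ratios rather than resimulating, can be sketched on a toy model. Everything below (the one-round cascade, the squared-failure-count severity measure, all names) is an illustrative assumption, not the paper's model:

```python
import random

def simulate_cascade(p_fail, n_components, rng):
    """Toy cascade: each component is tried once; record the Bernoulli
    outcomes and a stand-in severity measure (squared failure count)."""
    outcomes = [rng.random() < p_fail for _ in range(n_components)]
    loss = sum(outcomes) ** 2
    return outcomes, loss

def reweighted_risk(samples, p_old, p_new):
    """Unbiased risk estimate under p_new, using samples drawn under p_old.

    Each sample is weighted by the ratio of its probability under the new
    failure probability to its probability under the old one, so no new
    simulations are needed when the failure probability changes.
    """
    total = 0.0
    for outcomes, loss in samples:
        w = 1.0
        for failed in outcomes:
            w *= (p_new / p_old) if failed else ((1 - p_new) / (1 - p_old))
        total += w * loss
    return total / len(samples)
```

In this toy setting the reweighted estimate at a new failure probability can be checked against the closed-form expectation of the squared binomial count, and it matches without any fresh simulation runs; the price is increased estimator variance when the new probabilities move far from the old ones.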
Practical issues for the implementation of survivability and recovery techniques in optical networks
Linear Stochastic Fluid Networks: Rare-Event Simulation and Markov Modulation
We consider a linear stochastic fluid network under Markov modulation, with a focus on the probability that the joint storage level attains a value in a rare set at a given point in time. The main objective is to develop efficient importance sampling algorithms with provable performance guarantees. For linear stochastic fluid networks without modulation, we prove that the number of runs needed (so as to obtain an estimate with a given precision) increases polynomially, whereas the probability under consideration decays essentially exponentially; for networks operating in the slow modulation regime, our algorithm is asymptotically efficient. Our techniques are in the tradition of the rare-event simulation procedures developed for the sample mean of i.i.d. one-dimensional light-tailed random variables, and make intensive use of the idea of exponential twisting. In passing, we also point out how to set up a recursion to evaluate the (transient and stationary) moments of the joint storage level in Markov-modulated linear stochastic fluid networks.
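The exponential-twisting recipe invoked above can be illustrated, outside the modulated-network setting of the paper, on the simplest case it generalizes: the tail of the sample mean of i.i.d. Exp(1) variables. A hedged Python sketch (parameter names and the test case are mine, not the paper's):

```python
import math
import random

def gamma_tail(n, x):
    """Closed form P(S_n >= x) for S_n ~ Gamma(n, 1):
    exp(-x) * sum_{k=0}^{n-1} x^k / k!  (used only as a reference value)."""
    term, total = 1.0, 0.0
    for k in range(n):
        total += term
        term *= x / (k + 1)
    return math.exp(-x) * total

def twisted_tail_estimate(n=30, a=2.0, runs=100_000, seed=3):
    """Importance-sampling estimate of P(S_n / n >= a) for a sum S_n of
    n i.i.d. Exp(1) variables, via exponential twisting.

    The twist theta = 1 - 1/a makes the tilted increments Exp(1 - theta),
    whose mean is exactly a: the rare event becomes typical under the
    sampling distribution, and each run is reweighted by the likelihood
    ratio exp(-theta * s) / (1 - theta)^n.
    """
    theta = 1.0 - 1.0 / a
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        s = sum(rng.expovariate(1.0 - theta) for _ in range(n))
        if s >= n * a:
            total += math.exp(-theta * s) / (1.0 - theta) ** n
    return total / runs
```

For n = 30 and a = 2 the target probability is on the order of 10⁻⁵; the twisted estimator hits the event on roughly half its runs and agrees with the closed-form Gamma tail to within about a percent at 10⁵ runs, which is the polynomial-versus-exponential gap the abstract refers to.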