Techniques for the Fast Simulation of Models of Highly Dependable Systems
With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
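The variance-reduction idea behind importance sampling can be sketched in a few lines: sample from a proposal distribution under which the rare failure event is common, and reweight each hit by the likelihood ratio of target over proposal. The shifted-Gaussian toy problem below is an illustration of the mechanism only, not one of the dependability models surveyed here.

```python
import math
import random

def rare_event_prob_is(threshold=4.0, n_samples=100_000, shift=4.0, seed=0):
    """Estimate P(X > threshold) for X ~ N(0, 1) by importance sampling.

    Samples are drawn from the shifted proposal N(shift, 1), which makes
    the 'failure' event common, and each hit is reweighted by the
    likelihood ratio f(x) / g(x) of target density over proposal density.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            # log f(x) - log g(x) = -x^2/2 + (x - shift)^2/2
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n_samples

estimate = rare_event_prob_is()
```

The true probability is about 3.17e-5, so naive Monte Carlo with the same budget would observe only a handful of hits, whereas roughly half of the proposal's samples land in the failure region.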
A Posteriori Probabilistic Bounds of Convex Scenario Programs with Validation Tests
Scenario programs have established themselves as efficient tools for
decision-making under uncertainty. To assess the quality of scenario-based
solutions a posteriori, validation tests based on Bernoulli trials have been
widely adopted in practice. However, to reach a theoretically reliable
judgement of risk, one typically needs to collect massive validation samples.
In this work, we propose new a posteriori bounds for convex scenario programs
with validation tests, which are dependent on both realizations of support
constraints and performance on out-of-sample validation data. The proposed
bounds enjoy wide generality in that many existing theoretical results can be
incorporated as particular cases. To facilitate practical use, a systematic
approach for parameterizing a posteriori probability bounds is also developed,
which is shown to possess a variety of desirable properties allowing for easy
implementations and clear interpretations. By synthesizing comprehensive
information about support constraints and validation tests, improved risk
evaluation can be achieved for randomized solutions in comparison with existing
a posteriori bounds. Case studies on controller design of aircraft lateral
motion are presented to validate the effectiveness of the proposed a posteriori
bounds.
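The baseline that these bounds improve on can be written down directly: run a Bernoulli validation test on N fresh scenarios, count constraint violations, and report a one-sided Hoeffding upper confidence bound on the violation probability. The sketch below is this generic bound, not the refined a posteriori bounds proposed in the paper.

```python
import math

def hoeffding_upper_bound(violations, n_samples, beta=1e-3):
    """One-sided a posteriori bound from a Bernoulli validation test.

    With confidence at least 1 - beta, the true violation probability of
    the candidate solution is at most the returned value (Hoeffding's
    inequality applied to n_samples i.i.d. validation scenarios).
    """
    p_hat = violations / n_samples
    return p_hat + math.sqrt(math.log(1.0 / beta) / (2.0 * n_samples))
```

The sqrt(1/N) radius of this bound is why reliable risk certificates require massive validation samples, which is exactly the cost that combining support-constraint information with validation data aims to reduce.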
mfEGRA: Multifidelity Efficient Global Reliability Analysis through Active Learning for Failure Boundary Location
This paper develops mfEGRA, a multifidelity active learning method using
data-driven adaptively refined surrogates for failure boundary location in
reliability analysis. This work addresses the issue of prohibitive cost of
reliability analysis using Monte Carlo sampling for expensive-to-evaluate
high-fidelity models by using cheaper-to-evaluate approximations of the
high-fidelity model. The method builds on the Efficient Global Reliability
Analysis (EGRA) method, which is a surrogate-based method that uses adaptive
sampling for refining Gaussian process surrogates for failure boundary location
using a single-fidelity model. Our method introduces a two-stage adaptive
sampling criterion that uses a multifidelity Gaussian process surrogate to
leverage multiple information sources with different fidelities. The method
combines expected feasibility criterion from EGRA with one-step lookahead
information gain to refine the surrogate around the failure boundary. The
computational savings from mfEGRA depend on the discrepancy between the
different models and on the cost of evaluating the lower-fidelity models
relative to the high-fidelity model. We show that accurate estimation of
reliability using mfEGRA leads to computational savings of 46% for an
analytic multimodal test problem and 24% for a three-dimensional acoustic horn
problem, when compared to single-fidelity EGRA. We also show the effect of
using a priori drawn Monte Carlo samples in the implementation for the acoustic
horn problem, where mfEGRA leads to computational savings of 45% for the
three-dimensional case and 48% for a rarer event four-dimensional case as
compared to single-fidelity EGRA.
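The single-fidelity building block, EGRA's expected feasibility function, is compact enough to state in code. The closed form below follows the standard formulation with the usual band half-width eps = 2*sigma; the two-stage multifidelity criterion of mfEGRA is not reproduced here.

```python
import math

def _pdf(t):
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def _cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def expected_feasibility(mu, sigma, zbar=0.0):
    """EGRA's expected feasibility of a GP prediction N(mu, sigma^2).

    Measures how close the response is expected to be to the failure
    threshold zbar, within a band of half-width eps = 2 * sigma; points
    with high values are promising candidates for surrogate refinement.
    """
    eps = 2.0 * sigma
    t0 = (zbar - mu) / sigma
    tm = (zbar - eps - mu) / sigma
    tp = (zbar + eps - mu) / sigma
    return ((mu - zbar) * (2.0 * _cdf(t0) - _cdf(tm) - _cdf(tp))
            - sigma * (2.0 * _pdf(t0) - _pdf(tm) - _pdf(tp))
            + eps * (_cdf(tp) - _cdf(tm)))
```

Adaptive sampling then evaluates the model where this criterion is maximal, which concentrates samples near the predicted failure boundary rather than over the whole input space.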
Racing Multi-Objective Selection Probabilities
In the context of noisy multi-objective optimization, dealing with
uncertainties requires the decision maker to express preferences about how to
handle them, typically through statistics (e.g., mean, median) used to
evaluate the quality of the solutions and to define the corresponding Pareto
set. Approximating these statistics requires repeated sampling of the
population, drastically increasing the overall computational cost. To tackle
this issue, this paper proposes to directly estimate each individual's
probability of being selected, using Hoeffding races to dynamically allocate
the estimation budget during the selection step. The proposed racing approach is
validated against static budget approaches with NSGA-II on noisy versions of
the ZDT benchmark functions.
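A minimal Hoeffding race is easy to sketch: candidates are re-evaluated round-robin, and a candidate is eliminated as soon as its confidence interval lies entirely above some rival's upper bound. This is the generic racing mechanism only, not the paper's integration with NSGA-II selection probabilities.

```python
import math
import random

def hoeffding_radius(n, delta):
    # Half-width of a Hoeffding confidence interval for [0, 1] samples.
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def hoeffding_race(noisy_evals, n_rounds=500, delta=0.05):
    """Race noisy candidates, keeping only plausible winners.

    noisy_evals: zero-argument callables returning a noisy cost in
    [0, 1] (lower is better).  Candidates are sampled round-robin and
    eliminated once their lower confidence bound exceeds the best
    upper confidence bound among the survivors.
    """
    k = len(noisy_evals)
    alive, sums, counts = list(range(k)), [0.0] * k, [0] * k
    for _ in range(n_rounds):
        if len(alive) == 1:
            break  # budget can stop early once the winner is certain
        for i in alive:
            sums[i] += noisy_evals[i]()
            counts[i] += 1
        means = {i: sums[i] / counts[i] for i in alive}
        radii = {i: hoeffding_radius(counts[i], delta) for i in alive}
        best_ub = min(means[i] + radii[i] for i in alive)
        alive = [i for i in alive if means[i] - radii[i] <= best_ub]
    return alive
```

The appeal over a static budget is visible here: clearly dominated candidates are discarded after a few evaluations, so the remaining budget concentrates on the candidates that are actually hard to separate.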
Testing for Differences in Gaussian Graphical Models: Applications to Brain Connectivity
Functional brain networks are well described and estimated from data with
Gaussian Graphical Models (GGMs), e.g. using sparse inverse covariance
estimators. Comparing functional connectivity of subjects in two populations
calls for comparing these estimated GGMs. Our goal is to identify differences
in GGMs known to have similar structure. We characterize the uncertainty of
differences with confidence intervals obtained using a parametric distribution
on parameters of a sparse estimator. Sparse penalties enable statistical
guarantees and interpretable models even in high-dimensional and low-sample
settings. Characterizing the distributions of sparse models is inherently
challenging as the penalties produce a biased estimator. Recent work invokes
the sparsity assumptions to effectively remove the bias from a sparse estimator
such as the lasso. These distributions can be used to give confidence intervals
on edges in GGMs, and by extension their differences. However, in the case of
comparing GGMs, these estimators do not make use of any assumed joint structure
among the GGMs. Inspired by priors from brain functional connectivity we derive
the distribution of parameter differences under a joint penalty when parameters
are known to be sparse in the difference. This leads us to introduce the
debiased multi-task fused lasso, whose distribution can be characterized in an
efficient manner. We then show how the debiased lasso and multi-task fused
lasso can be used to obtain confidence intervals on edge differences in GGMs.
We validate the proposed techniques on a set of synthetic examples as well as
on a neuro-imaging dataset created for the study of autism.
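For contrast with the joint approach, the edge-wise baseline is a Wald-type interval on the difference of two independently debiased edge estimates, which ignores the shared structure the multi-task fused lasso exploits. The function and its inputs below are illustrative, not the paper's construction.

```python
from statistics import NormalDist

def edge_difference_ci(theta1, se1, theta2, se2, alpha=0.05):
    """Wald-type (1 - alpha) confidence interval for theta1 - theta2.

    Assumes two independent, approximately normal (e.g. debiased) edge
    estimates with standard errors se1 and se2; an interval excluding
    zero flags a connectivity difference between the two populations.
    """
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    diff = theta1 - theta2
    se = (se1 ** 2 + se2 ** 2) ** 0.5
    return diff - z * se, diff + z * se
```

Because the two estimates are treated as unrelated, the standard errors add in quadrature; a joint penalty that assumes sparsity in the difference can yield tighter intervals on exactly these edge differences.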