mfEGRA: Multifidelity Efficient Global Reliability Analysis through Active Learning for Failure Boundary Location
This paper develops mfEGRA, a multifidelity active learning method using
data-driven adaptively refined surrogates for failure boundary location in
reliability analysis. This work addresses the prohibitive cost of reliability
analysis via Monte Carlo sampling when the high-fidelity model is expensive to
evaluate, by exploiting cheaper-to-evaluate approximations of the
high-fidelity model. The method builds on the Efficient Global Reliability
Analysis (EGRA) method, which is a surrogate-based method that uses adaptive
sampling for refining Gaussian process surrogates for failure boundary location
using a single-fidelity model. Our method introduces a two-stage adaptive
sampling criterion that uses a multifidelity Gaussian process surrogate to
leverage multiple information sources with different fidelities. The method
combines the expected feasibility criterion from EGRA with a one-step lookahead
information gain to refine the surrogate around the failure boundary. The
computational savings from mfEGRA depend on the discrepancy between the
different models and on the relative cost of evaluating the lower-fidelity
models compared to the high-fidelity model. We show that accurate estimation of
reliability using mfEGRA leads to computational savings of 46% for an
analytic multimodal test problem and 24% for a three-dimensional acoustic horn
problem, when compared to single-fidelity EGRA. We also show the effect of
using a priori drawn Monte Carlo samples in the implementation for the acoustic
horn problem, where mfEGRA leads to computational savings of 45% for the
three-dimensional case and 48% for a rarer event four-dimensional case as
compared to single-fidelity EGRA.
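The single-fidelity expected feasibility criterion that mfEGRA builds on can be sketched in a few lines. This is an illustrative sketch only: the function name, the toy surrogate mean, and the constant predictive standard deviation are assumptions, not the paper's implementation; the formula is the standard Gaussian-process expected feasibility function with band half-width ε = 2σ.

```python
import numpy as np
from scipy.stats import norm

def expected_feasibility(mu, sigma, z=0.0):
    """Expected feasibility function (EFF) used by EGRA-style adaptive
    sampling to pick refinement points near the failure boundary g(x) = z.

    mu, sigma : Gaussian-process posterior mean and standard deviation.
    Large EFF = predicted response close to the threshold and/or uncertain.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    eps = 2.0 * sigma                       # common band half-width choice
    t0 = (z - mu) / sigma
    tm = (z - eps - mu) / sigma
    tp = (z + eps - mu) / sigma
    return ((mu - z) * (2 * norm.cdf(t0) - norm.cdf(tm) - norm.cdf(tp))
            - sigma * (2 * norm.pdf(t0) - norm.pdf(tm) - norm.pdf(tp))
            + eps * (norm.cdf(tp) - norm.cdf(tm)))

# Adaptive-sampling step: pick the next evaluation from a candidate pool,
# using a toy surrogate (mean x^2 - 2, constant predictive std).
rng = np.random.default_rng(0)
cand = rng.uniform(-3.0, 3.0, size=200)
mu = cand**2 - 2.0
sigma = 0.5 * np.ones_like(cand)
x_next = cand[np.argmax(expected_feasibility(mu, sigma))]
```

With a constant predictive standard deviation the criterion simply favors points whose predicted response is closest to the threshold, which is why `x_next` lands near the (toy) failure boundary; mfEGRA's two-stage criterion additionally decides *which* fidelity model to evaluate there.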
Cross-entropy optimisation of importance sampling parameters for statistical model checking
Statistical model checking avoids the exponential growth of states associated
with probabilistic model checking by estimating properties from multiple
executions of a system and by giving results within confidence bounds. Rare
properties are often very important but pose a particular challenge for
simulation-based approaches; hence, a key objective in these circumstances is
to reduce the number and length of simulations necessary to produce a given
level of confidence. Importance sampling is a well-established technique that
achieves this; however, to maintain the advantages of statistical model
checking, it is necessary to find good importance sampling distributions
without considering the entire state space.
Motivated by the above, we present a simple algorithm that uses the notion of
cross-entropy to find the optimal parameters for an importance sampling
distribution. In contrast to previous work, our algorithm uses a low
dimensional vector of parameters to define this distribution and thus avoids
the often intractable explicit representation of a transition matrix. We show
that our parametrisation leads to a unique optimum and can produce many orders
of magnitude improvement in simulation efficiency. We demonstrate the efficacy
of our methodology by applying it to models from reliability engineering and
biochemistry.
Comment: 16 pages, 8 figures, LNCS style
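A minimal sketch of the cross-entropy idea, assuming a one-parameter exponential family rather than the paper's model-checking setting: the importance sampling distribution is defined by a single rate parameter, updated analytically from weighted rare-event samples, with adaptive intermediate levels so early iterations always observe "hits". The function name and all numerical choices are illustrative assumptions.

```python
import numpy as np

def ce_optimise_rate(gamma, u=1.0, n=10_000, rho=0.1, iters=5, seed=1):
    """Cross-entropy search for the mean v of an exponential importance
    sampling distribution, targeting the rare event {X > gamma} with
    X ~ Exp(mean u).  Intermediate levels are the (1 - rho) sample
    quantiles, capped at gamma."""
    rng = np.random.default_rng(seed)
    v = u                                            # start at the nominal parameter
    for _ in range(iters):
        x = rng.exponential(scale=v, size=n)
        g = min(gamma, np.quantile(x, 1.0 - rho))
        hit = x >= g                                 # (intermediate) rare-event indicator
        # likelihood ratio f(x; u) / f(x; v) for exponential densities
        w = (v / u) * np.exp(x * (1.0 / v - 1.0 / u))
        v = np.sum(w * hit * x) / np.sum(w * hit)    # closed-form CE update
    # final importance-sampling estimate of P(X > gamma)
    x = rng.exponential(scale=v, size=n)
    w = (v / u) * np.exp(x * (1.0 / v - 1.0 / u))
    return v, float(np.mean(w * (x > gamma)))

v_star, p_hat = ce_optimise_rate(gamma=20.0)  # true value: exp(-20) ~ 2.06e-9
```

The update has a closed form here because of the exponential-family choice; this mirrors the paper's point that optimising a low-dimensional parameter vector avoids any explicit representation of a transition matrix.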
Sequential Design for Optimal Stopping Problems
We propose a new approach to solve optimal stopping problems via simulation.
Working within the backward dynamic programming/Snell envelope framework, we
augment the Longstaff-Schwartz methodology, which focuses on approximating the
stopping strategy. Namely, we introduce adaptive generation of the stochastic
grids anchoring the simulated sample paths of the underlying state process.
This allows for active learning of the classifiers partitioning the state space
into the continuation and stopping regions. To this end, we examine sequential
design schemes that adaptively place new design points close to the stopping
boundaries. We then discuss dynamic regression algorithms that can implement
such recursive estimation and local refinement of the classifiers. The new
algorithm is illustrated with a variety of numerical experiments, showing that
an order-of-magnitude savings in design size can be achieved. We also
compare with existing benchmarks in the context of pricing multi-dimensional
Bermudan options.
Comment: 24 pages
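The Longstaff-Schwartz baseline that the sequential-design scheme augments can be sketched as follows. The parameter values and the quadratic polynomial basis are illustrative assumptions, and the stochastic grid here is fixed up front rather than adaptively generated.

```python
import numpy as np

def longstaff_schwartz_put(s0=36.0, K=40.0, r=0.06, sigma=0.2,
                           T=1.0, steps=50, n_paths=20_000, seed=2):
    """Plain Longstaff-Schwartz valuation of a Bermudan put on a fixed,
    a-priori simulated grid of GBM paths (the backward dynamic
    programming scheme the paper refines with sequential design)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    # geometric Brownian motion paths of the underlying asset
    z = rng.standard_normal((n_paths, steps))
    s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    s = np.hstack([np.full((n_paths, 1), s0), s])
    cash = np.maximum(K - s[:, -1], 0.0)        # payoff at maturity
    for t in range(steps - 1, 0, -1):           # backward induction
        cash *= np.exp(-r * dt)                 # discount one step
        itm = (K - s[:, t]) > 0                 # regress on ITM paths only
        if itm.sum() < 10:
            continue
        x = s[itm, t]
        # quadratic regression approximating the continuation value
        cont = np.polyval(np.polyfit(x, cash[itm], 2), x)
        exercise = K - x
        stop = exercise > cont                  # estimated stopping region
        idx = np.where(itm)[0][stop]
        cash[idx] = exercise[stop]              # exercise on those paths
    return float(np.exp(-r * dt) * cash.mean())

price = longstaff_schwartz_put()
```

The regression step is where the paper intervenes: instead of regressing on whatever paths the fixed simulation produced, new design points are placed adaptively near the exercise boundary, where the stop-versus-continue classification is hardest.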
Improving the efficiency of the detection of gravitational wave signals from inspiraling compact binaries: Chebyshev interpolation
Inspiraling compact binaries are promising sources of gravitational waves for
ground and space-based laser interferometric detectors. The time-dependent
signature of these sources in the detectors is a well-characterized function of
a relatively small number of parameters; thus, the favored analysis technique
makes use of matched filtering and maximum likelihood methods. Current analysis
methodology samples the matched filter output at parameter values chosen so
that the correlation between successive samples is 97%. Here we describe a
straightforward and practical
way of using interpolation to take advantage of the correlation between the
matched filter output associated with nearby points in the parameter space to
significantly reduce the number of matched filter evaluations without
sacrificing the efficiency with which real signals are recognized. Because the
computational cost of the analysis is driven almost exclusively by the matched
filter evaluations, this translates directly into an increase in computational
efficiency, which in turn, translates into an increase in the size of the
parameter space that can be analyzed and, thus, the science that can be
accomplished with the data. As a demonstration we compare the present "dense
sampling" analysis methodology with our proposed "interpolation" methodology,
restricted to one dimension of the multi-dimensional analysis problem. We find
that the interpolated search requires 25% fewer filter evaluations than the
dense search with 97% correlation to achieve the same detection efficiency at a
given expected false alarm probability. Generalizing to the higher-dimensional
parameter space of a generic binary, including spins, suggests an order of
magnitude increase in computational efficiency.
Comment: 23 pages, 5 figures, submitted to Phys. Rev.
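The interpolation idea along one parameter dimension can be illustrated with Chebyshev nodes. The surrogate function below stands in for the expensive matched-filter output along one template parameter and is an assumption, not the paper's waveform model; the point is that a smooth, strongly correlated function is recovered accurately from far fewer evaluations than dense sampling.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_interpolant(f, a, b, n_nodes):
    """Build a Chebyshev interpolant of f on [a, b] from n_nodes
    evaluations at Chebyshev-Gauss nodes.  f stands in for the expensive
    matched-filter output along one template parameter."""
    k = np.arange(n_nodes)
    u = np.cos(np.pi * (2 * k + 1) / (2 * n_nodes))   # nodes in [-1, 1]
    x = 0.5 * (b - a) * u + 0.5 * (b + a)             # mapped to [a, b]
    coeffs = C.chebfit(u, f(x), n_nodes - 1)          # exact interpolation
    return lambda t: C.chebval(2.0 * (np.asarray(t) - a) / (b - a) - 1.0,
                               coeffs)

# Oscillatory, smoothly decaying stand-in for the filter output:
f = lambda tau: np.exp(-0.5 * (tau / 0.3) ** 2) * np.cos(8.0 * tau)
approx = cheb_interpolant(f, -1.0, 1.0, 25)           # only 25 evaluations

t = np.linspace(-1.0, 1.0, 1000)
max_err = float(np.max(np.abs(f(t) - approx(t))))
```

Because the analysis cost is dominated by filter evaluations, replacing the 1000 dense evaluations on the grid `t` with 25 node evaluations plus cheap interpolation is where the computational savings come from.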