Distributed Verification of Rare Properties using Importance Splitting Observers
Rare properties remain a challenge for statistical model checking (SMC) due
to the quadratic scaling of variance with rarity. We address this with a
variance reduction framework based on lightweight importance splitting
observers. These expose the model-property automaton to allow the construction
of score functions for high-performance algorithms.
The confidence intervals defined for importance splitting make it appealing
for SMC, but optimising its performance in the standard way makes it
inefficient to distribute. We show that equivalently good results can be
achieved in less time by distributing simpler algorithms. We first explore the challenges
posed by importance splitting and present an algorithm optimised for
distribution. We then define a specific bounded time logic that is compiled
into memory-efficient observers to monitor executions. Finally, we demonstrate
our framework on a number of challenging case studies.
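The splitting idea described above can be illustrated with a minimal fixed-effort importance-splitting sketch on a toy model: a downward-biased random walk for which reaching a high level is a rare event. This is not the paper's observer-based framework; the model, the score function (the walk's position), and all names are illustrative assumptions.

```python
import random

def step(x):
    # Biased +/-1 walk: drifts downward, so reaching a high level is rare.
    return x + (1 if random.random() < 0.4 else -1)

def run_until(x, t, level, horizon):
    """Advance the walk until it reaches `level` or the time horizon.
    Returns (reached, state, time)."""
    while t < horizon:
        x = step(x)
        t += 1
        if x >= level:
            return True, x, t
    return False, x, t

def importance_splitting(levels, n_per_level, horizon):
    """Fixed-effort splitting: estimate P(walk from 0 reaches levels[-1]
    within `horizon` steps) as a product of conditional level-crossing
    probabilities, restarting trajectories from states that crossed the
    previous level."""
    states = [(0, 0)] * n_per_level          # (position, time) restart pool
    estimate = 1.0
    for level in levels:
        survivors = []
        for _ in range(n_per_level):
            x, t = random.choice(states)     # resample a restart state
            ok, x2, t2 = run_until(x, t, level, horizon)
            if ok:
                survivors.append((x2, t2))
        if not survivors:
            return 0.0                       # no path crossed this level
        estimate *= len(survivors) / n_per_level
        states = survivors
    return estimate
```

Each factor in the product is a well-conditioned (non-rare) probability, which is what makes the variance of the combined estimate manageable compared with crude Monte Carlo.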
Vapor nucleation paths in lyophobic nanopores
Abstract: In recent years, technologies revolving around the use of lyophobic nanopores have gained considerable attention in both fundamental and applied research. Owing to their enormous internal surface area, heterogeneous lyophobic systems (HLS), constituted by a nanoporous lyophobic material and a non-wetting liquid, are promising candidates for the efficient storage or dissipation of mechanical energy. These diverse applications both rely on the forced intrusion and extrusion of the non-wetting liquid inside the pores; whether an HLS is suited to storage or dissipation depends on the hysteresis between these two processes, which, in turn, is determined by the microscopic details of the system. Molecular simulations provide an unmatched tool for understanding phenomena at these scales. In this contribution we use advanced atomistic simulation techniques to study the nucleation of vapor bubbles inside lyophobic mesopores. The string method in collective variables allows us to overcome the computational challenges associated with the activated nature of the phenomenon, rendering a detailed picture of nucleation in confinement. In particular, this rare-event method efficiently searches for the most probable nucleation path(s) in otherwise intractable, high-dimensional free-energy landscapes. The results reveal several independent nucleation paths associated with different free-energy barriers: a family of asymmetric transition paths, in which a bubble forms at one of the walls, and a family involving the formation of axisymmetric bubbles with an annulus shape. The computed free-energy profiles show that the asymmetric path is significantly more probable than the symmetric one, while the exact position at which the asymmetric bubble forms matters little for the free energetics of the process.
A comparison of the atomistic results with continuum models is also presented, showing that, for simple liquids in mesoporous materials with a characteristic size of ca. 4 nm, the nanoscale effects reported for smaller pores play a minor role. The atomistic estimates of the nucleation free-energy barrier are in qualitative accord with those obtained from a macroscopic, capillary-based nucleation theory.
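For reference, the standard macroscopic (classical nucleation theory) estimate against which such capillary comparisons are typically made can be written as follows; this is the textbook bulk-cavitation expression, not the paper's confined model, and the symbols ($\gamma$ for the liquid-vapor surface tension, $\Delta p$ for the liquid-vapor pressure difference) are generic assumptions:

```latex
R^{*} = \frac{2\gamma}{\Delta p},
\qquad
\Delta\Omega^{*} = \frac{16\pi\gamma^{3}}{3\,\Delta p^{2}},
```

where $R^{*}$ is the critical bubble radius and $\Delta\Omega^{*}$ the free-energy barrier; heterogeneous nucleation at the pore walls reduces this barrier by a contact-angle-dependent geometric factor.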
Approximate Bayesian Computation by Subset Simulation
A new Approximate Bayesian Computation (ABC) algorithm for Bayesian updating
of model parameters is proposed in this paper, which combines the ABC
principles with the technique of Subset Simulation for efficient rare-event
simulation, first developed in S.K. Au and J.L. Beck [1]. It has been named
ABC-SubSim. The idea is to choose the nested decreasing sequence of regions in
Subset Simulation as the regions that correspond to increasingly closer
approximations of the actual data vector in observation space. The efficiency
of the algorithm is demonstrated in two examples that illustrate some of the
challenges faced in real-world applications of ABC. We show that the proposed
algorithm outperforms other recent sequential ABC algorithms in terms of
computational efficiency while achieving the same, or better, measure of
accuracy in the posterior distribution. We also show that ABC-SubSim readily
provides an estimate of the evidence (marginal likelihood) for posterior model
class assessment, as a by-product.
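The nested-regions idea can be sketched as follows: at each level, the ABC tolerance is set adaptively to the discrepancy of the best p0 fraction of samples, which are then used as seeds for Markov-chain moves inside the shrinking region. This is a simplified illustration, not the published ABC-SubSim algorithm: it assumes a broad, effectively flat prior (so the prior ratio is omitted from the Metropolis step), and all names and tuning constants are assumptions.

```python
import random
import statistics

def abc_subsim(prior_sample, simulate, discrepancy, y_obs,
               n=500, p0=0.2, n_levels=4):
    """Minimal ABC-by-Subset-Simulation sketch.
    Each level keeps the p0 fraction of samples closest to the observed
    data (an adaptively chosen tolerance region), then regrows the
    population with a simple random-walk Metropolis move seeded at the
    survivors, accepting only proposals inside the region."""
    thetas = [prior_sample() for _ in range(n)]
    rhos = [discrepancy(simulate(th), y_obs) for th in thetas]
    for _ in range(n_levels):
        order = sorted(range(n), key=lambda i: rhos[i])
        keep = order[: int(p0 * n)]
        eps = rhos[keep[-1]]                 # adaptive tolerance for this level
        seeds = [(thetas[i], rhos[i]) for i in keep]
        new_thetas, new_rhos = [], []
        while len(new_thetas) < n:
            th, rho = random.choice(seeds)
            cand = th + random.gauss(0, 0.5)  # random-walk proposal
            r = discrepancy(simulate(cand), y_obs)
            if r <= eps:                      # accept only inside the region
                th, rho = cand, r
            new_thetas.append(th)
            new_rhos.append(rho)
        thetas, rhos = new_thetas, new_rhos
    return thetas
```

A toy usage: inferring the mean of a Gaussian from an observed sample mean of 3.0, with a uniform prior on (-10, 10) and absolute difference of sample means as the discrepancy.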
Statistical Model Checking: An Overview
Quantitative properties of stochastic systems are usually specified in logics
that allow one to compare the measure of executions satisfying certain temporal
properties with thresholds. The model checking problem for stochastic systems
with respect to such logics is typically solved by a numerical approach that
iteratively computes (or approximates) the exact measure of paths satisfying
relevant subformulas; the algorithms themselves depend on the class of systems
being analyzed as well as the logic used for specifying the properties. Another
approach to solve the model checking problem is to \emph{simulate} the system
for finitely many runs, and use \emph{hypothesis testing} to infer whether the
samples provide \emph{statistical} evidence for the satisfaction or violation
of the specification. In this short paper, we survey the statistical approach,
and outline its main advantages in terms of efficiency, uniformity, and
simplicity.
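The hypothesis-testing approach described above is commonly instantiated with Wald's sequential probability ratio test (SPRT): simulate runs one at a time and stop as soon as the accumulated evidence decides between the two hypotheses. The sketch below is a generic SPRT for a Bernoulli satisfaction probability with an indifference region of half-width `delta` around the threshold `theta`; it is an illustration of the technique, not any particular tool's implementation, and all names are assumptions.

```python
import math
import random

def sprt(sample, theta, delta=0.05, alpha=0.05, beta=0.05, max_runs=100000):
    """Wald's sequential probability ratio test for statistical model
    checking: decide H0: p >= theta + delta against H1: p <= theta - delta,
    where each call to `sample()` returns True iff one simulation run
    satisfies the property. alpha/beta bound the error probabilities."""
    p0, p1 = theta + delta, theta - delta
    a = math.log(beta / (1 - alpha))      # lower boundary: accept H0
    b = math.log((1 - beta) / alpha)      # upper boundary: accept H1
    llr = 0.0                             # log-likelihood ratio of H1 vs H0
    for n in range(1, max_runs + 1):
        x = 1 if sample() else 0
        llr += math.log((p1 ** x * (1 - p1) ** (1 - x)) /
                        (p0 ** x * (1 - p0) ** (1 - x)))
        if llr <= a:
            return "p >= theta (H0)", n
        if llr >= b:
            return "p < theta (H1)", n
    return "undecided", max_runs
```

The appeal for SMC is that the number of runs is not fixed in advance: when the true probability is far from the threshold, the test typically terminates after very few simulations.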
ASCR/HEP Exascale Requirements Review Report
This draft report summarizes and details the findings, results, and
recommendations derived from the ASCR/HEP Exascale Requirements Review meeting
held in June 2015. The main conclusions are as follows. 1) Larger, more
capable computing and data facilities are needed to support HEP science goals
in all three frontiers: Energy, Intensity, and Cosmic. The expected demand
on the 2025 timescale is at least two orders of magnitude greater than what
is currently available, and in some cases more. 2) The growth rate of data
produced by simulations is overwhelming the current ability, of both facilities
and researchers, to store and analyze it. Additional resources and new
techniques for data analysis are urgently needed. 3) Data rates and volumes
from HEP experimental facilities are also straining the ability to store and
analyze large and complex data volumes. Appropriately configured
leadership-class facilities can play a transformational role in enabling
scientific discovery from these datasets. 4) A close integration of HPC
simulation and data analysis will aid greatly in interpreting results from HEP
experiments. Such an integration will minimize data movement and facilitate
interdependent workflows. 5) Long-range planning between HEP and ASCR will be
required to meet HEP's research needs. To best use ASCR HPC resources the
experimental HEP program needs a) an established long-term plan for access to
ASCR computational and data resources, b) an ability to map workflows onto HPC
resources, c) the ability for ASCR facilities to accommodate workflows run by
collaborations that can have thousands of individual members, d) to transition
codes to the next-generation HPC platforms that will be available at ASCR
facilities, e) to build up and train a workforce capable of developing and
using simulations and analysis to support HEP scientific research on
next-generation systems.
Comment: 77 pages, 13 figures; draft report, subject to further revision.