    A survey of rare event simulation methods for static input–output models

    Crude Monte Carlo or quasi-Monte Carlo methods are well suited to characterizing events whose probabilities are not too low relative to the simulation budget. For very rarely observed events, such as the collision probability between two aircraft in an airspace, these approaches do not yield accurate results: the number of available samples is often insufficient to estimate such low probabilities (at least 10^6 samples are needed to estimate a probability of order 10^-4 with 10% relative error using Monte Carlo simulation). This article reviews techniques for estimating rare event probabilities that require far fewer samples. These methods fall into four main categories: parameterization of the tails of probability density functions, simulation techniques such as importance sampling or importance splitting, geometric methods that approximate the input failure space, and finally, surrogate modelling. Each technique is detailed, its advantages and drawbacks are described, and a synthesis is given that aims at answering the question: "which technique to use for which problem?"
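
    As a minimal, hedged illustration of the sampling-budget issue and of importance sampling (one of the simulation techniques surveyed), the sketch below estimates a tail probability of order 10^-4 with only 10^4 samples. The Gaussian toy model, the threshold, and the shifted proposal are illustrative choices, not taken from the survey.

    ```python
    # Toy comparison: crude Monte Carlo vs. importance sampling for
    # P(X > t) with X ~ N(0, 1) and a true probability of 1e-4.
    # All parameters below are illustrative assumptions.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    t = norm.ppf(1 - 1e-4)   # threshold giving a true probability of 1e-4
    n = 10_000               # far below the ~1e6 samples crude MC would need

    # Crude Monte Carlo: at this sample size most runs see zero exceedances.
    x = rng.standard_normal(n)
    p_cmc = np.mean(x > t)

    # Importance sampling: draw from a proposal shifted onto the failure
    # region, then reweight each sample by the likelihood ratio f(y)/g(y).
    y = rng.normal(loc=t, scale=1.0, size=n)
    weights = norm.pdf(y) / norm.pdf(y, loc=t, scale=1.0)
    vals = (y > t) * weights
    p_is = np.mean(vals)
    se_is = np.std(vals, ddof=1) / np.sqrt(n)

    print("true     : 1.0e-04")
    print(f"crude MC : {p_cmc:.2e}")
    print(f"IS       : {p_is:.2e} +/- {se_is:.1e}")
    ```

    With the same budget, the reweighted estimator typically lands within a few percent of 1e-4, while the crude estimate is often exactly zero.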

    Heterogeneous hierarchical workflow composition

    Workflow systems promise scientists an automated end-to-end path from hypothesis to discovery. However, expecting any single workflow system to deliver such a wide range of capabilities is impractical. A more practical solution is to compose the end-to-end workflow from more than one system. With this goal in mind, the integration of task-based and in situ workflows is explored, where the result is a hierarchical heterogeneous workflow composed of subworkflows, with different levels of the hierarchy using different programming, execution, and data models. Materials science use cases demonstrate the advantages of such heterogeneous hierarchical workflow composition. This work is a collaboration between Argonne National Laboratory and the Barcelona Supercomputing Center within the Joint Laboratory for Extreme-Scale Computing. This research is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357, program manager Laura Biven, and by the Spanish Government (SEV2015-0493), by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P), and by Generalitat de Catalunya (contract 2014-SGR-1051).
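
    As a loose, system-agnostic sketch of the composition idea (not the API of any actual workflow system), the fragment below nests an in situ subworkflow, in which analysis runs alongside each simulation step, as one task inside a coarse-grained task-based outer workflow. All names are hypothetical.

    ```python
    # Hypothetical sketch: an outer task-based workflow whose tasks may
    # themselves be subworkflows with a different execution model.
    from typing import Callable, Iterable

    def in_situ_subworkflow(n_steps: int, simulate: Callable[[int], dict],
                            analyze: Callable[[dict], dict]) -> list:
        # Inner level: analysis is coupled to each simulation step, so raw
        # per-step data never needs to be written out to storage.
        return [analyze(simulate(step)) for step in range(n_steps)]

    def task_based_workflow(tasks: Iterable[Callable[[], object]]) -> list:
        # Outer level: coarse-grained tasks, run sequentially here; a real
        # system would schedule them as a dependency DAG.
        return [task() for task in tasks]

    # Compose: one outer task wraps the whole in situ subworkflow.
    simulate = lambda step: {"step": step, "energy": -1.0 * step}
    analyze = lambda frame: {"step": frame["step"], "stat": abs(frame["energy"])}
    results = task_based_workflow([
        lambda: "preprocess inputs",
        lambda: in_situ_subworkflow(3, simulate, analyze),
        lambda: "postprocess/aggregate",
    ])
    print(results)
    ```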

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision.

    Dynamic general equilibrium analysis of improved weed management in Australia's winter cropping systems

    A recent analysis indicated that the direct financial cost of weeds to Australia's winter grain sector was approximately A$1.2bn in 1998–1999. Costs of this magnitude represent a large recurring productivity loss in an agricultural sector that is sufficient to impact significantly on regional economies. Using a multi-regional dynamic computable general equilibrium model, we simulate the general equilibrium effects of a hypothetical successful campaign to reduce the economic costs of weeds. We assume that an additional A$50m of R&D spread over five years is targeted at reducing the additional costs and reduced yields arising from weeds in various broadacre crops. Following this R&D effort, one-tenth of the losses arising from weeds is temporarily eliminated, with a diminishing benefit in succeeding years. At the national level, there is a welfare increase of $700m in discounted net present value terms. The regions with relatively high concentrations of winter crops experience small temporary macroeconomic gains. Keywords: CGE modelling, dynamics, weed management, Crop Production/Industries.
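
    As a hedged illustration of the discounting logic only, the sketch below sums a decaying annual benefit stream, net of the R&D outlay, in net present value terms. The decay rate, discount rate, and horizon are assumptions for illustration, not parameters reported in the study, so the resulting figure is not the study's $700m estimate.

    ```python
    # Illustrative NPV arithmetic; all rates below are assumed, not sourced.
    annual_weed_cost = 1.2e9                  # A$1.2bn direct cost (1998-1999)
    initial_benefit = 0.1 * annual_weed_cost  # one-tenth of losses eliminated
    decay = 0.8                               # assumed diminishing benefit per year
    discount_rate = 0.05                      # assumed real discount rate
    rd_cost_per_year = 10e6                   # A$50m of R&D spread over five years

    npv = 0.0
    for year in range(1, 31):
        benefit = initial_benefit * decay ** (year - 1)
        cost = rd_cost_per_year if year <= 5 else 0.0
        npv += (benefit - cost) / (1 + discount_rate) ** year
    print(f"illustrative NPV: A${npv / 1e6:.0f}m")
    ```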

    Rare event simulation for dynamic fault trees

    Fault trees (FTs) are a popular industrial method for reliability engineering, for which Monte Carlo simulation is an important technique to estimate common dependability metrics, such as system reliability and availability. A severe drawback of Monte Carlo simulation is that the number of simulations required to obtain accurate estimates grows extremely large in the presence of rare events, i.e., events whose probability of occurrence is very low, which typically holds for failures in highly reliable systems. This paper presents a novel method for rare event simulation of dynamic fault trees with complex repairs that requires only a modest number of simulations, while retaining statistically justified confidence intervals. Our method exploits the importance sampling technique for rare event simulation, together with a compositional state space generation method for dynamic fault trees. We demonstrate our approach using two parameterized sets of case studies, showing that our method can handle fault trees that can be evaluated with neither existing analytical techniques nor standard simulation techniques.
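
    The paper's own method combines importance sampling with compositional state space generation for dynamic fault trees; as a generic illustration of just the importance sampling ingredient, the sketch below applies failure biasing to a two-component repairable parallel system modelled directly as a small continuous-time Markov chain rather than a fault tree. The rates, biasing factor, and horizon are illustrative assumptions, not values from the paper.

    ```python
    # Failure biasing for a highly reliable repairable system: two parallel
    # components, each failing at rate LAM and repaired at rate MU; we
    # estimate P(both down before time T). All parameters are illustrative.
    import math
    import random

    LAM, MU, T = 1e-3, 1.0, 10.0  # per-component failure/repair rates, horizon
    BIAS = 100.0                  # inflate failure rates under the proposal
    random.seed(1)

    def run() -> float:
        # One trajectory under the biased measure; returns the
        # likelihood-ratio-weighted indicator of system failure before T.
        lam = LAM * BIAS
        t, down, lr = 0.0, 0, 1.0
        while True:
            rates = {"fail": (2 - down) * lam, "repair": down * MU}
            true_rates = {"fail": (2 - down) * LAM, "repair": down * MU}
            total, true_total = sum(rates.values()), sum(true_rates.values())
            s = random.expovariate(total)
            if t + s > T:
                return 0.0                 # horizon reached, no system failure
            t += s
            event = "fail" if random.random() < rates["fail"] / total else "repair"
            # Accumulate the likelihood ratio f(path)/g(path) event by event.
            lr *= (true_rates[event] * math.exp(-true_total * s)) / \
                  (rates[event] * math.exp(-total * s))
            down += 1 if event == "fail" else -1
            if down == 2:
                return lr                  # both components down: system failure

    n = 20_000
    est = sum(run() for _ in range(n)) / n
    print(f"IS estimate of unreliability: {est:.3e}")
    ```

    Under the original measure almost no trajectory of this system fails within the horizon, so a crude estimator would return zero; the biased measure makes failures routine, and the accumulated likelihood ratio corrects the estimate back to the true, very small probability.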