
    Efficient calibration for high-dimensional computer model output using basis methods

    Calibration of expensive computer models with high-dimensional output fields can be approached via history matching. If the entire output field is matched, with patterns or correlations between locations or time points represented, calculating the distance metric between observational data and model output for a single input setting requires a time-intensive inversion of a high-dimensional matrix. By using a low-dimensional basis representation rather than emulating each output individually, we define a metric in the reduced space that allows the implausibility for the field to be calculated efficiently, with only small matrix inversions required, using a projection that is consistent with the variance specifications in the implausibility. We show that projection using the L_2 norm can result in different conclusions, with the ordering of points not maintained on the basis, with implications for both history matching and probabilistic methods. We demonstrate the scalability of our method through history matching of the Canadian atmosphere model, CanAM4, comparing basis methods to emulation of each output individually, and showing that the basis approach can be more accurate, whilst also being more efficient.
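
    To make the reduced-space calculation concrete, the following is a minimal sketch of variance-weighted basis projection and the resulting implausibility, assuming a diagonal variance matrix and a truncated SVD basis. The array names (ensemble, z, W) and all data are illustrative stand-ins, not the CanAM4 setup.

```python
# Minimal sketch: basis projection consistent with the variance specification,
# so the field implausibility needs only small (k x k) matrix inversions.
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_grid, k = 60, 1000, 5            # ensemble size, field size, basis size

ensemble = rng.normal(size=(n_runs, n_grid))   # stand-in model output fields
z = rng.normal(size=n_grid)                    # stand-in observed field
W = np.full(n_grid, 0.5)                       # diagonal obs. error + discrepancy variance

# Truncated SVD basis from the centred ensemble (as in PCA)
mean_field = ensemble.mean(axis=0)
_, _, Vt = np.linalg.svd(ensemble - mean_field, full_matrices=False)
Gamma = Vt[:k].T                               # (n_grid x k) basis matrix

# W-weighted projection: c(y) = (Gamma^T W^-1 Gamma)^-1 Gamma^T W^-1 (y - mean)
A = Gamma.T * (1.0 / W)                        # Gamma^T W^-1, using diagonal W
G = A @ Gamma                                  # k x k -- the only matrix inverted
G_inv = np.linalg.inv(G)

def coeffs(y):
    return G_inv @ (A @ (y - mean_field))

def implausibility(y_model):
    # Variance of the projected coefficients is G^-1, so G is its inverse
    d = coeffs(z) - coeffs(y_model)
    return float(d @ G @ d)

print(implausibility(ensemble[0]))
```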

    Pooling strength amongst limited datasets using hierarchical Bayesian analysis, with application to pyroclastic density current mobility metrics

    In volcanology, the sparsity of datasets for individual volcanoes is an important problem which, in many cases, compromises our ability to make robust judgments about future volcanic hazards. In this contribution we develop a method for using hierarchical Bayesian analysis of global datasets to combine information across different volcanoes and thereby improve our knowledge at individual volcanoes. The method is applied to the assessment of mobility metrics for pyroclastic density currents in order to better constrain input parameters and their related uncertainties for forward modeling. Mitigation of risk associated with such flows depends upon accurate forecasting of possible inundation areas, often using empirical models that rely on mobility metrics measured from the deposits of past flows, or on the application of computational models, several of which take mobility metrics, either directly or indirectly, as input parameters. We use hierarchical Bayesian modeling to leverage the global record of mobility metrics from the FlowDat database, leading to considerable improvement in the assessment of flow mobility where the data for a particular volcano are sparse. We estimate the uncertainties involved and demonstrate how they are reduced through this approach. The method has broad applicability across other areas of volcanology where relationships established from broader datasets can be used to better constrain more specific, sparser datasets. Employing such methods allows us to use, rather than shy away from, limited datasets, and allows for transparency with regard to uncertainties, enabling more accountable decision-making.
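
    As a concrete illustration of the pooling structure, here is a minimal partial-pooling sketch written with PyMC. The hierarchy, priors, and synthetic observations stand in for a log-transformed mobility metric such as H/L; none of the numbers come from FlowDat, and all names are illustrative.

```python
# Minimal sketch of hierarchical (partially pooled) estimation of a
# per-volcano mobility metric from sparse, uneven records.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_volcanoes = 8
counts = rng.integers(2, 15, size=n_volcanoes)        # few records per volcano
volcano_idx = np.repeat(np.arange(n_volcanoes), counts)
true_theta = rng.normal(-0.9, 0.2, size=n_volcanoes)  # synthetic log(H/L) means
y = rng.normal(true_theta[volcano_idx], 0.15)         # synthetic observations

with pm.Model():
    mu = pm.Normal("mu", 0.0, 1.0)        # global mean across volcanoes
    tau = pm.HalfNormal("tau", 0.5)       # between-volcano spread
    # Per-volcano means partially pooled toward mu: volcanoes with few
    # records borrow strength from the global dataset
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=n_volcanoes)
    sigma = pm.HalfNormal("sigma", 0.5)   # within-volcano scatter
    pm.Normal("obs", mu=theta[volcano_idx], sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

# Posterior means are shrunk toward mu where data are sparse
print(idata.posterior["theta"].mean(dim=("chain", "draw")).values)
```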

    Counterfactual Analysis of Runaway Volcanic Explosions

    On the statistical formalism of uncertainty quantification

    The use of models to try to better understand reality is ubiquitous. Models have proven useful in testing our current understanding of reality; for instance, climate models of the 1980s were built for science discovery, to achieve a better understanding of the general dynamics of climate systems. Scientific insights often take the form of general qualitative predictions (e.g., “under these conditions, the Earth’s poles will warm more than the rest of the planet”); such use of models differs from making quantitative forecasts of specific events (e.g., “high winds at noon tomorrow at London’s Heathrow Airport”). It is sometimes hoped that, after sufficient model development, any model can be used to make quantitative forecasts for any target system. Even if that were the case, there would always be some uncertainty in the prediction. Uncertainty quantification aims to provide a framework within which that uncertainty can be discussed and, ideally, quantified, in a manner relevant to practitioners using the forecast system. A statistical formalism has developed that claims to be able to accurately assess the uncertainty in prediction. This article is a discussion of whether and when this formalism can do so. The article arose from an ongoing discussion between the authors concerning this issue, the second author generally being considerably more skeptical about the utility of the formalism in providing quantitative decision-relevant information.

    Probabilistic forecasting of plausible debris flows from Nevado de Colima (Mexico) using data from the Atenquique debris flow, 1955

    We detail a new prediction-oriented procedure aimed at volcanic hazard assessment based on geophysical mass flow models constrained with heterogeneous and poorly defined data. Our method relies on an itemized application of the empirical falsification principle over an arbitrarily wide envelope of possible input conditions. We thus provide a first step towards an objective and partially automated experimental design construction. In particular, instead of fully calibrating model inputs on past observations, we create and explore more general requirements of consistency, and then we separately use each piece of empirical data to remove those input values that are not compatible with it. Hence, partial solutions are defined to the inverse problem. This has several advantages compared to a traditionally posed inverse problem: (i) the potentially nonempty inverse images of partial solutions of multiple possible forward models characterize the solutions to the inverse problem; (ii) the partial solutions can provide hazard estimates under weaker constraints, potentially including extreme cases that are important for hazard analysis; (iii) if multiple models are applicable, specific performance scores against each piece of empirical information can be calculated. We apply our procedure to the case study of the Atenquique volcaniclastic debris flow, which occurred on the flanks of Nevado de Colima volcano (Mexico) in 1955. We adopt and compare three depth-averaged models currently implemented in the TITAN2D solver, available from https://vhub.org (Version 4.0.0 – last access: 23 June 2016). The associated inverse problem is not well posed if approached in a traditional way. We show that our procedure can extract valuable information for hazard assessment, allowing the exploration of the impact of synthetic flows that are similar to those that occurred in the past but different in plausible ways. The implementation of multiple models is thus a crucial aspect of our approach, as they can cover other plausible flows. We also observe that model selection is inherently linked to the inverse problem.
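
    The itemized falsification step lends itself to a simple sketch: sample a wide input envelope, run a forward model, and let each piece of empirical data independently remove incompatible inputs, giving one partial solution per datum. The toy forward model, constraints, and tolerances below are invented for illustration; they are not TITAN2D or the Atenquique observations.

```python
# Minimal sketch of itemized falsification: each datum separately removes
# incompatible inputs, yielding partial solutions to the inverse problem.
import numpy as np

rng = np.random.default_rng(2)
inputs = rng.uniform([0.0, 0.1], [1.0, 2.0], size=(5000, 2))  # wide envelope

def forward(x):
    # Toy surrogate: (runout, peak depth) from inputs (volume, friction)
    vol, mu = x[:, 0], x[:, 1]
    return np.column_stack([10 * vol / mu, 2 * np.sqrt(vol)])

out = forward(inputs)

# Each piece of data is a separate consistency test with its own tolerance
constraints = {
    "runout_km": lambda o: np.abs(o[:, 0] - 6.0) < 1.5,
    "depth_m":   lambda o: np.abs(o[:, 1] - 1.2) < 0.4,
}

# Partial solutions: the inverse image of each datum taken on its own
partial = {name: test(out) for name, test in constraints.items()}
for name, mask in partial.items():
    print(f"{name}: {mask.mean():.1%} of the envelope survives")

# Inputs consistent with every datum (may be empty -- the inverse problem
# need not have a joint solution even when partial solutions exist)
joint = np.logical_and.reduce(list(partial.values()))
print(f"jointly consistent: {joint.mean():.1%}")
```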

    A Framework for Probabilistic Multi-Hazard Assessment of Rain-Triggered Lahars Using Bayesian Belief Networks

    Volcanic water-sediment flows, commonly known as lahars, can often pose a higher threat to population and infrastructure than primary volcanic hazardous processes such as tephra fallout and Pyroclastic Density Currents (PDCs). Lahars are volcaniclastic flows of water, volcanic debris and entrained sediments that can travel long distances from their source, causing severe damage by impact and burial. Lahars are frequently triggered by intense or prolonged rainfall occurring after explosive eruptions, and their occurrence depends on numerous factors including the spatio-temporal rainfall characteristics, the spatial distribution and hydraulic properties of the tephra deposit, and the pre- and post-eruption topography. Modeling (and forecasting) such a complex system requires the quantification of aleatory variability in the lahar triggering and propagation. To fulfill this goal, we develop a novel framework for probabilistic hazard assessment of lahars within a multi-hazard environment, based on coupling a versatile probabilistic model for lahar triggering (a Bayesian Belief Network: Multihaz) with a dynamic physical model for lahar propagation (LaharFlow). Multihaz allows us to estimate the probability of lahars of different volumes occurring by merging varied information about regional rainfall, scientific knowledge on lahar triggering mechanisms and, crucially, probabilistic assessment of available pyroclastic material from tephra fallout and PDCs. LaharFlow propagates the aleatory variability modeled by Multihaz into hazard footprints of lahars. We apply our framework to Somma-Vesuvius (Italy) because: (1) the volcano is strongly lahar-prone based on its previous activity, (2) there are many possible source areas for lahars, and (3) there is a high density of population nearby. Our results indicate that the size of the eruption preceding the lahar occurrence and the spatial distribution of tephra accumulation play a paramount role in lahar initiation and potential impact. For instance, lahars with initiation volume ≥10⁵ m³ along the volcano flanks have close to a 60% probability of occurring after large-sized eruptions (~VEI ≥ 5), but about 40% after medium-sized eruptions (~VEI 4). Some simulated lahars can propagate for 15 km or reach combined flow depths of 2 m and speeds of 5–10 m/s, even over flat terrain. Probabilistic multi-hazard frameworks like the one presented here can be invaluable for volcanic hazard assessment worldwide.
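
    The triggering side of such a coupling can be illustrated with a small discrete Bayesian Belief Network sketch that marginalises over tephra availability and rainfall intensity to give a lahar-triggering probability per eruption size. The network structure and every probability below are invented for illustration; they are not the Multihaz conditional probability tables.

```python
# Minimal sketch of a discrete Bayesian Belief Network for lahar triggering:
# eruption size -> tephra availability; (tephra, rainfall) -> large lahar.
import itertools

p_eruption = {"medium": 0.7, "large": 0.3}     # prior over eruption size
p_tephra = {                                   # P(tephra load | eruption size)
    "medium": {"low": 0.6, "high": 0.4},
    "large":  {"low": 0.2, "high": 0.8},
}
p_rain = {"weak": 0.5, "intense": 0.5}         # prior over rainfall intensity
p_lahar = {                                    # P(lahar volume >= threshold | tephra, rain)
    ("low", "weak"): 0.05, ("low", "intense"): 0.25,
    ("high", "weak"): 0.30, ("high", "intense"): 0.75,
}

def p_large_lahar(eruption):
    # Marginalise over tephra load and rainfall for a given eruption size
    return sum(
        p_tephra[eruption][t] * p_rain[r] * p_lahar[(t, r)]
        for t, r in itertools.product(("low", "high"), ("weak", "intense"))
    )

for size in p_eruption:
    print(f"P(large lahar | {size} eruption) = {p_large_lahar(size):.2f}")
```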

    A modular framework for the development of multi-hazard, multi-phase volcanic eruption scenario suites

    Understanding future volcanic eruptions and their potential impact is a critical component of disaster risk reduction, and necessitates the production of salient, robust hazard information for decision-makers and end-users. Volcanic eruptions are inherently multi-phase, multi-hazard events, and the uncertainty and complexity surrounding potential future hazard behaviour are exceedingly hard to communicate to decision-makers. Volcanic eruption scenarios are recognised to be an effective knowledge-sharing mechanism between scientists and practitioners, and recent hybrid scenario suites partially address the limitations of the traditional deterministic scenario approach. Despite advances in scenario suite development, there is still a gap in the international knowledge base concerning the synthesis of multi-phase, multi-hazard volcano science and end-user needs. In this study we present a new modular framework for the development of complex, long-duration, multi-phase, multi-hazard volcanic eruption scenario suites. The framework was developed in collaboration with volcanic risk management agencies and researchers in Aotearoa-New Zealand, and is applied to Taranaki Mounga volcano, an area of high volcanic risk. This collaborative process aimed to meet end-user requirements as well as the need for scientific rigour. This new scenario framework development process could be applied in other volcanic settings to produce robust, credible and relevant scenario suites that are demonstrative of the complex, varying-duration and multi-hazard nature of volcanic eruptions. In addressing this gap, the value of volcanic scenario development is enhanced by advancing multi-hazard assessment capabilities and cross-sector collaboration between scientists and practitioners for disaster risk reduction planning.

    Automating Emulator Construction for Geophysical Hazard Maps
