68 research outputs found
Dynamic Data-Driven Event Reconstruction for Atmospheric Releases
Accidental or terrorist releases of hazardous materials into the atmosphere can impact large populations and cause significant loss of life or property damage. Plume predictions have been shown to be extremely valuable in guiding an effective and timely response. The two greatest sources of uncertainty in the prediction of the consequences of hazardous atmospheric releases result from poorly characterized source terms and lack of knowledge about the state of the atmosphere as reflected in the available meteorological data. We have developed a new event reconstruction methodology that provides probabilistic source term estimates from field measurement data for both accidental and clandestine releases. Accurate plume dispersion prediction requires the following questions to be answered: What was released? When was it released? How much material was released? Where was it released? We have developed a dynamic-data-driven event reconstruction capability that couples data and predictive methods through Bayesian inference to obtain a solution to this inverse problem. The solution consists of a probability distribution of unknown source term parameters. For consequence assessment, we then use this probability distribution to construct a 'composite' forward plume prediction that accounts for the uncertainties in the source term. Since in most cases of practical significance it is impossible to find a closed form solution, Bayesian inference is accomplished by utilizing stochastic sampling methods. This approach takes into consideration both measurement and forward model errors and thus incorporates all the sources of uncertainty in the solution to the inverse problem. Stochastic sampling methods have the additional advantage of being suitable for problems characterized by a non-Gaussian distribution of source term parameters and for cases in which the underlying dynamical system is nonlinear. 
We initially developed a Markov Chain Monte Carlo (MCMC) stochastic methodology and demonstrated its effectiveness by reconstructing a wide range of release scenarios, using synthetic as well as real-world data. Data for evaluation of our event reconstruction capability were drawn from the short-range Prairie Grass, Copenhagen, and Joint Urban 2003 field experiments and a continental-scale real-world accidental release in Algeciras, Spain. The method was tested using a variety of forward models, including the Gaussian puff dispersion model INPUFF, the regional-to-continental-scale Lagrangian dispersion model LODI (the workhorse real-time operational dispersion model used by the National Atmospheric Release Advisory Center), the empirical urban model UDM, and the building-scale computational fluid dynamics code FEM3MP. The robustness of the Bayesian methodology was demonstrated via the use of subsets of the available concentration data and by introducing error into some of the measurements (Fig. 1). These tests showed that the Bayesian approach is capable of providing reliable estimates of source characteristics even in cases of limited or significantly corrupted data. An example of an urban release scenario is shown in Fig. 2. For more effective treatment of strongly time-dependent problems, we developed a Sequential Monte Carlo (SMC) approach. To achieve the best performance under a wide range of conditions, we combined SMC and MCMC sampling into a hybrid methodology. We compared the effectiveness and advantages of this approach relative to MCMC using a set of synthetic data examples. We created a modular, scalable computational framework to accommodate the full set of stochastic methodologies (e.g., MCMC, SMC, hybrid stochastic algorithms, 'Green's function' and 'reciprocal' methods), as well as a selection of key classes of dispersion models.
This design provides a clear separation of stochastic algorithms from predictive models and supports parallelization at both the stochastic-algorithm and individual-model levels. In other words, it supports a parallel stochastic algorithm (e.g., SMC) that invokes parallel forward models. The framework is written in Python and utilizes pyMPI. It invokes forward models either through system calls or as shared objects. Our dynamic-data-driven event reconstruction capability seamlessly integrates observational data streams with predictive models to provide the best possible estimates of unknown source-term parameters, as well as optimal and timely situation analyses consistent with both models and data. This new methodology has been shown to be both flexible and robust, adaptable for use with any atmospheric dispersion model, and suitable for operational emergency response applications.
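The Bayesian sampling loop described above can be sketched compactly. The forward model, priors, and error magnitudes below are illustrative stand-ins (not the NARAC dispersion models or the framework's actual interfaces); the sketch shows only the Metropolis-Hastings structure used to draw a posterior over source-term parameters (location and release rate).

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(x_src, y_src, q, sensors):
    """Illustrative stand-in for a dispersion model (e.g., INPUFF or LODI):
    concentration decays with distance from the source, scaled by release rate."""
    d2 = (sensors[:, 0] - x_src)**2 + (sensors[:, 1] - y_src)**2
    return q * np.exp(-d2 / 200.0)

# Synthetic "measurements" from an assumed true source at (3, -2), rate 5.
sensors = rng.uniform(-10, 10, size=(25, 2))
obs = forward_model(3.0, -2.0, 5.0, sensors) + rng.normal(0, 0.05, 25)

def log_posterior(theta):
    x, y, q = theta
    if not (-10 < x < 10 and -10 < y < 10 and 0 < q < 20):  # uniform prior support
        return -np.inf
    resid = obs - forward_model(x, y, q, sensors)
    return -0.5 * np.sum(resid**2) / 0.05**2  # Gaussian measurement-error likelihood

# Metropolis-Hastings: random-walk proposals, accepted with probability min(1, ratio).
theta = np.array([0.0, 0.0, 1.0])
lp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.2, 3)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[5000:])  # discard burn-in
print(samples.mean(axis=0))  # posterior mean should approach (3, -2, 5)
```

The retained samples approximate the probability distribution of the unknown source-term parameters; a composite forward plume prediction can then be built by running the forward model over these samples and averaging.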
Consequences of the Large-Scale Subsidence Rate on the Stably Stratified Atmospheric Boundary Layer Over the Arctic Ocean, as seen in Large-Eddy Simulations
The analysis of surface heat fluxes and sounding profiles from SHEBA indicated possibly significant effects of subsidence on the structure of stably stratified ABLs (Mirocha et al. 2005). In this study the influence of the large-scale subsidence rate on the stably stratified atmospheric boundary layer (ABL) over the Arctic Ocean during clear-sky winter conditions is investigated using a large-eddy simulation model. Simulations are conducted with subsidence rates of 0, 0.001, and 0.002 m s^-1, and the resulting quasi-equilibrium ABL structure and evolution are examined. Simulations conducted without subsidence yield ABLs that are deeper and more strongly mixed, and that cool much more rapidly, than observed. The addition of a small subsidence rate significantly improves agreement between the simulations and observations with respect to ABL height, potential temperature profiles, and bulk heating rates. Subsidence likewise alters the shapes of the surface-layer flux, stress, and shear profiles, increasing the vertical transport of heat while decreasing vertical momentum transport. A brief discussion of the relevance of these results to the parameterization of the stable ABL under subsiding conditions in large-scale numerical weather and climate prediction models is presented.
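The way a prescribed subsidence rate enters such a simulation can be illustrated with a one-dimensional column: downward motion of magnitude w_s advects warmer air from aloft, adding a warming tendency w_s * dtheta/dz to the potential-temperature equation. The column depth, stratification, and time step below are toy values chosen only to show the mechanism.

```python
import numpy as np

# Illustrative 1D column: subsidence warms a stably stratified layer via
# d(theta)/dt = w_s * d(theta)/dz for a prescribed subsidence magnitude w_s.
nz, dz, dt = 100, 5.0, 1.0           # 500 m column, 1 s step (toy values)
z = np.arange(nz) * dz
theta = 263.0 + 0.01 * z             # stable profile, 10 K/km
w_s = 0.002                          # the stronger subsidence case (m/s)

def step(theta, w_s, dz, dt):
    dthdz = np.gradient(theta, dz)
    return theta + dt * w_s * dthdz  # subsidence warming tendency

theta_1h = theta.copy()
for _ in range(3600):                # integrate one hour
    theta_1h = step(theta_1h, w_s, dz, dt)

# Expected warming after 1 h: w_s * dtheta/dz * 3600 s = 0.002 * 0.01 * 3600 = 0.072 K
print(theta_1h[50] - theta[50])
```

Even this tiny warming tendency, acting continuously, counteracts the radiative cooling of the ABL, consistent with the improved agreement with observations reported above.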
Evolution of a storm-driven cloudy boundary layer in the Arctic
The cloudy boundary layer under stormy conditions in the summertime Arctic has been studied using observations from the SHEBA experiment and large-eddy simulations (LES). On 29 July 1998, a stable Arctic cloudy boundary layer event was observed after the passage of a synoptic low. The local dynamic and thermodynamic structure of the boundary layer was determined from aircraft measurements, including analysis of turbulence, cloud microphysics, and radiative properties. After the upper cloud layer advected over the existing cloud layer, the turbulent kinetic energy (TKE) budget indicated that the cloud layer below 200 m was maintained predominantly by shear production. Observations of longwave radiation showed that cloud-top cooling at the lower cloud top was suppressed by the radiative effects of the upper cloud layer. Our LES results demonstrate the importance of the combination of shear mixing near the surface and radiative cooling at the cloud top in the storm-driven cloudy boundary layer. Once the low-level cloud reaches a certain height, depending on the amount of cloud-top cooling, the two sources of TKE production begin to separate in space under continuous stormy conditions, suggesting one possible mechanism for cloud layering. Sensitivity tests suggest that the storm-driven cloudy boundary layer readily switches to a shear-driven system when upper clouds advect overhead, or to a buoyantly driven system when wind shear is absent. A comparison is made of this storm-driven boundary layer with the buoyantly driven boundary layer previously described in the literature.
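The two TKE sources discussed above are the standard shear and buoyancy production terms of the TKE budget: P = -<u'w'> dU/dz and B = (g/theta0) <w'theta'>. The profile and flux values below are assumed toy numbers for a sheared, weakly stable layer, used only to show how the terms are diagnosed and signed.

```python
import numpy as np

# Diagnose the two TKE production terms from (assumed) profile data:
# shear production  P = -<u'w'> dU/dz
# buoyancy production B = (g / theta0) <w'theta'>
g, theta0 = 9.81, 278.0

def tke_production(uw_flux, wtheta_flux, U, dz):
    shear = -uw_flux * np.gradient(U, dz)
    buoyancy = (g / theta0) * wtheta_flux
    return shear, buoyancy

dz = 10.0
U = np.array([2.0, 4.0, 6.0, 7.0, 7.5])                  # wind speed (m/s)
uw = np.array([-0.05, -0.04, -0.03, -0.02, -0.01])       # momentum flux (m^2/s^2)
wth = np.array([-0.01, -0.008, -0.006, -0.004, -0.002])  # heat flux (K m/s)

shear, buoy = tke_production(uw, wth, U, dz)
print(shear)  # positive: shear generates TKE near the surface
print(buoy)   # negative: stable stratification destroys TKE
```

In the storm-driven case above, shear production near the surface and radiative cooling at cloud top act at different heights, which is why the two TKE sources can separate in space as the cloud deepens.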
Synthetic Event Reconstruction Experiments for Defining Sensor Network Characteristics
An event reconstruction technology system has been designed and implemented at Lawrence Livermore National Laboratory (LLNL). This system integrates sensor observations, which may be sparse and/or conflicting, with transport and dispersion models via Bayesian stochastic sampling methodologies to characterize the sources of atmospheric releases of hazardous materials. We demonstrate the application of this event reconstruction system to designing sensor networks for detecting and responding to atmospheric releases of hazardous materials. The quantitative measure of the reduction in uncertainty, or benefit, of a given network can be used by policy makers to determine its cost/benefit ratio. Here we present two numerical experiments demonstrating the utility of the event reconstruction methodology for sensor network design. In the first set of experiments, only the time resolution of the sensors varies among three candidate networks. The most expensive sensor network offers few advantages over the moderately priced network for reconstructing the release examined here. The second set of experiments explores the significance of the sensors' detection limit, which can have a significant impact on sensor cost. In this experiment, the expensive network most clearly defines the source location and release rate. The other networks provide data insufficient for distinguishing between two possible clusters of source locations. When the reconstructions from all networks are aggregated into a composite plume, a decision-maker can distinguish the utility of the expensive sensor network.
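A quantitative uncertainty-reduction measure of the kind described can be sketched as the spread of the posterior source-parameter samples each candidate network yields. The sample arrays below are synthetic stand-ins for reconstruction output (a broad posterior from a cheap network, a tight one from an expensive network), not results from the LLNL system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior samples of source location (x, y) from two networks:
cheap_net = rng.normal([3.0, -2.0], [2.0, 2.5], size=(5000, 2))      # broad posterior
expensive_net = rng.normal([3.0, -2.0], [0.4, 0.5], size=(5000, 2))  # tight posterior

def posterior_spread(samples):
    """Total posterior standard deviation: one scalar summary per network."""
    return float(np.sqrt(samples.var(axis=0).sum()))

s_cheap = posterior_spread(cheap_net)
s_exp = posterior_spread(expensive_net)
reduction = 1.0 - s_exp / s_cheap
print(f"uncertainty reduction: {reduction:.0%}")  # input to a cost/benefit decision
```

A decision-maker can weigh this reduction against the incremental cost of the more capable network, which is the cost/benefit framing used above.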
Crystal structure of 4-bromo-2-(1H-pyrazol-3-yl)phenol, C9H7BrN2O (vol 232, pg 507, 2017)
C9H7BrN2O, monoclinic, C2/c (no. 15), a = 16.255(3) Å, b = 4.4119(9) Å, c = 25.923(5) Å, β = 107.99(3)°, V = 1768.2(7) Å³, Z = 8, R_gt(F) = 0.0450, wR_ref(F²) = 0.0960, T = 150 K
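The reported cell volume is consistent with the lattice parameters: for a monoclinic cell, V = a·b·c·sin(β), which can be checked directly.

```python
import math

# Consistency check of the reported monoclinic unit-cell volume: V = a*b*c*sin(beta).
a, b, c = 16.255, 4.4119, 25.923   # cell lengths (angstroms)
beta = math.radians(107.99)        # monoclinic angle
V = a * b * c * math.sin(beta)
print(round(V, 1))                 # reported value: 1768.2 A^3
```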
Source Inversion for contaminant plume dispersion in urban environments using building-resolving simulations
Flow in urban environments is complicated by the presence of buildings, which divert the flow into often unexpected directions. Contaminants released at ground level are easily lofted above tall (~100 m) buildings and channeled through urban canyons that are perpendicular to the wind direction (see, e.g., IOP 9 in Chan, 2005). The path of wind and scalars in urban environments is difficult to predict even with building-resolving computational fluid dynamics codes, due to uncertainty in the synoptic wind and boundary conditions and other model errors. Given these complexities of urban flow, solving an inverse problem becomes quite challenging: given measurements of concentration at sensors scattered throughout a city, is it possible to detect the source of the contaminant? The ability to locate a source and determine its characteristics in a complex environment is necessary for emergency response to accidental or intentional releases of contaminants in densely populated urban areas. The goal of this work is to demonstrate a robust statistical inversion procedure that performs well even under the complex flow conditions and uncertainty present in urban environments. Much previous work has focused on direct inversion procedures, where an inverse solution is obtained using an adjoint advection-diffusion equation. Exact direct inversion approaches are strictly limited to processes governed by linear equations and further assume the system is steady state (Enting, 2002). In addition to adjoint models, optimization techniques are also employed to obtain solutions to inverse problems. These techniques often give only a single best answer, or assume a Gaussian distribution to account for uncertainties.
General dispersion-related inverse problems, however, often include nonlinear processes (e.g., dispersion of chemically reacting substances) or are characterized by non-Gaussian probability distributions (Bennett, 2002). Traditional methods also have particular weaknesses for sparse, poorly constrained data problems, as well as for high-volume, potentially over-constrained, and diverse data streams. We have developed a more general and powerful inverse methodology based on Bayesian inference coupled with stochastic sampling. Bayesian methods reformulate the inverse problem into a solution based on efficient sampling of an ensemble of predictive simulations, guided by statistical comparisons with observed data. Predicted values from simulations are used to estimate the likelihoods of available measurements; these likelihoods in turn are used to improve the estimates of the unknown input parameters. Bayesian methods impose no restrictions on the types of models or data that can be used. Thus, highly nonlinear systems and disparate types of concentration, meteorological, and other data can be simultaneously incorporated into an analysis. In this work we have implemented stochastic models based on Markov Chain Monte Carlo sampling for use with a high-resolution building-resolving computational fluid dynamics code, FEM3MP. The inversion procedure is first applied to flow around an isolated building (a cube) and then to flow in Oklahoma City (OKC) using data collected during the Joint URBAN 2003 field experiment (Allwine, 2004). While we consider steady-state flows in this first demonstration, the approach is entirely general and is also capable of dealing with unsteady, nonlinear governing equations.
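The likelihood-evaluation step described above can be sketched under the common assumption of independent Gaussian errors, with measurement and forward-model error variances combined in quadrature. The error model and numbers below are illustrative, not the values used with FEM3MP.

```python
import numpy as np

# Sketch of a likelihood evaluation: compare model-predicted sensor
# concentrations to observations, with independent Gaussian measurement and
# forward-model errors combined in quadrature (an assumed error model).
def log_likelihood(predicted, observed, sigma_meas, sigma_model):
    sigma2 = sigma_meas**2 + sigma_model**2
    resid = observed - predicted
    return -0.5 * np.sum(resid**2 / sigma2 + np.log(2 * np.pi * sigma2))

pred = np.array([1.2, 0.8, 0.1, 0.0])   # forward-model concentrations (toy)
obs = np.array([1.0, 0.9, 0.2, 0.05])   # sensor readings (toy)
print(log_likelihood(pred, obs, sigma_meas=0.1, sigma_model=0.1))
```

Candidate source parameters whose predictions fit the observations score a higher likelihood, which is how the sampler is steered toward probable sources without any linearity or Gaussianity restriction on the forward model itself.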
Convectively Induced Secondary Circulations in Fine-Grid Mesoscale Numerical Weather Prediction Models
Mesoscale numerical weather prediction models using fine-grid [O(1) km] meshes for weather forecasting, environmental assessment, and other applications capture aspects of larger-than-grid-mesh-size, convectively induced secondary circulations (CISCs) such as cells and rolls that occur in the convective planetary boundary layer (PBL). However, 1-km grid spacing is too large for simulating the interaction of CISCs with smaller-scale turbulence. The existence of CISCs also violates the neglect of horizontal gradients of turbulent quantities in current PBL schemes. Both aspects—poorly resolved CISCs and a violation of the assumptions behind PBL schemes—are examples of what occurs in Wyngaard’s “terra incognita,” where horizontal grid spacing is comparable to the scale of the simulated motions. Thus, model CISCs (M-CISCs) cannot be simulated reliably. This paper describes how the superadiabatic layer in the lower convective PBL, together with increased horizontal resolution, allows the critical Rayleigh number to be exceeded and thus allows generation of M-CISCs like those in nature; and how the M-CISCs eventually neutralize the virtual temperature stratification, lowering the Rayleigh number and stopping their growth. Two options for removing M-CISCs while retaining their fluxes are 1) introducing nonlocal closure schemes for more effective removal of heat from the surface and 2) restricting the effective Rayleigh number to remain subcritical. It is demonstrated that CISCs are correctly handled by large-eddy simulation (LES), which thus may provide a way to improve the representation of CISCs or their effects. For some applications, it may suffice to allow M-CISCs to develop but account for their shortcomings during interpretation.
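The Rayleigh-number argument above can be made concrete with a back-of-envelope check: convection can grow when Ra = g·Δθ·h³ / (θ0·ν_t·κ_t) exceeds a critical value (about 657.5 for free-free boundaries). The layer depth, temperature excess, and eddy coefficients below are assumed illustrative values, not the paper's; the point is only that raising the effective eddy diffusivity (option 2 above) drives Ra subcritical.

```python
# Illustrative effective-Rayleigh-number check for the superadiabatic layer.
# All inputs are assumed values of the kind a PBL scheme would supply.
def rayleigh(dtheta, h, nu_t, kappa_t, g=9.81, theta0=300.0):
    return g * dtheta * h**3 / (theta0 * nu_t * kappa_t)

Ra_crit = 657.5  # critical value for free-free boundaries
Ra_low_K = rayleigh(dtheta=1.0, h=200.0, nu_t=10.0, kappa_t=10.0)   # weak mixing
Ra_high_K = rayleigh(dtheta=1.0, h=200.0, nu_t=40.0, kappa_t=40.0)  # strong mixing
# With weak eddy mixing Ra is supercritical and M-CISCs can grow; quadrupling
# the eddy coefficients makes Ra subcritical and suppresses them.
print(Ra_low_K > Ra_crit, Ra_high_K > Ra_crit)
```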
Adaptive Urban Dispersion Integrated Model
Numerical simulations represent a unique predictive tool for understanding the three-dimensional flow fields and associated concentration distributions from contaminant releases in complex urban settings (Britter and Hanna 2003). Utilization of the most accurate urban models, based on fully three-dimensional computational fluid dynamics (CFD) that solve the Navier-Stokes equations with incorporated turbulence models, presents many challenges. We address two in this work: first, a fast but accurate way to incorporate the complex urban terrain, buildings, and other structures to enforce proper boundary conditions in the flow solution; second, ways to achieve a level of computational efficiency that allows the models to be run in an automated fashion, so that they may be used for emergency response and event reconstruction applications. We have developed a new integrated urban dispersion modeling capability based on FEM3MP (Gresho and Chan 1998, Chan and Stevens 2000), a CFD model from Lawrence Livermore National Laboratory. The integrated capability incorporates fast embedded-boundary mesh generation for geometrically complex problems and fully three-dimensional Cartesian adaptive mesh refinement (AMR). Parallel AMR and embedded-boundary gridding support are provided through the SAMRAI library (Wissink et al. 2001, Hornung and Kohn 2002). Embedded-boundary mesh generation has been demonstrated to be an automatic, fast, and efficient approach to problem setup. It has been used for a variety of geometrically complex applications, including urban applications (Pullen et al. 2005). The key technology we introduce in this work is AMR, which allows high-resolution modeling to be focused on important features such as individual buildings and high-resolution terrain (including important vegetative and land-use features). It also allows the urban-scale model to be readily interfaced with coarser-resolution meso- or regional-scale models.
This talk will discuss details of the approach and present results for example calculations performed in Manhattan in support of the DHS Urban Dispersion Program (UDP), using some of the tools developed as part of this new capability.
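The idea of focusing resolution on important features can be illustrated with a simple gradient-based refinement criterion of the kind AMR frameworks drive: flag cells where the field varies steeply (a plume edge, a building wake) for refinement. The criterion, threshold, and toy field below are stand-ins, not the SAMRAI/FEM3MP implementation.

```python
import numpy as np

# Illustrative AMR refinement tagging: flag cells where the concentration
# gradient is steep, so only those regions receive a finer mesh.
def flag_for_refinement(field, dx, threshold):
    gx, gy = np.gradient(field, dx)
    return np.hypot(gx, gy) > threshold

x = np.linspace(0, 1, 64)
X, Y = np.meshgrid(x, x)
plume = np.exp(-((X - 0.3)**2 + (Y - 0.5)**2) / 0.005)  # sharp toy plume
flags = flag_for_refinement(plume, x[1] - x[0], threshold=2.0)
print(flags.sum(), "of", flags.size, "cells flagged")
```

Only the flagged subset of cells is refined, which is how AMR concentrates computational effort on buildings, terrain, and plume fronts while keeping the overall cost low enough for automated emergency-response use.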
Event Reconstruction for Atmospheric Releases Employing Urban Puff Model UDM with Stochastic Inversion Methodology
The rapid identification of contaminant plume sources and their characteristics in urban environments can greatly enhance emergency response efforts. Source identification based on downwind concentration measurements is complicated by the presence of building obstacles that can cause flow diversion and entrainment. While high-resolution computational fluid dynamics (CFD) simulations are available for predicting plume evolution in complex urban geometries, such simulations require large computational effort. We make use of an urban puff model, the Defence Science and Technology Laboratory's (Dstl) Urban Dispersion Model (UDM), which employs empirically based puff-splitting techniques. UDM enables rapid urban dispersion simulations by combining traditional Gaussian puff modeling with empirically deduced mixing and entrainment approximations. Here we demonstrate the preliminary reconstruction of an atmospheric release event from point measurements of concentration, using stochastic sampling algorithms and Bayesian inference together with the rapid UDM urban puff model. We consider source inversions both for a prototype isolated building and for observations and flow conditions taken during the Joint URBAN 2003 field campaign at Oklahoma City. The Markov Chain Monte Carlo (MCMC) stochastic sampling method is used to determine likely source term parameters and considers both measurement and forward model errors. The stochastic methodology is general and can be used for time-varying release rates and flow conditions as well as nonlinear dispersion problems. The results of inversion indicate the probability of a source being at a particular location with a particular release rate. Uncertainty in the observed data, or a lack of sufficient data, is inherently reflected in the shape and size of the probability distribution of the source term parameters.
Although developed and used independently, source inversions with both UDM and a finite-element CFD code can be complementary in determining the proper emergency response to an urban release: ideally, the urban puff model is used to approximate the source location and strength, and the more accurate CFD model is then used to refine the solution.
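The basic building block that puff models such as UDM extend with empirical splitting and entrainment is the single Gaussian puff, whose concentration field follows from the puff center, spreads, and released mass. The spread values below are assumed for illustration; a real puff model grows them with travel time and, in UDM's case, modifies them empirically for buildings.

```python
import numpy as np

# Concentration from a single Gaussian puff of mass Q centered at (xc, yc, zc)
# with spreads (sx, sy, sz):
#   C = Q / ((2*pi)^{3/2} sx sy sz) * exp(-sum((r - rc)^2 / (2 sigma^2)))
def puff_concentration(x, y, z, center, sigmas, Q):
    (xc, yc, zc), (sx, sy, sz) = center, sigmas
    norm = Q / ((2 * np.pi)**1.5 * sx * sy * sz)
    return norm * np.exp(-((x - xc)**2 / (2 * sx**2)
                           + (y - yc)**2 / (2 * sy**2)
                           + (z - zc)**2 / (2 * sz**2)))

# 1 kg instantaneous release; puff advected to (100, 0, 10) m with 20 m spreads (assumed).
c_peak = puff_concentration(100, 0, 10, (100, 0, 10), (20, 20, 20), Q=1.0)
c_off = puff_concentration(160, 0, 10, (100, 0, 10), (20, 20, 20), Q=1.0)
print(c_peak, c_off)  # concentration falls off away from the puff center
```

Because each puff evaluation is this cheap, thousands of forward runs per stochastic inversion become feasible, which is what makes UDM attractive as the forward model in the MCMC procedure described above.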
Simulating atmospheric flow for wind energy applications with WRF-LES
Forecasts of available wind energy resources at high spatial resolution enable users to site wind turbines in optimal locations, to forecast available resources for integration into power grids, to schedule maintenance on wind energy facilities, and to define design criteria for next-generation turbines. This array of research needs implies that an appropriate forecasting tool must be able to account for mesoscale processes like frontal passages, surface-atmosphere interactions inducing local-scale circulations, and the microscale effects of atmospheric stability such as breaking Kelvin-Helmholtz billows. This range of scales and processes demands a mesoscale model with large-eddy simulation (LES) capabilities that can also account for varying atmospheric stability. Numerical weather prediction models, such as the Weather Research and Forecasting model (WRF), excel at predicting synoptic and mesoscale phenomena. With grid spacings of less than 1 km (as is often required for wind energy applications), however, the limits of WRF's subfilter-scale (SFS) turbulence parameterizations are exposed, and fundamental problems arise in modeling the scales of motion between those that LES can represent and those for which large-scale PBL parameterizations apply. To address these issues, we have implemented significant modifications to the ARW core of WRF, including the Nonlinear Backscatter model with Anisotropy (NBA) SFS model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the resolvable subfilter-scale (RSFS) stresses (following Chow et al., 2005). We are also modifying WRF's terrain-following coordinate system by implementing an immersed boundary method (IBM) to account for the effects of complex terrain. Companion papers present idealized simulations with NBA-RSFS-WRF (Mirocha et al.) and IBM-WRF (K. A. Lundquist et al.).
Observations of flow through the Altamont Pass (Northern California) wind farm are available for validation of the WRF modeling tool for wind energy applications. In this presentation, we use these data to evaluate simulations using the NBA-RSFS-WRF tool in multiple configurations. We vary nesting configurations, levels of RSFS reconstruction, and SFS turbulence models (the new NBA turbulence model versus existing WRF SFS turbulence models) to illustrate the capabilities of the modeling tool and to prioritize recommendations for operational use. Nested simulations that capture both significant mesoscale processes and local-scale stable-boundary-layer effects are required to effectively predict available wind resources at turbine height.
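The explicit filtering and reconstruction idea can be illustrated in one dimension: a discrete filter damps resolved motions near the grid scale, and an approximate inverse-filtering (deconvolution) step recovers part of that damped, resolvable subfilter-scale content. The 3-point top-hat filter and van Cittert iteration below are simplified stand-ins for the scheme of Chow et al. (2005), not the WRF implementation.

```python
import numpy as np

# 1D top-hat explicit filter on a periodic grid.
def tophat_filter(u):
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

# Approximate inverse filtering (van Cittert iteration): recover part of the
# resolvable subfilter-scale (RSFS) motion damped by the filter.
def reconstruct(u_filtered, iterations=3):
    u = u_filtered.copy()
    for _ in range(iterations):
        u = u + (u_filtered - tophat_filter(u))
    return u

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(4 * x)             # a resolved-scale wave
u_bar = tophat_filter(u)      # filtering damps its amplitude
u_rec = reconstruct(u_bar)    # reconstruction recovers most of it
print(np.abs(u_bar).max(), np.abs(u_rec).max())
```

In the full scheme, the reconstructed velocities feed the RSFS stress computation, while motions below the filter scale entirely are left to the SFS turbulence model (e.g., NBA).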