
    Doctor of Philosophy

    dissertation: The detonation of hundreds of explosive devices in a transportation or storage accident is an extremely dangerous event. Motivation for this work came from a transportation accident in which a truck carrying 16,000 kg of seismic boosters overturned, caught fire, and detonated. The damage was catastrophic, creating a crater 24 m wide by 10 m deep in the middle of the highway. Our particular interest is in understanding the fundamental physical mechanisms by which convective deflagration of cylindrical PBX-9501 devices can transition to a fully developed detonation in transportation and storage accidents. Predictive computer simulations of large-scale deflagrations and detonations depend on the availability of robust reaction models embedded in a computational framework capable of running on massively parallel computer architectures. Our research group has been developing such models in the Uintah Computational Framework, which is capable of scaling up to 512K cores. The current Deflagration to Detonation Transition (DDT) model merges a combustion model from Ward, Son, and Brewster that captures the effects of pressure and initial temperature on the burn rate, a criteria model from Berghout et al. for burning in the cracks of damaged explosives, and a detonation model from Souers describing fully developed detonation. The prior extensive validation against experimental tests was extended to a wide range of temporal and spatial scales. We modified the reactant equation of state, enabling predictions of combustion, explosions, and detonations over a range of pressures spanning five orders of magnitude. A resolution dependence was eliminated from the reaction model, allowing large-scale simulations to be run at a resolution of 2 mm without loss of fidelity. Adjustments were also made to slow the flame propagation of conductive and convective deflagration.
Large two- and three-dimensional simulations revealed two dominant mechanisms for the initiation of a DDT: inertial confinement and Impact to Detonation Transition. Understanding these mechanisms led to identifying ways to package and store explosive devices that reduce the probability of a detonation. We determined that the arrangement of the explosive cylinders and the number of devices packed in a box greatly affect the propensity to transition to a detonation.
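The combustion model cited in the abstract captures the effects of pressure and initial temperature on the burn rate. As a rough illustration only, that kind of dependence can be sketched with a Vieille's-law-style power law plus a simple exponential temperature correction. The functional form is generic and every coefficient below (`a`, `n`, `sigma`) is a placeholder chosen for demonstration, not a fitted parameter of the Ward-Son-Brewster model:

```python
import math

def burn_rate(pressure_pa: float, temp_k: float = 298.0,
              a: float = 3.0e-9, n: float = 0.9,
              sigma: float = 0.002, t_ref: float = 298.0) -> float:
    """Illustrative burn rate (m/s): r = a * P**n * exp(sigma * (T - T_ref)).

    All coefficients are placeholder values for demonstration; they are
    NOT the calibrated parameters of the DDT model described above.
    """
    return a * pressure_pa ** n * math.exp(sigma * (temp_k - t_ref))
```

The qualitative behavior this captures is that burn rate grows with both pressure and initial temperature, which is the feedback that drives convective burning toward detonation in confined geometries.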

    Multiscale modeling of accidental explosions and detonations

    pre-print: Accidental explosions are exceptionally dangerous and costly, both in lives and money. Regarding worldwide conflict involving small arms and light weapons, the Small Arms Survey has recorded more than 297 accidental explosions in munitions depots across the world in the past decade alone, resulting in thousands of deaths and billions of dollars in damage [1]. As the recent fertilizer plant explosion that killed 15 people in the town of West, Texas demonstrates, accidental explosions are not limited to military operations. Transportation accidents also pose risks, as illustrated by the occasional train derailment and explosion on the nightly news, or the semi-truck explosion detailed in the following section. Unlike other industrial accident scenarios, explosions can easily affect the general public; a dramatic example is the Pacific Engineering and Production Company of Nevada (PEPCON) plant disaster in 1988, in which windows were shattered, doors were blown off their hinges, and flying glass and debris caused injuries up to 10 miles away.

    Doctor of Philosophy

    dissertation: Solutions to Partial Differential Equations (PDEs) are often computed by discretizing the domain into a collection of computational elements referred to as a mesh. This solution is an approximation with an error that decreases as the mesh spacing decreases. However, decreasing the mesh spacing also increases the computational requirements. Adaptive mesh refinement (AMR) attempts to reduce the error while limiting the increase in computational requirements by refining the mesh locally in regions of the domain that have large error, while maintaining a coarse mesh in other portions of the domain. This approach often provides a solution that is as accurate as that obtained from a much larger fixed-mesh simulation, thus saving on both computational time and memory. Historically, however, these AMR operations often limit the overall scalability of the application. Adapting the mesh at runtime necessitates scalable regridding and load balancing algorithms. This dissertation analyzes the performance bottlenecks of a widely used regridding algorithm and presents two new algorithms which exhibit ideal scalability. In addition, a scalable space-filling curve generation algorithm for dynamic load balancing is also presented. The performance of these algorithms is analyzed by determining their theoretical complexity, deriving performance models, and comparing the observed performance to those models. The models are then used to predict performance on larger numbers of processors. This analysis demonstrates the necessity of these algorithms at larger numbers of processors. This dissertation also investigates methods to more accurately predict workloads based on measurements taken at runtime. While the methods used are not new, their application to the load balancing process is. These methods are shown to be highly accurate, predicting the workload to within 3% error.
By improving the accuracy of these estimates, the load imbalance of the simulation can be reduced, thereby increasing the overall performance.
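The space-filling-curve load balancing described above can be sketched in a few lines: compute a Morton (Z-order) key for each mesh patch by interleaving the bits of its grid coordinates, sort the patches along the curve so that spatially nearby patches end up adjacent, then cut the ordered list into contiguous, roughly equal-weight chunks, one per processor. This is a minimal 2D illustration under assumed interfaces, not the dissertation's algorithm; the function names and the greedy splitting rule are inventions for the example:

```python
def morton_key(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of (x, y) to produce a Z-order (Morton) key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # x bits -> even positions
        key |= ((y >> i) & 1) << (2 * i + 1)   # y bits -> odd positions
    return key

def balance(patches, weights, num_ranks):
    """Assign each patch a rank: sort patches along the Morton curve,
    then split the curve into contiguous chunks of roughly equal weight."""
    order = sorted(range(len(patches)), key=lambda i: morton_key(*patches[i]))
    target = sum(weights) / num_ranks
    assignment, rank, acc = [0] * len(patches), 0, 0.0
    for i in order:
        if acc >= target and rank < num_ranks - 1:
            rank, acc = rank + 1, 0.0          # start filling the next rank
        assignment[i] = rank
        acc += weights[i]
    return assignment
```

Because the curve preserves spatial locality, contiguous chunks along it tend to be spatially compact, which keeps inter-processor communication low; the weights can come from the runtime workload measurements the abstract describes.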

    Doctor of Philosophy

    dissertation: Recent trends in high performance computing present larger and more diverse computers using multicore nodes, possibly with accelerators and/or coprocessors, and reduced memory. These changes pose formidable challenges for application code to attain scalability. Software frameworks that execute machine-independent application code using a runtime system that shields users from architectural complexities offer a portable solution for easy programming. The Uintah framework, for example, solves a broad class of large-scale problems on structured adaptive grids using fluid-flow solvers coupled with particle-based solids methods. However, the original Uintah code had limited scalability, as tasks were run in a predefined order based solely on static analysis of the task graph, and used only the Message Passing Interface (MPI) for parallelism. By using a new hybrid multithreaded and MPI runtime system, this research has made it possible for Uintah to scale to 700K central processing unit (CPU) cores when solving challenging fluid-structure interaction problems. Those problems often involve moving objects with adaptive mesh refinement and thus have highly variable and unpredictable work patterns. This research has also demonstrated the ability to run capability jobs on heterogeneous systems with Nvidia graphics processing unit (GPU) accelerators or Intel Xeon Phi coprocessors. The new runtime system for Uintah executes directed acyclic graphs of computational tasks with a scalable, asynchronous, and dynamic runtime system for multicore CPUs and/or accelerators/coprocessors on a node. Uintah's clear separation between application and runtime code has led to scalability increases without significant changes to application code. This research concludes that the adaptive directed acyclic graph (DAG)-based approach provides a very powerful abstraction for solving challenging multiscale multiphysics engineering problems.
Excellent scalability with respect to both processor count and communication performance is achieved on some of the largest and most powerful computers available today.
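The DAG-of-tasks execution model described above can be illustrated with a minimal topological scheduler: a task becomes runnable only once all of its prerequisites have completed. This sketch is serial and purely illustrative; Uintah's actual runtime executes tasks asynchronously across threads, MPI ranks, and accelerators, and the task names here are invented for the example:

```python
from collections import defaultdict, deque

def run_task_graph(tasks, deps):
    """tasks: name -> callable; deps: name -> list of prerequisite names.
    Runs each task after all its prerequisites finish; returns the order."""
    remaining = {t: len(deps.get(t, [])) for t in tasks}  # unmet prereq counts
    dependents = defaultdict(list)
    for t, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(t)
    ready = deque(t for t, n in remaining.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                       # execute the task itself
        order.append(t)
        for d in dependents[t]:          # release tasks waiting on t
            remaining[d] -= 1
            if remaining[d] == 0:
                ready.append(d)
    if len(order) != len(tasks):
        raise ValueError("cycle detected in task graph")
    return order
```

The key point the abstract makes is that with this abstraction the scheduler, not the application, decides execution order at runtime, so out-of-order and asynchronous execution come for free once dependencies are expressed as a DAG.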

    Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1


    High Performance Computing Facility Operational Assessment, 2012 Oak Ridge Leadership Computing Facility


    ISCR Annual Report: Fiscal Year 2004
