
    Performance analysis integration in the Uintah software development cycle

    Manuscript: The increasing complexity of high-performance computing environments and programming methodologies presents challenges for empirical performance evaluation. Evolving parallel and distributed systems require performance technology that can be flexibly configured to observe different events and the associated performance data of interest. It must also be possible to integrate performance evaluation techniques with the programming paradigms and software engineering methods; this is particularly important for tracking performance on parallel software projects involving many code teams over many stages of development. This paper describes the integration of the TAU and XPARE tools in the Uintah Computational Framework (UCF). Discussed are the use of performance mapping techniques to associate low-level performance data with higher levels of abstraction in UCF, and the use of performance regression testing to provide a historical portfolio of the evolution of application performance. A scalability study shows the benefits of integrating performance technology in building large-scale parallel applications.
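
    As a hedged illustration of the two ideas in this abstract -- mapping low-level performance data up to framework-level abstractions, and regression-testing that data against a historical baseline -- the C++ sketch below aggregates raw timer samples into named phases and flags slowdowns. All names (Sample, mapToPhases, checkRegression, the event strings) are hypothetical; this is not the TAU or XPARE API.

        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        struct Sample { std::string lowLevelEvent; double seconds; };

        // Map low-level events (e.g., MPI calls, inner loops) to framework-level phases.
        std::map<std::string, std::string> phaseOf = {
            {"MPI_Send",        "Communication"},
            {"MPI_Recv",        "Communication"},
            {"interpolateLoop", "ParticleInterpolation"},
        };

        // Aggregate raw timer samples up to the abstraction level of the framework.
        std::map<std::string, double> mapToPhases(const std::vector<Sample>& raw) {
            std::map<std::string, double> perPhase;
            for (const auto& s : raw)
                perPhase[phaseOf.count(s.lowLevelEvent)
                             ? phaseOf[s.lowLevelEvent] : "Other"] += s.seconds;
            return perPhase;
        }

        // Flag phases that slowed down more than `tolerance` versus the stored baseline.
        void checkRegression(const std::map<std::string, double>& baseline,
                             const std::map<std::string, double>& current,
                             double tolerance = 0.10) {
            for (const auto& [phase, t] : current) {
                auto it = baseline.find(phase);
                if (it != baseline.end() && t > it->second * (1.0 + tolerance))
                    std::cout << "ALERT: " << phase << " slowed from "
                              << it->second << "s to " << t << "s\n";
            }
        }

        int main() {
            std::vector<Sample> run = {{"MPI_Send", 2.4}, {"interpolateLoop", 7.9}};
            checkRegression({{"Communication", 2.0}, {"ParticleInterpolation", 6.0}},
                            mapToPhases(run));
        }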

    Uintah: a massively parallel problem solving environment

    Journal Article: This paper describes Uintah, a component-based visual problem solving environment (PSE) that is designed specifically to address the unique problems of massively parallel computation on terascale computing platforms. Uintah supports the entire life cycle of scientific applications by allowing scientific programmers to quickly and easily develop new techniques, debug new implementations, and apply known algorithms to solve novel problems. Uintah is built on three principles: 1) as much as possible, the complexities of parallel execution should be handled for the scientist, 2) software should be reusable at the component level, and 3) scientists should be able to dynamically steer and visualize their simulation results as the simulation executes. To provide this functionality, Uintah builds upon the best features of the SCIRun PSE and the DOE Common Component Architecture (CCA).
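
    To make the component-reuse principle concrete, here is a minimal C++ sketch of a CCA-style provides/uses port registry, in which application code calls an interface by name rather than linking against a specific implementation. The class names (SolverPort, CGSolver, Framework) are invented for illustration and are not the SCIRun or CCA interfaces.

        #include <iostream>
        #include <map>
        #include <memory>
        #include <string>

        // Abstract "port": the unit of reuse in a CCA-style component design.
        struct SolverPort {
            virtual ~SolverPort() = default;
            virtual void solve() = 0;
        };

        struct CGSolver : SolverPort {
            void solve() override { std::cout << "conjugate-gradient solve\n"; }
        };

        // Toy framework: components register provided ports by name; other
        // components look them up instead of depending on each other directly.
        class Framework {
            std::map<std::string, std::shared_ptr<SolverPort>> ports_;
        public:
            void provide(const std::string& name, std::shared_ptr<SolverPort> p) {
                ports_[name] = std::move(p);
            }
            std::shared_ptr<SolverPort> use(const std::string& name) {
                return ports_.at(name);
            }
        };

        int main() {
            Framework fw;
            fw.provide("solver", std::make_shared<CGSolver>());
            fw.use("solver")->solve();  // caller is decoupled from the implementation
        }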

    A survey of high level frameworks in block-structured adaptive mesh refinement packages

    Pre-print: Over the last decade, block-structured adaptive mesh refinement (SAMR) has found increasing use in large, publicly available codes and frameworks. SAMR frameworks have evolved along different paths. Some have stayed focused on specific domain areas; others have pursued a more general functionality, providing the building blocks for a larger variety of applications. In this survey paper, we examine a representative set of SAMR packages and SAMR-based codes that have been in existence for half a decade or more, have a reasonably sized and active user base outside of their home institutions, and are publicly available. The set consists of a mix of SAMR packages and application codes that cover a broad range of scientific domains. We look at their high-level frameworks, their design trade-offs, and their approach to dealing with the advent of radical changes in hardware architecture. The codes included in this survey are BoxLib, Cactus, Chombo, Enzo, FLASH, and Uintah.
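
    The surveyed packages differ in many details, but all are built around the same core structure: a hierarchy of refinement levels, each holding a set of logically rectangular boxes. A minimal C++ sketch of that shared data structure follows; the field names and 2D layout are illustrative and not taken from any one package.

        #include <iostream>
        #include <vector>

        struct Box { int lo[2], hi[2]; };  // a logically rectangular patch (2D here)

        struct Level {
            int refinementRatio;           // ratio to the next-coarser level
            std::vector<Box> boxes;        // disjoint patches covering the refined region
        };

        int main() {
            std::vector<Level> hierarchy = {
                {1, {{{0, 0}, {63, 63}}}},   // coarse level: one 64x64 box
                {2, {{{32, 32}, {95, 95}}}}, // fine level: refined around a feature
            };
            for (std::size_t l = 0; l < hierarchy.size(); ++l)
                std::cout << "level " << l << ": " << hierarchy[l].boxes.size()
                          << " box(es), refinement ratio "
                          << hierarchy[l].refinementRatio << "\n";
        }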

    Master of Science

    Thesis: Recent advancements in High Performance Computing (HPC) infrastructure, with traditional computing systems augmented by accelerators like graphics processing units (GPUs) and coprocessors like the Intel Xeon Phi, have successfully enabled predictive simulations, specifically Computational Fluid Dynamics (CFD), with more accuracy and speed. One of the most significant challenges in high-performance computing is to provide a software framework that can scale efficiently and minimize code rewriting to support diverse hardware configurations. Algorithms and framework support have been developed to deal with these complexities and to provide abstractions that make a task compatible with various hardware targets. The software is written in C++ and represented as a Directed Acyclic Graph (DAG) whose nodes implement the actual mathematical calculations. This thesis presents an improved approach for scheduling and executing computational tasks within a heterogeneous CPU-GPU computing system while insulating application developers from the inherent complexity of parallelism. The details are presented within a context that facilitates the solution of partial differential equations on large clusters using graph theory.
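
    A hedged sketch of the scheduling idea described above: tasks form a DAG, and a scheduler (here, Kahn's topological algorithm) releases each task once its dependencies complete, dispatching it to a CPU or GPU path. The types and task names below are hypothetical, not Uintah's actual task interface.

        #include <functional>
        #include <iostream>
        #include <queue>
        #include <string>
        #include <vector>

        struct Task {
            std::string name;
            bool runsOnGPU;              // where this task prefers to execute
            std::vector<int> dependsOn;  // indices of prerequisite tasks
            std::function<void()> work;
        };

        // Execute tasks in dependency order, dispatching each by target device.
        void runDAG(std::vector<Task>& tasks) {
            std::vector<int> remaining(tasks.size());
            std::vector<std::vector<int>> dependents(tasks.size());
            std::queue<int> ready;
            for (std::size_t i = 0; i < tasks.size(); ++i) {
                remaining[i] = static_cast<int>(tasks[i].dependsOn.size());
                for (int d : tasks[i].dependsOn)
                    dependents[d].push_back(static_cast<int>(i));
                if (remaining[i] == 0) ready.push(static_cast<int>(i));
            }
            while (!ready.empty()) {
                int i = ready.front(); ready.pop();
                std::cout << (tasks[i].runsOnGPU ? "[GPU] " : "[CPU] ")
                          << tasks[i].name << "\n";
                tasks[i].work();
                for (int j : dependents[i])        // release tasks whose deps are done
                    if (--remaining[j] == 0) ready.push(j);
            }
        }

        int main() {
            std::vector<Task> tasks = {
                {"computeFlux",   true,  {},     []{ /* device kernel would launch here */ }},
                {"exchangeHalos", false, {0},    []{ /* MPI communication */ }},
                {"timeAdvance",   true,  {0, 1}, []{ /* device kernel */ }},
            };
            runDAG(tasks);
        }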

    Data integration for materials research


    Doctor of Philosophy

    The processes of making biodiesel from algae include the following essential steps: the growth of algae in a photobioreactor, lipid extraction to harvest the biocrude, and transesterification to turn the biocrude into biodiesel. The objective of this research…

    Using the generalized interpolation material point method for fluid-solid interactions induced by surface tension

    This thesis is devoted to the development of new, Generalized Interpolation Material Point Method (GIMP)-based algorithms for handling surface tension and contact (wetting) in fluid-solid interaction (FSI) problems at small scales. In these problems, surface tension becomes so dominant that its influence on both fluids and solids must be considered. Since analytical solutions for most engineering problems are usually unavailable, numerical methods are needed to describe and predict the complicated time-dependent states of the solid and fluid involved due to surface tension effects. Traditional computational methods for handling fluid-solid interactions may not be effective because of their weakness in solving large-deformation problems and the complicated coupling of two different types of computational frameworks: one for the solid, and the other for the fluid. In contrast, GIMP, a mesh-free algorithm for solid mechanics problems, is numerically effective in handling problems involving large deformations and fracture. Here we extend the capability of GIMP to handle fluid dynamics problems with surface tension, and develop a new contact algorithm to deal with the wetting boundary conditions, including the modeling of the contact angle and slip near the triple points where the three phases -- fluid, solid, and vapor -- meet. The error of the new GIMP algorithm for FSI problems at small scales, as verified by various benchmark problems, generally falls within the 5% range. In this thesis, we have successfully extended the capability of GIMP for handling FSI problems under surface tension in a one-solver numerical framework, a unique and innovative approach.
    Contents: Chapter 1. Introduction -- Chapter 2. Using the generalized interpolation material point method for fluid dynamics at low Reynolds numbers -- Chapter 3. On the modeling of surface tension and its applications by the generalized interpolation material point method -- Chapter 4. Using the generalized interpolation material point method for fluid-solid interactions induced by surface tension -- Chapter 5. Conclusions
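
    For readers unfamiliar with material point methods, the following simplified 1D C++ sketch shows the particle-to-grid projection that underlies MPM/GIMP: particle mass and momentum are spread to grid nodes through shape functions, and nodal velocities are recovered by dividing momentum by mass. Linear hat functions are used for brevity; the thesis's GIMP weighting, surface-tension forces, and contact algorithm are not shown.

        #include <cmath>
        #include <iostream>
        #include <vector>

        int main() {
            const double h = 1.0;  // grid spacing
            const int nNodes = 6;
            std::vector<double> nodeMass(nNodes, 0.0), nodeMomentum(nNodes, 0.0);

            // Particles: position, mass, velocity.
            std::vector<double> px = {1.3, 2.7, 3.2};
            std::vector<double> pm = {1.0, 1.0, 1.0};
            std::vector<double> pv = {0.5, -0.2, 0.1};

            // Particle-to-grid: each particle contributes to its two
            // neighboring nodes with linear hat-function weights.
            for (std::size_t p = 0; p < px.size(); ++p) {
                int i = static_cast<int>(std::floor(px[p] / h));
                double w1 = (px[p] / h) - i, w0 = 1.0 - w1;  // weights for nodes i, i+1
                nodeMass[i]         += w0 * pm[p];
                nodeMass[i + 1]     += w1 * pm[p];
                nodeMomentum[i]     += w0 * pm[p] * pv[p];
                nodeMomentum[i + 1] += w1 * pm[p] * pv[p];
            }

            // Recover nodal velocities where mass was deposited.
            for (int i = 0; i < nNodes; ++i)
                if (nodeMass[i] > 0.0)
                    std::cout << "node " << i << ": v = "
                              << nodeMomentum[i] / nodeMass[i] << "\n";
        }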

    Unconventional Oil and Gas Development: Evaluation of selected hydrocarbons in the ambient air of three basins in the United States by means of diffusive sampling measurements

    The impact of emissions associated with the extraction of crude oil and natural gas upon air quality in the United States (US) is widely recognised, with consequences for climate change, human health, and ground-level ozone formation. A number of measurement approaches are being applied to evaluate the environmental impact of the oil and gas (O&G) sector, including satellite, airborne, and ground-based platforms. Measurement-based studies, in particular those that estimate flux rates, are critical for the validation of emission inventories, which often under-report actual emissions of methane and volatile organic compounds (VOCs) from the O&G sector. Ongoing research projects in the US are investigating the consistency of emission rates from O&G emission sources associated with extraction, transmission, and distribution activities. The leakage rates of methane, as related to production levels, in US O&G developments vary from less than 1% (e.g. Upper Green River Basin, Wyoming) to over 6% (Uintah Basin, Utah). European research and policy approaches can learn from efforts in the US that are improving the accuracy of reporting emissions from O&G sources, enhancing our understanding of air quality impacts, and reducing emissions through regulatory controls. The Joint Research Centre (JRC) of the European Commission performed a diffusive sampling project, in collaboration with the University of Wyoming, in conjunction with the SONGNEX (Studying the Atmospheric Effects of Changing Energy Use in the US at the Nexus of Air Quality and Climate Change) project led by the US National Oceanic and Atmospheric Administration. The SONGNEX project is an airborne measurement campaign supported by a number of associated ground-based studies. The applicability of the Pocket Diffusive (PoD) sampler for the measurement of VOCs (C4-C10), heavy hydrocarbons, and volatile polycyclic aromatic hydrocarbons (PAHs) in areas heavily influenced by O&G development is evaluated. Three sampling surveys were performed to assess three basins (Upper Green River, Uintah and North Platte) characterised by different management regimes, meteorology, and hydrocarbon products. This first extensive field deployment of the PoD sampler demonstrates the effectiveness of the sampler for time-integrated measurements of targeted pollutants over wide spatial areas. The ambient air at these basins reveals different compositional profiles of hydrocarbons (C4-C10). Analysis of aromatics supports a finding of relatively elevated levels in the Pinedale Anticline (Upper Green River). From an evaluation of the behaviour of alkanes, it is evident that there is a relatively high leakage rate in the Uintah Basin. Heavy hydrocarbons (C11-C22) and PAHs are measured at relatively low levels. Despite the low concentrations, analysis of these compounds improves the accuracy of source identification. A comparison of ground-based PoD data and airborne SONGNEX data showed good agreement for commonly reported VOCs. The utility of the PoD sampler for analysis of emission sources was enhanced by the reporting of a wide range of compounds. Spatial Positive Matrix Factorization analysis showed the possibility of using PoD samplers for differentiating emission sources, characterizing different areas, and estimating the relative contribution of different emission sources.
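
    The source-apportionment step mentioned at the end of the abstract factorizes a matrix of measured concentrations X (samples by species) into nonnegative source contributions G and source profiles F, with X approximately equal to G*F. The C++ sketch below illustrates the idea with plain multiplicative-update NMF on made-up data; EPA's PMF tool actually solves an uncertainty-weighted least-squares problem, which is not reproduced here.

        #include <iostream>
        #include <vector>

        using Mat = std::vector<std::vector<double>>;

        Mat matmul(const Mat& A, const Mat& B) {
            Mat C(A.size(), std::vector<double>(B[0].size(), 0.0));
            for (std::size_t i = 0; i < A.size(); ++i)
                for (std::size_t k = 0; k < B.size(); ++k)
                    for (std::size_t j = 0; j < B[0].size(); ++j)
                        C[i][j] += A[i][k] * B[k][j];
            return C;
        }

        Mat transpose(const Mat& A) {
            Mat T(A[0].size(), std::vector<double>(A.size()));
            for (std::size_t i = 0; i < A.size(); ++i)
                for (std::size_t j = 0; j < A[0].size(); ++j) T[j][i] = A[i][j];
            return T;
        }

        int main() {
            // Made-up data: 4 samples x 3 species, resolved into 2 sources.
            Mat X = {{2, 1, 0}, {4, 2, 0}, {0, 1, 3}, {0, 2, 6}};
            Mat G = {{0.8, 0.2}, {0.7, 0.3}, {0.2, 0.8}, {0.1, 0.9}};
            Mat F = {{1.0, 0.5, 0.1}, {0.1, 0.5, 1.0}};

            for (int it = 0; it < 200; ++it) {
                // Multiplicative updates keep every entry nonnegative.
                Mat Gt = transpose(G);
                Mat numF = matmul(Gt, X), denF = matmul(Gt, matmul(G, F));
                for (std::size_t i = 0; i < F.size(); ++i)
                    for (std::size_t j = 0; j < F[0].size(); ++j)
                        F[i][j] *= numF[i][j] / (denF[i][j] + 1e-12);
                Mat Ft = transpose(F);
                Mat numG = matmul(X, Ft), denG = matmul(matmul(G, F), Ft);
                for (std::size_t i = 0; i < G.size(); ++i)
                    for (std::size_t j = 0; j < G[0].size(); ++j)
                        G[i][j] *= numG[i][j] / (denG[i][j] + 1e-12);
            }

            std::cout << "factor profiles (rows of F):\n";
            for (const auto& row : F) {
                for (double v : row) std::cout << v << " ";
                std::cout << "\n";
            }
        }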

    Doctor of Philosophy

    Dissertation: Radiation is the dominant mode of heat transfer in high-temperature combustion environments. Radiative heat transfer affects the gas and particle phases, including all the associated combustion chemistry. The radiative properties are in turn affected by the turbulent flow field. This bidirectional coupling of radiation-turbulence interactions poses a major challenge in creating parallel-capable, high-fidelity combustion simulations. In this work, a new model was developed in which reciprocal Monte Carlo radiation was coupled with a turbulent, large-eddy simulation combustion model. A technique wherein domain patches are stitched together was implemented to allow for scalable parallelism. The combustion model runs in parallel on a decomposed domain, while the radiation model runs in parallel on a recomposed domain. The recomposed domain is stored on each processor after information sharing of the decomposed domain is handled via the Message Passing Interface (MPI). Verification and validation testing of the new radiation model were favorable. Strong scaling analyses were performed on the Ember cluster and the Titan cluster for the CPU radiation model and the GPU radiation model, respectively. The model demonstrated strong scaling to over 1,700 and 16,000 processing cores on Ember and Titan, respectively.
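
    A hedged sketch of the decompose/recompose pattern the abstract describes: each MPI rank owns one patch of the decomposed domain, and before the radiation step every rank gathers all patches so rays can traverse the full domain locally. The patch layout and variable names are illustrative, not the dissertation's actual implementation.

        #include <mpi.h>
        #include <cstdio>
        #include <vector>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, nranks;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            const int cellsPerPatch = 4;                       // toy patch size
            std::vector<double> myPatch(cellsPerPatch, rank);  // local field values

            // Recompose: every rank receives every patch (the memory cost that
            // the abstract's patch-stitching technique is designed to manage).
            std::vector<double> fullDomain(cellsPerPatch * nranks);
            MPI_Allgather(myPatch.data(), cellsPerPatch, MPI_DOUBLE,
                          fullDomain.data(), cellsPerPatch, MPI_DOUBLE,
                          MPI_COMM_WORLD);

            // Each rank can now march rays through the whole domain, e.g. a
            // Monte Carlo sweep over fullDomain, keeping results for its own cells.
            if (rank == 0)
                std::printf("recomposed %zu cells on every rank\n", fullDomain.size());

            MPI_Finalize();
        }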