
    Assessment of numerical issues in one-dimensional detonation wave representation

    This thesis simulates detonation waves and contact surfaces in unsteady flows with improved accuracy and efficiency through the use of adaptive mesh refinement (AMR). The problem addressed is the simulation of the working cycles of a pulsed detonation engine (PDE). A flexible code applicable to any unsteady flow simulation was developed. The analysis was based on the quasi-one-dimensional Euler equations, and the reaction rate was modelled using a one-step irreversible reaction equation. The numerical simulations were carried out using two numerical schemes, namely Roe's approximate Riemann solver and the advection upstream splitting method (AUSM). The results show the importance and effects of increasing the spatial resolution. Adaptive mesh refinement made it possible to increase the spatial resolution with an insignificant increase in computational cost. The results also show that contact surfaces cannot be captured accurately merely by increasing the spatial resolution, owing to the high innate numerical diffusion of the flux schemes. The possibility of confining an interface to a few cell widths by adding a suitable confinement term is also discussed.
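    A one-step irreversible reaction model of the kind used here typically takes an Arrhenius form for a single progress variable. The sketch below illustrates the idea; the parameter values are nondimensional placeholders, not the thesis's actual values.

```python
import math

def reaction_rate(lam, T, A=1.0e6, Ea=25.0, R=1.0):
    """One-step irreversible rate: d(lambda)/dt = A*(1 - lam)*exp(-Ea/(R*T)).

    lam : reaction progress variable in [0, 1] (0 = unburnt, 1 = burnt)
    T   : temperature (nondimensional)
    A, Ea, R : pre-exponential factor, activation energy, gas constant
               (illustrative values only, not from the thesis)
    """
    return A * (1.0 - lam) * math.exp(-Ea / (R * T))
```

    The rate vanishes once the mixture is fully burnt (lam = 1) and rises steeply with temperature, which is what makes the coupled detonation problem stiff and resolution-sensitive.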

    Cyber Framework for Steering and Measurements Collection Over Instrument-Computing Ecosystems

    We propose a framework to develop cyber solutions to support the remote steering of science instruments and measurements collection over instrument-computing ecosystems. It is based on provisioning separate data and control connections at the network level, and developing software modules consisting of Python wrappers for instrument commands and Pyro server-client codes that make them available across the ecosystem network. We demonstrate automated measurement transfers and remote steering operations in a microscopy use case for materials research over an ecosystem of Nion microscopes and computing platforms connected over site networks. The proposed framework is currently under further refinement and being adapted to science workflows with automated remote experiment steering for autonomous chemistry laboratories and smart energy grid simulations. Comment: Paper accepted for presentation at IEEE SMARTCOMP 202
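    The wrapper pattern the abstract describes can be sketched as a plain Python class whose methods each encapsulate one instrument command; all names and command strings below are hypothetical, and the remote-object layer (the paper's Pyro server-client codes) is only indicated in comments.

```python
class MicroscopeWrapper:
    """Hypothetical wrapper around low-level instrument commands.

    In the proposed framework, an instance like this would be registered
    with a Pyro daemon so that clients elsewhere on the ecosystem network
    can invoke its methods over the control connection.
    """

    def __init__(self, send_command):
        # send_command: callable that ships a raw command string to the
        # instrument over the control connection and returns its reply
        self._send = send_command

    def move_stage(self, x, y):
        return self._send(f"STAGE MOVE {x} {y}")

    def acquire(self, exposure_ms):
        return self._send(f"ACQUIRE {exposure_ms}")
```

    Separating the transport (the injected `send_command` callable) from the command vocabulary is what lets the same wrapper run against a local serial link or a remote proxy unchanged.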

    Scientific Application Requirements for Leadership Computing at the Exascale

    The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which beg further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, "the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software." We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware-software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. 
The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy, possess fault tolerance, exploit asynchronism, and are power-consumption aware. On the other hand, we must also provide application scientists with the ability to develop software without having to become experts in the computer science components. Numerical algorithms are scattered broadly across science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused. Structured grids and dense linear algebra continue to dominate, but other algorithm categories will become more common. A significant increase is projected for Monte Carlo algorithms, unstructured grids, sparse linear algebra, and particle methods, and a relative decrease foreseen in fast Fourier transforms. These projections reflect the expectation of much higher architecture concurrency and the resulting need for very high scalability. The new algorithm categories that application scientists expect to be increasingly important in the next decade include adaptive mesh refinement, implicit nonlinear systems, data assimilation, agent-based methods, parameter continuation, and optimization. The attributes of leadership computing systems expected to increase most in priority over the next decade are (in order of importance) interconnect bandwidth, memory bandwidth, mean time to interrupt, memory latency, and interconnect latency. The attributes expected to decrease most in relative priority are disk latency, archival storage capacity, disk bandwidth, wide area network bandwidth, and local storage capacity. These choices by application developers reflect the expected needs of applications or the expected reality of available hardware. 
One interpretation is that the increasing priorities reflect the desire to increase computational efficiency to take advantage of increasing peak flops [floating point operations per second], while the decreasing priorities reflect the expectation that computational efficiency will not increase. Per-core requirements appear to be relatively static, while aggregate requirements will grow with the system. This projection is consistent with a relatively small increase in performance per core with a dramatic increase in the number of cores. Leadership system software must face and overcome issues that will undoubtedly be exacerbated at the exascale. The operating system (OS) must be as unobtrusive as possible and possess more stability, reliability, and fault tolerance during application execution. As applications will be more likely at the exascale to experience loss of resources during an execution, the OS must mitigate such a loss with a range of responses. New fault tolerance paradigms must be developed and integrated into applications. Just as application input and output must not be an afterthought in hardware design, job management, too, must not be an afterthought in system software design. Efficient scheduling of those resources will be a major obstacle faced by leadership computing centers at the exascale.

    A computational study of auto-ignition and flame propagation in stratified mixtures relevant to modern engines.

    Numerical simulations are performed to study the nature of auto-ignition and flame propagation in a stratified mixture. The results of this study are expected to provide a fundamental understanding of the combustion occurring in direct injection spark ignition (DISI) and homogeneous charge compression ignition (HCCI) engines. In the first part, the effect of time-varying composition on a premixed methane-air flame is studied using a counterflow configuration, and the concept of a dynamic flammability limit is established to quantify the extension of the flammability limit under unsteady conditions. In addition, the effects of blending hydrogen into methane are studied as a possible means to improve the stability of lean premixed combustion. It is found that hydrogen blending substantially affects the diffusive-thermal stability while the dynamic response is unchanged. The second part of the dissertation is devoted to a fundamental study of ignition characteristics relevant to HCCI engines. Models at various levels of complexity are attempted, ranging from a homogeneous reactor model to direct numerical simulation (DNS). First, the effects of exhaust gas recirculation (EGR) mixing on HCCI combustion are investigated for their benefit in knock reduction. Results obtained using a homogeneous reactor model suggest that the effects of EGR are predominantly thermal rather than chemical for the conditions under study. This leads to a closer examination of the thermo-physical aspects of EGR on HCCI combustion due to incomplete mixing and mixture stratification. High-fidelity DNS studies are thus performed to assess the effects of the initial temperature distribution on ignition and subsequent heat release. For the three test cases considered, the presence of hotter core gas leads to early ignition and increased duration of burning, while a cold core leaves dormant end gas which is consumed by slow combustion. 
Finally, as a more extensive parametric study to quantify the effects of mixing rate on HCCI ignition, the ignition and propagation of a reaction front in a premixed fuel/air stream mixed with hotter exhaust gases is investigated using the counterflow configuration. The results provide a systematic framework to identify two distinct regimes of ignition, namely the spontaneous propagation and the deflagration regimes. A criterion based on the ratio of the time scales of auto-ignition and diffusion is proposed to identify the transition between these two regimes. Implications of the different regimes in the development of submodels for HCCI modeling are discussed.
    Ph.D. Applied Sciences, Mechanical engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/124554/2/3150084.pd
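    A time-scale-ratio criterion of the kind proposed above can be sketched as a simple classifier; the threshold value below is a placeholder, since the dissertation's actual transition value is not reproduced here.

```python
def ignition_regime(tau_ign, tau_diff, threshold=1.0):
    """Classify the front-propagation regime from a time-scale ratio.

    tau_ign  : homogeneous auto-ignition delay of the local mixture
    tau_diff : characteristic diffusion (mixing) time scale
    threshold: illustrative transition value, not the dissertation's
    """
    ratio = tau_ign / tau_diff
    # Ignition fast relative to diffusion -> the front outruns diffusive
    # transport (spontaneous propagation); otherwise it is a deflagration
    # supported by diffusion of heat and radicals from burnt to unburnt gas.
    return "spontaneous propagation" if ratio < threshold else "deflagration"
```

    The point of such a criterion is diagnostic: given local fields of ignition delay and scalar mixing rate from a simulation, each point of the reaction front can be assigned a regime, which in turn guides which combustion submodel applies there.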

    A DNS study on the stabilization mechanism of a turbulent lifted ethylene jet flame in highly-heated coflow

    Direct numerical simulation (DNS) of the near-field of a three-dimensional spatially-developing turbulent ethylene jet flame in highly-heated coflow is performed with a reduced mechanism to determine the stabilization mechanism. The DNS was performed at a jet Reynolds number of 10,000 with over 1.29 billion grid points. The results show that auto-ignition in a fuel-lean mixture at the flame base is the main source of stabilization of the lifted jet flame. The Damköhler number and chemical explosive mode (CEM) analysis also verify that auto-ignition occurs at the flame base. In addition to auto-ignition, Lagrangian tracking of the flame base reveals the passage of large-scale flow structures and their correlation with the fluctuations of the flame base, similar to a previous study (Yoo et al., J. Fluid Mech. 640 (2009) 453-481) with hydrogen/air jet flames. It is also observed that the present lifted flame base exhibits a cyclic 'saw-tooth' shaped movement marked by rapid movement upstream and slower movement downstream. This is a consequence of the lifted flame being stabilized by a balance between consecutive auto-ignition events in hot fuel-lean mixtures and convection induced by the high-speed jet and coflow velocities. This is confirmed by Lagrangian tracking of key variables including the flame-normal velocity, displacement speed, scalar dissipation rate, and mixture fraction at the stabilization point.

    Experiences with High-Level Programming Directives for Porting Applications to GPUs

    HPC systems now exploit GPUs within their compute nodes to accelerate program performance. As a result, high-end application development has become extremely complex at the node level. In addition to restructuring the node code to exploit the cores and specialized devices, the programmer may need to choose a programming model such as OpenMP or CPU threads in conjunction with an accelerator programming model to share and manage the different node resources. This comes at a time when programmer productivity and the ability to produce portable code have been recognized as major concerns. In order to offset the high development cost of creating CUDA or OpenCL kernels, directives have been proposed for programming accelerator devices, but their implications are not well known. In this paper, we evaluate state-of-the-art accelerator directives to program several application kernels, explore transformations to achieve good performance, and examine the expressivity and performance penalty of using high-level directives versus CUDA. We also compare our results to OpenMP implementations to understand the benefits of running the kernels on the accelerator versus CPU cores.

    On the effect of injection timing on the ignition of lean PRF/air/EGR mixtures under direct dual fuel stratification conditions

    The ignition characteristics of a lean primary reference fuel (PRF)/air/exhaust gas recirculation (EGR) mixture under reactivity-controlled compression ignition (RCCI) and direct dual fuel stratification (DDFS) conditions are investigated by 2-D direct numerical simulations (DNSs) with a 116-species reduced chemistry of PRF oxidation. The 2-D DNSs of the DDFS combustion are performed by varying the injection timing of iso-octane (i-C8H18) with a pseudo-iso-octane (PC8H18) model together with a novel compression heating model to account for the compression heating and expansion cooling effects of the piston motion in an engine cylinder. The PC8H18 model is newly developed to mimic the timing, duration, and cooling effects of the direct injection of i-C8H18 onto a premixed background charge of PRF/air/EGR mixture with composition inhomogeneities. It is found that the RCCI combustion exhibits a very high peak heat release rate (HRR) with a short combustion duration due to the predominance of the spontaneous ignition mode of combustion. However, the DDFS combustion has a much lower peak HRR and longer combustion duration regardless of the fuel injection timing compared to those of the RCCI combustion, which is primarily attributed to the sequential injection of i-C8H18. It is also found that the ignition delay of the DDFS combustion features a non-monotonic behavior with increasing fuel-injection timing due to the different effects of fuel evaporation on the low-, intermediate-, and high-temperature chemistry of PRF oxidation. The budget and Damköhler number analyses verify that although a mixed combustion mode of deflagration and spontaneous ignition exists during the early phase of the DDFS combustion, spontaneous ignition becomes predominant during the main combustion, and hence, the spread-out of heat release in the DDFS combustion is mainly governed by the direct injection process of i-C8H18. 
Finally, a misfire is observed for the DDFS combustion when the direct injection of i-C8H18 occurs during the intermediate-temperature chemistry (ITC) regime between the first- and second-stage ignition. This is because the temperature drop induced by the direct injection of i-C8H18 impedes the main ITC reactions, and hence, the main combustion fails to occur.
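    The compression-heating effect that the paper models as a source term in the governing equations can be illustrated, in closed form, by the adiabatic relation T*V^(gamma-1) = const for a charge compressed or expanded by piston motion. This is only a minimal stand-in for the paper's model; the value of gamma is illustrative.

```python
def isentropic_compression_T(T0, V0, V, gamma=1.35):
    """Temperature of an adiabatically compressed ideal-gas charge.

    Uses T * V**(gamma - 1) = const, so T = T0 * (V0/V)**(gamma - 1).
    T0, V0: initial temperature and volume; V: current volume.
    gamma : specific-heat ratio (illustrative value, not from the paper).
    """
    return T0 * (V0 / V) ** (gamma - 1.0)
```

    Compression (V < V0) heats the charge toward ignition, while expansion cools it, which is why the phasing of the direct injection relative to the piston-driven temperature history controls whether the ITC reactions survive or the charge misfires.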