
    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society, and the CSE community is at the core of this transformation. However, a combination of disruptive developments (including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers) is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review.

    HPC-enabling technologies for high-fidelity combustion simulations

    With the increase in computational power in the last decade and the forthcoming Exascale supercomputers, a new horizon in computational modelling and simulation is envisioned in combustion science. Given the multiscale and multiphysics character of turbulent reacting flows, combustion simulations are among the most computationally demanding applications running on cutting-edge supercomputers. Exascale computing opens new frontiers for the simulation of combustion systems, as more realistic conditions can be achieved with high-fidelity methods. However, efficient use of these computing architectures requires methodologies that can exploit all levels of parallelism. The efficient utilization of the next generation of supercomputers needs to be considered from a global perspective, that is, involving physical modelling and numerical methods together with methodologies based on High-Performance Computing (HPC) and hardware architectures. This review introduces recent developments in numerical methods for large-eddy simulation (LES) and direct numerical simulation (DNS) of combustion systems, with a focus on computational performance and algorithmic capabilities. Owing to the broad scope, a first section describes the fundamentals of turbulent combustion, followed by a general description of state-of-the-art computational strategies for solving these problems. These applications require advanced HPC approaches to exploit modern supercomputers, which is addressed in the third section. The increasing complexity of new computing architectures, with tightly coupled CPUs and GPUs and high levels of parallelism, requires new parallel models and algorithms that expose the required level of concurrency. Advances in dynamic load balancing, vectorization, GPU acceleration, and mesh adaptation have made it possible to achieve highly efficient combustion simulations with data-driven methods in HPC environments.
    Dedicated sections therefore cover the use of high-order methods for reacting flows, the integration of detailed chemistry, and two-phase flows. Final remarks and directions for future work are given at the end. The research leading to these results has received funding from the European Union's Horizon 2020 Programme under the CoEC project, grant agreement No. 952181, and the CoE RAISE project, grant agreement No. 951733. Peer reviewed. Postprint (published version).
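The dynamic load balancing mentioned in this abstract can be illustrated with a minimal, generic sketch (not the review's actual implementation): in reacting-flow solvers the per-cell chemistry cost varies strongly, so cells are redistributed so that every rank carries a similar total cost. The greedy longest-processing-time heuristic below is a standard stand-in; the function name and the cost model are illustrative assumptions.

```python
import heapq

def balance(costs, n_ranks):
    """Greedy longest-processing-time (LPT) partitioning: assign each
    cell (with its estimated chemistry cost) to the least-loaded rank."""
    heap = [(0.0, r, []) for r in range(n_ranks)]  # (load, rank, cell ids)
    heapq.heapify(heap)
    for cell in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, rank, cells = heapq.heappop(heap)    # pop the least-loaded rank
        cells.append(cell)
        heapq.heappush(heap, (load + costs[cell], rank, cells))
    return {rank: cells for _, rank, cells in heap}

# Cells with heterogeneous chemistry costs, spread over 2 ranks.
parts = balance([5.0, 3.0, 3.0, 2.0, 2.0, 1.0], n_ranks=2)
```

For this toy input both ranks end up with a total load of 8.0; production solvers re-run such a partitioner whenever the measured per-cell costs drift.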

    Approachable Error Bounded Lossy Compression

    Compression is commonly used in HPC applications to move and store data. Traditional lossless compression, however, does not provide adequate compression of the floating-point data often found in scientific codes. Recently, researchers and scientists have turned to lossy compression techniques that approximate the original data rather than reproduce it, in order to achieve the desired levels of compression. Typical lossy compressors do not bound the errors introduced into the data, which has led to the development of error-bounded lossy compressors (EBLC). These tools provide the desired levels of compression while offering mathematical guarantees on the errors introduced. However, the current state of EBLC leaves much to be desired. Existing EBLC tools all have different interfaces, requiring codes to be changed to adopt new techniques; EBLC tools have many more configuration options than their predecessors, making them more difficult to use; and EBLC tools typically bound quantities such as pointwise errors rather than the higher-level metrics, such as spectra, p-values, or test statistics, that scientists typically use. My dissertation aims to provide a uniform interface to compression and to develop tools that allow application scientists to understand and apply EBLC. This dissertation proposal presents three groups of work: LibPressio, a standard interface for compression and analysis; FRaZ/LibPressio-Opt, frameworks for the automated configuration of compressors using LibPressio; and tools for analyzing errors in particular domains.
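The pointwise (absolute) error bound that EBLC provide can be illustrated with a minimal sketch: uniform scalar quantization with bin width 2e guarantees every reconstructed value lies within e of the original. This is a toy illustration of the guarantee, not LibPressio's actual API; the names are hypothetical, and real compressors additionally entropy-code the integer codes.

```python
import numpy as np

def compress_abs_bound(data, abs_bound):
    """Quantize so each reconstructed value differs from the original
    by at most abs_bound (rounding error is at most step/2)."""
    step = 2.0 * abs_bound
    codes = np.round(data / step).astype(np.int64)  # entropy-code these in practice
    return codes, step

def decompress(codes, step):
    return codes.astype(np.float64) * step

data = np.random.default_rng(0).normal(size=1000)
codes, step = compress_abs_bound(data, abs_bound=1e-3)
recon = decompress(codes, step)
assert np.max(np.abs(recon - data)) <= 1e-3  # the bound holds pointwise
```

As the abstract notes, a pointwise bound like this says nothing directly about derived quantities such as spectra or test statistics, which is why higher-level analysis tools are needed.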

    Science and Technology Review December 2011


    Training deep material networks to reproduce creep loading of short fiber-reinforced thermoplastics with an inelastically-informed strategy

    Deep material networks (DMNs) are a recent multiscale technology that enables running concurrent multiscale simulations at industrial scale with the help of powerful surrogate models for the micromechanical problem. Classically, the parameters of the DMNs are identified based on linear elastic precomputations. Once the parameters are identified, DMNs may process inelastic material models and have been shown to reproduce micromechanical full-field simulations on the original microstructure to high accuracy. The work at hand was motivated by creep loading of thermoplastic components with fiber reinforcement. In this context, multiple scales appear, both in space (due to the reinforcements) and in time (short- and long-term effects). We demonstrate by computational examples that the classical training strategy based on linear elastic precomputations is not guaranteed to produce DMNs whose long-term creep response accurately matches high-fidelity computations. As a remedy, we propose an inelastically informed early stopping strategy for the offline training of the DMNs. Moreover, we introduce a novel strategy based on a surrogate material model, which shares the principal nonlinear effects with the true model but is significantly less expensive to evaluate. For the problem at hand, this strategy saves significant time during the parameter identification process. We demonstrate that the novel strategy provides DMNs which reliably generalize to creep loading.
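The inelastically informed early stopping described in this abstract can be sketched generically: fit on the cheap linear-elastic objective, but monitor an inelastic (e.g. creep) validation error and keep the parameters from the step where it was lowest. This is a hedged sketch under assumed interfaces; the function names and the patience heuristic are illustrative, not the paper's actual algorithm.

```python
def train_with_inelastic_early_stopping(steps, linear_loss_step,
                                        inelastic_val_error, patience=5):
    """Optimize on linear-elastic training data, but stop once the
    inelastic (creep) validation error stops improving."""
    best_err, best_step, waited = float("inf"), 0, 0
    for step in range(steps):
        linear_loss_step(step)           # one optimizer step on elastic data
        err = inelastic_val_error(step)  # cheap surrogate creep evaluation
        if err < best_err:
            best_err, best_step, waited = err, step, 0
        else:
            waited += 1
            if waited >= patience:       # no improvement for `patience` steps
                break
    return best_step, best_err

# Toy usage: the creep validation error bottoms out at step 7.
best_step, best_err = train_with_inelastic_early_stopping(
    steps=50,
    linear_loss_step=lambda s: None,           # stand-in for a training step
    inelastic_val_error=lambda s: (s - 7) ** 2,
    patience=3,
)
```

The key point mirrored here is that the stopping criterion is evaluated on the quantity of interest (the inelastic response), not on the elastic training loss being minimized.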

    An assessment of high overall pressure ratio intercooled engines for civil aviation

    As gas turbine technology matures, further significant improvements in engine efficiency will be difficult to achieve without the implementation of new aero-engine configurations. This thesis delivers an original contribution to knowledge by comparing the design, performance, fuel burn, and emission characteristics of a novel geared intercooled reversed flow core concept with those of a conventional geared intercooled straight flow core concept. It also outlines a novel methodology for the characterisation of uncertainty at the conceptual design phase, which is useful for the comparison of competing concepts. Conventional intercooled aero-engine concepts suffer from high over-tip leakage losses in the high pressure compressor, high pressure losses in the intercooler installation, and increased weight and drag, whereas the geared intercooled reversed flow core concept overcomes some of these limitations. The HP-spool configuration of the reversed core concept allows for an increase in blade height, a reduction in over-tip leakage losses, and an increase in overall pressure ratio. It was concluded that a 1-pass intercooler would be the lightest and most compact design, while a 2-pass intercooler would be easier to manufacture; in the reversed flow core concept the increased length of the 2-pass intercooler could be accommodated. In this concept the mixer also allows for a reduction in fan pressure ratio and a useful reduction in component losses. Both intercooled concepts were shown to benefit from the use of a variable area bypass nozzle for the reduction of take-off combustor outlet temperature and cruise specific fuel consumption. The intercooled cycles were optimised for minimum fuel burn, and it was found that the reversed flow core concept benefits from a higher overall pressure ratio and lower fan pressure ratio for an equivalent specific thrust. This leads to an improvement in thermal efficiency and a more than 1.6% improvement in block fuel burn.
    NOx emissions during landing and take-off, as well as during cruise, were found to be slightly higher for the reversed flow core concept due to its higher overall pressure ratio. Contrail emissions of this concept were occasionally higher than for a year-2000 turbofan, but only slightly higher than for the straight core concept. This dissertation shows that, in spite of input uncertainty, the reversed flow core intercooled engine is a promising concept. Further research should focus on higher-fidelity structural and aerodynamic modelling.
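The trade-off reported here (a higher overall pressure ratio improving thermal efficiency while slightly worsening NOx) follows, to first order, from ideal Brayton-cycle relations; this is a textbook approximation, not the thesis's own performance model:

```latex
% Ideal Brayton-cycle thermal efficiency rises monotonically with
% overall pressure ratio (OPR), for ratio of specific heats \gamma:
\eta_{\mathrm{th}} = 1 - \mathrm{OPR}^{-(\gamma-1)/\gamma}
% ... but the compressor delivery temperature rises with OPR as well,
% which drives thermal NOx formation in the combustor:
T_{\mathrm{exit}} = T_{\mathrm{inlet}} \, \mathrm{OPR}^{(\gamma-1)/\gamma}
```

Real cycles with intercooling deviate from these ideal relations, which is precisely why the thesis optimises the intercooled cycles numerically rather than analytically.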