Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Comment: Major revision, to appear in SIAM Review.
High-fidelity error injection and acceleration techniques
As technology scales down, the likelihood of hardware errors that silently corrupt the results of applications is increasing. Evaluating the resilience of applications against hardware errors is thus of significant concern. Current evaluation techniques based on error injection are either low-fidelity or inefficient in their use of computing resources. This dissertation demonstrates that sophisticated integration of injectors across abstraction layers, together with novel sampling algorithms, can significantly improve both fidelity and efficiency. Specifically, this dissertation describes an open-source instruction-level error injector that generates high-fidelity hardware errors caused by particle strikes and voltage droops. Two acceleration techniques, nested Monte Carlo and Injection-Point Overprovisioning, are proposed to speed up error-injection campaigns by one to two orders of magnitude. This dissertation also answers the question of when high fidelity is needed to evaluate the impact of hardware errors on applications and the effectiveness of error detectors.
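The dissertation's injector and sampling algorithms are not reproduced here; as an illustration of the general idea, a minimal Monte Carlo error-injection campaign can be sketched as follows. The toy workload, the tolerance-based SDC criterion, and all names are hypothetical stand-ins, not the dissertation's tooling:

```python
import random
import struct

def run_app(data):
    """Toy stand-in for the injected workload: computes a mean."""
    return sum(data) / len(data)

def flip_bit(x, bit):
    """Flip one bit of the IEEE-754 double representation of x."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

def injection_campaign(data, trials=2000, tol=1e-3, seed=0):
    """Monte Carlo error-injection campaign: each trial flips one random
    bit in one random operand, reruns the application, and records a
    silent data corruption (SDC) when the result drifts outside a
    tolerance of the fault-free 'golden' value."""
    golden = run_app(data)
    rng = random.Random(seed)
    sdc = 0
    for _ in range(trials):
        faulty = list(data)
        idx = rng.randrange(len(faulty))
        faulty[idx] = flip_bit(faulty[idx], rng.randrange(64))
        result = run_app(faulty)
        # NaN/inf results fail the comparison and so count as SDC.
        if not abs(result - golden) <= tol * abs(golden):
            sdc += 1
    return sdc / trials

sdc_rate = injection_campaign([0.5, 1.5, 2.5, 3.5])
```

Low-order mantissa flips are masked by the tolerance while sign, exponent, and high-mantissa flips surface as SDCs, which is why naive campaigns need many trials and why the sampling accelerations above matter.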
HPC-enabling technologies for high-fidelity combustion simulations
With the increase in computational power over the last decade and the forthcoming Exascale supercomputers, a new horizon in computational modelling and simulation is envisioned in combustion science. Given the multiscale, multiphysics character of turbulent reacting flows, combustion simulations are among the most computationally demanding applications running on cutting-edge supercomputers. Exascale computing opens new frontiers for the simulation of combustion systems, as more realistic conditions can be achieved with high-fidelity methods. However, efficient use of these computing architectures requires methodologies that can exploit all levels of parallelism. The efficient utilization of the next generation of supercomputers must be considered from a global perspective, that is, combining physical modelling and numerical methods with methodologies based on High-Performance Computing (HPC) and hardware architectures. This review introduces recent developments in numerical methods for large-eddy simulations (LES) and direct numerical simulations (DNS) of combustion systems, with a focus on computational performance and algorithmic capabilities. Given the broad scope, a first section describes the fundamentals of turbulent combustion, followed by a general description of state-of-the-art computational strategies for solving these problems. These applications require advanced HPC approaches to exploit modern supercomputers, which is addressed in the third section. The increasing complexity of new computing architectures, with tightly coupled CPUs and GPUs and high levels of parallelism, demands new parallel models and algorithms that expose the required level of concurrency. Advances in dynamic load balancing, vectorization, GPU acceleration, and mesh adaptation have enabled highly efficient combustion simulations with data-driven methods in HPC environments.
Accordingly, dedicated sections cover the use of high-order methods for reacting flows, the integration of detailed chemistry, and two-phase flows. Final remarks and directions for future work are given at the end.
The research leading to these results has received funding from the European Union's Horizon 2020 Programme under the CoEC project (grant agreement No. 952181) and the CoE RAISE project (grant agreement No. 951733).
Approachable Error Bounded Lossy Compression
Compression is commonly used in HPC applications to move and store data. Traditional lossless compression, however, does not provide adequate compression of the floating-point data often found in scientific codes. Recently, researchers and scientists have turned to lossy compression techniques that approximate the original data rather than reproduce it in order to achieve the desired levels of compression. Typical lossy compressors do not bound the errors introduced into the data, which has led to the development of error-bounded lossy compressors (EBLC). These tools provide the desired levels of compression together with mathematical guarantees on the errors introduced. However, the current state of EBLC leaves much to be desired: existing EBLC all have different interfaces, requiring codes to be changed to adopt new techniques; EBLC have many more configuration options than their predecessors, making them more difficult to use; and EBLC typically bound quantities such as pointwise errors rather than the higher-level metrics, such as spectra, p-values, or test statistics, that scientists actually use. My dissertation aims to provide a uniform interface to compression and to develop tools that allow application scientists to understand and apply EBLC. This dissertation proposal presents three groups of work: LibPressio, a standard interface for compression and analysis; FRaZ and LibPressio-Opt, frameworks for the automated configuration of compressors using LibPressio; and tools for analyzing errors in particular domains.
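As a concrete illustration of the pointwise guarantee that error-bounded lossy compressors make (this is a generic sketch of the idea, not LibPressio's API or any specific compressor's algorithm), uniform quantization to a grid of spacing twice the bound, followed by lossless coding of the integer codes, bounds every reconstruction error:

```python
import math
import struct
import zlib

def compress(values, bound):
    """Error-bounded lossy compression sketch: quantizing each value to
    the nearest multiple of 2*bound guarantees a pointwise error of at
    most bound; zlib then losslessly shrinks the integer codes."""
    codes = [round(v / (2 * bound)) for v in values]
    return zlib.compress(struct.pack(f"<{len(codes)}q", *codes))

def decompress(blob, n, bound):
    """Invert the lossless stage, then map codes back to the grid."""
    codes = struct.unpack(f"<{n}q", zlib.decompress(blob))
    return [c * 2 * bound for c in codes]

data = [math.sin(i / 10) for i in range(1000)]
bound = 1e-3
blob = compress(data, bound)
restored = decompress(blob, len(data), bound)
```

Real EBLC such as SZ and ZFP add prediction and encoding stages for far better ratios, but the user-visible contract is the same: every reconstructed value lies within the requested bound of the original.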
Training deep material networks to reproduce creep loading of short fiber-reinforced thermoplastics with an inelastically-informed strategy
Deep material networks (DMNs) are a recent multiscale technology that enables running concurrent multiscale simulations at industrial scale with the help of powerful surrogate models for the micromechanical problem. Classically, the parameters of a DMN are identified from linear elastic precomputations. Once the parameters are identified, DMNs may process inelastic material models and have been shown to reproduce micromechanical full-field simulations on the original microstructure to high accuracy. The work at hand was motivated by creep loading of fiber-reinforced thermoplastic components. In this context, multiple scales appear, both in space (due to the reinforcements) and in time (short- and long-term effects). We demonstrate by computational examples that the classical training strategy based on linear elastic precomputations is not guaranteed to produce DMNs whose long-term creep response accurately matches high-fidelity computations. As a remedy, we propose an inelastically-informed early-stopping strategy for the offline training of DMNs. Moreover, we introduce a novel strategy based on a surrogate material model which shares the principal nonlinear effects with the true model but is significantly less expensive to evaluate. For the problem at hand, this strategy saves significant time during parameter identification. We demonstrate that the novel strategy provides DMNs that reliably generalize to creep loading.
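The early-stopping idea can be sketched generically: keep updating parameters on the cheap (linear elastic) objective, but monitor an inelastic surrogate's creep error and stop when it no longer improves. The training loop, the surrogate, and the toy error curve below are hypothetical stand-ins, not the paper's DMN implementation:

```python
def train_with_early_stopping(num_steps, update, creep_error, patience=3):
    """Inelastically-informed early stopping sketch: parameters are
    updated on the (cheap) elastic objective, but training halts once
    the creep-response error of an inelastic surrogate has failed to
    improve for `patience` consecutive steps."""
    best_err, best_step, stall = float("inf"), -1, 0
    for step in range(num_steps):
        update(step)               # one optimisation step on the elastic loss
        err = creep_error(step)    # cheap inelastic surrogate validation
        if err < best_err:
            best_err, best_step, stall = err, step, 0
        else:
            stall += 1
            if stall >= patience:  # long-term response stopped improving
                break
    return best_step, best_err

# Toy error curve: creep accuracy improves, then degrades as the model
# keeps fitting the elastic objective past the useful point.
best_step, best_err = train_with_early_stopping(
    100, update=lambda s: None, creep_error=lambda s: (s - 10) ** 2
)
```

The point of the surrogate in the paper is exactly that `creep_error` stays cheap enough to call every step, so the monitor does not dominate the cost of identification.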
Facilitating the Use of Optimisation in the Aerodynamic Design of Axial Compressors
There is commercial pressure to design axial compressors that exhibit high levels of performance more quickly. This is despite the performance of these machines approaching an asymptote in recent years, with further gains becoming increasingly difficult to achieve. One tool that can help is optimisation, which harnesses the speed of computational analysis to accelerate the design process and unlock additional performance improvements. The greatest potential for optimisation exists at the preliminary design stage. However, current methodologies struggle when applied at this early point in the design process, owing to inadequate problem formulations, an inability to enhance designer understanding, and the computational cost of high-fidelity analysis. The goal of this thesis is to facilitate the use of optimisation in the preliminary aerodynamic design of axial compressors by developing an improved methodology that overcomes these limitations.
The multiple dominance relations (MDR) formulation enables a larger number of performance parameters to be incorporated in a way that accurately reflects the desires of the designer. It is implemented within a Tabu Search (TS) capable of providing interpretable design-development information to enhance designer understanding. The combined MDRTS algorithm, which overcomes the limitations associated with formulation and understanding, outperforms existing methods when applied to analytic, aerofoil, and six-stage axial compressor test cases, generating computational savings of up to 80%.
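The MDR formulation layers several dominance relations; its basic building block is standard Pareto dominance, which can be sketched as follows (an illustrative minimisation example, not the thesis's implementation):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimisation): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep only candidates not dominated by any other candidate."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical objective vectors (e.g. loss coefficient, weight), both minimised.
designs = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (4.0, 4.0)]
front = non_dominated(designs)
```

With many performance parameters, a single relation like this rejects almost nothing (nearly everything becomes mutually non-dominated), which is the practical motivation for combining multiple dominance relations as the MDR formulation does.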
Multi-fidelity techniques are used to accelerate the search by conducting analysis on a "need-to-know" basis. Computational savings of over 70% are observed compared to the single-fidelity version of the algorithm across the analytic, aerofoil, and six-stage axial compressor test cases, enabling high-fidelity analysis to be employed in a computationally efficient manner. The resultant methodology represents a novel and inherently flexible multi-level, multi-fidelity optimisation technique.
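The "need-to-know" principle can be sketched in its simplest form: rank every candidate with the cheap model and spend expensive analysis only on the shortlist. The two analytic objectives below are hypothetical stand-ins for the thesis's low- and high-fidelity compressor analyses:

```python
def need_to_know_search(candidates, lo_fi, hi_fi, keep=0.2):
    """Multi-fidelity evaluation sketch: rank all candidates with the
    cheap low-fidelity model, then run the expensive high-fidelity
    analysis only on the most promising fraction (minimisation)."""
    ranked = sorted(candidates, key=lo_fi)
    shortlist = ranked[: max(1, int(len(ranked) * keep))]
    return min(shortlist, key=hi_fi)

# Hypothetical objectives: low fidelity approximates high fidelity with
# a small model error (optimum shifted from 3.2 to 3).
best = need_to_know_search(
    range(10),
    lo_fi=lambda x: (x - 3) ** 2,
    hi_fi=lambda x: (x - 3.2) ** 2,
)
```

The saving comes from the fraction `keep`: only that share of candidates ever incurs the high-fidelity cost, at the risk that a candidate misranked by the cheap model never reaches the shortlist.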
Application to an N-stage axial compressor test case, in which the optimiser is given control over the number of stages in the machine, demonstrates the capabilities of the accelerated MDRTS approach. The complex design space is navigated effectively, generating computational savings of over 90% compared to existing methodologies and producing designs that are more likely to be of interest to the designer. Interpretable design-development information is also provided for this problem to enhance designer understanding. These results show that the improved methodology successfully facilitates the use of optimisation in the preliminary aerodynamic design of axial compressors, overcoming the problems of formulation, understanding, and speed that limit existing approaches.
An assessment of high overall pressure ratio intercooled engines for civil aviation
As gas turbine technology matures, further significant improvements in engine efficiency will be difficult to achieve without the implementation of new aero-engine configurations. This thesis delivers an original contribution to knowledge by comparing the design, performance, fuel-burn, and emission characteristics of a novel geared intercooled reversed flow core concept with those of a conventional geared intercooled straight flow core concept. This thesis also outlines a novel methodology for characterising uncertainty at the conceptual design phase, which is useful for comparing competing concepts. Conventional intercooled aero-engine concepts suffer from high over-tip leakage losses in the high-pressure compressor, high pressure losses in the intercooler installation, and increased weight and drag, whereas the geared intercooled reversed flow core concept overcomes some of these limitations.
The HP-spool configuration of the reversed core concept allows for an increase in blade height, a reduction in over-tip leakage losses and an increase in overall pressure ratio. It was concluded that a 1-pass intercooler would be the lightest and most compact design while a 2-pass intercooler would be easier to manufacture. In the reversed flow core concept the increased length of the 2-pass intercooler could be accommodated. In this concept the mixer also allows for a reduction in fan pressure ratio and a useful reduction in component losses. Both intercooled concepts were shown to benefit from the use of a variable area bypass nozzle for the reduction of take-off combustor outlet temperature and cruise specific fuel consumption.
The intercooled cycles were optimised for minimum fuel burn, and it was found that the reversed flow core concept benefits from a higher overall pressure ratio and a lower fan pressure ratio for an equivalent specific thrust. This leads to an improvement in thermal efficiency and a block fuel-burn improvement of more than 1.6%. NOx emissions during landing and take-off, as well as during cruise, were found to be slightly higher for the reversed flow core concept owing to its higher overall pressure ratio. Contrail emissions for this concept were occasionally higher than for a year-2000 turbofan, but only slightly higher than for the straight core concept. This dissertation shows that, in spite of input uncertainty, the reversed flow core intercooled engine is a promising concept. Further research should focus on higher-fidelity structural and aerodynamic modelling.