Simulation of a Hard-Spherocylinder Liquid Crystal with the pe
The pe physics engine is validated through the simulation of a liquid crystal
model system consisting of hard spherocylinders. For this purpose we evaluate
several characteristic parameters of this system, namely the nematic order
parameter, the pressure, and the Frank elastic constants. We compare these to
the values reported in the literature and find very good agreement, which
demonstrates that the pe physics engine can accurately treat such densely
packed particle systems. Simultaneously we are able to examine the influence of
finite size effects, especially on the evaluation of the Frank elastic
constants, as we are far less restricted in system size than earlier
simulations
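As an illustration of the first quantity evaluated above (this is not code from the pe framework itself), the nematic order parameter is conventionally obtained as the largest eigenvalue of the Q-tensor built from the particle orientation vectors. A minimal sketch in Python with NumPy:

```python
import numpy as np

def nematic_order_parameter(orientations):
    """Largest eigenvalue of the Q-tensor for unit orientation vectors u_i.

    Q = (1/N) * sum_i (3/2 u_i u_i^T - 1/2 I); the order parameter S is
    Q's largest eigenvalue: S = 1 for perfect alignment, S ~ 0 when isotropic.
    """
    u = np.asarray(orientations, dtype=float)
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # ensure unit vectors
    Q = 1.5 * (u.T @ u) / len(u) - 0.5 * np.eye(3)
    return np.linalg.eigvalsh(Q)[-1]  # eigenvalues sorted ascending

# Perfectly aligned rods along z give S = 1.
aligned = np.tile([0.0, 0.0, 1.0], (100, 1))
print(nematic_order_parameter(aligned))  # ~1.0
```

In a spherocylinder simulation the `orientations` array would hold the unit vectors along each particle's symmetry axis, sampled over equilibrated configurations.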
Enhancing Energy Production with Exascale HPC Methods
High Performance Computing (HPC) resources have become a key enabler for
tackling more ambitious challenges in many disciplines. In this next step, an
explosion in the available parallelism and the use of special-purpose
processors are crucial. With this goal, the HPC4E project applies new exascale
HPC techniques to energy industry simulations, customizing them where
necessary and going beyond the state of the art in the HPC exascale
simulations required for different energy sources. In this paper, a general
overview of these methods is presented, as well as some specific preliminary
results. The research leading to these results has received funding from the
European Union's Horizon 2020 Programme (2014-2020) under the HPC4E Project
(www.hpc4e.eu), grant agreement n° 689772, the Spanish Ministry of Economy and
Competitiveness under the CODEC2 project (TIN2015-63562-R), and from the
Brazilian Ministry of Science, Technology and Innovation through Rede
Nacional de Pesquisa (RNP). Computer time on the Endeavour cluster was
provided by Intel Corporation, which enabled us to obtain the presented
experimental results in uncertainty quantification in seismic imaging
Adaptive mesh refinement computation of acoustic radiation from an engine intake
A block-structured adaptive mesh refinement (AMR) method was applied to the computational problem of acoustic radiation from an aeroengine intake. The aim is to improve the computational and storage efficiency of aeroengine noise prediction through a reduction of computational cells. A parallel implementation of the adaptive mesh refinement algorithm was achieved using the Message Passing Interface (MPI). It combined a range of 2nd- and 4th-order spatial stencils, a 4th-order low-dissipation and low-dispersion Runge–Kutta scheme for time integration, and several different interpolation methods. Both the parallel AMR algorithms and the numerical issues are introduced briefly in this work. To solve the problem of acoustic radiation from an aeroengine intake, the code was extended to support body-fitted grid structures. The problem of acoustic radiation was solved with the linearised Euler equations. The AMR results were compared with previous results computed on a uniformly fine mesh to demonstrate the accuracy and efficiency of the current AMR strategy. As the computational load of the whole adaptively refined mesh has to be balanced between nodes on-line, the parallel performance of the existing code deteriorates as the number of processors increases, due to the expensive inter-node memory communication costs. A potential solution is suggested at the end
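A minimal sketch of the idea behind block-structured refinement, assuming a toy 1-D field and a simple gradient-based flagging criterion (the paper's body-fitted solver, high-order stencils, and on-line load balancing are far beyond this illustration):

```python
import numpy as np

def flag_blocks(field, block_size, threshold):
    """Flag fixed-size blocks of a 1-D field whose max |gradient| exceeds
    a threshold. A toy stand-in for the refinement criterion of
    block-structured AMR: flagged blocks would be covered by a finer level,
    so cells are concentrated only where the solution varies sharply.
    """
    grad = np.abs(np.gradient(field))          # per-index finite differences
    n_blocks = len(field) // block_size
    flags = []
    for b in range(n_blocks):
        sl = slice(b * block_size, (b + 1) * block_size)
        flags.append(bool(grad[sl].max() > threshold))
    return flags

x = np.linspace(0.0, 1.0, 64)
field = np.tanh((x - 0.5) / 0.02)              # sharp front near x = 0.5
flags = flag_blocks(field, block_size=8, threshold=0.3)
# Only the two blocks containing the front are flagged for refinement.
```

In a production AMR code the same flag-and-refine decision is made per block on every level, and the resulting block hierarchy is what must be rebalanced across MPI ranks as the refinement pattern evolves.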
GTTC Future of Ground Testing Meta-Analysis of 20 Documents
National research, development, test, and evaluation ground testing capabilities in the United States are at risk. There is a lack of vision and consensus on what is and will be needed, contributing to a significant threat that ground test capabilities may not be able to meet the national security and industrial needs of the future. To support future decisions, the AIAA Ground Testing Technical Committee (GTTC) Future of Ground Test (FoGT) Working Group selected and reviewed 20 seminal documents related to the application and direction of ground testing. Each document was reviewed, with its main points collected and organized into sections in the form of a gap analysis: current state, future state, major challenges/gaps, and recommendations. This paper includes key findings and selected commentary by an editing team
Developments in GRworkbench
The software tool GRworkbench is an ongoing project in visual, numerical
General Relativity at The Australian National University. Recently, GRworkbench
has been significantly extended to facilitate numerical experimentation in
analytically-defined space-times. The numerical differential geometric engine
has been rewritten using functional programming techniques, enabling objects
which are normally defined as functions in the formalism of differential
geometry and General Relativity to be directly represented as function
variables in the C++ code of GRworkbench. The new functional differential
geometric engine allows for more accurate and efficient visualisation of
objects in space-times and makes new, efficient computational techniques
available. Motivated by the desire to investigate a recent scientific claim
using GRworkbench, new tools for numerical experimentation have been
implemented, allowing for the simulation of complex physical situations.
Comment: 14 pages. To appear as A. Moylan, S.M. Scott and A.C. Searle,
Developments in GRworkbench, in Proceedings of the Tenth Marcel Grossmann
Meeting on General Relativity, editors M. Novello, S. Perez-Bergliaffa and R.
Ruffini. Singapore: World Scientific, 200
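The functional style described above, in which objects that are mathematically functions become first-class function values in the code, can be illustrated in Python (GRworkbench itself uses C++ function objects; this is only an analogy, and the names here are hypothetical):

```python
import numpy as np

def minkowski(x):
    """A metric tensor represented as a function variable: point -> 4x4 matrix.
    Any analytically-defined metric fits this signature."""
    return np.diag([-1.0, 1.0, 1.0, 1.0])

def inner_product(metric):
    """Higher-order function: build the pointwise inner product <u, v>_x
    from any metric-as-function, without committing to a particular chart."""
    return lambda x, u, v: float(np.asarray(u) @ metric(x) @ np.asarray(v))

dot = inner_product(minkowski)
print(dot([0, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]))  # -1.0: timelike vector
```

The payoff of this style is compositionality: derived objects (norms, connections, geodesic integrators) are built by passing functions to functions, mirroring how they are defined in differential geometry.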
Numerical Aerodynamic Simulation (NAS)
The history of the Numerical Aerodynamic Simulation Program, which is designed to provide a leading-edge capability to computational aerodynamicists, is traced back to its origin in 1975. Factors motivating its development and examples of solutions to successively refined forms of the governing equations are presented. The NAS Processing System Network and each of its eight subsystems are described in terms of function and initial performance goals. A proposed usage allocation policy is discussed and some initial problems being readied for solution on the NAS system are identified
Numerical propulsion system simulation: An interdisciplinary approach
The tremendous progress being made in computational engineering and the rapid growth in computing power that is resulting from parallel processing now make it feasible to consider the use of computer simulations to gain insights into the complex interactions in aerospace propulsion systems and to evaluate new concepts early in the design process before a commitment to hardware is made. Described here is a NASA initiative to develop a Numerical Propulsion System Simulation (NPSS) capability
A SVD accelerated kernel-independent fast multipole method and its application to BEM
The kernel-independent fast multipole method (KIFMM) proposed in [1] is of
almost linear complexity. In the original KIFMM the time-consuming M2L
translations are accelerated by FFT. However, when more equivalent points are
used to achieve higher accuracy, the efficiency of the FFT approach tends to be
lower because more auxiliary volume grid points have to be added. In this
paper, all the translations of the KIFMM are accelerated by using the singular
value decomposition (SVD) based on the low-rank property of the translating
matrices. The acceleration of M2L is realized by first transforming the
associated translating matrices into a more compact form, and then using low-rank
approximations. By using the transform matrices for M2L, the orders of the
translating matrices in upward and downward passes are also reduced. The
improved KIFMM is then applied to accelerate the BEM. The performance of the
proposed algorithms is demonstrated by three examples. Numerical results show
that, compared with the original KIFMM, the present method reduces the
iteration time by about 40% and the memory requirement by about 25%.
Comment: 19 pages, 4 figures
Polynomial Response Surface Approximations for the Multidisciplinary Design Optimization of a High Speed Civil Transport
Surrogate functions have become an important tool in multidisciplinary design optimization to deal with noisy functions, high computational cost, and the practical difficulty of integrating legacy disciplinary computer codes. A combination of mathematical, statistical, and engineering techniques, well known in other contexts, have made polynomial surrogate functions viable for MDO. Despite the obvious limitations imposed by sparse high fidelity data in high dimensions and the locality of low order polynomial approximations, the success of the panoply of techniques based on polynomial response surface approximations for MDO shows that the implementation details are more important than the underlying approximation method (polynomial, spline, DACE, kernel regression, etc.). This paper surveys some of the ancillary techniques—statistics, global search, parallel computing, variable complexity modeling—that augment the construction and use of polynomial surrogates
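As a sketch of the core technique surveyed above (with hypothetical data, not from the paper): a full quadratic polynomial response surface fitted to sampled responses by least squares, then evaluated as a cheap surrogate:

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_surrogate(X, y):
    """Fit a full quadratic polynomial response surface by least squares.
    Returns a callable surrogate approximating the sampled response y."""
    def basis(X):
        cols = [np.ones(len(X))]                              # constant term
        cols += [X[:, j] for j in range(X.shape[1])]          # linear terms
        cols += [X[:, i] * X[:, j]                            # quadratic terms
                 for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
    return lambda Xq: basis(np.atleast_2d(np.asarray(Xq, dtype=float))) @ coef

# Hypothetical "expensive simulation": a smooth 2-D response sampled at 30 points.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))
y = 1.0 + 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + X[:, 1] ** 2
surrogate = quadratic_surrogate(X, y)
print(surrogate([[0.3, -0.2]])[0])   # close to the true response 1.81
```

In an MDO loop the optimizer queries the surrogate instead of the disciplinary codes; the ancillary techniques the paper surveys (outlier statistics, variable-complexity modeling, parallel sampling) all serve to make this fit trustworthy from sparse data.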