PHYSICS-AWARE MODEL SIMPLIFICATION FOR INTERACTIVE VIRTUAL ENVIRONMENTS
Rigid body simulation is an integral part of Virtual Environments (VE) for autonomous planning, training, and design tasks. The underlying physics-based simulation of a VE must be both accurate and computationally fast enough for the intended application, which are unfortunately conflicting requirements. Two ways to perform fast, high-fidelity physics-based simulation are: (1) model simplification, and (2) parallel computation. Model simplification can enable simulation at an interactive rate while introducing an acceptable level of error. Currently, manual model simplification is the most common way of achieving simulation speedup, but it is time consuming. Hence, to reduce the development time of VEs, automated model simplification is needed. This dissertation presents an automated model simplification approach based on geometric reasoning, spatial decomposition, and temporal coherence. Geometric reasoning is used to develop an accessibility-based algorithm for removing portions of geometric models that play no role in rigid body to rigid body interaction simulation. Removing such inaccessible portions of the interacting rigid body models has no influence on simulation accuracy but reduces computation time significantly. Spatial decomposition is used to develop a clustering algorithm that reduces the number of fluid pressure computations, resulting in significant speedup of rigid body and fluid interaction simulation. A temporal coherence algorithm reuses computed force values from rigid body to fluid interactions, based on the coherence of the fluid surrounding the rigid body. The simulations are further sped up by performing the computations on the graphics processing unit (GPU). The dissertation also presents the issues pertaining to the development of parallel algorithms for rigid body simulations, both on multi-core processors and on the GPU.
The developed algorithms have enabled real-time, high-fidelity, six degrees of freedom, time-domain simulation of unmanned sea surface vehicles (USSV) and can be used for autonomous motion planning, tele-operation, and learning-from-demonstration applications.
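The temporal coherence idea described above can be sketched minimally: cache the last computed rigid body/fluid force and reuse it while the surrounding fluid state stays within a tolerance, recomputing only when coherence is lost. This is an illustrative sketch, not the dissertation's implementation; the function names, the state-distance test, and the tolerance `tol` are assumptions.

```python
import numpy as np

def coherent_force(fluid_state, cached_state, cached_force, compute_force, tol=1e-3):
    """Reuse a previously computed rigid-body/fluid force while the
    surrounding fluid state stays within a coherence tolerance.

    Returns (force, state_used_for_cache).
    """
    if cached_state is not None and np.linalg.norm(fluid_state - cached_state) < tol:
        # Fluid around the body is coherent with the cached sample: reuse.
        return cached_force, cached_state
    # Coherence lost: pay for a full force computation and refresh the cache.
    force = compute_force(fluid_state)
    return force, fluid_state.copy()
```

The payoff is that the (expensive) `compute_force` call is skipped on most frames, at the cost of a bounded, tolerance-controlled error.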
Adaptive GPU-accelerated force calculation for interactive rigid molecular docking using haptics
Molecular docking systems model and simulate in silico the interactions of intermolecular binding. Haptics-assisted docking enables the user to interact with the simulation via their sense of touch, but the sensitivity of the human haptic system imposes a stringent time constraint on the computation of forces. To deliver smooth, stable, high-fidelity feedback, the haptic feedback loop should run at rates of 500 Hz to 1 kHz. We present an adaptive force calculation approach that can be executed in parallel on a wide range of Graphics Processing Units (GPUs) for interactive haptics-assisted docking, with wider applicability to molecular simulations. Prior to the interactive session, either a regular grid or an octree is selected according to the available GPU memory to determine the set of interatomic interactions within a cutoff distance. The total force is then calculated from this set. The approach can achieve force updates in less than 2 ms for molecular structures comprising hundreds of thousands of atoms each, with performance improvements of up to 90 times the speed of current CPU-based force calculation approaches used in interactive docking. Furthermore, it overcomes several computational limitations of previous approaches, such as pre-computed force grids, and could potentially be used to model receptor flexibility at haptic refresh rates.
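The cutoff-distance step described in this abstract is the classic cell-grid trick: hash atoms into cells whose edge equals the cutoff, so each atom's candidate interaction partners come only from the 27 surrounding cells instead of all N atoms. A minimal CPU-side sketch under that assumption (the paper's GPU grid and octree variants are more elaborate; function names here are illustrative):

```python
import numpy as np
from collections import defaultdict

def build_grid(positions, cutoff):
    """Hash each atom index into a uniform grid with cell size = cutoff."""
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[tuple((p // cutoff).astype(int))].append(i)
    return grid

def neighbors_within_cutoff(i, positions, grid, cutoff):
    """Atoms within the cutoff of atom i, gathered from the 27 cells
    around i's cell -- the candidate set the force sum runs over."""
    ci = (positions[i] // cutoff).astype(int)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((ci[0] + dx, ci[1] + dy, ci[2] + dz), ()):
                    if j != i and np.linalg.norm(positions[j] - positions[i]) < cutoff:
                        out.append(j)
    return out
```

On a GPU, one thread (or thread block) per atom walks its 27 cells in parallel, which is what makes sub-2 ms updates over hundreds of thousands of atoms plausible.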
A GPU-accelerated package for simulation of flow in nanoporous source rocks with many-body dissipative particle dynamics
Mesoscopic simulations of hydrocarbon flow in source shales are challenging,
in part due to the heterogeneous shale pores with sizes ranging from a few
nanometers to a few micrometers. Additionally, the sub-continuum fluid-fluid
and fluid-solid interactions in nano- to micro-scale shale pores, which are
physically and chemically sophisticated, must be captured. To address those
challenges, we present a GPU-accelerated package for simulation of flow in
nano- to micro-pore networks with a many-body dissipative particle dynamics
(mDPD) mesoscale model. Based on a fully distributed parallel paradigm, the
code offloads all intensive workloads on GPUs. Other advancements, such as
smart particle packing and no-slip boundary condition in complex pore
geometries, are also implemented for the construction and the simulation of the
realistic shale pores from 3D nanometer-resolution stack images. Our code is
validated for accuracy and compared against the CPU counterpart for speedup. In
our benchmark tests, the code delivers nearly perfect strong scaling and weak
scaling (with up to 512 million particles) on up to 512 K20X GPUs on Oak Ridge
National Laboratory's (ORNL) Titan supercomputer. Moreover, a single-GPU
benchmark on ORNL's SummitDev and IBM's AC922 suggests that the host-to-device
NVLink can boost performance over PCIe by a remarkable 40%. Lastly, we
demonstrate, through a flow simulation in realistic shale pores, that the CPU
counterpart requires 840 Power9 cores to rival the performance delivered by our
package with four V100 GPUs on ORNL's Summit architecture. This simulation
package enables quick-turnaround and high-throughput mesoscopic numerical
simulations for investigating complex flow phenomena in nano- to micro-porous
rocks with realistic pore geometries.
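The mDPD model named in this abstract differs from standard DPD chiefly in its conservative force, which combines an attractive pair term with a density-dependent repulsive term (the Warren-style form F_C = A·w(r, r_c) + B·(ρ_i + ρ_j)·w(r, r_d), with w(r, r_cut) = 1 − r/r_cut). A minimal sketch of that pairwise magnitude; the parameter values A, B, r_c, r_d below are illustrative defaults, not the package's:

```python
def mdpd_conservative(r, rho_i, rho_j, A=-40.0, B=25.0, rc=1.0, rd=0.75):
    """Many-body DPD conservative force magnitude between two particles
    separated by distance r with local number densities rho_i, rho_j.

    A < 0 gives long-range attraction (cut at rc); the B term is a
    density-dependent repulsion with the shorter cutoff rd. Parameter
    values here are illustrative, not those of the paper's package.
    """
    def w(r, rcut):
        # Standard linear DPD weight, zero beyond the cutoff.
        return max(0.0, 1.0 - r / rcut)

    return A * w(r, rc) + B * (rho_i + rho_j) * w(r, rd)
```

It is this extra density dependence (and the local-density computation it requires) that makes mDPD able to model free surfaces and wetting in shale pores, and also what the GPU kernels must evaluate per particle pair.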
A new gravitational N-body simulation algorithm for investigation of cosmological chaotic advection
Recently, alternative approaches in cosmology have sought to explain the nature of
dark matter as a direct result of the non-linear spacetime curvature due to
different types of deformation potentials. In this context, a key test for this
hypothesis is to examine the effects of deformation on the evolution of
large-scale structures. An important requirement for the fine analysis of this pure
gravitational signature (without dark matter elements) is to characterize the
position of a galaxy during its trajectory to the gravitational collapse of
superclusters at low redshifts. In this context, each element in a
gravitational N-body simulation behaves as a tracer of collapse governed by the
process known as chaotic advection (or Lagrangian turbulence). To develop a
detailed study of this new approach, we developed the COsmic LAgrangian
TUrbulence Simulator (COLATUS) to perform gravitational N-body simulations
based on the Compute Unified Device Architecture (CUDA) for graphics processing
units (GPUs). In this paper we report the first robust results obtained from
COLATUS.
Comment: Proceedings of the Sixth International School on Field Theory and
Gravitation, 2012, by the American Institute of Physics.
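At the core of any gravitational N-body code such as the one described above is an acceleration kernel; the simplest variant is the O(N²) direct sum with Plummer softening, which is also the kernel most naturally offloaded to CUDA (one thread per body). A minimal sketch, with illustrative units (G = 1) and softening:

```python
import numpy as np

def nbody_accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct-sum gravitational accelerations with Plummer softening eps.

    pos: (N, 3) positions, mass: (N,) masses. O(N^2) pair sum; in a GPU
    code each body's inner sum runs on its own thread.
    """
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                      # vectors from body i to all bodies
        r2 = (d * d).sum(axis=1) + eps * eps  # softened squared distances
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                       # exclude self-interaction
        acc[i] = G * (mass[:, None] * d * inv_r3[:, None]).sum(axis=0)
    return acc
```

Tracking each body's trajectory through many such steps is exactly what lets the simulation particles serve as Lagrangian tracers of collapse.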
Molecular Dynamics Simulation of Macromolecules Using Graphics Processing Unit
Molecular dynamics (MD) simulation is a powerful computational tool to study
the behavior of macromolecular systems. However, many simulations in this field are
limited in spatial or temporal scale by the available computational resources.
In recent years, the graphics processing unit (GPU) has provided unprecedented
computational power for scientific applications. Many MD algorithms suit
the multithreaded nature of the GPU. In this paper, MD algorithms for macromolecular
systems that run entirely on the GPU are presented. Compared to MD simulation
with the free software GROMACS on a single CPU core, our codes achieve about a 10
times speedup on a single GPU. For validation, we have performed MD
simulations of polymer crystallization on the GPU, and the observed results
agree perfectly with computations on the CPU. Therefore, our single-GPU codes
already provide an inexpensive alternative to macromolecular simulations on
traditional CPU clusters, and they can also be used as a basis for developing
parallel GPU programs to further speed up the computations.
Comment: 21 pages, 16 figures.
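The inner loop of an MD code like the one described above is a time integrator wrapped around a force evaluation; velocity Verlet is the standard choice, and the force call is the part GPU codes parallelize over atoms. A minimal single-step sketch (function and parameter names are illustrative, not the paper's API):

```python
import numpy as np

def velocity_verlet(pos, vel, forces_fn, mass, dt):
    """One velocity-Verlet MD step.

    forces_fn(pos) -> forces is the hot kernel that GPU MD codes
    evaluate in parallel, one thread (or warp) per atom.
    """
    f0 = forces_fn(pos)
    pos_new = pos + vel * dt + 0.5 * (f0 / mass) * dt * dt
    f1 = forces_fn(pos_new)                       # forces at the new positions
    vel_new = vel + 0.5 * ((f0 + f1) / mass) * dt # average old and new forces
    return pos_new, vel_new
```

Because the integrator is symplectic, total energy is approximately conserved over long runs, which is the usual sanity check when validating a GPU port against a CPU reference such as GROMACS.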