21st Century Simulation: Exploiting High Performance Computing and Data Analysis
This paper identifies, defines, and analyzes the limitations imposed on Modeling and Simulation by outmoded
paradigms in computer utilization and data analysis. The authors then discuss two emerging capabilities to
overcome these limitations: High Performance Parallel Computing and Advanced Data Analysis. First, parallel
computing, in supercomputers and Linux clusters, has proven effective by providing users an advantage in
computing power. This has been characterized as a ten-year lead over the use of single-processor computers.
Second, advanced data analysis techniques are both necessitated and enabled by this leap in computing power.
JFCOM's JESPP project is one of the few simulation initiatives to effectively embrace these concepts. The
challenges facing the defense analyst today have grown to include the need to consider operations among non-combatant
populations, to focus on impacts to civilian infrastructure, to differentiate combatants from non-combatants,
and to understand non-linear, asymmetric warfare. These requirements stretch both current
computational techniques and data analysis methodologies. In this paper, documented examples and potential
solutions will be advanced. The authors discuss the paths to successful implementation based on their experience.
Reviewed technologies include parallel computing, cluster computing, grid computing, data logging, operations research,
database advances, data mining, evolutionary computing, genetic algorithms, and Monte Carlo sensitivity analyses.
The modeling and simulation community has significant potential to provide more opportunities for training and
analysis. Simulations must include increasingly sophisticated environments, better emulations of foes, and more
realistic civilian populations. Overcoming the implementation challenges will produce dramatically better insights,
for trainees and analysts. High Performance Parallel Computing and Advanced Data Analysis promise increased
understanding of future vulnerabilities to help avoid unneeded mission failures and unacceptable personnel losses.
The authors set forth road maps for rapid prototyping and adoption of advanced capabilities. They discuss the
beneficial impact of embracing these technologies, as well as the risk mitigation required to ensure success.
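Of the technologies the authors review, Monte Carlo sensitivity analysis is the easiest to illustrate in miniature. The sketch below is not drawn from the JESPP project; the toy `model`, its two uncertain inputs, and the correlation-based sensitivity measure are all illustrative assumptions.

```python
import random
import statistics


def model(armor, supply_rate):
    """Hypothetical engagement model: mission score driven by two uncertain inputs."""
    return 3.0 * armor + 0.5 * supply_rate + random.gauss(0, 0.1)


def pearson(xs, ys):
    """Pearson correlation, computed with stdlib only."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


def mc_sensitivity(n=5000, seed=42):
    """Sample both inputs, run the model, and rank inputs by output correlation."""
    random.seed(seed)
    a_samples, s_samples, outputs = [], [], []
    for _ in range(n):
        a = random.uniform(0.0, 1.0)  # uncertain input 1
        s = random.uniform(0.0, 1.0)  # uncertain input 2
        a_samples.append(a)
        s_samples.append(s)
        outputs.append(model(a, s))
    # Correlation of each input with the output approximates its influence.
    return pearson(a_samples, outputs), pearson(s_samples, outputs)
```

Run over many samples, the correlations reveal which uncertain input dominates the output variance, which is the core question a sensitivity analysis of a simulation asks.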
Enable High-resolution, Real-time Ensemble Simulation and Data Assimilation of Flood Inundation using Distributed GPU Parallelization
Numerical modeling of the intensity and evolution of flood events is
affected by multiple sources of uncertainty, such as precipitation and land
surface conditions. To quantify and curb these uncertainties, an ensemble-based
simulation and data assimilation model for pluvial flood inundation is
constructed. The shallow water equations are decoupled in the x and y directions,
and the inertial form of the Saint-Venant equation is chosen to enable fast
computation. The probability distribution of the input and output factors is
described using Monte Carlo samples. Subsequently, a particle filter is
incorporated to enable the assimilation of hydrological observations and
improve prediction accuracy. To achieve high-resolution, real-time ensemble
simulation, heterogeneous computing technologies based on CUDA (compute unified
device architecture) and a distributed storage multi-GPU (graphics processing
unit) system are used. Multiple optimization techniques are employed to ensure the
parallel efficiency and scalability of the simulation program. Taking an urban
area of Fuzhou, China as an example, a model with a 3-m spatial resolution and
4.0 million units is constructed, and 8 Tesla P100 GPUs are used for the
parallel calculation of 96 model instances. Under these settings, the ensemble
simulation of a 1-hour hydraulic process takes 2.0 minutes, an estimated
speedup of 2680x over a single-threaded CPU run. The
calculation results indicate that the particle filter method effectively
constrains simulation uncertainty while providing confidence intervals for
key hydrological elements such as streamflow, submerged area, and submerged
water depth. The presented approaches show promising capabilities in handling
the uncertainties in flood modeling as well as enhancing prediction efficiency.
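The assimilation cycle described above (propagate an ensemble, weight by observations, resample) is the bootstrap particle filter. The toy below sketches one such cycle on a single scalar depth state; the forward model, noise levels, and gauge readings are invented stand-ins, not the paper's hydraulic model.

```python
import math
import random


def particle_filter_step(particles, observation, obs_std=0.05, proc_std=0.02):
    """One assimilation cycle of a bootstrap particle filter on a toy 1-D depth state."""
    # 1. Propagate each particle through a toy forward model plus process noise.
    particles = [max(0.0, p + 0.1 * (1.0 - p) + random.gauss(0, proc_std))
                 for p in particles]
    # 2. Weight by the Gaussian likelihood of the gauge observation.
    weights = [math.exp(-0.5 * ((p - observation) / obs_std) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample to concentrate particles in high-likelihood regions.
    return random.choices(particles, weights=weights, k=len(particles))


random.seed(1)
ensemble = [random.uniform(0.0, 2.0) for _ in range(500)]  # uncertain initial depths
for obs in [0.6, 0.62, 0.65, 0.66]:  # synthetic gauge readings
    ensemble = particle_filter_step(ensemble, obs)
mean_depth = sum(ensemble) / len(ensemble)
```

After a few cycles the ensemble collapses around the observations, which is exactly the uncertainty-constraining behavior the abstract reports; the spread of the surviving particles supplies the confidence interval.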
QarSUMO: A Parallel, Congestion-optimized Traffic Simulator
Traffic simulators are important tools for tasks such as urban planning and
transportation management. Microscopic simulators allow per-vehicle movement
simulation, but require longer simulation times. The simulation overhead is
exacerbated when there is traffic congestion and most vehicles move slowly.
This particularly hurts the productivity of emerging urban computing studies
based on reinforcement learning, where traffic simulations are used heavily and
repeatedly to design policies that optimize traffic-related tasks.
In this paper, we develop QarSUMO, a parallel, congestion-optimized version
of the popular SUMO open-source traffic simulator. QarSUMO performs high-level
parallelization on top of SUMO, to utilize powerful multi-core servers and
enables future extension to multi-node parallel simulation if necessary. The
proposed design, while partly sacrificing speedup, makes QarSUMO compatible
with future SUMO improvements. We further contribute such an improvement by
modifying the SUMO simulation engine for congestion scenarios where the update
computation of consecutive and slow-moving vehicles can be simplified.
We evaluate QarSUMO with both real-world and synthetic road networks and
traffic data, and examine its execution time as well as its simulation accuracy
relative to the original, sequential SUMO.
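The engine change sketched in the abstract, simplifying the update of consecutive slow-moving vehicles, can be illustrated with a toy car-following update that takes a cheap fast path for jammed traffic. This is not SUMO's actual code; `update_vehicle`, the thresholds, and the safe-speed rule are all illustrative assumptions.

```python
JAM_SPEED = 0.5  # m/s: below this, a vehicle is treated as queued (assumed threshold)
MIN_GAP = 2.0    # m: standstill gap to the leader (assumed value)


def update_vehicle(pos, speed, leader_pos, leader_speed,
                   dt=1.0, accel=1.0, max_speed=13.9):
    """One car-following step with a cheap fast path for jammed traffic."""
    gap = leader_pos - pos - MIN_GAP
    # Fast path: both vehicles crawling in a queue, so skip the full
    # car-following computation and simply hold position.
    if speed < JAM_SPEED and leader_speed < JAM_SPEED and gap < MIN_GAP:
        return pos, 0.0
    # Full path: a simple safe-speed update standing in for a real
    # car-following model (accelerate, but never exceed the safe speed).
    safe_speed = max(0.0, gap / dt)
    new_speed = min(speed + accel * dt, max_speed, safe_speed)
    return pos + new_speed * dt, new_speed
```

In congestion most vehicles hit the fast path, so the per-step cost drops roughly in proportion to the share of queued vehicles, which is the regime where the abstract says the overhead is worst.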
Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures
The protein-folding problem has been extensively studied during the last
fifty years. Understanding the dynamics of a protein's global shape, and the influence
of that shape on its biological function, can help us discover new and more effective
drugs for diseases of pharmacological relevance. Different computational approaches
have been developed to predict the three-dimensional
arrangement of a protein's atoms from its sequence. However, the
computational complexity of this problem makes it necessary to search for new models,
novel algorithmic strategies, and hardware platforms that provide solutions in a
reasonable time frame. In this review, we present past and current trends in
protein folding simulation from both perspectives: hardware and software.
Of particular interest to us are the use of inexact solutions to this computationally hard problem, as
well as the hardware platforms that have been used to run these kinds of Soft Computing techniques.
This work is jointly supported by the Fundación Séneca (Agencia Regional de Ciencia y Tecnología, Región de Murcia) under grants 15290/PI/2010 and 18946/JLI/13, by the Spanish MEC and European Commission FEDER under grants TEC2012-37945-C02-02 and TIN2012-31345, and by the Nils Coordinated Mobility program under grant 012-ABEL-CM-2014A, financed in part by the European Regional Development Fund (ERDF). We also thank NVIDIA for a hardware donation within the UCAM GPU educational and research centers.
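Among the inexact Soft Computing techniques the review covers, a genetic algorithm is simple to sketch. The toy below minimizes an invented "energy" over a vector of dihedral-like angles; the energy function, operators, and parameters are illustrative assumptions, not any specific method from the review.

```python
import random


def energy(angles):
    """Toy stand-in for a folding energy: minimized when all angles are near 60 degrees."""
    return sum((a - 60.0) ** 2 for a in angles)


def evolve(n_angles=8, pop_size=40, generations=100, seed=7):
    """Minimal genetic algorithm: truncation selection, one-point crossover, point mutation."""
    random.seed(seed)
    pop = [[random.uniform(-180.0, 180.0) for _ in range(n_angles)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)                     # rank by fitness (lower energy is better)
        survivors = pop[: pop_size // 2]         # keep the best half (elitism)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_angles)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_angles)       # mutate one angle
            child[i] += random.gauss(0, 10)
            children.append(child)
        pop = survivors + children
    return min(pop, key=energy)
```

Real folding applications replace `energy` with a physics-based or knowledge-based potential, which is where the HPC platforms surveyed here (clusters, GPUs) carry the cost.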
A Survey of Monte Carlo Tree Search Methods
Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
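The core loop the survey describes (selection, expansion, simulation, backpropagation, with UCB1 guiding selection) can be sketched on a toy game. The "race to 10" game and all constants below are illustrative choices, not anything from the survey itself.

```python
import math
import random

TARGET = 10  # race-to-10: players alternately add 1 or 2; whoever reaches 10 wins


def legal_moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]


class Node:
    def __init__(self, total, parent=None, move=None):
        self.total, self.parent, self.move = total, parent, move
        self.children = []
        self.untried = legal_moves(total)
        self.wins = 0.0   # from the perspective of the player who moved into this node
        self.visits = 0

    def ucb_child(self, c=1.4):
        # UCB1: exploit average win rate, explore rarely visited children.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))


def mcts_best_move(root_total, iterations=5000, seed=0):
    random.seed(seed)
    root = Node(root_total)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend with UCB1 while fully expanded.
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.total + m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: uniformly random playout to the end of the game.
        total, last_mover, turn = node.total, 0, 1  # 0 = player who moved into node
        while total < TARGET:
            total += random.choice(legal_moves(total))
            last_mover, turn = turn, 1 - turn
        # 4. Backpropagation: credit the win, flipping perspective each level.
        reward = 1.0 if last_mover == 0 else 0.0
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

The alternating reward in backpropagation is what lets one statistic serve both players in a two-player zero-sum game; most of the survey's variations (progressive widening, RAVE, different playout policies) modify steps 1 and 3 of this loop.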