Massively Parallel Computing and the Search for Jets and Black Holes at the LHC
Massively parallel computing could be the next leap necessary to reach an era
of new discoveries at the LHC after the Higgs discovery.
Scientific computing is a critical component of the LHC experiments, spanning
operations, the trigger, the LHC Computing Grid, simulation, and analysis. One
way to
improve the physics reach of the LHC is to take advantage of the flexibility of
the trigger system by integrating coprocessors based on Graphics Processing
Units (GPUs) or the Many Integrated Core (MIC) architecture into its server
farm. This cutting edge technology provides not only the means to accelerate
existing algorithms, but also the opportunity to develop new algorithms that
select events in the trigger that previously would have evaded detection. In
this article, we describe new algorithms that would allow the trigger to
select new topological signatures, including non-prompt jets and
black-hole-like objects in the silicon tracker.
Comment: 15 pages, 11 figures, submitted to NIM
Multi-GPU Graph Analytics
We present a single-node, multi-GPU programmable graph processing library
that allows programmers to easily extend single-GPU graph algorithms to achieve
scalable performance on large graphs with billions of edges. Reusing the
single-GPU implementations directly, our design requires programmers to
specify only a few algorithm-dependent concerns, hiding most multi-GPU
implementation details. We analyze the theoretical and practical limits to
scalability in the
context of varying graph primitives and datasets. We describe several
optimizations, such as direction-optimizing traversal and a just-enough
memory allocation scheme, for better performance and smaller memory
consumption.
Compared to previous work, we achieve best-of-class performance across
operations and datasets, including excellent strong and weak scalability on
most primitives as we increase the number of GPUs in the system.
Comment: 12 pages. Final version submitted to IPDPS 201
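The direction-optimizing traversal mentioned above switches between top-down
expansion of the frontier and bottom-up scanning of unvisited vertices,
depending on which side touches fewer edges. A minimal serial sketch of the
heuristic follows; the switching threshold `alpha` and the dictionary-based
graph representation are illustrative assumptions, not the library's API:

```python
def bfs_direction_optimizing(adj, source, alpha=2.0):
    """BFS that switches between top-down (scan the frontier's out-edges)
    and bottom-up (scan unvisited vertices' edges) based on a simple
    edge-count heuristic. adj maps vertex -> list of neighbors
    (undirected graph assumed)."""
    n = len(adj)
    depth = {source: 0}
    frontier = {source}
    level = 0
    while frontier:
        # Heuristic: go bottom-up when the frontier's edge count is large
        # relative to the edges incident to still-unvisited vertices.
        frontier_edges = sum(len(adj[v]) for v in frontier)
        unvisited = set(range(n)) - depth.keys()
        if frontier_edges * alpha > sum(len(adj[v]) for v in unvisited):
            # Bottom-up: each unvisited vertex looks for any parent
            # already in the frontier.
            next_frontier = {v for v in unvisited
                             if any(u in frontier for u in adj[v])}
        else:
            # Top-down: expand every frontier vertex's neighbors.
            next_frontier = {w for v in frontier for w in adj[v]
                             if w not in depth}
        level += 1
        for v in next_frontier:
            depth[v] = level
        frontier = next_frontier
    return depth
```

On a GPU each branch would map to a data-parallel kernel; the point here is
only the switching heuristic, not the parallel implementation.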
Local Ensemble Transform Kalman Filter: a non-stationary control law for complex adaptive optics systems on ELTs
We propose a new control law for adaptive optics systems that reduces the
computational burden in the case of an Extremely Large Telescope (ELT) and
copes with non-stationary behaviors of the turbulence.
This approach, using the Ensemble Transform Kalman Filter with localization by
domain decomposition, is called the local ETKF: the pupil of the telescope is
split into local domains, and the update estimates of the turbulent phase on
each domain are computed independently. This data assimilation scheme enables
parallel computation on markedly less data during the update step, adapting
the Kalman Filter to large-scale systems with a non-stationary turbulence
model when the explicit storage and manipulation of extremely large covariance
matrices are impossible. First simulation results are given to assess the
theoretical analysis and to demonstrate the potential of this new control law
for complex adaptive optics systems on ELTs.
Comment: Proceedings of the AO4ELT3 conference; 8 pages, 3 figures
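To make the domain-decomposition idea concrete, here is a small NumPy sketch
of an ETKF analysis step (the standard ensemble-space formulation) applied
independently per subdomain. The function names, the diagonal observation
error covariance, and the domain/observation index lists are illustrative
assumptions, not the authors' code:

```python
import numpy as np

def etkf_update(X, y, H, R_diag):
    """One ETKF analysis step for ensemble X (n_state x m members),
    observations y, linear observation operator H, and diagonal
    observation error covariance R_diag."""
    n, m = X.shape
    xbar = X.mean(axis=1, keepdims=True)
    Xp = X - xbar                        # state perturbations
    Yp = H @ Xp                          # obs-space perturbations
    d = y - (H @ xbar).ravel()           # innovation
    C = Yp.T * (1.0 / R_diag)            # m x p
    Pa = np.linalg.inv((m - 1) * np.eye(m) + C @ Yp)
    wbar = Pa @ (C @ d)                  # mean-update weights
    # Symmetric square root for the perturbation transform.
    evals, evecs = np.linalg.eigh((m - 1) * Pa)
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return xbar + Xp @ (wbar[:, None] + W)

def local_etkf(X, y, H, R_diag, domains, obs_of_domain):
    """Local ETKF: update each state subdomain independently with its
    own observation subset. The per-domain solves are independent, so
    they can run in parallel (e.g. as batched small solves on a GPU)."""
    Xa = X.copy()
    for dom, obs in zip(domains, obs_of_domain):
        Xa[dom] = etkf_update(X[dom], y[obs],
                              H[np.ix_(obs, dom)], R_diag[obs])
    return Xa
```

The key point is that each subdomain only touches its own small covariance,
so the full covariance matrix is never formed, matching the abstract's claim
about avoiding extremely large covariance matrices.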
Gunrock: GPU Graph Analytics
For large-scale graph analytics on the GPU, the irregularity of data access
and control flow, and the complexity of programming GPUs, have presented two
significant challenges to developing a programmable high-performance graph
library. "Gunrock", our graph-processing system designed specifically for the
GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on
operations on a vertex or edge frontier. Gunrock achieves a balance between
performance and expressiveness by coupling high performance GPU computing
primitives and optimization strategies with a high-level programming model that
allows programmers to quickly develop new graph primitives with small code size
and minimal GPU programming knowledge. We characterize the performance of
various optimization strategies and evaluate Gunrock's overall performance on
different GPU architectures on a wide range of graph primitives that span from
traversal-based algorithms and ranking algorithms, to triangle counting and
bipartite-graph-based algorithms. The results show that on a single GPU,
Gunrock has on average at least an order of magnitude speedup over Boost and
PowerGraph, comparable performance to the fastest GPU hardwired primitives and
CPU shared-memory graph libraries such as Ligra and Galois, and better
performance than any other GPU high-level graph library.
Comment: 52 pages, invited paper to ACM Transactions on Parallel Computing
(TOPC); an extended version of the PPoPP'16 paper "Gunrock: A High-Performance
Graph Processing Library on the GPU"
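As an illustration of the frontier-centric abstraction described above, the
sketch below expresses BFS as alternating advance and filter operations on a
vertex frontier. This is a serial Python analogy to the data-parallel
operators, with function names of our choosing, not Gunrock's actual API:

```python
def advance(adj, frontier, functor):
    """Advance operator: visit every out-edge of the frontier and keep
    destinations for which the per-edge functor returns True."""
    out = []
    for u in frontier:
        for v in adj[u]:
            if functor(u, v):
                out.append(v)
    return out

def filter_op(frontier, predicate):
    """Filter operator: compact the frontier, dropping duplicates and
    vertices that fail the predicate."""
    seen, out = set(), []
    for v in frontier:
        if v not in seen and predicate(v):
            seen.add(v)
            out.append(v)
    return out

def bfs(adj, source):
    """BFS expressed as repeated advance + filter on a vertex frontier."""
    depth = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        # Advance: expand neighbors of the current frontier.
        raw = advance(adj, frontier, lambda u, v: v not in depth)
        # Filter: deduplicate and drop already-visited vertices.
        frontier = filter_op(raw, lambda v: v not in depth)
        for v in frontier:
            depth[v] = level
    return depth
```

In the real system both operators are load-balanced GPU kernels; the value of
the abstraction is that a new primitive only needs new functors, not new
traversal machinery.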
Massively parallel implicit equal-weights particle filter for ocean drift trajectory forecasting
Forecasting of ocean drift trajectories is important for many applications,
including search and rescue operations, oil spill cleanup, and iceberg risk
mitigation. In an operational setting, forecasts of drift trajectories are
produced from computationally demanding forecasts of three-dimensional ocean
currents. Herein, we investigate a complementary approach for shorter time
scales by applying the recently proposed two-stage implicit equal-weights
particle filter to a simplified ocean model. To achieve this, we present a new
algorithmic design for a data-assimilation system in which all components –
including the model, model errors, and particle filter – take advantage of
massively parallel compute architectures, such as graphics processing units.
Faster computations can enable in-situ and ad-hoc model runs for emergency
management, and larger ensembles for better uncertainty quantification. Using
a challenging test case with near-realistic chaotic instabilities, we run
data-assimilation experiments based on synthetic observations from drifting
and moored buoys, and analyze the trajectory forecasts for the drifters. Our
results show that even sparse drifter observations are sufficient to
significantly improve short-term drift forecasts up to twelve hours. With
equidistant moored buoys observing only 0.1% of the state space, the ensemble
gives an accurate description of the true state after data assimilation,
followed by a high-quality probabilistic forecast.
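For orientation, the assimilation cycle underlying such a system can be
sketched as a plain bootstrap particle filter with systematic resampling.
This is shown only to illustrate the forecast, weighting, and resampling
structure; the paper's two-stage implicit equal-weights filter replaces the
naive weighting step, which degenerates in high dimensions. All names and
the Gaussian likelihood here are illustrative assumptions:

```python
import numpy as np

def particle_filter_step(particles, step_model, observe, y, obs_std, rng):
    """One generic bootstrap particle-filter cycle: propagate each
    ensemble member with stochastic model error, weight each member by
    the likelihood of the observation, then resample."""
    # Forecast: each particle is an independent model run, which is
    # where the massive parallelism lives in practice.
    particles = np.array([step_model(p, rng) for p in particles])
    # Weight by a Gaussian observation likelihood (log-space for safety).
    innov = y - np.array([observe(p) for p in particles])
    logw = -0.5 * np.sum(innov**2, axis=-1) / obs_std**2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Systematic resampling: one uniform draw, n evenly spaced positions.
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    idx = np.minimum(idx, n - 1)   # guard against float round-off
    return particles[idx]
```

After resampling, all particles again carry equal weight, which is the
property the implicit equal-weights construction enforces by design rather
than by brute-force resampling.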
Gunrock: A High-Performance Graph Processing Library on the GPU
For large-scale graph analytics on the GPU, the irregularity of data access
and control flow, and the complexity of programming GPUs have been two
significant challenges for developing a programmable high-performance graph
library. "Gunrock", our graph-processing system designed specifically for the
GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on
operations on a vertex or edge frontier. Gunrock achieves a balance between
performance and expressiveness by coupling high performance GPU computing
primitives and optimization strategies with a high-level programming model that
allows programmers to quickly develop new graph primitives with small code size
and minimal GPU programming knowledge. We evaluate Gunrock on five key graph
primitives and show that Gunrock has on average at least an order of magnitude
speedup over Boost and PowerGraph, comparable performance to the fastest GPU
hardwired primitives, and better performance than any other GPU high-level
graph library.
Comment: 14 pages, accepted by PPoPP'16 (removed the text repetition in the
previous version v5)