Massively Parallel Computing at the Large Hadron Collider up to the HL-LHC
As the Large Hadron Collider (LHC) continues its upward progression in energy
and luminosity towards the planned High-Luminosity LHC (HL-LHC) in 2025, the
challenges of the experiments in processing increasingly complex events will
also continue to increase. Improvements in computing technologies and
algorithms will be a key part of the advances necessary to meet this challenge.
Parallel computing techniques, especially those using massively parallel
computing (MPC), promise to be a significant part of this effort. In these
proceedings, we discuss these algorithms in the specific context of a
particularly important problem: the reconstruction of charged particle tracks
in the trigger algorithms in an experiment, in which high computing performance
is critical for executing the track reconstruction in the available time. We
discuss some areas where parallel computing has already shown benefits to the
LHC experiments, and also demonstrate how a MPC-based trigger at the CMS
experiment could not only improve performance, but also extend the reach of the
CMS trigger system to capture events which are currently not practical to
reconstruct at the trigger level.
Comment: 14 pages, 6 figures. Proceedings of the 2nd International Summer School on Intelligent Signal Processing for Frontier Research and Industry (INFIERI2014), to appear in JINST. Revised version in response to referee comments.
Teaching the Grid: Learning Distributed Computing with the M-grid Framework
A classic challenge within Computer Science is to distribute data and processes so as to take advantage of multiple computers tackling a single problem in a simultaneous and coordinated way. This situation arises in a number of different scenarios, including Grid computing, which is a secure, service-based architecture for tackling massively parallel problems and creating virtual organizations. Although the Grid seems destined to be an important part of the future computing landscape, it is very difficult to learn how to use, as real Grid software requires extensive setup and complex security processes. M-grid mimics the core features of the Grid in a much simpler way, enabling the rapid prototyping of distributed applications. We describe m-grid, explore how it may be used to teach foundation Grid computing skills at the Higher Education level, and report some of our experiences of deploying it as an exercise within a programming course.
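The farm-out/collect pattern that m-grid teaches can be shown in a few lines. The sketch below is a minimal master/worker example; Python's `multiprocessing.Pool` stands in for m-grid's distribution layer, and the integration task and all names are invented for illustration:

```python
from multiprocessing import Pool

# Hypothetical task: midpoint-rule integration of f(x) = x^2 over [0, 1],
# split into sub-intervals, one per worker.
def worker(chunk):
    lo, hi, n = chunk
    h = (hi - lo) / n
    return sum((lo + (i + 0.5) * h) ** 2 for i in range(n)) * h

def run_master(n_workers=4):
    # The master splits the problem, farms the pieces out, and collects.
    chunks = [(i / n_workers, (i + 1) / n_workers, 1000)
              for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(worker, chunks)
    return sum(partials)

if __name__ == "__main__":
    print(round(run_master(), 4))  # close to 1/3, the exact integral
```

The essential teaching point survives the simplification: the master never computes; it only partitions work, dispatches it, and aggregates results.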
Massively parallel I/O: Building an infrastructure for parallel computing
The solution of Grand Challenge Problems will require computations that are too large to fit in the memories of even the largest machines. Inevitably, new designs of I/O systems will be necessary to support them. This report describes work investigating I/O subsystems for massively parallel computers. Specifically, the authors investigate out-of-core algorithms for common scientific calculations and present several theoretical results. They also describe several approaches to parallel I/O, including partitioned secondary storage and choreographed I/O, and the implications of each for massively parallel computing.
Research in Parallel Algorithms and Software for Computational Aerosciences
Phase I is complete for the development of a Computational Fluid Dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed-memory, massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, the SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol, chosen for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and reduce memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall-clock time in a very fine-grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
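The host-side step of a host/node scheme with load balancing can be sketched abstractly. The snippet below uses the classic greedy heuristic: hand each grid block, largest first, to the currently least-loaded processor. The block sizes and the heuristic are illustrative assumptions, not SPLITFLOW's actual scheme:

```python
import heapq

# Greedy load balancing: assign blocks (largest first) to the processor
# with the smallest accumulated load, tracked in a min-heap.
def balance(block_sizes, n_procs):
    heap = [(0, p) for p in range(n_procs)]          # (load, proc id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for blk, size in sorted(enumerate(block_sizes), key=lambda b: -b[1]):
        load, p = heapq.heappop(heap)                # least-loaded proc
        assignment[p].append(blk)
        heapq.heappush(heap, (load + size, p))
    return assignment

blocks = [900, 400, 350, 300, 250, 200, 150, 100]    # cells per block, made up
parts = balance(blocks, 4)
loads = {p: sum(blocks[b] for b in bs) for p, bs in parts.items()}
```

Note the limit the abstract itself observes: no matter how the remaining blocks are spread, one oversized block (here, 900 cells) bounds the achievable balance, which is why partially parallelized work dominates wall-clock time at fine grain.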
Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP) in a massively parallel supercomputing environment – a case study on JUQUEEN (IBM Blue Gene/Q)
Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing non-linear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models, which are discussed in this study using the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP) on JUQUEEN (IBM Blue Gene/Q) at the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm and require memory and load balancing considerations in the exchange of the coupling fields between different component models and in the allocation of computational resources, respectively. These considerations can be addressed with advanced profiling and tracing tools, leading to efficient use of massively parallel computing environments, which is then mainly determined by the parallel performance of the individual component models. However, the problem of model I/O and initialization at the peta-scale requires major attention, because it constitutes a true big data challenge in view of future exa-scale capabilities and remains unsolved.
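The MPMD coupling pattern can be reduced to a toy example: two component programs run as separate processes and exchange a coupling field each step. The physics and all names below are invented; this only illustrates the send/receive pattern behind MPMD coupling, not TerrSysMP itself:

```python
from multiprocessing import Pipe, Process

# "Land" component: receives a forcing, replies with a toy surface flux.
def land(conn):
    while True:
        temp = conn.recv()
        if temp is None:                 # shutdown signal
            break
        conn.send(0.5 * (20.0 - temp))   # toy surface-flux response
    conn.close()

# "Atmosphere" driver: runs as its own process, exchanging fields per step.
def run_coupled(steps=5):
    a_end, l_end = Pipe()
    p = Process(target=land, args=(l_end,))
    p.start()
    temp = 15.0
    for _ in range(steps):
        a_end.send(temp)                 # send forcing to land
        flux = a_end.recv()              # receive flux back
        temp += 0.1 * flux               # toy atmosphere update
    a_end.send(None)
    p.join()
    return temp

if __name__ == "__main__":
    print(run_coupled())                 # drifts from 15.0 toward 20.0
```

The load-balancing concern the abstract raises appears even here: if one component computes much longer per step than the other, the `recv` calls become idle waiting, which is exactly what profiling and tracing tools expose at scale.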
Evolutionary Neural Network Based Energy Consumption Forecast for Cloud Computing
The success of Hadoop, an open-source
framework for massively parallel and distributed computing, is
expected to drive energy consumption of cloud data centers to
new highs as service providers continue to add new
infrastructure, services and capabilities to meet the market
demands. While current research on data center airflow management, HVAC (Heating, Ventilation and Air Conditioning) system design, workload distribution and optimization, and energy-efficient computing hardware and software contributes to improved energy efficiency, energy forecasting in cloud computing remains a challenge. This
paper reports an evolutionary computation based modeling
and forecasting approach to this problem. In particular, an
evolutionary neural network is developed and structurally
optimized to forecast the energy load of a cloud data center.
The results, both in terms of forecasting speed and accuracy,
suggest that the evolutionary neural network approach to
energy consumption forecasting for cloud computing is highly
promising.
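The structural optimization itself is specific to the paper, but the underlying evolutionary loop is simple to illustrate. The sketch below evolves only the weights of a fixed tiny network with a (1+4) evolution strategy against a synthetic daily load curve; the curve, network shape, and all names are invented for illustration:

```python
import math
import random

random.seed(0)

# Synthetic daily energy-load curve (made up for illustration).
xs = [h / 24 for h in range(24)]
ys = [0.5 + 0.4 * math.sin(2 * math.pi * x) for x in xs]

def predict(w, x):
    # One hidden layer of 3 tanh units; w packs all 10 parameters.
    h = [math.tanh(w[2 * i] * x + w[2 * i + 1]) for i in range(3)]
    return w[6] * h[0] + w[7] * h[1] + w[8] * h[2] + w[9]

def mse(w):
    return sum((predict(w, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def evolve(generations=300, offspring=4, sigma=0.1):
    # (1 + offspring) evolution strategy: mutate the parent, keep the best.
    best = [random.gauss(0, 1) for _ in range(10)]
    start_err = mse(best)
    for _ in range(generations):
        kids = [[g + random.gauss(0, sigma) for g in best]
                for _ in range(offspring)]
        best = min(kids + [best], key=mse)
    return start_err, mse(best)
```

A structurally evolving variant would additionally mutate the number of hidden units, re-evaluating fitness after each architectural change; the elitist select-the-best loop stays the same.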
Big data scalability of BayesPhylogenies on Harvard's Ozone 12k cores
Computational Phylogenetics is classed as a grand challenge data-driven problem in the fourth paradigm of scientific discovery due to the exponential growth in genomic data, the computational challenge, and the potential for vast impact on data-driven biosciences. Petascale and exascale computing offer the prospect of scaling phylogenetics to big data levels. However, the computational complexity of even approximate Bayesian methods for phylogenetic inference requires scalable analysis for big data applications. There is limited study on the scalability characteristics of existing computational models for petascale-class massively parallel computers. In this paper we present strong and weak scaling performance analysis of BayesPhylogenies on Harvard's Ozone 12k cores. We perform evaluations on multiple data sizes to infer the scaling complexity, and find that strong scaling techniques, along with novel methods for communication reduction, are necessary if computational models are to overcome limitations on emerging complex parallel architectures with multiple levels of concurrency. The results of this study can guide the design and implementation of scalable MCMC-based computational models for Bayesian inference on emerging petascale and exascale systems.
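The bookkeeping behind a strong-scaling study is worth making explicit: from wall-clock times measured at increasing core counts on a fixed problem size, one derives speedup and parallel efficiency relative to the smallest run. The timings below are invented for illustration, not measurements from Ozone:

```python
# Compute (cores, time, speedup, efficiency) rows for a strong-scaling
# study: speedup = t_base / t, efficiency = speedup / (cores / cores_base).
def scaling_table(cores, times):
    base_c, base_t = cores[0], times[0]
    rows = []
    for c, t in zip(cores, times):
        speedup = base_t / t
        efficiency = speedup / (c / base_c)
        rows.append((c, t, round(speedup, 2), round(efficiency, 2)))
    return rows

cores = [96, 192, 384, 768]                 # hypothetical core counts
times = [1000.0, 520.0, 290.0, 180.0]       # hypothetical seconds per run
table = scaling_table(cores, times)
```

Falling efficiency at high core counts in such a table is the signature of the communication and concurrency limits the study identifies; a weak-scaling study instead grows the problem size with the core count and ideally keeps the time constant.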