The NorduGrid architecture and tools
The NorduGrid project designed a Grid architecture with the primary goal of
meeting the requirements of the production tasks of the LHC experiments. While
it is meant to be a rather generic Grid system, it puts emphasis on batch
processing suitable for problems encountered in High Energy Physics. The
NorduGrid architecture implementation uses the Globus Toolkit as the foundation
for various components developed by the project. While introducing new
services, NorduGrid does not modify the Globus tools, so that the two can
co-exist. The NorduGrid topology is decentralized, avoiding a single point of
failure. The NorduGrid architecture is thus light-weight, non-invasive and
dynamic, while remaining robust and scalable, capable of meeting the most
challenging computing tasks of High Energy Physics.
Comment: Talk from the 2003 Computing in High Energy Physics and Nuclear
Physics (CHEP03), La Jolla, CA, USA, March 2003; 9 pages, LaTeX, 4 figures.
PSN MOAT00
Many-core applications to online track reconstruction in HEP experiments
Interest in parallel architectures applied to real-time selections is growing
in High Energy Physics (HEP) experiments. In this paper we describe performance
measurements of Graphics Processing Units (GPUs) and of the Intel Many
Integrated Core architecture (MIC) when applied to a typical HEP online task:
the selection of events based on the trajectories of charged particles. As a
benchmark we use a scaled-up version of the algorithm used at the CDF
experiment at the Tevatron for online track reconstruction, the SVT algorithm,
as a realistic test case for low-latency trigger systems using new computing
architectures for LHC experiments. We examine the complexity/performance
trade-off in porting existing serial algorithms to many-core devices.
Measurements of both data processing and data transfer latency are shown,
considering different I/O strategies to/from the parallel devices.
Comment: Proceedings of the 20th International Conference on Computing in
High Energy and Nuclear Physics (CHEP); missing acks added
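The trade-off between I/O strategies mentioned in the abstract can be illustrated with a toy latency model. This sketch is not taken from the paper's measurements; the stage times and chunk counts are assumptions chosen only to show why splitting a batch into chunks lets data transfer overlap with processing on the device.

```python
def serial_latency(t_in, t_proc, t_out):
    """Naive strategy: transfer in, process, transfer out, one after another."""
    return t_in + t_proc + t_out

def pipelined_latency(t_in, t_proc, t_out, n_chunks):
    """Chunked strategy: the event batch is split into n_chunks and the three
    stages (host-to-device copy, processing, device-to-host copy) overlap.
    Total time is one full pass through the pipeline for the first chunk,
    plus (n_chunks - 1) further steps at the pace of the slowest stage."""
    a, b, c = t_in / n_chunks, t_proc / n_chunks, t_out / n_chunks
    return (a + b + c) + (n_chunks - 1) * max(a, b, c)
```

With, say, 30 microseconds per stage and three chunks, the pipelined variant finishes in 50 microseconds versus 90 serially; the benefit is largest when the transfer and processing times are balanced, which is exactly the regime where I/O strategy matters most.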
Edge-Enabled Metaverse: The Convergence of Metaverse and Mobile Edge Computing
Metaverse is a virtual environment where users are represented by their avatars to navigate a virtual world that has strong links with its physical counterpart. State-of-the-art Metaverse architectures rely on a cloud-based approach for avatar physics emulation and graphics rendering. The centralized architecture of such systems is unfavorable, as it suffers from several drawbacks caused by the long latency of cloud access, such as low-quality visualization. To this end, we propose a Fog-Edge hybrid computing architecture for Metaverse applications that leverages an edge-enabled distributed computing paradigm. Metaverse applications use the computing power of edge devices to perform heavy tasks, such as collision detection in the virtual universe and high-computational 3D physics in virtual simulations: the computations for a Metaverse entity, such as collision detection or physics emulation, are performed at the device of the associated physical entity. To validate the effectiveness of the proposed architecture, we simulate a distributed social Metaverse application. The simulation results show that the proposed architecture can reduce latency by 50% compared with cloud-based Metaverse applications.
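The latency argument above can be made concrete with a toy per-frame model. The numbers below are illustrative assumptions, not results from the paper's simulation: a distant cloud has a long round-trip time but fast servers, while a nearby edge device has a short round trip but less compute power.

```python
def frame_latency(rtt_ms, compute_ms):
    """One interaction frame: network round trip to wherever the physics or
    rendering runs, plus the computation time itself."""
    return rtt_ms + compute_ms

# Illustrative (assumed) numbers only: they are chosen to show how a shorter
# round trip can outweigh slower edge hardware, mirroring the 50% figure.
cloud_ms = frame_latency(rtt_ms=80.0, compute_ms=20.0)
edge_ms = frame_latency(rtt_ms=10.0, compute_ms=40.0)
reduction = 1.0 - edge_ms / cloud_ms
```

Under these assumed numbers the edge placement halves the frame latency (100 ms down to 50 ms), even though the edge device needs twice as long for the computation itself.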
GPU-based Real-time Triggering in the NA62 Experiment
Over the last few years the GPGPU (General-Purpose computing on Graphics
Processing Units) paradigm has represented a remarkable development in the
world of computing. Computing for High Energy Physics is no exception: several
works have demonstrated the effectiveness of integrating GPU-based systems in
the high-level triggers of different experiments. On the other hand, the use of
GPUs in low-level trigger systems, characterized by stringent real-time
constraints such as a tight time budget and high throughput, poses several
challenges. In this paper we focus on the low-level trigger of the CERN NA62
experiment, investigating the use of real-time computing on GPUs in this
synchronous system. Our approach aims at harnessing the GPU computing power to
build, in real time, refined physics-related trigger primitives for the RICH
detector, since knowledge of the Cherenkov ring parameters allows stringent
conditions to be placed on data selection at trigger level. The latencies of
all components of the trigger chain have been analyzed, showing that networking
is the most critical one. To keep the latency of the data transfer task under
control, we devised NaNet, an FPGA-based PCIe Network Interface Card (NIC) with
GPUDirect capabilities. For the processing task, we developed specific
multiple-ring trigger algorithms to leverage the parallel architecture of GPUs
and increase the processing throughput to keep up with the high event rate.
Results obtained during the first months of the 2016 NA62 run are presented
and discussed.
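The RICH trigger primitives described above hinge on extracting ring parameters from detector hit positions. The paper's multiple-ring GPU algorithms are not reproduced here; as a hedged illustration of the core single-ring step, the sketch below uses an algebraic (Kasa) least-squares circle fit. The function name and pure-Python formulation are assumptions for illustration, not the NA62 implementation.

```python
import math

def fit_ring(points):
    """Algebraic (Kasa) least-squares circle fit.

    Solves the linear model x^2 + y^2 = 2*a*x + 2*b*y + c for (a, b, c),
    giving ring center (a, b) and radius sqrt(c + a^2 + b^2)."""
    # Accumulate the 3x3 normal equations M v = r for v = (a, b, c).
    M = [[0.0] * 3 for _ in range(3)]
    r = [0.0] * 3
    for x, y in points:
        row = (2.0 * x, 2.0 * y, 1.0)
        rhs = x * x + y * y
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            r[i] += row[i] * rhs
    # Gaussian elimination with partial pivoting on the small system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        r[col], r[piv] = r[piv], r[col]
        for k in range(col + 1, 3):
            f = M[k][col] / M[col][col]
            for j in range(col, 3):
                M[k][j] -= f * M[col][j]
            r[k] -= f * r[col]
    v = [0.0] * 3
    for i in range(2, -1, -1):
        v[i] = (r[i] - sum(M[i][j] * v[j] for j in range(i + 1, 3))) / M[i][i]
    a, b, c = v
    return a, b, math.sqrt(c + a * a + b * b)
```

The fit is linear, so it maps naturally onto a parallel reduction: each GPU thread can accumulate its hits' contributions to the normal equations, which is one reason algebraic fits are attractive under a tight trigger time budget.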
Scalability tests of R-GMA-based grid job monitoring system for CMS Monte Carlo data production
Copyright (c) 2004 IEEE. High-energy physics experiments, such as the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC), have large-scale data processing computing requirements. The Grid has been chosen as the solution. One important challenge when using the Grid for large-scale data processing is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. The Relational Grid Monitoring Architecture (R-GMA) is a monitoring and information management service for distributed resources based on the GMA of the Global Grid Forum. We report on the first measurements of R-GMA as part of a monitoring architecture to be used for batch submission of multiple Monte Carlo simulation jobs running on a CMS-specific LHC computing grid test bed. Monitoring information was transferred in real time from remote execution nodes back to the submitting host and stored in a database. In scalability tests, the job submission rates supported by successive releases of R-GMA improved significantly, approaching that expected in full-scale production.
The Use of HepRep in GLAST
HepRep is a generic, hierarchical format for the description of graphics
representables that can be augmented by physics information and relational
properties. It was developed for high energy physics event display applications
and is especially suited to client/server or component frameworks. The GLAST
experiment, an international effort led by NASA for a gamma-ray telescope to
launch in 2006, chose HepRep to provide a flexible, extensible and maintainable
framework for their event display without tying their users to any one graphics
application. To support HepRep in their GAUDI infrastructure, GLAST developed a
HepRep filler and builder architecture. The architecture hides the details of
XML and CORBA in a set of base and helper classes, allowing physics experts to
focus on what data they want to represent. GLAST has two GAUDI services:
HepRepSvc, which registers HepRep fillers in a global registry and allows the
HepRep to be exported to XML, and CorbaSvc, which allows the HepRep to be
published through a CORBA interface and allows the client application to feed
commands back to GAUDI (such as start next event, or run some GAUDI
algorithm). GLAST's HepRep solution gives users a choice of client
applications, WIRED (written in Java) or FRED (written in C++ and Ruby), and
leaves them free to move to any future HepRep-compliant event display.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, CA, USA, March 2003; 9 pages pdf, 15 figures. PSN THLT00
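The filler-and-registry pattern described in this abstract can be sketched in miniature. All names below are invented for illustration; the real GLAST services (HepRepSvc, CorbaSvc) are GAUDI components with XML and CORBA layers, none of which appear in this toy.

```python
class FillerRegistry:
    """Toy sketch of the filler/builder idea: fillers register with a central
    service, which walks them to build one hierarchical representation per
    event. Serialization to XML or publication over CORBA would be layered
    on top of the tree this returns."""

    def __init__(self):
        self._fillers = []

    def register(self, filler):
        self._fillers.append(filler)

    def build(self, event):
        # Each registered filler contributes one named subtree.
        return {f.name: f.fill(event) for f in self._fillers}


class TrackFiller:
    """A physics expert writes only this part: which data to represent.
    The transport and format details stay hidden in the framework layer."""
    name = "tracks"

    def fill(self, event):
        return [{"points": hits} for hits in event.get("tracks", [])]
```

The design point this illustrates is the one the abstract emphasizes: because fillers only produce a neutral hierarchical structure, the same registry can feed any HepRep-compliant client without the physics code knowing which one.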