Challenges and opportunities integrating LLAMA into AdePT
Particle transport simulations are a cornerstone of high-energy physics
(HEP), constituting a substantial part of the computing workload performed in
HEP. To boost the simulation throughput and energy efficiency, GPUs as
accelerators have been explored in recent years, further driven by the
increasing use of GPUs in HPC systems. The Accelerated demonstrator of electromagnetic
Particle Transport (AdePT) is an advanced prototype for offloading the
simulation of electromagnetic showers in Geant4 to GPUs, and it is still under
continuous development and optimization. Improving memory layout and data
access is vital to use modern, massively parallel GPU hardware efficiently,
contributing to the challenge of migrating traditional CPU-based data
structures to GPUs in AdePT. The Low-Level Abstraction of Memory Access (LLAMA)
is a C++ library that provides a zero-runtime-overhead data structure
abstraction layer, focusing on multidimensional arrays of nested, structured
data. It provides a framework for defining and switching custom memory mappings
at compile time to specify data layouts and instrument data access, making LLAMA
an ideal tool to tackle the memory-related optimization challenges in AdePT.
Our contribution shares insights gained with LLAMA when instrumenting data
access inside AdePT, complementing traditional GPU profiler outputs. We
demonstrate traces of read/write counts to data structure elements as well as
memory heatmaps. The acquired knowledge allowed for subsequent data layout
optimizations.
Increasing Parallelism in the ROOT I/O Subsystem
When processing large amounts of data, the rate at which reading and writing
can take place is a critical factor. High energy physics data processing
relying on ROOT is no exception. The recent parallelisation of the LHC
experiments' software frameworks and the analysis of the ever-increasing amount
of collision data collected by the experiments have further emphasised this
issue, underlining the need to increase the implicit parallelism expressed
within the ROOT I/O subsystem. In this
contribution we highlight the improvements of the ROOT I/O subsystem which
targeted satisfactory scaling behaviour in a multithreaded context. The
effect of parallelism on the individual steps chained by ROOT to read
and write data, namely (de)compression, (de)serialisation, and access to the
storage backend, is discussed. Performance measurements are presented through
real-life examples coming from CMS production workflows on traditional server
platforms and on highly parallel architectures such as the Intel Xeon Phi.
XRootD Client: A robust technology for LHC Run-3 and beyond
During the LHC era, the XRootD framework has proven to be a critical component of numerous data management and software-defined storage solutions (most importantly EOS, the CERN storage technology used for the LHC experiments), and as such it has grown into one of the most strategic storage technologies in the High Energy Physics (HEP) community. Over the last year, significant developments in the area of the XRootD client have been introduced, making it even more reliable and robust, as well as easier to debug. Here, we present an overview of the new XRootD client and its main features, namely support for erasure coding, in-flight data integrity checks, and the new record plug-in and replay tool that allow an I/O pattern to be recorded and then replayed for debugging or benchmarking purposes.
Software Challenges For HL-LHC Data Analysis
The high energy physics community is discussing where investment is needed to
prepare software for the HL-LHC and its unprecedented challenges. The ROOT
project has been one of the central software players in high energy physics for
decades. From its experience and expectations, the ROOT team has distilled a
comprehensive set of areas that should see research and development in the
context of data analysis software, to make the best use of the HL-LHC's physics
potential. This work shows what these areas could be, why the ROOT team
believes investing in them is needed, which gains are expected, and where
related work is ongoing. It can serve as an indication for future research
proposals and collaborations.
Eccentric strength assessment of hamstring muscles with new technologies: a systematic review of current methods and clinical implications
Background: Given the severe economic and performance implications of hamstring injuries, various attempts have been made to identify their risk factors in order to subsequently develop injury prevention strategies that reduce the risk of these injuries. One of the strategies reported in the scientific literature is the application of interventions with eccentric exercises. To verify the effectiveness of these interventions, different eccentric strength measurements have been carried out with low-cost devices as alternatives to the widely used isokinetic dynamometers and the technically limited handheld dynamometers. Therefore, the purpose of the present systematic review was to summarize the findings of the scientific literature related to the evaluation of eccentric strength of the hamstring muscles with these new technologies.
Methods: Systematic searches of the PubMed, Scopus, and Web of Science databases, from inception up to April 2020, were conducted for peer-reviewed articles written in English that report the eccentric strength of the hamstrings in athletes as assessed by devices other than isokinetic and handheld dynamometers.
Results: Seventeen studies were included in the review, with 4 different devices used and 18 parameters identified. The pooled sample consisted of 2893 participants (97% male, 3% female; age 22 ± 4 years). The parameters used most often were peak force (highest and average), peak torque (average and highest), and between-limb imbalance (left-to-right limb ratio). The evidence on the association between eccentric hamstring strength and both injury risk and athletic performance is inconsistent, and there is no standardized definition or calculation of the parameters used.
Conclusions: The current evidence is insufficient to recommend a practical guide for sports professionals to use these new technologies in their daily routine, owing to the need for standardized definitions and calculations. Furthermore, more studies with female athletes are warranted. Despite these limitations, the eccentric strength of the hamstring muscles assessed by different devices may be recommended for monitoring the neuromuscular status of athletes.
EOS software evolution enabling LHC Run 3
EOS has been the main storage system at CERN for more than a decade, continuously improving in order to meet the ever-evolving requirements of the LHC experiments and the whole physics user community. In order to satisfy the demands of LHC Run-3 in terms of storage performance and the trade-off between cost and capacity, EOS was enhanced with a set of new functionalities and features that we detail in this paper.
First of all, we describe the use of erasure-coded layouts in a large-scale deployment, which enables an efficient use of the available storage capacity while at the same time providing end-users with better throughput when accessing their data. This new operating model implies more coupling between the machines in a cluster, which in turn leads to the next set of EOS improvements that we discuss, targeting I/O traffic shaping, better I/O scheduling policies, and tagged traffic prioritization. Increasing the size of the EOS clusters to cope with experiment demands imposes stringent constraints on data integrity and durability, which we addressed with a redesigned consistency-check engine. Another focus area of EOS development was to minimize the operational load by making the internal operational procedures (draining, balancing, and conversions) more robust and efficient, allowing multiple clusters to be managed easily and avoiding possible scaling issues.
All these improvements, available in the EOS 5 release series, are coupled with the new XRootD 5 framework, which brings additional security features such as TLS support and optimizations for large data transfers such as the page-read and page-write functionalities. Last but not least, the area of authentication/authorization methods has seen important developments with the addition of support for different types of bearer tokens, which we describe along with EOS-specific token extensions. We conclude by highlighting potential areas of the EOS architecture that might require further development or redesign in order to cope with the ever-increasing demands of our end-users.
Operation of the CERN disk storage infrastructure during LHC Run-3
The CERN IT Storage group operates multiple distributed storage systems to support all CERN data storage requirements. The storage and distribution of physics data generated by LHC and non-LHC experiments is one of the biggest challenges the group has to take on during LHC Run-3. EOS [1], the CERN distributed disk storage system, is playing a key role in LHC data-taking. During the first ten months of 2022, more than 440 PB were written by the experiments and 2.9 EB were read out. The data storage requirements of LHC Run-3 are higher than what was previously delivered, so the storage operations team has started investigating multiple areas to upgrade and optimize the current storage resources. A new, dedicated, and redundant EOS infrastructure based on 100 Gbit servers was installed, commissioned, and deployed for the ALICE Online and Offline (O2) project. This cluster can sustain high-throughput data transfer between the ALICE Event Processing Nodes (EPN) and CERN's data center. This paper presents the architecture, techniques, and workflows that allow EOS to deliver fast, reliable, and scalable data storage meeting experiment needs during LHC Run-3 and beyond.
A Roadmap for HEP Software and Computing R&D for the 2020s
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.