537 research outputs found
Exploring compression techniques for ROOT IO
ROOT provides a flexible format used throughout the HEP community. The
number of use cases - from an archival data format to end-stage analysis - has
required a number of tradeoffs to be exposed to the user. For example, a high
"compression level" in the traditional DEFLATE algorithm will result in a
smaller file (saving disk space) at the cost of slower decompression (costing
CPU time when read). At the scale of the LHC experiments, poor design choices
can result in terabytes of wasted space or wasted CPU time. We explore and
attempt to quantify some of these tradeoffs. Specifically, we explore: the use
of alternate compressing algorithms to optimize for read performance; an
alternate method of compressing individual events to allow efficient random
access; and a new approach to whole-file compression. Quantitative results are
given, as well as guidance on how to make compression decisions for different
use cases. Comment: Proceedings for the 22nd International Conference on Computing in High
Energy and Nuclear Physics (CHEP 2016)
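The level-versus-speed tradeoff described above can be sketched with Python's zlib, which implements DEFLATE; the payload and level choices below are purely illustrative stand-ins, not ROOT's actual data or settings.

```python
import time
import zlib

# Hypothetical payload standing in for serialized event data; the content
# is illustrative, not an actual ROOT record.
payload = b"energy=13.6TeV;" * 50_000

def measure(level):
    """Compress at the given DEFLATE level and time decompression."""
    blob = zlib.compress(payload, level)
    start = time.perf_counter()
    zlib.decompress(blob)
    elapsed = time.perf_counter() - start
    return len(blob), elapsed

# Higher levels generally shrink the output at some CPU cost on the write
# side; read-side cost depends mostly on the algorithm and output size.
size_fast, _ = measure(1)   # fastest, largest output
size_best, _ = measure(9)   # slowest, smallest output
assert size_best <= size_fast
```

Comparing the two sizes against the measured decompression times is exactly the kind of quantitative tradeoff the paper explores, here reduced to a toy measurement.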
Continuous Performance Benchmarking Framework for ROOT
Foundational software libraries such as ROOT are under intense pressure to
avoid software regression, including performance regressions. Continuous
performance benchmarking, as a part of continuous integration and other code
quality testing, is an industry best-practice to understand how the performance
of a software product evolves over time. We present a framework, built from
industry best practices and tools, to help understand ROOT code performance
and monitor the efficiency of the code across several processor architectures.
It additionally tracks historical performance measurements for the ROOT I/O,
vectorization, and parallelization sub-systems. Comment: 8 pages, 5 figures, CHEP 2018 - 23rd International Conference on
Computing in High Energy and Nuclear Physics
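The regression-detection idea behind continuous benchmarking can be sketched as comparing a fresh measurement against a historical baseline; the median-of-history baseline, the 10% tolerance, and the sample values below are illustrative assumptions, not the framework's actual policy.

```python
import statistics
import time

def benchmark(fn, repeats=5):
    """Time fn a few times and return the median wall-clock seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def is_regression(current, history, tolerance=0.10):
    """Flag a regression if the new measurement is more than
    `tolerance` slower than the historical median."""
    baseline = statistics.median(history)
    return current > baseline * (1 + tolerance)

# Illustrative history of measurements (seconds) for one benchmark.
history = [0.100, 0.102, 0.098, 0.101]
assert not is_regression(0.105, history)   # within 10% of baseline
assert is_regression(0.150, history)       # clearly slower
```

Storing the per-commit medians is what makes the historical queries mentioned above possible.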
Discovering Job Preemptions in the Open Science Grid
The Open Science Grid (OSG) is a world-wide computing system which facilitates
distributed computing for scientific research. It can distribute a
computationally intensive job to geo-distributed clusters and process a job's
tasks in parallel. For compute clusters on the OSG, physical resources may be
shared between OSG and the cluster's local user-submitted jobs, with local jobs
preempting OSG-based ones. As a result, job preemptions occur frequently on the
OSG, sometimes significantly delaying job completion time.
We have collected job data from OSG over a period of more than 80 days. We
present an analysis of the data, characterizing the preemption patterns and
different types of jobs. Based on these observations, we have grouped OSG jobs into
five categories and analyzed the runtime statistics for each category. We further
chose different statistical distributions to estimate the probability density
function of job runtime for each class. Comment: 8 pages
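One common way to estimate a runtime probability density, as described above, is to fit a parametric distribution to the observed runtimes. The sketch below fits a lognormal by maximum likelihood using only the standard library; the runtimes are illustrative placeholders, not the collected OSG data, and the lognormal choice is an assumption rather than the paper's reported result.

```python
import math
import statistics

# Illustrative job runtimes in minutes; real values would come from the
# collected OSG job records for one job category.
runtimes = [5.0, 12.0, 7.5, 30.0, 9.0, 15.0, 22.0, 11.0]

# For a lognormal, the MLE of (mu, sigma) is the sample mean and
# (population) standard deviation of the log-runtimes.
logs = [math.log(t) for t in runtimes]
mu = statistics.fmean(logs)
sigma = statistics.pstdev(logs)

def lognormal_pdf(t, mu, sigma):
    """Density of the fitted lognormal at runtime t > 0."""
    coeff = 1.0 / (t * sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((math.log(t) - mu) ** 2) / (2 * sigma ** 2))

# The fitted density is positive at every observed runtime.
assert all(lognormal_pdf(t, mu, sigma) > 0 for t in runtimes)
```

A goodness-of-fit test per job category would then guide which distribution family to keep for each class.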
Extending ROOT through Modules
The ROOT software framework is foundational for the HEP ecosystem, providing
capabilities such as IO, a C++ interpreter, GUI, and math libraries. It uses
object-oriented concepts and build-time components to manage the layering between them. We
believe additional layering formalisms will benefit ROOT and its users. We
present the modularization strategy for ROOT which aims to formalize the
description of existing source components, making available the dependencies
and other metadata externally from the build system, and allow post-install
additions of functionality in the runtime environment. Components can then be
grouped into packages, installable from external repositories, to deliver
missing functionality after installation. This provides a mechanism for the wider
software ecosystem to interact with a minimalistic install. Reducing
intra-component dependencies improves maintainability and code hygiene. We
believe helping maintain the smallest "base install" possible will help
embedding use cases. The modularization effort draws inspiration from the Java,
Python, and Swift ecosystems. Keeping aligned with modern C++, this
strategy relies on forthcoming features such as C++ modules. We hope
formalizing the component layer will provide simpler ROOT installs, improve
extensibility, and decrease the complexity of embedding in other ecosystemsComment: 8 pages, 2 figures, 1 listing, CHEP 2018 - 23rd International
Conference on Computing in High Energy and Nuclear Physic
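The post-install packaging idea above implies a dependency-resolution step when pulling packages from an external repository. The sketch below resolves an install order from declared dependencies; the manifest shape and package names are hypothetical, not ROOT's actual component metadata.

```python
# Hypothetical package manifest: each package lists the packages it
# depends on. Names are illustrative, not real ROOT packages.
manifest = {
    "root-base": [],
    "root-io": ["root-base"],
    "root-math": ["root-base"],
    "root-tmva": ["root-io", "root-math"],
}

def install_order(pkg, manifest, resolved=None):
    """Return pkg plus its dependencies in install order (deps first)."""
    if resolved is None:
        resolved = []
    for dep in manifest[pkg]:
        if dep not in resolved:
            install_order(dep, manifest, resolved)
    if pkg not in resolved:
        resolved.append(pkg)
    return resolved

order = install_order("root-tmva", manifest)
assert order.index("root-base") < order.index("root-io")
assert order[-1] == "root-tmva"
```

Resolving against a minimal "base install" in this way is what lets the wider ecosystem add functionality without bloating the default distribution.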
The Effectiveness of a Dual Task Concussion Assessment for Identifying Impairments in Concussed Athletes
Context: Commonly used single task (ST) concussion assessments are unable to identify lingering impairments following a concussion. Current dual task (DT) assessments rely on cost-prohibitive technology unavailable to most clinicians, creating the need for a clinically applicable DT assessment to identify impairments. Objective: To determine whether a DT assessment consisting of the Standardized Assessment of Concussion (SAC) and the Balance Error Scoring System (BESS) can identify impairments. Design: Prospective longitudinal. Setting: Research laboratory. Participants: Concussed student-athletes and matched healthy controls, 18 females, 10 males. Concussed group: age 19.00±0.88, height 174.53±12.06 cm, mass 75.28±22.02 kg. Healthy group: age 19.36±1.34, height 171.45±11.69 cm, mass 73.34±22.7 kg. Participants were matched by gender, mass, and sport. Interventions: The DT assessment was administered on the day of recovery (REC), on the day of return to play (RTP), and 30 days post-concussion (D30). Main Outcome Measures: SAC and BESS scores under dual task conditions. Results: No significant interaction or main effect was found between session and group for BESS, and no significant interaction between session and status for SAC. There was a significant main effect of session for SAC: simple contrasts revealed significant differences between REC and D30 and between RTP and D30, with SAC D30 scores significantly higher (better) than REC and RTP scores regardless of group. No significant differences were found between concussed and healthy participants for SAC at REC, RTP, or D30. Conclusions: There were no differences between recently concussed and healthy participants when performing the BESS and SAC as a DT challenge. Interestingly, all participants improved SAC performance with repeat administration.
Conversely, no improvements in BESS performance were noted with repeat administration, suggesting a posture-first strategy was not being employed. Future research should utilize tasks that challenge both the cognitive and postural domains but are also feasible for clinicians to administer.
Designing Computing System Architecture and Models for the HL-LHC era
This paper describes a programme to study the computing model in CMS after
the next long shutdown near the end of the decade. Comment: Submitted to proceedings of the 21st International Conference on
Computing in High Energy and Nuclear Physics (CHEP2015), Okinawa, Japan
Data Access for LIGO on the OSG
During 2015 and 2016, the Laser Interferometer Gravitational-Wave Observatory
(LIGO) conducted a three-month observing campaign. These observations delivered
the first direct detection of gravitational waves from binary black hole
mergers. To search for these signals, the LIGO Scientific Collaboration uses
the PyCBC search pipeline. To deliver science results in a timely manner, LIGO
collaborated with the Open Science Grid (OSG) to distribute the required
computation across a series of dedicated, opportunistic, and allocated
resources. To deliver the petabytes necessary for such a large-scale
computation, our team deployed a distributed data access infrastructure based
on the XRootD server suite and the CernVM File System (CVMFS). This data access
strategy grew from simply accessing remote storage to a POSIX-based interface
underpinned by distributed, secure caches across the OSG. Comment: 6 pages, 3 figures, submitted to PEARC1
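The cache-based access pattern described above can be sketched as cache-first reads with origin fallback. The function names, file layout, and fetch hook below are illustrative assumptions, not the actual XRootD or CVMFS interfaces.

```python
import os
import tempfile

def read_with_cache(name, cache_dir, fetch_remote):
    """Serve `name` from the local cache, fetching and caching on a miss."""
    path = os.path.join(cache_dir, name)
    if os.path.exists(path):          # cache hit: local POSIX read
        with open(path, "rb") as f:
            return f.read()
    data = fetch_remote(name)         # cache miss: go to the origin
    with open(path, "wb") as f:       # populate the cache for next time
        f.write(data)
    return data

# Illustrative origin fetch that records how many times it is called.
calls = []
def fake_remote(name):
    calls.append(name)
    return b"strain-data"

with tempfile.TemporaryDirectory() as cache:
    assert read_with_cache("frame.gwf", cache, fake_remote) == b"strain-data"
    assert read_with_cache("frame.gwf", cache, fake_remote) == b"strain-data"
    assert calls == ["frame.gwf"]     # second read was served from cache
```

The POSIX-facing interface seen by the search pipeline is just the local file read; only misses go back to remote storage.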