Unified System for Processing Real and Simulated Data in the ATLAS Experiment
The physics goals of the next Large Hadron Collider run include high
precision tests of the Standard Model and searches for new physics. These goals
require detailed comparison of data with computational models simulating the
expected data behavior. To highlight the role that modeling and simulation
play in future scientific discovery, we report on use cases and experience
with a unified system built to process both real and simulated data of growing
volume and variety.
Comment: XVII International Conference on Data Analytics and Management in
Data Intensive Domains (DAMDID/RCDL), Obninsk, Russia, October 13-16, 2015
An intelligent Data Delivery Service for and beyond the ATLAS experiment
The intelligent Data Delivery Service (iDDS) has been developed to cope with
the huge increase in computing and storage resource usage expected in the coming LHC
data taking. iDDS has been designed to intelligently orchestrate workflow and
data management systems, decoupling data pre-processing, delivery, and main
processing in various workflows. It is an experiment-agnostic service built around a
workflow-oriented structure to work with existing and emerging use cases in
ATLAS and other experiments. Here we will present the motivation for iDDS, its
design schema and architecture, use cases and current status, and plans for the
future.
Comment: 6 pages, 5 figures
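The decoupling described above is the core of the design. As a rough
illustration of the idea (a minimal, hypothetical sketch in Python; the stage
names and queue-based structure are our own assumptions, not the iDDS API),
pre-processing, delivery, and main processing can run as independent stages
connected by queues, so each stage proceeds as soon as its input is ready:

    import queue
    import threading

    def pre_process(raw_items, out_q):
        # e.g. skim or derive the input before it is shipped anywhere
        for item in raw_items:
            out_q.put(f"prepared({item})")
        out_q.put(None)  # sentinel: this stage is finished

    def deliver(in_q, out_q):
        # e.g. move prepared data close to the CPUs that will consume it
        while (item := in_q.get()) is not None:
            out_q.put(f"delivered({item})")
        out_q.put(None)

    def main_process(in_q):
        # the downstream payload consumes whatever has been delivered
        while (item := in_q.get()) is not None:
            print("processing", item)

    q1, q2 = queue.Queue(), queue.Queue()
    stages = [
        threading.Thread(target=pre_process, args=(["evt1", "evt2"], q1)),
        threading.Thread(target=deliver, args=(q1, q2)),
        threading.Thread(target=main_process, args=(q2,)),
    ]
    for t in stages:
        t.start()
    for t in stages:
        t.join()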
Measurement of low-energy antiproton detection efficiency in BESS below 1 GeV
An accelerator experiment was performed using a low-energy antiproton beam to
measure the antiproton detection efficiency of BESS, a balloon-borne
spectrometer with a superconducting solenoid. The measured efficiencies showed
good agreement with those calculated from the BESS Monte Carlo simulation
based on GEANT/GHEISHA. With this detailed verification of the BESS
simulation, the relative systematic error of the detection efficiency derived
from the simulation has been determined to be 5%, compared with the previous
estimate of 15%, which was the dominant uncertainty in measurements of the
cosmic-ray antiproton flux.
Comment: 13 pages, 7 figures
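For readers unfamiliar with the procedure, the comparison boils down to
computing an efficiency and its uncertainty from counted events in beam data
and in the simulation. A minimal Python sketch (all counts are invented for
illustration; the real analysis uses the GEANT/GHEISHA-based BESS Monte
Carlo):

    import math

    def efficiency(n_detected, n_incident):
        """Detection efficiency with a simple binomial uncertainty."""
        eff = n_detected / n_incident
        err = math.sqrt(eff * (1.0 - eff) / n_incident)
        return eff, err

    # made-up counts, for illustration only
    eff_beam, err_beam = efficiency(8510, 10000)  # beam-test measurement
    eff_mc, err_mc = efficiency(8390, 10000)      # Monte Carlo prediction

    # relative systematic error assigned from the data/simulation discrepancy
    rel_sys = abs(eff_beam - eff_mc) / eff_mc
    print(f"beam {eff_beam:.3f}+-{err_beam:.3f}, "
          f"MC {eff_mc:.3f}+-{err_mc:.3f}, "
          f"relative difference {rel_sys:.1%}")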
Big Data Processing in the ATLAS Experiment: Use Cases and Experience
The physics goals of the next Large Hadron Collider run include high precision tests of the Standard Model and searches for new physics. These goals require detailed comparison of data with computational models simulating the expected data behavior. To highlight the role that modeling and simulation play in future scientific discovery, we report on use cases and experience with a unified system built to process both real and simulated data of growing volume and variety.
Precise Measurement of Cosmic-Ray Proton and Helium Spectra with the BESS Spectrometer
We report cosmic-ray proton and helium spectra in the energy ranges of 1 to
120 GeV and 1 to 54 GeV/nucleon, respectively, measured during a balloon
flight of the BESS spectrometer in 1998. The magnetic rigidity of the cosmic
rays was reliably determined by highly precise measurement of the circular
track in a uniform solenoidal magnetic field of 1 Tesla. The spectra were
determined within overall uncertainties of ±5% for protons and ±10% for
helium nuclei, including statistical and systematic errors.
Comment: 12 pages, 4 figures
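The rigidity determination rests on the standard relation between the
curvature of the track and the particle's momentum; as a reminder (this
formula is not spelled out in the abstract), for a particle of momentum p and
charge Ze on a circular track of radius r in a field B,

    R \equiv \frac{pc}{Ze} = c\,B\,r
    \qquad\Longrightarrow\qquad
    R\,[\mathrm{GV}] \approx 0.2998\; B\,[\mathrm{T}]\; r\,[\mathrm{m}]

so in the 1 Tesla BESS field a singly charged particle of 1 GV rigidity
follows a circle of roughly 3.3 m radius.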
Updates to the ATLAS Data Carousel Project
The High Luminosity upgrade to the LHC (HL-LHC) is expected to deliver scientific data at the multi-exabyte scale. To address this unprecedented data storage challenge, the ATLAS experiment launched the Data Carousel project in 2018. Data Carousel is a tape-driven workflow in which bulk production campaigns with input data resident on tape are executed by staging and promptly processing a sliding window of inputs on a disk buffer, so that only a small fraction of the inputs is pinned on disk at any one time. Data Carousel is now in production for ATLAS in Run 3. In this paper, we provide updates on recent Data Carousel R&D projects, including data-on-demand and tape smart writing. Data-on-demand removes from disk data that have not been accessed for a predefined period; when users request such data again, they are either staged back from tape or recreated by rerunning the original production steps. Tape smart writing employs intelligent algorithms for file placement on tape in order to retrieve data more efficiently, which is our long-term strategy to achieve optimal tape usage in Data Carousel.
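As a rough illustration of the sliding-window idea (a minimal, hypothetical
Python sketch; the function names and the fixed-size window are our own
simplifications, not ATLAS software), only a few inputs are ever pinned on
the disk buffer: each file is staged from tape, processed, and released
before further files are brought in:

    from collections import deque

    def carousel(tape_files, window=3):
        buffer = deque()  # files currently pinned on the disk buffer
        for f in tape_files:
            buffer.append(f)
            print("staged to disk:", f)
            if len(buffer) == window:
                # buffer full: process the oldest file and free its disk space
                print("processed and released:", buffer.popleft())
        while buffer:  # drain the remaining window
            print("processed and released:", buffer.popleft())

    carousel([f"file{i}.data" for i in range(8)])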
RBMX is a novel hepatic transcriptional regulator of SREBP-1c gene response to high-fructose diet
In rodents, a high-fructose diet induces metabolic derangements similar to those seen in the metabolic syndrome. We previously suggested that in mouse liver an unidentified nuclear protein binding to the sterol regulatory element (SRE)-binding protein-1c (SREBP-1c) promoter region plays a key role in the response to a high-fructose diet. Here, using MALDI-TOF mass spectrometry, we identified the X-chromosome-linked RNA-binding motif protein (RBMX) as a new candidate molecule. In electrophoretic mobility shift assays, an anti-RBMX antibody displaced the bands induced by fructose feeding. Overexpression or suppression of RBMX in rat hepatoma cells regulated SREBP-1c promoter activity. RBMX may control SREBP-1c expression in mouse liver in response to a high-fructose diet.
PanDA: Evolution and Recent Trends in LHC Computing
The Large Hadron Collider (LHC) is the world's largest and most powerful particle accelerator. It started operating in 2009, with a scientific program foreseen to extend over the coming decades at increasing energies and luminosities to maximise the discovery potential. During Run 1 (2009-2013), the Worldwide LHC Computing Grid (WLCG) successfully delivered all the necessary computing resources, which made the discovery of the Higgs boson possible. Looking ahead, the increased luminosities are forecast to translate into a multiplicative growth in storage and processing costs that is not matched by a corresponding growth in WLCG funding. ATLAS, one of the four experiments at the LHC, is therefore leading an upgrade program to evolve its software and computing model to make the best possible use of available resources, and also to leverage upcoming state-of-the-art computing paradigms that could make important resource contributions. These proceedings give an insight into the accompanying work in PanDA, ATLAS' workload management system. PanDA has implemented event-level bookkeeping and dynamic generation of jobs with tailored lengths in order to integrate and optimise the usage of opportunistic resources, e.g. cloud computing or High Performance Computing (HPC). In conjunction, the Event Service has been developed as a way to manage fine-grained jobs and their outputs. Usage examples on some of the leading commercial and research infrastructures will be given. In addition, we will describe the work on further exploiting current network capabilities by allowing remote data access and reducing regional boundaries.
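The tailoring of job lengths can be pictured as cutting an event range into
jobs sized to fit the time slot an opportunistic resource offers. A minimal
Python sketch (the function and its parameters are illustrative assumptions,
not PanDA's actual interfaces):

    def make_jobs(n_events, sec_per_event, slot_seconds):
        # size each job so it fits within the offered time slot
        events_per_job = max(1, slot_seconds // sec_per_event)
        jobs, first = [], 0
        while first < n_events:
            last = min(first + events_per_job, n_events)
            jobs.append((first, last))  # job processes events [first, last)
            first = last
        return jobs

    # a 30-minute opportunistic slot at ~20 s/event yields 90-event jobs
    for first, last in make_jobs(n_events=1000, sec_per_event=20,
                                 slot_seconds=1800):
        print(f"job covers events {first}..{last - 1}")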
Utilizing Distributed Heterogeneous Computing with PanDA in ATLAS
In recent years, advanced and complex analysis workflows have gained increasing importance in the ATLAS experiment at CERN, one of the large scientific experiments at the LHC. Support for such workflows has allowed users to exploit remote computing resources and service providers distributed worldwide, overcoming limitations on local resources and services. The spectrum of computing options keeps increasing across the Worldwide LHC Computing Grid (WLCG), volunteer computing, high-performance computing, commercial clouds, and emerging service levels such as Platform-as-a-Service (PaaS), Container-as-a-Service (CaaS) and Function-as-a-Service (FaaS), each providing new advantages and constraints. Users can benefit significantly from these providers, but at the same time it is cumbersome to deal with multiple providers, even within a single analysis workflow whose fine-grained requirements derive from the nature and characteristics of the application. In this paper, we first highlight issues in geographically distributed heterogeneous computing, such as the insulation of users from the complexities of dealing with remote providers, smart workload routing, complex resource provisioning, seamless execution of advanced workflows, workflow description, pseudo-interactive analysis, and the integration of PaaS, CaaS, and FaaS providers. We then outline solutions developed in ATLAS with the Production and Distributed Analysis (PanDA) system and future challenges for LHC Run 4.
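Smart workload routing, one of the issues listed above, amounts to matching
each task's fine-grained requirements against the capabilities of the
available providers. A minimal, hypothetical Python sketch (provider names
and capability fields are invented for illustration and are not the PanDA
data model):

    PROVIDERS = [
        {"name": "grid-site", "gpus": 0, "containers": True,
         "max_walltime": 48 * 3600},
        {"name": "hpc-centre", "gpus": 4, "containers": True,
         "max_walltime": 12 * 3600},
        {"name": "faas-vendor", "gpus": 0, "containers": False,
         "max_walltime": 900},
    ]

    def route(task):
        # return the first provider whose capabilities cover the task
        for p in PROVIDERS:
            if (p["gpus"] >= task["gpus"]
                    and (p["containers"] or not task["needs_container"])
                    and p["max_walltime"] >= task["walltime"]):
                return p["name"]
        raise LookupError("no provider satisfies the task requirements")

    print(route({"gpus": 1, "needs_container": True, "walltime": 3600}))
    print(route({"gpus": 0, "needs_container": False, "walltime": 600}))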