HEP Applications Evaluation of the EDG Testbed and Middleware
Workpackage 8 of the European Datagrid project was formed in January 2001
with representatives from the four LHC experiments, and with
experiment-independent people from five of the six main EDG partners. In September 2002
WP8 was strengthened by the addition of effort from BaBar and D0. The original
mandate of WP8 was, following the definition of short- and long-term
requirements, to port experiment software to the EDG middleware and testbed
environment. A major additional activity has been testing the basic
functionality and performance of this environment. This paper reviews
experiences and evaluations in the areas of job submission, data management,
mass storage handling, information systems and monitoring. It also comments on
the problems of remote debugging, the portability of code, and scaling problems
with increasing numbers of jobs, sites and nodes. Reference is made to the
pioneering work of ATLAS and CMS in integrating the use of the EDG Testbed
into their data challenges. A forward look is made to essential software
developments within EDG and to the necessary cooperation between EDG and LCG
for the LCG prototype due in mid-2003.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics Conference (CHEP03), La Jolla, CA, USA, March 2003, 7 pages. PSN THCT00
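To make the kind of job-submission testing described above concrete, the following sketch shows how a batch of trivial test jobs could be submitted to several sites and the per-site success rate tallied. It is a minimal illustration only: the submit_job and wait_for_job helpers are hypothetical stand-ins with simulated outcomes, not the actual EDG or WP8 submission tools.

"""Toy harness for measuring job-submission success rates on a Grid testbed.
The submit/poll functions are hypothetical stand-ins: a real WP8-style test
would wrap the middleware's own submission client instead."""
import random
import time
from dataclasses import dataclass

@dataclass
class JobResult:
    job_id: str
    final_state: str          # e.g. "Done" or "Aborted"
    wall_time_s: float

def submit_job(site: str) -> str:
    """Hypothetical submission call; returns an opaque job identifier."""
    return f"{site}-{random.randrange(1_000_000):06d}"

def wait_for_job(job_id: str) -> JobResult:
    """Hypothetical polling step; here the outcome is simulated (assumed ~90% success)."""
    start = time.time()
    state = "Done" if random.random() > 0.1 else "Aborted"
    return JobResult(job_id, state, time.time() - start)

def run_submission_test(sites: list[str], jobs_per_site: int) -> dict[str, float]:
    """Submit trivial jobs to each site and report the fraction that finish."""
    success = {}
    for site in sites:
        results = [wait_for_job(submit_job(site)) for _ in range(jobs_per_site)]
        done = sum(r.final_state == "Done" for r in results)
        success[site] = done / jobs_per_site
    return success

if __name__ == "__main__":
    print(run_submission_test(["cern-testbed", "nikhef-testbed"], jobs_per_site=20))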
Enabling Technologies for Silicon Microstrip Tracking Detectors at the HL-LHC
While the tracking detectors of the ATLAS and CMS experiments have shown
excellent performance in Run 1 of LHC data taking, and are expected to continue
to do so during LHC operation at design luminosity, both experiments will have
to exchange their tracking systems when the LHC is upgraded to the
high-luminosity LHC (HL-LHC) around the year 2024. The new tracking systems
need to operate in an environment in which both the hit densities and the
radiation damage will be about an order of magnitude higher than today. In
addition, the new trackers need to contribute to the first level trigger in
order to maintain a high data-taking efficiency for the interesting processes.
Novel detector technologies have to be developed to meet these very challenging
goals. The German groups active in the upgrades of the ATLAS and CMS tracking
systems have formed a collaborative "Project on Enabling Technologies for
Silicon Microstrip Tracking Detectors at the HL-LHC" (PETTL), which was
supported by the Helmholtz Alliance "Physics at the Terascale" during the years
2013 and 2014. The aim of the project was to share experience and to work
together on key areas of mutual interest during the R&D phase of these
upgrades. The project concentrated on five areas, namely exchange of
experience, radiation hardness of silicon sensors, low mass system design,
automated precision assembly procedures, and irradiations. This report
summarizes the main achievements.
HEP Community White Paper on Software trigger and event reconstruction
Realizing the physics programs of the planned and upgraded high-energy
physics (HEP) experiments over the next 10 years will require the HEP community
to address a number of challenges in the area of software and computing. For
this reason, the HEP software community has engaged in a planning process over
the past two years, with the objective of identifying and prioritizing the
research and development required to enable the next generation of HEP
detectors to realize their full physics potential. The aim is to produce a
Community White Paper which will describe the community strategy and a roadmap
for software and computing research and development in HEP for the 2020s. The
topics of event reconstruction and software triggers were considered by a joint
working group and are summarized together in this document.
Comment: Editors Vladimir Vava Gligorov and David Lang
High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)
Computing plays an essential role in all aspects of high energy physics. As
computational technology evolves rapidly in new directions, and data throughput
and volume continue to follow a steep trend-line, it is important for the HEP
community to develop an effective response to a series of expected challenges.
In order to help shape the desired response, the HEP Forum for Computational
Excellence (HEP-FCE) initiated a roadmap planning activity with two key
overlapping drivers -- 1) software effectiveness, and 2) infrastructure and
expertise advancement. The HEP-FCE formed three working groups, 1) Applications
Software, 2) Software Libraries and Tools, and 3) Systems (including systems
software), to provide an overview of the current status of HEP computing and to
present findings and opportunities for the desired HEP computational roadmap.
The final versions of the reports are combined in this document, and are
presented along with introductory material.
Comment: 72 pages
The commissioning of CMS sites: improving the site reliability
The computing system of the CMS experiment uses distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia, are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all relevant aspects of a Grid site, such as the ability to use its network efficiently to transfer data, the functionality of all the site services relevant for CMS, and the capability to sustain the various CMS computing workflows at the required scale. This contribution describes in detail the procedure used to rate CMS sites according to their performance, including the complete automation of the procedure, the monitoring tools employed, and its impact on improving the overall reliability of the Grid from the point of view of the CMS computing system.
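As an illustration of how per-site test results could be folded into a commissioning rating, the sketch below combines transfer, service and workflow metrics into a single status. The metric names and thresholds are assumptions chosen for the example, not the actual CMS Site Readiness criteria.

"""Toy site-rating calculation in the spirit of the commissioning procedure
described above; metric names and thresholds are illustrative assumptions."""
from dataclasses import dataclass

@dataclass
class SiteMetrics:
    transfer_quality: float   # fraction of successful data transfers (0-1)
    service_tests_ok: float   # fraction of passed site-service tests (0-1)
    job_success_rate: float   # fraction of successful test workflows (0-1)

def rate_site(m: SiteMetrics) -> str:
    """Classify a site as 'commissioned', 'warning' or 'failed' (assumed thresholds)."""
    score = min(m.transfer_quality, m.service_tests_ok, m.job_success_rate)
    if score >= 0.90:
        return "commissioned"
    if score >= 0.80:
        return "warning"
    return "failed"

sites = {
    "T2_EXAMPLE_A": SiteMetrics(0.97, 0.99, 0.93),
    "T2_EXAMPLE_B": SiteMetrics(0.85, 0.95, 0.70),
}
for name, metrics in sites.items():
    print(name, rate_site(metrics))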
Persistent storage of non-event data in the CMS databases
In the CMS experiment, the non-event data needed to set up the detector, or produced by it, and needed to calibrate its physical responses are stored in ORACLE databases. The large amount of data to be stored, the number of clients involved and the performance requirements make the database system an essential service for the experiment. This note describes the CMS condition database architecture, the data flow and PopCon, the tool built to populate the offline databases. Finally, the first results obtained during the 2008 and 2009 cosmic data taking are presented.
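The following sketch illustrates the general idea of storing condition payloads keyed by an interval of validity (IOV) and reading back the payload valid for a given run. It uses sqlite3 purely as a stand-in for the ORACLE backend, and the populate and latest_payload helpers are illustrative assumptions, not the PopCon interface.

"""Minimal sketch of condition payloads keyed by an interval of validity (IOV),
using sqlite3 as a stand-in for the ORACLE backend; the schema and helpers are
illustrative, not the actual CMS conditions schema or the PopCon API."""
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE conditions (
           tag        TEXT,     -- calibration tag, e.g. 'ecal_pedestals_v1'
           since_run  INTEGER,  -- first run for which the payload is valid
           payload    TEXT      -- serialized calibration constants
       )"""
)

def populate(tag: str, since_run: int, payload: dict) -> None:
    """Append a new payload for a tag, valid from `since_run` onwards."""
    conn.execute(
        "INSERT INTO conditions VALUES (?, ?, ?)",
        (tag, since_run, json.dumps(payload)),
    )

def latest_payload(tag: str, run: int) -> dict:
    """Return the payload whose IOV covers the requested run."""
    row = conn.execute(
        "SELECT payload FROM conditions WHERE tag = ? AND since_run <= ? "
        "ORDER BY since_run DESC LIMIT 1",
        (tag, run),
    ).fetchone()
    return json.loads(row[0]) if row else {}

populate("ecal_pedestals_v1", since_run=1, payload={"mean": 200.0})
populate("ecal_pedestals_v1", since_run=100, payload={"mean": 201.5})
print(latest_payload("ecal_pedestals_v1", run=150))   # -> {'mean': 201.5}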