CMS software and computing for LHC Run 2
The CMS offline software and computing system has successfully met the
challenge of LHC Run 2. In this presentation, we will discuss how the entire
system was improved in anticipation of increased trigger output rate, increased
rate of pileup interactions and the evolution of computing technology. The
primary goals behind these changes were to increase the flexibility of computing
facilities wherever possible, to increase our operational efficiency, and
to decrease the computing resources needed to accomplish the primary offline
computing workflows. These changes have resulted in a new approach to
distributed computing in CMS for Run 2 and for the future as the LHC luminosity
continues to increase. We will discuss changes to, and plans for, our data
federation, which was one of the key changes towards a more flexible computing
model for Run 2. Our software framework and algorithms also underwent
significant changes. We will summarize our experience with a new
multi-threaded framework as deployed on our prompt reconstruction farm for 2015
and across the CMS WLCG Tier-1 facilities. We will discuss our experience with
an analysis data format which is ten times smaller than our primary Run 1
format. This "miniAOD" format has proven to be easier to analyze while being
extremely flexible for analysts. Finally, we describe improvements to our
workflow management system that have resulted in increased automation and
reliability for all facets of CMS production and user analysis operations.
Comment: Contribution to proceedings of the 38th International Conference on High Energy Physics (ICHEP 2016)
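As a concrete illustration of the multi-threaded framework mentioned above, the following is a minimal sketch of a CMSSW-style Python configuration that enables multiple threads and event streams. It assumes a CMSSW release environment, uses an empty event source purely so the job can run standalone, and is not taken from any CMS production configuration.

    # Minimal sketch: enabling multi-threaded processing in a CMSSW-style config.
    # Assumes a CMSSW release environment; not a production configuration.
    import FWCore.ParameterSet.Config as cms

    process = cms.Process("DEMO")

    # Generate a handful of empty events so the sketch runs standalone.
    process.source = cms.Source("EmptySource")
    process.maxEvents = cms.untracked.PSet(input=cms.untracked.int32(100))

    # Request several worker threads and concurrent event streams.
    process.options = cms.untracked.PSet(
        numberOfThreads=cms.untracked.uint32(4),
        numberOfStreams=cms.untracked.uint32(4),
    )

A configuration like this would be launched with cmsRun; the number of streams bounds how many events are processed concurrently, while the thread count bounds the pool of workers shared across those streams.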
The CMS Computing System: Successes and Challenges
Each LHC experiment will produce datasets with sizes of order one petabyte
per year. All of this data must be stored, processed, transferred, simulated
and analyzed, which requires a computing system of a larger scale than has ever
been mounted for any particle physics experiment, and possibly for any enterprise in
the world. I discuss how CMS has chosen to address these challenges, focusing
on recent tests of the system that demonstrate the experiment's readiness for
producing physics results with the first LHC data.
Comment: To be published in the proceedings of DPF-2009, Detroit, MI, July 2009, eConf C09072
CMS Software and Computing: Ready for Run 2
In Run 1 of the Large Hadron Collider, software and computing was a strategic
strength of the Compact Muon Solenoid experiment. The timely processing of data
and simulation samples and the excellent performance of the reconstruction
algorithms played an important role in the preparation of the full suite of
searches used for the observation of the Higgs boson in 2012. In Run 2, the LHC
will run at higher intensities and CMS will record data at a higher trigger
rate. These new running conditions will provide new challenges for the software
and computing systems. Over the two years of Long Shutdown 1, CMS has built
upon the successes of Run 1 to improve the software and computing to meet these
challenges. In this presentation we will describe the new features in software
and computing that will once again put CMS in a position of physics leadership.
Comment: Presentation at the DPF 2015 Meeting of the American Physical Society Division of Particles and Fields, Ann Arbor, Michigan, August 4-8, 2015
Top Physics: CDF Results
The top quark plays an important role in the grand scheme of particle
physics, and is also interesting on its own merits. We present recent results
from CDF on top-quark physics based on 100-200/pb of p-pbar collision data. We
have measured the t-tbar cross section in different decay modes using several
different techniques, and are beginning our studies of top-quark properties.
New analyses for this conference include a measurement of the t-tbar cross
section in the lepton-plus-jets channel using a neural net to distinguish
signal and background events, and measurements of top-quark branching
fractions.
Comment: Contribution to the proceedings of the XXXIX Rencontres de Moriond on Electroweak Interactions and Unified Theories; 7 pages, 5 figures
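For orientation, a counting-style cross-section measurement of the kind described here generically takes the form below, where N_obs and N_bkg are the observed and estimated-background event counts, epsilon is the selection efficiency times acceptance (including the branching fraction of the chosen decay mode), and the integral is the integrated luminosity (of order 100-200 pb^-1 for these results). This is the standard textbook expression, not a quotation from the CDF analyses.

    \sigma_{t\bar{t}} \;=\; \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{\epsilon \int \mathcal{L}\,dt}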
W Boson Cross Section and Decay Properties at the Tevatron
We present the first measurements of sigma(p\bar{p} -> W -> l nu) and
sigma(p\bar{p} -> Z -> l l) at sqrt{s} = 1.96 TeV, along with new measurements
of W angular-decay distributions in p\bar{p} collisions at sqrt{s} = 1.8 TeV.
Comment: Submitted to ICHEP 2002 proceedings
CMS Computing: Performance and Outlook
After years of development, the CMS distributed computing system is now in
full operation. The LHC continues to set records for instantaneous luminosity,
and CMS continues to record data at 300 Hz. Because of the intensity of the
beams, there are multiple proton-proton interactions per beam crossing, leading
to larger and larger event sizes and processing times. The CMS computing system
has responded admirably to these challenges. We present the current status of
the system, describe the recent performance, and discuss the challenges ahead
and how we intend to meet them.
Comment: Contribution to Proceedings of the DPF-2011 Conference, Providence, RI, August 8-12, 2011
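A rough back-of-the-envelope estimate shows why these rates translate into the petabyte scale mentioned earlier; the per-event size and annual live time below are assumed, illustrative numbers rather than CMS figures.

    300~\mathrm{Hz} \times 0.5~\mathrm{MB/event} \times 10^{7}~\mathrm{s/yr}
    \approx 1.5 \times 10^{9}~\mathrm{MB} \approx 1.5~\mathrm{PB/yr}

This is consistent with the order-one-petabyte-per-year scale quoted above, before reprocessing and simulation are added.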
Any Data, Any Time, Anywhere: Global Data Access for Science
Data access is key to science driven by distributed high-throughput computing
(DHTC), an essential technology for many major research projects such as High
Energy Physics (HEP) experiments. However, achieving efficient data access
becomes quite difficult when many independent storage sites are involved
because users are burdened with learning the intricacies of accessing each
system and keeping careful track of data location. We present an alternate
approach: the Any Data, Any Time, Anywhere infrastructure. Combining several
existing software products, AAA presents a global, unified view of storage
systems - a "data federation," a global filesystem for software delivery, and a
workflow management system. We present how one HEP experiment, the Compact Muon
Solenoid (CMS), is utilizing the AAA infrastructure and some simple performance
metrics.
Comment: 9 pages, 6 figures, submitted to 2nd IEEE/ACM International Symposium on Big Data Computing (BDC) 2015
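To make the "data federation" idea concrete, the sketch below opens a file by logical name through an XRootD redirector rather than through a site-specific path. The redirector hostname and the /store path are placeholders, and PyROOT is assumed to be available; this illustrates federated access in general, not a prescribed CMS recipe.

    # Illustrative sketch of federated data access via an XRootD redirector.
    # The hostname and /store path are placeholders, not real endpoints.
    import ROOT

    url = "root://xrootd-redirector.example.org//store/user/example/events.root"

    f = ROOT.TFile.Open(url)  # the redirector locates a site that hosts the file
    if f and not f.IsZombie():
        f.ls()    # list the top-level objects to confirm the remote file is readable
        f.Close()
    else:
        print("Could not open remote file:", url)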
