The CMS Computing System: Successes and Challenges
Each LHC experiment will produce datasets with sizes of order one petabyte
per year. All of this data must be stored, processed, transferred, simulated
and analyzed, which requires a computing system of a larger scale than ever
mounted for any particle physics experiment, and possibly for any enterprise in
the world. I discuss how CMS has chosen to address these challenges, focusing
on recent tests of the system that demonstrate the experiment's readiness for
producing physics results with the first LHC data.
Comment: To be published in the proceedings of DPF-2009, Detroit, MI, July 2009, eConf C09072
Fight for your alienation: The fantasy of employability and the ironic struggle for self-exploitation
This paper draws on Lacanian psychoanalysis to introduce employability as a cultural fantasy that organizes identity around the desire to shape, exploit and ultimately profit from an employable self. Specifically, the paper shows how individuals seek to overcome their subjective and material alienation by maximizing their self-exploitation through constantly enhancing their employability. This linking of empowerment to self-exploitation has expanded into a broader organizational and political demand calling on individuals to fight for their alienation by having managers and governments help them better exploit themselves through enhancing their employability. Paradoxically, the more contemporary subjects aim to overcome their subjective and material alienation through fantasies of employability, the more alienated they become.
CMS software and computing for LHC Run 2
The CMS offline software and computing system has successfully met the
challenge of LHC Run 2. In this presentation, we will discuss how the entire
system was improved in anticipation of increased trigger output rate, increased
rate of pileup interactions and the evolution of computing technology. The
primary goals behind these changes were to increase the flexibility of computing
facilities wherever possible, to increase our operational efficiency, and
to decrease the computing resources needed to accomplish the primary offline
computing workflows. These changes have resulted in a new approach to
distributed computing in CMS for Run 2 and for the future, as the LHC luminosity
should continue to increase. We will discuss changes and plans for our data
federation, which was one of the key changes towards a more flexible computing
model for Run 2. Our software framework and algorithms also underwent
significant changes. We will summarize our experience with a new
multi-threaded framework as deployed on our prompt reconstruction farm for 2015
and across the CMS WLCG Tier-1 facilities. We will discuss our experience with
an analysis data format which is ten times smaller than our primary Run 1
format. This "miniAOD" format has proven to be easier to analyze while being
extremely flexible for analysts. Finally, we describe improvements to our
workflow management system that have resulted in increased automation and
reliability for all facets of CMS production and user analysis operations.
Comment: Contribution to the proceedings of the 38th International Conference on High Energy Physics (ICHEP 2016)
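The abstract above mentions a new multi-threaded framework deployed for prompt reconstruction. As a rough, generic sketch of the underlying idea (independent events processed concurrently by a pool of worker threads), the C++ snippet below uses a plain shared work queue; the Event struct, the worker loop, and all names here are hypothetical illustrations and are not the CMSSW framework or its API.

```cpp
// Generic sketch: event-level parallelism via a shared work queue.
// Not CMSSW code; a minimal stand-in to illustrate the concurrency pattern.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Event {        // hypothetical stand-in for one collision event
    int id;
    double energy;
};

int main() {
    std::queue<Event> events;                 // shared input queue
    for (int i = 0; i < 1000; ++i)
        events.push(Event{i, 13.0 + 0.001 * i});

    std::mutex queueMutex;
    std::atomic<long> processed{0};

    auto worker = [&]() {
        for (;;) {
            Event ev;
            {
                std::lock_guard<std::mutex> lock(queueMutex);
                if (events.empty()) return;   // no more work for this thread
                ev = events.front();
                events.pop();
            }
            // Placeholder "reconstruction": each event is handled
            // independently, which is what allows parallel processing.
            volatile double sum = 0.0;
            for (int k = 0; k < 10000; ++k) sum += ev.energy * k;
            ++processed;
        }
    };

    const unsigned nThreads =
        std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nThreads; ++t) pool.emplace_back(worker);
    for (auto& th : pool) th.join();

    std::printf("processed %ld events on %u threads\n",
                processed.load(), nThreads);
    return 0;
}
```

The design point this sketch tries to capture is that collision events are independent of one another, so a multi-threaded framework can share large read-only structures (geometry, conditions) across threads while processing many events concurrently, reducing the memory footprint per core compared with running many single-threaded jobs.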
The Joyce Foundation's Transitional Jobs Reentry Demonstration: Testing Strategies to Help Former Prisoners Find and Keep Jobs and Stay Out of Prison
Each year, almost 700,000 people are released from state prisons, and many struggle to find jobs and integrate successfully into society. This policy brief describes an innovative demonstration of transitional jobs programs for former prisoners in Chicago, Detroit, Milwaukee, and St. Paul being conducted by MDRC.