2,340 research outputs found
An outlook of the user support model to educate the users community at the CMS Experiment
The CMS (Compact Muon Solenoid) experiment is one of the two large
general-purpose particle physics detectors built at the LHC (Large Hadron
Collider) at CERN in Geneva, Switzerland. The diverse collaboration, combined
with a highly distributed computing environment and petabytes of data collected
per year, makes CMS unlike any High Energy Physics collaboration before. This
presents new challenges in educating users, who come from different cultural,
linguistic and social backgrounds, and bringing them up to speed to contribute
to physics analysis. CMS has met this new paradigm by deploying a user support
model that uses collaborative tools to educate users about the software,
computing and physics tools specific to CMS. To
carry out the user support mission worldwide, an LHC Physics Centre (LPC) was
created a few years ago at Fermilab as a hub for US physicists. The LPC serves
as a "brick and mortar" location for physics excellence for the CMS physicists
where graduate and postgraduate scientists can find experts in all aspects of
data analysis and learn via tutorials, workshops, conferences and gatherings.
Following the success of the LPC, centres with similar goals have been created:
the LHC Physics Centre at CERN (LPCC) and the Terascale Analysis Centre at
DESY. The CMS user support model also helps the non-CMS scientific community
learn about CMS physics. A good example of
this is the effort by HEP experiments, including CMS, to focus on data
preservation efforts. To facilitate use of the data by future scientific
communities, who may want to revisit and re-analyze it, CMS is evaluating the
resources required. Detailed, high-quality and well-maintained documentation of
the CMS computing and software by the user support group would go a long way
towards this endeavour.
Comment: 9 pages, 2 figures
Improving collaborative documentation in CMS
Complete and up-to-date documentation is essential for efficient data analysis in a large and complex collaboration like CMS. Good documentation reduces the time spent in problem solving for users and software developers. The scientists in our research environment do not necessarily have the interests or skills of professional technical writers. This results in inconsistencies in the documentation. To improve the quality, we have started a multidisciplinary project involving CMS user support and expertise in technical communication from the University of Turku, Finland. In this paper, we present possible approaches to studying the usability of the documentation, for instance, usability tests conducted recently for the CMS software and computing user documentation.
In Pursuit of Authenticity – CMS Open Data in Education
Publisher Copyright: © Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
There are some universally acknowledged problems in school sciences. In developed countries worldwide, young people are not interested in studying STEM subjects. Whether that is because of a perceived lack of personal relevance, disconnect from the actual fields of study, “sanitized” school practices or other factors is a matter of debate, but it is eminently clear that, as educators, we must do our best to combat this trend. In this paper, we discuss how open data from the CMS experiment has been used in education and present feedback from Finnish teachers who have received training in using these freely available programming resources to bring modern physics into their teaching. The main focus here is on the teachers' perception of authenticity in the use of “real world” research data, although there is an additional benefit of learning general scientific methods and cross-disciplinary data-handling skills as well.
Peer reviewed
Using CMS Open Data in research – challenges and directions
The CMS experiment at CERN has released research-quality data from particle collisions at the LHC since 2014. Almost all data from the first LHC run in 2010–2012, with the corresponding simulated samples, are now in the public domain, and several scientific studies have been performed using these data. This paper summarizes the available data and tools, reviews the challenges in using them in research, and discusses measures to improve their usability.
Peer reviewed
Reinterpretation of LHC results for new physics: status and recommendations after Run 2
We report on the status of efforts to improve the reinterpretation of searches and measurements at the LHC in terms of models for new physics, in the context of the LHC Reinterpretation Forum. We detail current experimental offerings in direct searches for new particles, measurements, technical implementations and Open Data, and provide a set of recommendations for further improving the presentation of LHC results in order to better enable reinterpretation in the future. We also provide a brief description of existing software reinterpretation frameworks and recent global analyses of new physics that make use of the current data.
Peer reviewed
Preparations for the public release of high-level CMS data
Volume: 273
The CMS Collaboration, in accordance with its commitment to open access and data preservation, is preparing for the public release of up to half of the reconstructed collision data collected in 2010. Efforts at present are focused on the usability of the data in education. The data will be accompanied by example applications tailored for different levels of access, including ready-to-use web-based applications for histogramming or visualising individual collision events, and a virtual machine image of the CMS software environment that is compatible with these data. The virtual machine image will contain instructions for using the data with the online applications as well as examples of simple analyses. The novelty of this initiative is two-fold: in terms of open science, it lies in releasing the data in a format suitable for analysis; from an outreach perspective, it provides the possibility for people outside CMS to build educational applications using our public data. CMS will rely on services for data preservation and open access being prototyped at CERN with input from CMS and the other LHC experiments.
Peer reviewed
Open data provenance and reproducibility: a case study from publishing CMS open data
In this paper we present the latest CMS open data release published on the CERN Open Data portal. Samples of collision and simulated datasets were released together with detailed information about the data provenance. The associated data production chains cover the necessary computing environments, the configuration files and the computational procedures used in each data production step. We describe the data curation techniques used to obtain and publish the data provenance information, and we study the possibility of reproducing parts of the released data using the publicly available information. The present work demonstrates the usefulness of releasing selected samples of raw and primary data in order to fully ensure the completeness of information about the data production chain, for the attention of general data scientists and other non-specialists interested in using particle physics data for education or research purposes.
Peer reviewed
Open is not enough
The solutions adopted by the high-energy physics community to foster reproducible research are examples of best practices that could be embraced more widely. This first experience suggests that reproducibility requires going beyond openness.
Peer reviewed