Overview of the HL-LHC Upgrade for the CMS Level-1 Trigger
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the standard model as well as searches for new physics beyond the standard model. Such precision measurements and searches require information-rich datasets with a statistical power that matches the high luminosity provided by the Phase-2 upgrade of the LHC. Efficiently collecting those datasets will be a challenging task, given the harsh environment of 200 proton-proton interactions per LHC bunch crossing. For this purpose, CMS is designing an efficient data-processing hardware trigger (Level-1) that will include tracking information and high-granularity calorimeter information. Trigger data analysis will be performed through sophisticated algorithms such as particle flow reconstruction, including widespread use of Machine Learning. The current conceptual system design is expected to take full advantage of advances in FPGA and link technologies over the coming years, providing a high-performance, low-latency computing platform for large throughput and sophisticated data correlation across diverse sources.
Emerging Jets Search, Triton Server Deployment, and Track Quality Development: Machine Learning Applications in High Energy Physics
Machine learning is becoming prevalent in high energy physics, with numerous applications in physics analyses and event reconstruction showing great improvements compared to traditional computing methods. This thesis studies three projects, each of which proposes new avenues for machine learning applications within the high energy physics CMS experiment located at CERN. In the first project, a search for a dark matter signal called “emerging jets” is performed, using graph neural networks to greatly increase sensitivity to the signal’s signature within the data. The result of this dark matter search sets the most stringent exclusion limits to date on theoretical emerging jet models. Motivated by inefficiencies encountered when processing the emerging jet graph neural network at Fermi National Accelerator Laboratory’s computing centers, the second project re-optimizes the computing centers for machine learning inference. This re-optimization uses NVIDIA Triton Inference Servers to process users’ analysis code heterogeneously, thereby achieving high processing throughput and decreasing user time-to-insight. The last project focuses on an upgrade to the CMS experiment’s real-time event selection system which improves physics object reconstruction under harsh processing conditions. A boosted decision tree is used to quickly and efficiently quantify a reconstructed particle’s “track quality” in order to remove particle tracks reconstructed erroneously. In summary, this thesis will not only present examples of how high energy physics can greatly benefit by leveraging machine learning techniques for physics analysis and reconstruction, but will also provide guidance on how the field can prepare for the inevitable increase in machine learning applications.
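As a rough illustration of the track-quality idea described above (and not the thesis' actual model), the sketch below trains a boosted decision tree on a few hypothetical per-track variables and thresholds its score to reject fake tracks; the feature list, labels, and cut value are all invented for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_tracks = 10_000

# Hypothetical per-track features: chi2/ndf, number of stubs, |eta|, pT [GeV].
X = np.column_stack([
    rng.exponential(1.5, n_tracks),
    rng.integers(3, 12, n_tracks).astype(float),
    rng.uniform(0.0, 2.4, n_tracks),
    rng.exponential(10.0, n_tracks),
])
# Placeholder labels loosely tied to the fit quality, purely for illustration.
y = (X[:, 0] + rng.normal(0.0, 0.5, n_tracks) < 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train the BDT and threshold its score to reject poorly reconstructed tracks.
bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X_train, y_train)
quality = bdt.predict_proba(X_test)[:, 1]
keep = quality > 0.5
print(f"kept {keep.mean():.1%} of test tracks")
```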
SuPP & MaPP: Adaptable Structure-Based Representations For MIR Tasks
Accurate and flexible representations of music data are paramount to addressing MIR tasks, yet many of the existing approaches are difficult to interpret or rigid in nature. This work introduces two new song representations for structure-based retrieval methods: Surface Pattern Preservation (SuPP), a continuous song representation, and Matrix Pattern Preservation (MaPP), SuPP’s discrete counterpart. These representations come equipped with several user-defined parameters so that they are adaptable to a range of MIR tasks. Experimental results show that MaPP is successful in addressing the cover song task on a set of Mazurka scores, with a mean precision of 0.965 and recall of 0.776. SuPP and MaPP also show promise in other MIR applications, such as novel-segment detection and genre classification, the latter of which demonstrates their suitability as inputs for machine learning problems.
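For readers unfamiliar with the retrieval metrics quoted above, the short sketch below shows one common way mean precision and recall can be tallied over a set of queries; the query names and relevance sets are made up purely for illustration and are unrelated to the Mazurka data.

```python
# Hypothetical retrieval results: for each query, the list of returned items
# and the set of items that are actually covers of that query.
retrieved = {
    "query_a": ["perf_1", "perf_2", "perf_3"],
    "query_b": ["perf_4", "perf_5"],
}
relevant = {
    "query_a": {"perf_1", "perf_2", "perf_9"},
    "query_b": {"perf_4", "perf_5", "perf_8"},
}

precisions, recalls = [], []
for query, hits in retrieved.items():
    truth = relevant[query]
    true_positives = sum(h in truth for h in hits)
    precisions.append(true_positives / len(hits))   # fraction of hits that are covers
    recalls.append(true_positives / len(truth))     # fraction of covers that were found

print(f"mean precision={sum(precisions)/len(precisions):.3f}, "
      f"mean recall={sum(recalls)/len(recalls):.3f}")
```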
Assessing Interprofessional Learning during a Student Placement in an Interprofessional Rehabilitation University Clinic in Primary Healthcare in a Canadian Francophone Minority Context
Background: Interprofessional collaboration is deemed the key to quality patient care and the future for healthcare delivery models. Such a complex competency needs to be learned; as such, interprofessional education should be a key component of health professional programs. An Interprofessional Rehabilitation University Clinic was created to promote interprofessional education at the pre-licensure level. However, few resources are currently available to assess interprofessional learning; no tool (English or French) that specifically assesses interprofessional learning could be identified. Methods and Findings: A self-administered questionnaire was developed to assess interprofessional learning during a clinical placement. Using a single-group posttest-only design, this descriptive pilot project reports the results obtained with this tool for the first 15 students on placement at the Clinic. Preliminary findings suggest this tool helped demonstrate that, during placements in an interprofessional clinic, students developed some understanding of their own profession as well as of other professions. Responses showed that participants believe that interprofessional interventions are more efficient, save time, and facilitate sharing of information leading to a better comprehension of the clients’ situations. The tool suggests that students feel that an interprofessional educational experience is beneficial for clients and for themselves. Conclusions: Assessing interprofessional learning is challenging. Although the tool developed during this project is most promising, further research is warranted to increase its usefulness in assessing interprofessional learning.
Optimizing High Throughput Inference on Graph Neural Networks at Shared Computing Facilities with the NVIDIA Triton Inference Server
With machine learning applications now spanning a variety of computational tasks, multi-user shared computing facilities are devoting a rapidly increasing proportion of their resources to such algorithms. Graph neural networks (GNNs), for example, have provided astounding improvements in extracting complex signatures from data and are now widely used in a variety of applications, such as particle jet classification in high energy physics (HEP). However, GNNs also come with an enormous computational penalty that requires the use of GPUs to maintain reasonable throughput. At shared computing facilities, such as those used by physicists at Fermi National Accelerator Laboratory (Fermilab), methodical resource allocation and high throughput at the many-user scale are key to ensuring that resources are being used as efficiently as possible. These facilities, however, primarily provide CPU-only nodes, which proves detrimental to time-to-insight and computational throughput for workflows that include machine learning inference. In this work, we describe how a shared computing facility can use the NVIDIA Triton Inference Server to optimize its resource allocation and computing structure, recovering high throughput while scaling out to multiple users by massively parallelizing their machine learning inference. To demonstrate the effectiveness of this system in a realistic multi-user environment, we use the Fermilab Elastic Analysis Facility augmented with the Triton Inference Server to provide scalable and high-throughput access to a HEP-specific GNN and report on the outcome.
Comment: 20 pages, 14 figures, submitted to "Computing and Software for Big Science"
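To make the client-side workflow concrete, here is a minimal sketch of sending an inference request to a running Triton server with the Python tritonclient package; the server address, model name ("hep_gnn"), and input/output tensor names and shapes are placeholders for illustration, not the actual interface of the GNN deployed at the Elastic Analysis Facility.

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to a Triton server assumed to be listening on the default gRPC port.
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Hypothetical batch of per-node feature vectors (e.g. jet constituents).
node_features = np.random.rand(1, 128, 16).astype(np.float32)

# Declare the request inputs/outputs; tensor names must match the model config.
inputs = [grpcclient.InferInput("NODE_FEATURES", list(node_features.shape), "FP32")]
inputs[0].set_data_from_numpy(node_features)
outputs = [grpcclient.InferRequestedOutput("SCORES")]

# The heavy GNN evaluation happens server-side, typically on a GPU.
result = client.infer(model_name="hep_gnn", inputs=inputs, outputs=outputs)
scores = result.as_numpy("SCORES")
print(scores.shape)
```

In this pattern the analysis job itself stays on CPU-only worker nodes and only ships tensors over the network, which is what allows many users to share a small pool of GPU-backed inference servers.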
Accounting for student perspectives in task design
This chapter aims to provide insights into students’ perspectives about the meanings and purposes of mathematical tasks and to understand how appropriate task design might help minimize any gaps between teacher intentions and student mathematical activity. Throughout the chapter, we explore accounts of how students understand the meaning and purpose of the mathematical activity they undertake, as well as how task design might take account of what we know about these perspectives. For instance, we discuss research that indicates ways in which the perceptions of students may differ from the intentions of teachers and task designers and attempt to articulate the nature of those differences to raise both theoretical and methodological challenges concerning how an observer can appreciate the student’s point of view. We also discuss ways in which task design that takes account of students’ responses might reduce the discrepancies between the intentions of designers and/or teachers and students’ perceptions of their activity and achievements. This is a post-peer-review, pre-copyedit version of a chapter published by Springer. The final authenticated version is available online at: https://doi.org/10.1007/978-3-319-09629-2_
Data Science and Machine Learning in Education
The growing role of data science (DS) and machine learning (ML) in high-energy physics (HEP) is well established and pertinent given the complex detectors, large data sets and sophisticated analyses at the heart of HEP research. Moreover, exploiting symmetries inherent in physics data has inspired physics-informed ML as a vibrant sub-field of computer science research. HEP researchers benefit greatly from widely available materials for use in education, training and workforce development. They are also contributing to these materials and providing software to DS/ML-related fields. Increasingly, physics departments are offering courses at the intersection of DS, ML and physics, often using curricula developed by HEP researchers and involving open software and data used in HEP. In this white paper, we explore synergies between HEP research and DS/ML education, discuss opportunities and challenges at this intersection, and propose community activities that will be mutually beneficial.
Comment: Contribution to Snowmass 202
Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector
A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb⁻¹ of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.