
    Overview of the HL-LHC Upgrade for the CMS Level-1 Trigger

    The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the standard model as well as searches for new physics beyond the standard model. Such precision measurements and searches require information-rich datasets with a statistical power that matches the high luminosity provided by the Phase-2 upgrade of the LHC. Efficiently collecting those datasets will be a challenging task, given the harsh environment of 200 proton-proton interactions per LHC bunch crossing. For this purpose, CMS is designing an efficient data-processing hardware trigger (Level-1) that will include tracking information and high-granularity calorimeter information. Trigger data analysis will be performed through sophisticated algorithms such as particle flow reconstruction, including widespread use of Machine Learning. The current conceptual system design is expected to take full advantage of advances in FPGA and link technologies over the coming years, providing a high-performance, low-latency computing platform for large throughput and sophisticated data correlation across diverse sources.
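    As a concrete (and purely illustrative) companion to the FPGA-based design described above, the sketch below shows how a small neural network can be compiled into low-latency FPGA firmware with hls4ml, a tool commonly used in LHC trigger ML studies. The toy model, FPGA part string, and project directory are assumptions for the example, not details of the CMS Level-1 design.

```python
# Minimal sketch: compiling a small Keras classifier to FPGA firmware with
# hls4ml, the kind of workflow used for low-latency trigger-level ML inference.
# Architecture, part number, and paths are illustrative assumptions.
import hls4ml
from tensorflow import keras

# Toy classifier over 16 input features (hypothetical trigger quantities).
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Derive an hls4ml configuration (fixed-point precision, parallelism) and
# convert the network into an HLS project targeting an example FPGA part.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="l1_trigger_nn_prj",  # hypothetical project directory
    part="xcvu13p-flga2577-2-e",     # example Xilinx UltraScale+ part
)
hls_model.compile()  # builds a bit-accurate C++ emulation for validation
```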

    SuPP & MaPP: Adaptable Structure-Based Representations for MIR Tasks

    Accurate and flexible representations of music data are paramount to addressing MIR tasks, yet many of the existing approaches are difficult to interpret or rigid in nature. This work introduces two new song representations for structure-based retrieval methods: Surface Pattern Preservation (SuPP), a continuous song representation, and Matrix Pattern Preservation (MaPP), SuPP’s discrete counterpart. These representations come equipped with several user-defined parameters so that they are adaptable for a range of MIR tasks. Experimental results show MaPP as successful in addressing the cover song task on a set of Mazurka scores, with a mean precision of 0.965 and recall of 0.776. SuPP and MaPP also show promise in other MIR applications, such as novel-segment detection and genre classification, the latter of which demonstrates their suitability as inputs for machine learning problems.
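    The mean precision and recall quoted above are averages over retrieval queries. As a minimal, hypothetical sketch (toy data, not the authors' Mazurka evaluation), per-query precision and recall for a cover-song-style retrieval task could be averaged as follows:

```python
# Toy sketch of mean precision/recall over retrieval queries; the query,
# retrieved, and relevant sets below are made up for illustration.
from typing import Dict, Set, Tuple

def mean_precision_recall(
    retrieved: Dict[str, Set[str]], relevant: Dict[str, Set[str]]
) -> Tuple[float, float]:
    precisions, recalls = [], []
    for query, hits in retrieved.items():
        truth = relevant[query]
        tp = len(hits & truth)  # retrieved items that are true covers
        precisions.append(tp / len(hits) if hits else 0.0)
        recalls.append(tp / len(truth) if truth else 0.0)
    return (sum(precisions) / len(precisions), sum(recalls) / len(recalls))

retrieved = {"q1": {"a", "b"}, "q2": {"c"}}
relevant = {"q1": {"a", "b", "d"}, "q2": {"c"}}
print(mean_precision_recall(retrieved, relevant))  # (1.0, 0.8333...)
```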

    Assessing Interprofessional Learning during a Student Placement in an Interprofessional Rehabilitation University Clinic in Primary Healthcare in a Canadian Francophone Minority Context

    Background: Interprofessional collaboration is deemed the key to quality patient care and the future for healthcare delivery models. Such a complex competency needs to be learned; as such, interprofessional education should be a key component of health professional programs. An Interprofessional Rehabilitation University Clinic was created to promote interprofessional education at the pre-licensure level. However, few resources are currently available to assess interprofessional learning; no tool (English or French) that specifically assesses interprofessional learning could be identified. Methods and Findings: A self-administered questionnaire was developed to assess interprofessional learning during a clinical placement. Using a single-group posttest-only design, this descriptive pilot project reports the results obtained with this tool for the first 15 students on placement at the Clinic. Preliminary findings suggest this tool helped demonstrate that, during placements in an interprofessional clinic, students developed some understanding of their own profession as well as of other professions. Responses showed that participants believe that interprofessional interventions are more efficient, save time, and facilitate sharing of information, leading to a better comprehension of the clients’ situations. The tool suggests that students feel that an interprofessional educational experience is beneficial for clients and for themselves. Conclusions: Assessing interprofessional learning is challenging. Although the tool developed during this project is most promising, further research is warranted to increase its usefulness in assessing interprofessional learning.

    Optimizing High Throughput Inference on Graph Neural Networks at Shared Computing Facilities with the NVIDIA Triton Inference Server

    With machine learning applications now spanning a variety of computational tasks, multi-user shared computing facilities are devoting a rapidly increasing proportion of their resources to such algorithms. Graph neural networks (GNNs), for example, have provided astounding improvements in extracting complex signatures from data and are now widely used in a variety of applications, such as particle jet classification in high energy physics (HEP). However, GNNs also come with an enormous computational penalty that requires the use of GPUs to maintain reasonable throughput. At shared computing facilities, such as those used by physicists at Fermi National Accelerator Laboratory (Fermilab), methodical resource allocation and high throughput at the many-user scale are key to ensuring that resources are being used as efficiently as possible. These facilities, however, primarily provide CPU-only nodes, which proves detrimental to time-to-insight and computational throughput for workflows that include machine learning inference. In this work, we describe how a shared computing facility can use the NVIDIA Triton Inference Server to optimize its resource allocation and computing structure, recovering high throughput while scaling out to multiple users by massively parallelizing their machine learning inference. To demonstrate the effectiveness of this system in a realistic multi-user environment, we use the Fermilab Elastic Analysis Facility augmented with the Triton Inference Server to provide scalable and high-throughput access to a HEP-specific GNN and report on the outcome.
    Comment: 20 pages, 14 figures, submitted to "Computing and Software for Big Science".
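    As an illustration of the inference-as-a-service pattern described above, the sketch below sends a request to a Triton server with the standard Python gRPC client. The model name ("gnn_jet_tagger"), tensor names, and shapes are hypothetical stand-ins, not the HEP-specific GNN deployed at the Fermilab Elastic Analysis Facility.

```python
# Minimal sketch of a Triton gRPC client call; model and tensor names and
# shapes are hypothetical. Assumes a Triton server is listening on
# localhost:8001 with a model named "gnn_jet_tagger" already loaded.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

# One toy graph: 128 nodes x 16 per-node features.
node_features = np.random.rand(1, 128, 16).astype(np.float32)

infer_input = grpcclient.InferInput("node_features", list(node_features.shape), "FP32")
infer_input.set_data_from_numpy(node_features)
requested_output = grpcclient.InferRequestedOutput("scores")

# The server batches and schedules requests from many users across its GPUs.
result = client.infer(
    model_name="gnn_jet_tagger",
    inputs=[infer_input],
    outputs=[requested_output],
)
print(result.as_numpy("scores"))
```

    Because clients only exchange input and output tensors with the server, many CPU-only analysis jobs can share a small pool of GPUs, which is the throughput-recovery mechanism the paper exploits.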

    Accounting for student perspectives in task design

    This chapter aims to provide insights into students’ perspectives about the meanings and purposes of mathematical tasks and to understand how appropriate task design might help minimize any gaps between teacher intentions and student mathematical activity. Throughout the chapter, we explore accounts of how students understand the meaning and purpose of the mathematical activity they undertake, as well as how task design might take account of what we know about these perspectives. For instance, we discuss research that indicates ways in which the perceptions of students may differ from the intentions of teachers and task designers and attempt to articulate the nature of those differences to raise both theoretical and methodological challenges concerning how an observer can appreciate the student’s point of view. We also discuss ways in which task design that takes account of students’ responses might reduce the discrepancies between the intentions of designers and/or teachers and students’ perceptions of their activity and achievements.
    This is a post-peer-review, pre-copyedit version of a chapter published by Springer. The final authenticated version is available online at: https://doi.org/10.1007/978-3-319-09629-2_

    Data Science and Machine Learning in Education

    The growing role of data science (DS) and machine learning (ML) in high-energy physics (HEP) is well established and pertinent given the complex detectors, large data sets, and sophisticated analyses at the heart of HEP research. Moreover, exploiting symmetries inherent in physics data has inspired physics-informed ML as a vibrant sub-field of computer science research. HEP researchers benefit greatly from widely available materials for use in education, training, and workforce development. They are also contributing to these materials and providing software to DS/ML-related fields. Increasingly, physics departments are offering courses at the intersection of DS, ML, and physics, often using curricula developed by HEP researchers and involving open software and data used in HEP. In this white paper, we explore synergies between HEP research and DS/ML education, discuss opportunities and challenges at this intersection, and propose community activities that will be mutually beneficial.
    Comment: Contribution to Snowmass 2021

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb⁻¹ of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.
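    To make the underlying statistical logic concrete ("no excess over background" becomes an upper limit on the signal rate), here is a deliberately simplified single-bin Poisson counting sketch. It is not the ATLAS procedure, which uses profile-likelihood CLs fits over many signal regions; all numbers are invented for illustration.

```python
# Toy single-bin counting experiment: frequentist 95% CL upper limit on the
# signal mean s, given n_obs observed events and a known background b_exp.
# NOT the ATLAS statistical treatment; numbers are made up.
from scipy import stats

def poisson_upper_limit(n_obs: int, b_exp: float, cl: float = 0.95) -> float:
    # The limit is the smallest s for which observing <= n_obs events is
    # unlikely at the (1 - cl) level under a Poisson mean of s + b_exp.
    s = 0.0
    while stats.poisson.cdf(n_obs, s + b_exp) > 1.0 - cl:
        s += 0.01
    return s

# Hypothetical counts: 52 observed events over an expected background of 50.
s_up = poisson_upper_limit(n_obs=52, b_exp=50.0)
print(f"95% CL upper limit on signal events: {s_up:.1f}")
# Dividing by (integrated luminosity x acceptance x efficiency) would convert
# this event-count limit into a cross-section limit like those quoted above.
```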