
    A new CMS pixel detector for the LHC luminosity upgrade

    The CMS inner pixel detector system is planned to be replaced during the first phase of the LHC luminosity upgrade. The plans foresee an ultra-low-mass system with four barrel layers and three disks on either end. With the expected increase in particle rates, the electronic readout chain will be changed to fast digital signals. An overview of the envisaged design options for the upgraded CMS pixel detector is given, as well as estimates of the tracking and vertexing performance. Comment: 5 pages, 8 figures; proceedings of the 8th International Conference on Radiation Effects on Semiconductor Materials, Detectors and Devices.

    Why do teachers visit a botanical garden?

    Recognizing that botanical gardens are institutions that develop educational programs for different types of audiences, among them school groups, the present work analyzes the factors that motivate teachers to carry out a guided visit to these spaces with their students. The methodology consisted of administering a questionnaire and analyzing the responses of forty teachers who visited the Jardim Botânico de São Paulo, Brazil. The results show that the visit is associated with the content taught in class, and that educational and conceptual aspects motivate teachers to carry out the visit with their students, seeking to broaden and deepen the students' knowledge of environmental issues.

    Novel deep learning methods for track reconstruction

    For the past year, the HEP.TrkX project has been investigating machine learning solutions to LHC particle track reconstruction problems. A variety of models were studied that drew inspiration from computer vision applications and operated on an image-like representation of tracking detector data. While these approaches have shown some promise, image-based methods face challenges in scaling up to realistic HL-LHC data due to high dimensionality and sparsity. In contrast, models that can operate on the spacepoint representation of track measurements ("hits") can exploit the structure of the data to solve tasks efficiently. In this paper we present two sets of new deep learning models for reconstructing tracks using spacepoint data arranged as sequences or connected graphs. In the first set of models, Recurrent Neural Networks (RNNs) are used to extrapolate, build, and evaluate track candidates, akin to Kalman filter algorithms. Such models can express their own uncertainty when trained with an appropriate likelihood loss function. The second set of models uses Graph Neural Networks (GNNs) for the tasks of hit classification and segment classification. These models read a graph of connected hits and compute features on the nodes and edges. They adaptively learn which hit connections are important and which are spurious. The models are scalable, with simple architectures and relatively few parameters. Results for all models are presented on ACTS generic detector simulated data. Comment: CTD 2018 proceedings.
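
    As a concrete illustration of the first family of models, the sketch below shows an RNN that reads a sequence of hits and predicts the position of the next hit together with a per-coordinate Gaussian uncertainty, trained with a negative log-likelihood loss. This is a minimal sketch assuming PyTorch; the hit features (r, phi, z), network sizes, and training loop are hypothetical placeholders, not the HEP.TrkX configuration.

```python
# Minimal sketch, not the HEP.TrkX code: an RNN track builder that extrapolates
# a track hit by hit and predicts the next hit position with a Gaussian
# uncertainty, trained with a negative log-likelihood loss.
import torch
import torch.nn as nn

class TrackExtrapolatorRNN(nn.Module):
    def __init__(self, hit_dim=3, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(hit_dim, hidden_dim, batch_first=True)
        # Predict mean and log-variance of the next hit position.
        self.head = nn.Linear(hidden_dim, 2 * hit_dim)

    def forward(self, hits):                       # hits: (batch, seq, hit_dim)
        out, _ = self.rnn(hits)
        mean, log_var = self.head(out).chunk(2, dim=-1)
        return mean, log_var

def gaussian_nll(mean, log_var, target):
    # Gaussian negative log-likelihood per coordinate (constant term dropped).
    return 0.5 * ((target - mean) ** 2 * torch.exp(-log_var) + log_var).mean()

# Toy usage: predict hits 2..k from hits 1..k-1 for a batch of track candidates.
model = TrackExtrapolatorRNN()
hits = torch.randn(32, 10, 3)                      # fake (r, phi, z) sequences
mean, log_var = model(hits[:, :-1])
loss = gaussian_nll(mean, log_var, hits[:, 1:])
loss.backward()
```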

    Parallelized and Vectorized Tracking Using Kalman Filters with CMS Detector Geometry and Events

    The High-Luminosity Large Hadron Collider at CERN will be characterized by greater pileup of events and higher occupancy, making the track reconstruction even more computationally demanding. Existing algorithms at the LHC are based on Kalman filter techniques with proven excellent physics performance under a variety of conditions. Starting in 2014, we have been developing Kalman-filter-based methods for track finding and fitting adapted for many-core SIMD processors that are becoming dominant in high-performance systems. This paper summarizes the latest extensions to our software that allow it to run on the realistic CMS-2017 tracker geometry using CMSSW-generated events, including pileup. The reconstructed tracks can be validated against either the CMSSW simulation that generated the hits or the CMSSW reconstruction of the tracks. In general, the code's computational performance has continued to improve while the above capabilities were being added. We demonstrate that the present Kalman filter implementation is able to reconstruct events with comparable physics performance to CMSSW, while providing generally better computational performance. Further plans for advancing the software are discussed.
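
    To make the vectorization idea concrete, the sketch below applies a single Kalman filter measurement update to a whole batch of track candidates at once, so identical arithmetic runs across many candidates and maps naturally onto SIMD vector units. This is an illustrative NumPy sketch under simplified assumptions (a linear 2D position measurement, hypothetical state dimensions), not the CMSSW-adapted software described above.

```python
# Illustrative sketch only: a batched Kalman filter measurement update, with
# the same arithmetic applied to every track candidate in a structure-of-arrays
# layout. Measurement model, state size, and units are hypothetical.
import numpy as np

def kalman_update_batch(x, P, z, H, R):
    """x: (N, n) states, P: (N, n, n) covariances,
    z: (N, m) measurements, H: (m, n) projection, R: (m, m) noise."""
    y = z - x @ H.T                                # residuals, (N, m)
    S = H @ P @ H.T + R                            # innovation covariance, (N, m, m)
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain, (N, n, m)
    x_new = x + np.einsum('nij,nj->ni', K, y)      # updated states
    P_new = P - K @ H @ P                          # updated covariances
    return x_new, P_new

# Toy usage: update 10k candidate tracks with a 2D position measurement.
N, n, m = 10_000, 5, 2
H = np.zeros((m, n)); H[0, 0] = H[1, 1] = 1.0
R = 0.01 * np.eye(m)
x = np.random.randn(N, n)
P = np.tile(np.eye(n), (N, 1, 1))
z = x[:, :m] + 0.1 * np.random.randn(N, m)
x, P = kalman_update_batch(x, P, z, H, R)
```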

    994-98 Patients’ Radiation Risk During Diagnostic and Interventional Coronary Procedures

    Uncertainties in radiation risk estimates at low doses (<0.1 Gy) include the shape of the dose-response curve, the use of a relative or absolute risk model, and the length of the latent cancer induction period. Coronary procedures are often repeated within short intervals in many patients, but neither absorbed doses nor imparted energies are routinely measured. We used LiF thermoluminescence dosimeters in 15 consecutive diagnostic (D) and 15 PTCA (I) procedures, with stent implantation in 1 case, multivessel PTCA in 2, and PTCA of a chronic occlusion in 2. A Philips Optimus 2000 DCI was used, with a standard dose of 10 microR/frame for an image intensifier format (IIF) of 23 cm. Fluoroscopy times (2.9±1 min for D and 16±6 min for I), number of cine runs (9±2 for D and 17±7 for I), and length of cine runs (5.3±1.5 s for D and 2.9±2 s for I) were representative of our standard procedures. A rate of 12.5 f/s was used for cine coronary imaging, with 25 f/s for left ventriculograms in 2 projections. IIF of 18 and 13 cm were used for D and I, respectively. Patient absorbed doses (mGy) were [mean±s.d. (range)]:

          Thyroid    R+L Thorax/2 (range)   Column   Gonads
    D     0.6±0.3    18±27 (1.3–127)        21±36    0.08±0.05
    I     2.0±0.8    29±50 (1.2–245)        26±19    0.08±0.02

    Patient radiation exposure during D and I, despite a dose-effective technique, is substantial, especially in areas (thorax) that cannot be shielded. It should be routinely measured, since the radiation risk may not be negligible when repeated procedures are performed. The risk/benefit ratio of repeated D and I must be weighed.

    Reconstruction for Liquid Argon TPC Neutrino Detectors Using Parallel Architectures

    Neutrinos are particles that interact rarely, so identifying them requires large detectors which produce lots of data. Processing this data with the computing power available is becoming more difficult as the detectors increase in size to reach their physics goals. In liquid argon time projection chambers (TPCs) the charged particles from neutrino interactions produce ionization electrons which drift in an electric field towards a series of collection wires, and the signal on the wires is used to reconstruct the interaction. The MicroBooNE detector currently collecting data at Fermilab has 8000 wires, and planned future experiments like DUNE will have 100 times more, which means that the time required to reconstruct an event will scale accordingly. Modernization of liquid argon TPC reconstruction code, including vectorization, parallelization and code portability to GPUs, will help to mitigate these challenges. The liquid argon TPC hit finding algorithm within the LArSoft framework used across multiple experiments has been vectorized and parallelized. This increases the speed of the algorithm on the order of ten times within a standalone version on Intel architectures. This new version has been incorporated back into LArSoft so that it can be generally used. These methods will also be applied to other low-level reconstruction algorithms of the wire signals such as the deconvolution. The applications and performance of this modernized liquid argon TPC wire reconstruction will be presented.
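
    As a simplified illustration of this kind of low-level wire reconstruction, the sketch below performs threshold-based hit finding on an array of wire waveforms, with the thresholding vectorized across all wires at once. It is an assumption-laden NumPy sketch; the actual LArSoft hit finder is considerably more sophisticated (e.g., fitting pulse shapes), and the waveform sizes and threshold here are hypothetical.

```python
# Simplified illustration, not the LArSoft implementation: threshold-based hit
# finding on deconvolved wire waveforms, vectorized over all wires.
import numpy as np

def find_hits(waveforms, threshold):
    """waveforms: (n_wires, n_ticks) ADC array. Returns a list of
    (wire, start_tick, end_tick, peak_tick, integral) tuples."""
    above = waveforms > threshold                      # boolean mask for all wires
    edges = np.diff(above.astype(np.int8), axis=1)     # +1 rising, -1 falling edge
    hits = []
    for wire in range(waveforms.shape[0]):
        starts = np.flatnonzero(edges[wire] == 1) + 1
        ends = np.flatnonzero(edges[wire] == -1) + 1
        for s, e in zip(starts, ends):
            pulse = waveforms[wire, s:e]
            hits.append((wire, s, e, s + int(pulse.argmax()), float(pulse.sum())))
    return hits

# Toy usage: 8000 wires of fake noise with one injected pulse.
rng = np.random.default_rng(0)
wf = rng.normal(0.0, 1.0, size=(8000, 4096))
wf[100, 2000:2010] += 25.0
print(len(find_hits(wf, threshold=10.0)))
```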

    Graph Neural Networks for Particle Reconstruction in High Energy Physics detectors

    Pattern recognition problems in high energy physics are notably different from traditional machine learning applications in computer vision. Reconstruction algorithms identify and measure the kinematic properties of particles produced in high energy collisions and recorded with complex detector systems. Two critical applications are the reconstruction of charged particle trajectories in tracking detectors and the reconstruction of particle showers in calorimeters. These two problems have unique challenges and characteristics, but both have high dimensionality, a high degree of sparsity, and complex geometric layouts. Graph Neural Networks (GNNs) are a relatively new class of deep learning architectures which can deal with such data effectively, allowing scientists to incorporate domain knowledge in a graph structure and learn powerful representations leveraging that structure to identify patterns of interest. In this work we demonstrate the applicability of GNNs to these two diverse particle reconstruction problems. Comment: Presented at the NeurIPS 2019 workshop "Machine Learning and the Physical Sciences".
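
    A minimal sketch of the graph-construction step that typically precedes such GNNs is shown below: each spacepoint becomes a node, and candidate edges connect hits on adjacent detector layers whose azimuthal separation falls below a cut. The feature choices and the delta-phi window are hypothetical assumptions for illustration, not the authors' actual selection.

```python
# Hypothetical hit-graph construction for a GNN: nodes are spacepoints, edges
# link hits on adjacent layers within a delta-phi window.
import numpy as np

def build_hit_graph(r, phi, z, layer, dphi_max=0.05):
    """Per-hit arrays in; returns node features (N, 3) and an edge index
    array (2, E) in COO format."""
    nodes = np.stack([r, phi, z], axis=1)
    edges = []
    for l in range(int(layer.max())):
        inner = np.flatnonzero(layer == l)
        outer = np.flatnonzero(layer == l + 1)
        for i in inner:
            # Wrap-around-safe azimuthal difference to all outer-layer hits.
            dphi = np.angle(np.exp(1j * (phi[outer] - phi[i])))
            for j in outer[np.abs(dphi) < dphi_max]:
                edges.append((i, j))
    return nodes, np.array(edges, dtype=np.int64).T

# Toy usage: random hits on 4 cylindrical layers.
rng = np.random.default_rng(1)
n_hits = 200
layer = rng.integers(0, 4, size=n_hits)
r = 10.0 * (layer + 1) + rng.normal(0.0, 0.1, n_hits)
phi = rng.uniform(-np.pi, np.pi, n_hits)
z = rng.uniform(-100.0, 100.0, n_hits)
nodes, edge_index = build_hit_graph(r, phi, z, layer)
print(nodes.shape, edge_index.shape)
```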
