
    Space Point Calibration of the ALICE TPC with Track Residuals

    In the upcoming LHC Run 3, the upgraded Time Projection Chamber (TPC) of the ALICE experiment will record Pb-Pb collisions in a continuous readout mode at an interaction rate of up to 50 kHz. These conditions will lead to the accumulation of space charge in the detector volume, which in turn induces distortions of the electron drift lines of several centimeters that fluctuate in time. This work describes the correction of these distortions via a calibration procedure that uses the information of the Inner Tracking System (ITS), located inside the TPC, and of the Transition Radiation Detector (TRD) and the Time-Of-Flight (TOF) system, located around it. The required online tracking algorithm for the TRD, which is based on a Kalman filter, is the main result of this work. The procedure matches extrapolated ITS-TPC tracks to TRD space points utilizing GPUs. The new online tracking algorithm has a performance comparable to that of the offline tracking algorithm used in Runs 1 and 2 for tracks with transverse momenta above 1.5 GeV/c, while it fulfills the computing speed requirements for Run 3. The second part of this work describes the extraction of time-averaged TPC cluster residuals with respect to interpolated ITS-TRD-TOF tracks in order to create a map of space-charge distortions. Regular updates of the correction map compensate for changes in the TPC conditions. The map is applied in the final reconstruction of the data.
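    The core of the second part is conceptually simple: cluster residuals with respect to the interpolated ITS-TRD-TOF tracks are averaged over time in small volumes of the TPC. The C++ sketch below illustrates this accumulation into a voxelized map; it is only an illustration of the idea, not the ALICE calibration code, and the grid size, struct names, and units are invented for the example.

    // Minimal sketch of accumulating time-averaged TPC cluster residuals
    // into a voxelized correction map. Illustrative only, not the ALICE O2
    // implementation; grid granularity and names are hypothetical.
    #include <array>
    #include <cstdio>
    #include <vector>

    constexpr int kNX = 10, kNY = 10, kNZ = 10; // hypothetical voxel grid

    struct Voxel {
        double sumDY = 0.0, sumDZ = 0.0; // accumulated residuals
        long   n     = 0;                // number of entries
    };

    struct ResidualMap {
        std::vector<Voxel> voxels = std::vector<Voxel>(kNX * kNY * kNZ);

        static int index(int ix, int iy, int iz) {
            return (ix * kNY + iy) * kNZ + iz;
        }

        // Residual of a TPC cluster w.r.t. the interpolated ITS-TRD-TOF track.
        void add(int ix, int iy, int iz, double dy, double dz) {
            Voxel& v = voxels[index(ix, iy, iz)];
            v.sumDY += dy;
            v.sumDZ += dz;
            ++v.n;
        }

        // Time-averaged distortion in this voxel (zero if no entries yet).
        std::array<double, 2> mean(int ix, int iy, int iz) const {
            const Voxel& v = voxels[index(ix, iy, iz)];
            if (v.n == 0) return {0.0, 0.0};
            return {v.sumDY / v.n, v.sumDZ / v.n};
        }
    };

    int main() {
        ResidualMap map;
        map.add(3, 4, 5, 0.12, -0.03); // fictitious residuals in cm
        map.add(3, 4, 5, 0.10, -0.05);
        auto m = map.mean(3, 4, 5);
        std::printf("mean distortion: dy=%.3f cm dz=%.3f cm\n", m[0], m[1]);
    }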

    GPU-based Online Track Reconstruction for the ALICE TPC in Run 3 with Continuous Read-Out

    In LHC Run 3, ALICE will increase the data-taking rate significantly to 50 kHz continuous read-out of minimum bias Pb-Pb collisions. The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking, enabling detector calibration and data compression, and a posterior calibrated asynchronous reconstruction stage. Many new challenges arise, among them continuous TPC read-out, more overlapping collisions, no a priori knowledge of the primary vertex and of location-dependent calibration in the synchronous phase, identification of low-momentum looping tracks, and sophisticated raw data compression. The tracking algorithm for the Time Projection Chamber (TPC) will be based on a Cellular Automaton and the Kalman filter. The reconstruction shall run online, processing 50 times more collisions per second than today, while yielding results comparable to current offline reconstruction. Our TPC track finding leverages the potential of hardware accelerators via the OpenCL and CUDA APIs in a shared source code for CPUs and GPUs for both reconstruction stages. We give an overview of the status of Run 3 tracking, including performance on processors and GPUs and achieved compression ratios. Comment: 8 pages, 7 figures, contribution to the CHEP 2018 conference.
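    The "shared source code for CPUs and GPUs" mentioned above is commonly realized by compiling the same kernel body for different back-ends. The sketch below shows a generic version of this technique; it is not the ALICE framework, and the macro and function names are invented. Built with a plain C++ compiler, only the CPU loop exists; under nvcc, the CUDA kernel is additionally compiled from the same body.

    // Sketch of the "shared source" technique: one kernel body compiles either
    // as a CUDA kernel or as a plain CPU loop, selected at build time.
    #ifdef __CUDACC__
      #define DEVICE_FN __host__ __device__
    #else
      #define DEVICE_FN
    #endif

    #include <cstdio>
    #include <vector>

    // The per-cluster computation shared by both back-ends (a toy transform).
    DEVICE_FN inline float transformPad(float pad) { return 0.5f * pad + 1.0f; }

    #ifdef __CUDACC__
    // GPU wrapper: one thread per element (compiled only by nvcc).
    __global__ void transformKernel(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = transformPad(in[i]);
    }
    #endif

    // CPU wrapper: a plain loop over the same shared body.
    void transformCPU(const float* in, float* out, int n) {
        for (int i = 0; i < n; ++i) out[i] = transformPad(in[i]);
    }

    int main() {
        std::vector<float> in{1.f, 2.f, 3.f}, out(3);
        transformCPU(in.data(), out.data(), 3); // kernel launch omitted for brevity
        std::printf("%.1f %.1f %.1f\n", out[0], out[1], out[2]);
    }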

    Track Reconstruction in the ALICE TPC using GPUs for LHC Run 3

    In LHC Run 3, ALICE will increase the data-taking rate significantly to continuous readout of 50 kHz minimum bias Pb-Pb collisions. The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking, enabling detector calibration, and a posterior calibrated asynchronous reconstruction stage. We present a tracking algorithm for the Time Projection Chamber (TPC), the main tracking detector of ALICE. The reconstruction must yield results comparable to current offline reconstruction and meet time constraints similar to those of the current High Level Trigger (HLT), processing 50 times as many collisions per second as today. It is derived from the current online tracking in the HLT, which is based on a Cellular Automaton and the Kalman filter, and we integrate missing features from offline tracking for improved resolution. The continuous TPC readout and overlapping collisions pose new challenges: the conversion to spatial coordinates and the application of time- and location-dependent calibration must happen between track seeding and track fitting, while the TPC occupancy increases five-fold. The huge data volume requires a data reduction factor of 20, which imposes additional requirements: the momentum range must be extended to identify low-$p_{\mathrm{T}}$ looping tracks, and a special refit in uncalibrated coordinates improves the track model entropy encoding. Our TPC track finding leverages the potential of hardware accelerators via the OpenCL and CUDA APIs in a shared source code for CPUs, GPUs, and both reconstruction stages. Porting more reconstruction steps, like the remainder of the TPC reconstruction and tracking for other detectors, will shift the computing balance from traditional processors to GPUs. Comment: 13 pages, 10 figures, proceedings of the Connecting The Dots Workshop, Seattle, 2018.
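    One of the new challenges named above, the conversion to spatial coordinates, stems from the continuous readout: a cluster initially carries only a time stamp, and its z position follows from the drift time once a collision time is assigned to the track. A toy version of this conversion follows; the drift velocity and geometry values are placeholders, not ALICE parameters.

    // Toy conversion of a TPC cluster time stamp to a spatial z coordinate
    // in continuous readout, once the collision time is known.
    #include <cstdio>

    // Drift length is (cluster time - collision time) * drift velocity;
    // z is measured from the readout plane toward the central electrode.
    double clusterZ(double tCluster, double tCollision, double vDrift,
                    double zReadout, int side) {
        double drift = (tCluster - tCollision) * vDrift; // drift length in cm
        return side > 0 ? zReadout - drift : -zReadout + drift; // A/C side
    }

    int main() {
        // Hypothetical numbers: 2.6 cm/us drift velocity, 250 cm readout plane.
        std::printf("z = %.1f cm\n", clusterZ(60.0, 10.0, 2.6, 250.0, +1));
    }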

    Production of $^{4}\mathrm{He}$ and $^{4}\overline{\mathrm{He}}$ in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV at the LHC

    Results on the production of $^{4}\mathrm{He}$ and $^{4}\overline{\mathrm{He}}$ nuclei in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV in the rapidity range $|y| < 1$, using the ALICE detector, are presented in this paper. The rapidity densities corresponding to 0-10% central events are found to be $\mathrm{d}N/\mathrm{d}y(^{4}\mathrm{He}) = (0.8 \pm 0.4\,\mathrm{(stat)} \pm 0.3\,\mathrm{(syst)}) \times 10^{-6}$ and $\mathrm{d}N/\mathrm{d}y(^{4}\overline{\mathrm{He}}) = (1.1 \pm 0.4\,\mathrm{(stat)} \pm 0.2\,\mathrm{(syst)}) \times 10^{-6}$, respectively. This is in agreement with the statistical thermal model expectation assuming the same chemical freeze-out temperature ($T_{\mathrm{chem}} = 156$ MeV) as for light hadrons. The measured ratio of $^{4}\overline{\mathrm{He}}/^{4}\mathrm{He}$ is $1.4 \pm 0.8\,\mathrm{(stat)} \pm 0.5\,\mathrm{(syst)}$. © 2018 Published by Elsevier B.V. Peer reviewed.
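    As a quick plausibility check, the quoted ratio can be reproduced from the two rapidity densities with simple uncorrelated error propagation, as sketched below. Note that this check yields 0.6 rather than the quoted 0.5 for the systematic term, which is unsurprising: the published analysis can account for correlated systematic uncertainties that partially cancel in the ratio, whereas this sketch assumes none.

    // Quick check of the quoted 4He-bar / 4He ratio and its uncertainties
    // using uncorrelated error propagation; illustrative only.
    #include <cmath>
    #include <cstdio>

    int main() {
        // dN/dy values in units of 1e-6, taken from the abstract above.
        double yHe = 0.8, statHe = 0.4, systHe = 0.3;
        double yAnti = 1.1, statAnti = 0.4, systAnti = 0.2;

        double r = yAnti / yHe; // central value of the ratio
        // Relative uncertainties added in quadrature (assumes no correlation).
        double statR = r * std::sqrt(std::pow(statAnti / yAnti, 2) +
                                     std::pow(statHe / yHe, 2));
        double systR = r * std::sqrt(std::pow(systAnti / yAnti, 2) +
                                     std::pow(systHe / yHe, 2));
        std::printf("ratio = %.1f +- %.1f (stat) +- %.1f (syst)\n", r, statR, systR);
    }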

    Measuring universal health coverage based on an index of effective coverage of health services in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019

    Background: Achieving universal health coverage (UHC) involves all people receiving the health services they need, of high quality, without experiencing financial hardship. Making progress towards UHC is a policy priority for both countries and global institutions, as highlighted by the agenda of the UN Sustainable Development Goals (SDGs) and WHO's Thirteenth General Programme of Work (GPW13). Measuring effective coverage at the health-system level is important for understanding whether health services are aligned with countries' health profiles and are of sufficient quality to produce health gains for populations of all ages.

    Methods: Based on the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019, we assessed UHC effective coverage for 204 countries and territories from 1990 to 2019. Drawing from a measurement framework developed through WHO's GPW13 consultation, we mapped 23 effective coverage indicators to a matrix representing health service types (eg, promotion, prevention, and treatment) and five population-age groups spanning from reproductive and newborn to older adults (≥65 years). Effective coverage indicators were based on intervention coverage or outcome-based measures such as mortality-to-incidence ratios to approximate access to quality care; outcome-based measures were transformed to values on a scale of 0–100 based on the 2·5th and 97·5th percentile of location-year values. We constructed the UHC effective coverage index by weighting each effective coverage indicator relative to its associated potential health gains, as measured by disability-adjusted life-years for each location-year and population-age group. For three tests of validity (content, known-groups, and convergent), UHC effective coverage index performance was generally better than that of other UHC service coverage indices from WHO (ie, the current metric for SDG indicator 3.8.1 on UHC service coverage), the World Bank, and GBD 2017. We quantified frontiers of UHC effective coverage performance on the basis of pooled health spending per capita, representing UHC effective coverage index levels achieved in 2019 relative to country-level government health spending, prepaid private expenditures, and development assistance for health. To assess current trajectories towards the GPW13 UHC billion target—1 billion more people benefiting from UHC by 2023—we estimated additional population equivalents with UHC effective coverage from 2018 to 2023.

    Findings: Globally, performance on the UHC effective coverage index improved from 45·8 (95% uncertainty interval 44·2–47·5) in 1990 to 60·3 (58·7–61·9) in 2019, yet country-level UHC effective coverage in 2019 still spanned from 95 or higher in Japan and Iceland to lower than 25 in Somalia and the Central African Republic. Since 2010, sub-Saharan Africa showed accelerated gains on the UHC effective coverage index (at an average increase of 2·6% [1·9–3·3] per year up to 2019); by contrast, most other GBD super-regions had slowed rates of progress in 2010–2019 relative to 1990–2010. Many countries showed lagging performance on effective coverage indicators for non-communicable diseases relative to those for communicable diseases and maternal and child health, despite non-communicable diseases accounting for a greater proportion of potential health gains in 2019, suggesting that many health systems are not keeping pace with the rising non-communicable disease burden and associated population health needs. In 2019, the UHC effective coverage index was associated with pooled health spending per capita (r=0·79), although countries across the development spectrum had much lower UHC effective coverage than is potentially achievable relative to their health spending. Under maximum efficiency of translating health spending into UHC effective coverage performance, countries would need to reach $1398 pooled health spending per capita (US$ adjusted for purchasing power parity) in order to achieve 80 on the UHC effective coverage index. From 2018 to 2023, an estimated 388·9 million (358·6–421·3) more population equivalents would have UHC effective coverage, falling well short of the GPW13 target of 1 billion more people benefiting from UHC during this time. Current projections point to an estimated 3·1 billion (3·0–3·2) population equivalents still lacking UHC effective coverage in 2023, with nearly a third (968·1 million [903·5–1040·3]) residing in south Asia.

    Interpretation: The present study demonstrates the utility of measuring effective coverage and its role in supporting improved health outcomes for all people—the ultimate goal of UHC and its achievement. Global ambitions to accelerate progress on UHC service coverage are increasingly unlikely unless concerted action on non-communicable diseases occurs and countries can better translate health spending into improved performance. Focusing on effective coverage and accounting for the world's evolving health needs lays the groundwork for better understanding how close—or how far—all populations are in benefiting from UHC.
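    Two of the methodological steps described above lend themselves to a compact illustration: rescaling an outcome-based indicator to 0-100 between the 2·5th and 97·5th percentiles of location-year values, and weighting indicators by their associated disability-adjusted life-years. The C++ sketch below implements only these two steps with fictitious numbers; the actual GBD pipeline is far more extensive.

    // Minimal sketch of percentile rescaling plus DALY-weighted aggregation.
    // Illustrative only; all input values are fictitious.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Rescale a raw value to 0-100 between the percentile bounds, clamped.
    double rescale(double value, double p2_5, double p97_5) {
        double s = 100.0 * (value - p2_5) / (p97_5 - p2_5);
        return std::clamp(s, 0.0, 100.0);
    }

    // DALY-weighted mean of rescaled indicator scores.
    double coverageIndex(const std::vector<double>& scores,
                         const std::vector<double>& dalys) {
        double num = 0.0, den = 0.0;
        for (std::size_t i = 0; i < scores.size(); ++i) {
            num += scores[i] * dalys[i];
            den += dalys[i];
        }
        return num / den;
    }

    int main() {
        // Fictitious scores for three indicators and their DALY weights.
        std::vector<double> scores{rescale(0.7, 0.2, 0.95), 80.0, 55.0};
        std::vector<double> dalys{3.0, 1.0, 2.0};
        std::printf("index = %.1f\n", coverageIndex(scores, dalys));
    }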

    GPU-Based Online Track Reconstruction for the ALICE TPC in Run 3 with Continuous Read-Out

    In LHC Run 3, ALICE will increase the data-taking rate significantly to 50 kHz continuous read-out of minimum bias Pb-Pb collisions. The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking, enabling detector calibration and data compression, and a posterior calibrated asynchronous reconstruction stage. Many new challenges arise, among them continuous TPC read-out, more overlapping collisions, no a priori knowledge of the primary vertex and of location-dependent calibration in the synchronous phase, identification of low-momentum looping tracks, and sophisticated raw data compression. The tracking algorithm for the Time Projection Chamber (TPC) will be based on a Cellular Automaton and the Kalman filter. The reconstruction shall run online, processing 50 times more collisions per second than today, while yielding results comparable to current offline reconstruction. Our TPC track finding leverages the potential of hardware accelerators via the OpenCL and CUDA APIs in a shared source code for CPUs and GPUs for both reconstruction stages. We give an overview of the status of Run 3 tracking, including performance on processors and GPUs and achieved compression ratios.
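    The raw data compression mentioned in this record relies in part on the track model: clusters attached to a track can be stored as small residuals to the fitted trajectory, which concentrates values near zero and makes them cheap for an entropy coder. The sketch below shows this delta-to-prediction idea with a toy straight-line model; ALICE's actual encoding scheme is more elaborate, and all numbers here are fictitious.

    // Toy demonstration of track-model compression: storing clusters as
    // residuals to a predicted trajectory instead of absolute coordinates.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        // Fictitious cluster pad coordinates along one track (0.01-pad units).
        std::vector<int32_t> clusters{10000, 10110, 10198, 10305, 10397};
        // Toy "track model": a straight line predicting ~100 units per row.
        std::vector<int16_t> residuals;
        for (int i = 0; i < static_cast<int>(clusters.size()); ++i) {
            int32_t predicted = 10000 + 100 * i;
            residuals.push_back(static_cast<int16_t>(clusters[i] - predicted));
        }
        // Small residuals cluster near zero: low entropy, high compression.
        for (int16_t r : residuals) std::printf("%d ", r);
        std::printf("\n");
    }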

    Track Reconstruction in the ALICE TPC using GPUs for LHC Run 3

    In LHC Run 3, ALICE will increase the data-taking rate significantly to continuous readout of 50 kHz minimum bias Pb-Pb collisions. The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking, enabling detector calibration, and a posterior calibrated asynchronous reconstruction stage. We present a tracking algorithm for the Time Projection Chamber (TPC), the main tracking detector of ALICE. The reconstruction must yield results comparable to current offline reconstruction and meet time constraints similar to those of the current High Level Trigger (HLT), processing 50 times as many collisions per second as today. It is derived from the current online tracking in the HLT, which is based on a Cellular Automaton and the Kalman filter, and we integrate missing features from offline tracking for improved resolution. The continuous TPC readout and overlapping collisions pose new challenges: the conversion to spatial coordinates and the application of time- and location-dependent calibration must happen between track seeding and track fitting, while the TPC occupancy increases five-fold. The huge data volume requires a data reduction factor of 20, which imposes additional requirements: the momentum range must be extended to identify low-$p_{\mathrm{T}}$ looping tracks, and a special refit in uncalibrated coordinates improves the track model entropy encoding. Our TPC track finding leverages the potential of hardware accelerators via the OpenCL and CUDA APIs in a shared source code for CPUs, GPUs, and both reconstruction stages. Porting more reconstruction steps, like the remainder of the TPC reconstruction and tracking for other detectors, will shift the computing balance from traditional processors to GPUs. We give an overview of the foreseen tracking in Run 3 and discuss the track finding efficiency, resolution, treatment of continuous readout data, and performance on processors and GPUs.
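    The track finding efficiency discussed in the closing sentence is conventionally defined as the fraction of "findable" simulated tracks that are matched to a reconstructed track. A minimal sketch of that bookkeeping follows; the matching criterion (a shared Monte Carlo label) and all inputs are invented for the example.

    // Toy computation of track finding efficiency: matched findable MC tracks
    // divided by all findable MC tracks. Inputs are fictitious.
    #include <cstdio>
    #include <vector>

    struct MCTrack { int label; bool findable; };

    int main() {
        // Fictitious Monte Carlo tracks and labels of reconstructed tracks.
        std::vector<MCTrack> mc{{0, true}, {1, true}, {2, false}, {3, true}};
        std::vector<int> recoLabels{0, 3};

        int findable = 0, matched = 0;
        for (const auto& t : mc) {
            if (!t.findable) continue;
            ++findable;
            for (int l : recoLabels)
                if (l == t.label) { ++matched; break; }
        }
        std::printf("efficiency = %.2f\n",
                    findable ? double(matched) / findable : 0.0);
    }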
