    Body composition and lung cancer-associated cachexia in TRACERx

    Cancer-associated cachexia (CAC) is a major contributor to morbidity and mortality in individuals with non-small cell lung cancer. Key features of CAC include alterations in body composition and body weight. Here, we explore the association of body composition and body weight with survival and delineate potential biological processes and mediators that contribute to the development of CAC. Computed tomography-based body composition analysis of 651 individuals in the TRACERx (TRAcking non-small cell lung Cancer Evolution through therapy (Rx)) study suggested that individuals in the bottom 20th percentile of the distribution of skeletal muscle or adipose tissue area at the time of lung cancer diagnosis had significantly shorter lung cancer-specific survival and overall survival. This finding was validated in 420 individuals in the independent Boston Lung Cancer Study. Individuals classified as having developed CAC at relapse, according to one or more features encompassing loss of adipose or muscle tissue or body mass index-adjusted weight loss, were found to have distinct tumor genomic and transcriptomic profiles compared with individuals who did not develop such features. Primary non-small cell lung cancers from individuals who developed CAC were characterized by enrichment of inflammatory signaling and epithelial–mesenchymal transition pathways, and differentially expressed genes upregulated in these tumors included the cancer-testis antigen MAGEA6 and matrix metalloproteinases such as ADAMTS3. In an exploratory proteomic analysis of circulating putative mediators of cachexia, performed in a subset of 110 individuals from TRACERx, a significant association between circulating GDF15 and loss of body weight, skeletal muscle and adipose tissue was identified at relapse, supporting the potential therapeutic relevance of targeting GDF15 in the management of CAC.

    Report from Working Group 3: Beyond the Standard Model physics at the HL-LHC and HE-LHC

    CERN Yellow Reports: Monographs, vol. 7 (2019). Contribution to the HL/HE-LHC Workshop.
    This is the third of five chapters of the final report [1] of the Workshop on Physics at HL-LHC, and Perspectives on HE-LHC [2]. It is devoted to the study of the potential, in the search for Beyond the Standard Model (BSM) physics, of the High Luminosity (HL) phase of the LHC, defined as 3 ab⁻¹ of data taken at a centre-of-mass energy of 14 TeV, and of a possible future upgrade, the High Energy (HE) LHC, defined as 15 ab⁻¹ of data at a centre-of-mass energy of 27 TeV. We consider a large variety of new physics models, both in a simplified-model fashion and in a more model-dependent one. A long list of contributions from the theory and experimental (ATLAS, CMS, LHCb) communities has been collected and merged to give a complete, wide and consistent view of future prospects for BSM physics at the considered colliders. Beyond the usual standard candles considered for the evaluation of future collider potential, such as supersymmetric simplified models and resonances, this report contains results on dark matter and dark sectors, long-lived particles, leptoquarks, sterile neutrinos, axion-like particles, heavy scalars, vector-like quarks, and more. Particular attention is paid, especially in the study of the HL-LHC prospects, to detector upgrades, the assessment of future systematic uncertainties, and new experimental techniques. The general conclusion is that the HL-LHC, on top of extending the present LHC mass and coupling reach by 20–50% in most new physics scenarios, will also be able to constrain, and potentially discover, new physics that is presently unconstrained. Moreover, compared with the HL-LHC, the reach in most observables will generally more than double at the HE-LHC, which may represent a good candidate future facility for a final test of TeV-scale new physics.

    Deep Underground Neutrino Experiment (DUNE) Near Detector Conceptual Design Report

    The Deep Underground Neutrino Experiment (DUNE) is an international, world-class experiment aimed at exploring fundamental questions about the universe that are at the forefront of astrophysics and particle physics research. DUNE will study questions pertaining to the preponderance of matter over antimatter in the early universe, the dynamics of supernovae, the subtleties of neutrino interaction physics, and a number of beyond-the-Standard-Model topics accessible in a powerful neutrino beam. A critical component of the DUNE physics program involves the study of changes in a powerful beam of neutrinos, i.e., neutrino oscillations, as the neutrinos propagate over a long distance. The experiment consists of a near detector, sited close to the source of the beam, and a far detector, sited along the beam at a large distance. This document, the DUNE Near Detector Conceptual Design Report (CDR), describes the design of the DUNE near detector and the science program that drives the design and technology choices. The goals and requirements underlying the design, along with the projected performance, are given. It serves as a starting point for a more detailed design that will be described in future documents.

    Reconstruction of interactions in the ProtoDUNE-SP detector with Pandora

    The Pandora Software Development Kit and algorithm libraries provide pattern-recognition logic essential to the reconstruction of particle interactions in liquid argon time projection chamber detectors. Pandora is the primary event reconstruction software used at ProtoDUNE-SP, a prototype for the Deep Underground Neutrino Experiment far detector. ProtoDUNE-SP, located at CERN, is exposed to a charged-particle test beam. This paper gives an overview of the Pandora reconstruction algorithms and how they have been tailored for use at ProtoDUNE-SP. In complex events with numerous cosmic-ray and beam background particles, the simulated reconstruction and identification efficiency for triggered test-beam particles is above 80% for the majority of particle type and beam momentum combinations. Specifically, simulated 1 GeV/c charged pions and protons are correctly reconstructed and identified with efficiencies of 86.1 ± 0.6% and 84.1 ± 0.6%, respectively. The efficiencies measured for test-beam data are shown to be within 5% of those predicted by the simulation.

    Separation of track- and shower-like energy deposits in ProtoDUNE-SP using a convolutional neural network

    Liquid argon time projection chamber detector technology provides high spatial and calorimetric resolutions for the charged particles traversing liquid argon. As a result, the technology has been used in a number of recent neutrino experiments, and it is the technology of choice for the Deep Underground Neutrino Experiment (DUNE). In order to perform high-precision measurements of neutrinos in the detector, final-state particles need to be effectively identified and their energy accurately reconstructed. This article proposes an algorithm based on a convolutional neural network to classify energy deposits and reconstructed particles as track-like or arising from electromagnetic cascades. Results from testing the algorithm on experimental data from ProtoDUNE-SP, a prototype of the DUNE far detector, are presented. The network identifies track- and shower-like particles, as well as Michel electrons, with high efficiency. The performance of the algorithm is consistent between experimental data and simulation.

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10³ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
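    The Numba approach described in this abstract can be sketched in miniature as follows. This is a toy illustration of the general pattern (a Python function compiled to a CUDA kernel, one thread per readout channel), not the DUNE simulator itself: the kernel name, array shapes and "response" weights are all invented for the example. Setting `NUMBA_ENABLE_CUDASIM=1` lets the same code run on Numba's pure-Python CUDA simulator when no GPU is present.

    ```python
    import os
    os.environ.setdefault("NUMBA_ENABLE_CUDASIM", "1")  # fall back to the CUDA simulator without a GPU

    import numpy as np
    from numba import cuda

    @cuda.jit
    def pixel_current(charge, response, out):
        # Toy kernel: one thread integrates the induced current on one pixel.
        # charge:   (n_pixels, n_samples) deposited charge per time sample (made up)
        # response: (n_samples,) toy detector-response weights (made up)
        # out:      (n_pixels,) integrated current per pixel
        i = cuda.grid(1)
        if i < out.size:
            acc = 0.0
            for t in range(charge.shape[1]):
                acc += charge[i, t] * response[t]
            out[i] = acc

    n_pixels, n_samples = 256, 32
    rng = np.random.default_rng(0)
    charge = rng.random((n_pixels, n_samples))
    response = np.exp(-np.arange(n_samples) / 8.0)  # toy exponential response
    out = np.zeros(n_pixels)

    threads_per_block = 64
    blocks = (n_pixels + threads_per_block - 1) // threads_per_block
    pixel_current[blocks, threads_per_block](charge, response, out)
    ```

    The point of the pattern is that each pixel is handled by an independent thread, so the per-channel workload of a pixelated readout maps naturally onto the GPU's thread grid.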

    DUNE Offline Computing Conceptual Design Report

    This document describes the offline software and computing for the Deep Underground Neutrino Experiment (DUNE), in particular the conceptual design of the offline computing needed to accomplish its physics goals. Our emphasis in this document is the development of the computing infrastructure needed to acquire, catalog, reconstruct, simulate and analyze the data from the DUNE experiment and its prototypes. In this effort, we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves, and to provide computing that achieves the physics goals of the DUNE experiment.