
    Solving patients with rare diseases through programmatic reanalysis of genome-phenome data

    Funders: EC Seventh Framework Programme, FP7-HEALTH (grant 305444; doi: https://doi.org/10.13039/100011272); Ministerio de Economía y Competitividad (doi: https://doi.org/10.13039/501100003329); Generalitat de Catalunya (doi: https://doi.org/10.13039/501100002809); EC European Regional Development Fund (doi: https://doi.org/10.13039/501100008530); Instituto Nacional de Bioinformática; ELIXIR Implementation Studies; Centro de Excelencia Severo Ochoa.
    Reanalysis of inconclusive exome/genome sequencing data increases the diagnostic yield for patients with rare diseases. However, the cost and effort required for reanalysis prevent its routine implementation in research and clinical environments. The Solve-RD project aims to reveal the molecular causes underlying undiagnosed rare diseases. One of its goals is to implement innovative approaches to reanalyse the exomes and genomes of thousands of well-studied undiagnosed cases. The raw genomic data are submitted to Solve-RD through the RD-Connect Genome-Phenome Analysis Platform (GPAP) together with standardised phenotypic and pedigree data. We have developed a programmatic workflow to reanalyse genome-phenome data. It uses the RD-Connect GPAP's Application Programming Interface (API) and relies on the big-data technologies upon which the system is built. We applied the workflow to prioritise rare known pathogenic variants from 4411 undiagnosed cases. The queries returned an average of 1.45 variants per case, which were first evaluated in bulk by a panel of disease experts and then individually by the submitter of each case. A total of 120 index cases (21.2% of prioritised cases, 2.7% of all exome/genome-negative samples) have already been solved, with others under investigation. Solutions such as the one described here provide the technical framework to enable periodic case-level data re-evaluation in clinical settings, as recommended by the American College of Medical Genetics.
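The prioritisation step described above (programmatically filtering for rare, known-pathogenic variants) can be sketched as a simple filter. Everything here — field names, thresholds, and the example records — is a hypothetical illustration, not the actual RD-Connect GPAP API schema or query:

```python
# Hypothetical sketch of a rare-known-pathogenic prioritisation filter.
# Field names and the 1% frequency cutoff are illustrative assumptions,
# not the RD-Connect GPAP data model.

RARE_AF_THRESHOLD = 0.01  # population allele-frequency cutoff for "rare"

def prioritise(variants):
    """Keep variants that are both rare and already reported pathogenic."""
    return [
        v for v in variants
        if v["gnomad_af"] < RARE_AF_THRESHOLD
        and v["clinvar_significance"] == "Pathogenic"
    ]

# Toy case: three candidate variants, only one passes both criteria.
case = [
    {"gene": "ABCA4", "gnomad_af": 0.0001, "clinvar_significance": "Pathogenic"},
    {"gene": "TTN",   "gnomad_af": 0.2,    "clinvar_significance": "Pathogenic"},
    {"gene": "BRCA2", "gnomad_af": 0.0002, "clinvar_significance": "Benign"},
]

print([v["gene"] for v in prioritise(case)])  # prints ['ABCA4']
```

In the workflow itself such filters run server-side over thousands of cases at once, which is what makes bulk expert review of ~1.45 variants per case feasible.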

    Peripheral inflammatory markers contributing to comorbidities in autism

    This study evaluates the contribution of peripheral biomarkers to comorbidities and clinical findings in autism. Seventeen autistic children and age-matched typically developing (AMTD) children, between three and nine years old, were evaluated. Diagnosis followed the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM-IV), and the Childhood Autism Rating Scale (CARS) was applied to classify severity. Cytokine profiles were evaluated in plasma using a sandwich-type ELISA. Paraclinical evaluation included an electroencephalography (EEG) recording. Statistical analysis was performed to explore significant differences in cytokine profiles between the autism and AMTD groups and with respect to clinical and paraclinical parameters. Significant differences were found for IL-1β, IL-6, IL-17, IL-12p40, and IL-12p70 cytokines in individuals with autism compared with AMTD (p < 0.05). All autistic patients showed interictal epileptiform activity on EEG; however, only 37.5% suffered epilepsy. There was no regional focalization of the EEG-detectable abnormalities in autistic patients with a history of epilepsy. A higher IL-6 level was observed in patients without a history of epilepsy but with interictal epileptiform activity in the frontal brain region (p < 0.05). In conclusion, peripheral inflammatory markers might be useful as potential biomarkers to predict comorbidities in autism, as well as to reinforce and aid informed decision-making related to EEG findings in children with autism spectrum disorder (ASD).
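The group comparisons reported above (cytokine levels in autism vs. AMTD, p < 0.05) are the kind of result a nonparametric rank test yields. The abstract does not name the test used, so the following is a generic Mann-Whitney U sketch in plain Python, with invented illustrative values rather than the study's data:

```python
# Generic Mann-Whitney U statistic (rank-sum form), as one plausible
# nonparametric test for small two-group cytokine comparisons.
# The cytokine values below are invented for illustration only.

def rank(values):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    """U statistic for group a versus group b."""
    ranks = rank(a + b)
    r1 = sum(ranks[: len(a)])          # rank sum of the first group
    return r1 - len(a) * (len(a) + 1) / 2

# Illustrative IL-6-like values (arbitrary units, not study data):
il6_autism = [4.1, 5.3, 6.0, 7.2, 4.8]
il6_amtd = [1.2, 2.0, 1.8, 2.5, 1.1]
print(mann_whitney_u(il6_autism, il6_amtd))  # prints 25.0: complete separation (n1*n2)
```

An extreme U (near 0 or near n1·n2) corresponds to a small p-value; in practice a library routine would also supply the p-value and tie corrections.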

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
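The per-pixel independence that makes this simulation GPU-friendly can be sketched in plain Python. The Gaussian pulse model and all parameter values below are illustrative stand-ins, not the DUNE detector-response model; the point is the loop structure, in which, under Numba's `@cuda.jit`, each outer iteration would become one GPU thread (indexed via `cuda.grid(1)`):

```python
import math

# Toy sketch of the per-pixel structure such a simulator parallelises:
# the induced current on each pixel is independent, so each pixel can be
# computed by one CUDA thread. Pulse model and constants are illustrative.

N_PIXELS = 1000   # ~10^3 pixels, as in the timing comparison above
N_TICKS = 100     # time samples per pixel (arbitrary)
SIGMA = 5.0       # pulse width in ticks (arbitrary)

def induced_current(pixel, tick):
    """Illustrative Gaussian current pulse; peak time varies per pixel."""
    t0 = 50.0 + 0.01 * pixel
    return math.exp(-0.5 * ((tick - t0) / SIGMA) ** 2)

# CPU version: nested loops over pixels and time ticks. In the CUDA
# translation, the outer (pixel) loop is replaced by the thread grid.
waveforms = [
    [induced_current(p, t) for t in range(N_TICKS)]
    for p in range(N_PIXELS)
]
```

Because no pixel's waveform depends on any other's, the kernel launch is embarrassingly parallel, which is where the reported four-orders-of-magnitude speed-up comes from.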

    DUNE Offline Computing Conceptual Design Report

    This document describes Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), in particular, the conceptual design of the offline computing needed to accomplish its physics goals. Our emphasis in this document is the development of the computing infrastructure needed to acquire, catalog, reconstruct, simulate and analyze the data from the DUNE experiment and its prototypes. In this effort, we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves and to provide computing that achieves the physics goals of the DUNE experiment.

    The DUNE Far Detector Vertical Drift Technology, Technical Design Report

    DUNE is an international experiment dedicated to addressing some of the questions at the forefront of particle physics and astrophysics, including the mystifying preponderance of matter over antimatter in the early universe. The dual-site experiment will employ an intense neutrino beam focused on a near and a far detector as it aims to determine the neutrino mass hierarchy and to make high-precision measurements of the PMNS matrix parameters, including the CP-violating phase. It will also stand ready to observe supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector implements liquid argon time-projection chamber (LArTPC) technology, and combines the many tens-of-kiloton fiducial mass necessary for rare event searches with the sub-centimeter spatial resolution required to image those events with high precision. The addition of a photon detection system enhances physics capabilities for all DUNE physics drivers and opens prospects for further physics explorations. Given its size, the far detector will be implemented as a set of modules, with LArTPC designs that differ from one another as newer technologies arise. In the vertical drift LArTPC design, a horizontal cathode bisects the detector, creating two stacked drift volumes in which ionization charges drift towards anodes at either the top or bottom. The anodes are composed of perforated PCB layers with conductive strips, enabling reconstruction in 3D. Light-trap-style photon detection modules are placed both on the cryostat's side walls and on the central cathode, where they are optically powered. This Technical Design Report describes in detail the technical implementations of each subsystem of this LArTPC that, together with the other far detector modules and the near detector, will enable DUNE to achieve its physics goals.
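The geometry of the two stacked drift volumes implies a simple mapping from charge arrival time at an anode to a height inside the detector, which can be sketched as follows. The drift velocity and drift length used here are illustrative placeholder values at a typical LArTPC scale, not the TDR's engineering numbers:

```python
# Illustrative sketch of drift-coordinate reconstruction in the
# vertical-drift geometry: cathode at height 0, anodes at +/- L, charge
# in each volume drifting away from the cathode. Constants are assumed
# placeholders, not DUNE design values.

DRIFT_VELOCITY_CM_PER_US = 0.16   # typical LArTPC scale at ~500 V/cm
DRIFT_LENGTH_CM = 650.0           # assumed half-height of the active volume

def height_above_cathode(arrival_time_us, volume):
    """Reconstructed height of an ionization deposit from its arrival time.

    volume: "top" or "bottom" -- the two drift volumes created by the
    central horizontal cathode, read out at the top and bottom anodes.
    """
    d = DRIFT_VELOCITY_CM_PER_US * arrival_time_us  # distance drifted
    if not 0.0 <= d <= DRIFT_LENGTH_CM:
        raise ValueError("arrival time outside the drift window")
    if volume == "top":
        return DRIFT_LENGTH_CM - d      # charge drifted upward to +L
    return -(DRIFT_LENGTH_CM - d)       # charge drifted downward to -L
```

A deposit recorded with zero drift time sits at the anode itself; the maximum drift time corresponds to a deposit just off the central cathode.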
