24 research outputs found

    Tractography of developing white matter of the internal capsule and corpus callosum in very preterm infants

    To investigate, in preterm infants, associations between diffusion tensor imaging (DTI) parameters of the posterior limb of the internal capsule (PLIC) and corpus callosum (CC) and age, white matter (WM) injury and clinical factors. In 84 preterm infants, DTI was performed between 40 and 62 weeks postmenstrual age on a 3 T MR system. Fractional anisotropy (FA) values, apparent diffusion coefficient (ADC) values and fibre lengths through the PLIC and through the genu and splenium were determined. WM injury was categorised as normal/mildly, moderately or severely abnormal. Associations between DTI parameters and age, WM injury and clinical factors were analysed. A positive association existed between FA and age at imaging for fibres through the PLIC (r = 0.48, p < 0.001) and splenium (r = 0.24, p < 0.01). A negative association existed between ADC and age at imaging for fibres through the PLIC (r = -0.65, p < 0.001), splenium (r = -0.35, p < 0.001) and genu (r = -0.53, p < 0.001). No association was found between DTI parameters and gestational age, degree of WM injury or categorical clinical factors. These results indicate that in our cohort of very preterm infants, at this young age, the development of the PLIC and CC is ongoing and independent of the degree of prematurity or WM injury.
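    The study above reports Pearson correlation coefficients (r) between DTI parameters and age at imaging. As a minimal sketch of that kind of analysis — using synthetic stand-in data, not the cohort's actual measurements or the authors' pipeline — the r value can be computed with NumPy:

    ```python
    import numpy as np

    # Hypothetical stand-ins for the study variables: 84 infants imaged
    # between 40 and 62 weeks postmenstrual age, with FA rising with age.
    rng = np.random.default_rng(0)
    age_weeks = rng.uniform(40, 62, size=84)                    # age at imaging
    fa_plic = 0.01 * age_weeks + rng.normal(0, 0.05, size=84)   # toy FA values

    # Pearson correlation coefficient between age and FA through the PLIC.
    r = np.corrcoef(age_weeks, fa_plic)[0, 1]
    ```

    A significance level (the reported p-values) would typically come from `scipy.stats.pearsonr`, which returns both r and p in one call.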

    National records of 3000 European bee and hoverfly species: A contribution to pollinator conservation

    Pollinators play a crucial role in ecosystems globally, ensuring the seed production of most flowering plants. They are threatened by global changes, and knowledge of their distribution at the national and continental levels is needed to implement efficient conservation actions, but this knowledge is still fragmented and/or difficult to access. As a step forward, we provide an updated list of around 3000 European bee and hoverfly species, reflecting their current distributional status at the national level (in the form of present, absent, regionally extinct, possibly extinct or non-native). This work was made possible by incorporating both published and unpublished data, as well as knowledge from a large set of taxonomists and ecologists in both groups. After providing the first national species lists for bees and hoverflies for many countries, we examine the current distributional patterns of these species and identify the countries with the highest levels of species richness. We also show that many species are recorded in a single European country, highlighting the importance of articulating European and national conservation strategies. Finally, we discuss how the data provided here can be combined with future trait and Red List data to implement research that will further advance pollinator conservation.

    DUNE Offline Computing Conceptual Design Report

    This document describes Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), in particular the conceptual design of the offline computing needed to accomplish its physics goals. Our emphasis in this document is the development of the computing infrastructure needed to acquire, catalog, reconstruct, simulate and analyze the data from the DUNE experiment and its prototypes. In this effort, we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves and to provide computing that achieves the physics goals of the DUNE experiment.

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
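    The Numba pattern described above — write a per-channel computation once in Python, then compile it into a CUDA kernel — can be sketched as follows. This is a toy Gaussian charge-sharing model, not the paper's microphysical simulator, and the function and variable names are illustrative; a NumPy fallback keeps the sketch runnable on machines without a GPU or without Numba installed:

    ```python
    import math
    import numpy as np

    def induced_charge_cpu(pixel_xs, track_x, sigma):
        """Toy model: Gaussian charge induced on each pixel by a track at track_x."""
        return np.exp(-0.5 * ((pixel_xs - track_x) / sigma) ** 2)

    try:
        from numba import cuda

        @cuda.jit
        def induced_charge_kernel(pixel_xs, track_x, sigma, out):
            # One GPU thread per pixel: the work parallelises naturally
            # over the ~10^3 channels of a pixelated readout.
            i = cuda.grid(1)
            if i < pixel_xs.size:
                d = (pixel_xs[i] - track_x) / sigma
                out[i] = math.exp(-0.5 * d * d)

        def induced_charge(pixel_xs, track_x, sigma):
            if not cuda.is_available():        # no GPU: fall back to NumPy
                return induced_charge_cpu(pixel_xs, track_x, sigma)
            out = np.empty_like(pixel_xs)
            threads = 128
            blocks = (pixel_xs.size + threads - 1) // threads
            induced_charge_kernel[blocks, threads](pixel_xs, track_x, sigma, out)
            return out
    except ImportError:                        # Numba not installed: NumPy path
        induced_charge = induced_charge_cpu

    # Charge pattern on 1000 pixels for a track crossing at x = 0.5.
    charge = induced_charge(np.linspace(0.0, 1.0, 1000), 0.5, 0.1)
    ```

    The same kernel body serves both execution paths, which is the main appeal of the Numba approach: algorithm development stays in Python while the deployed code runs as compiled CUDA.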
