15 research outputs found

    Down syndrome: genetic bases and main alterations

    Down syndrome was first described by the physician John Langdon Down in 1866, based on shared physical features observed in children with intellectual disability; the condition was then termed "mongolism" so that the set of manifestations being observed could be defined. Understanding the pathophysiological bases of Down syndrome (DS) is of great importance, since this knowledge can support new pharmacological and non-pharmacological therapies specific to individuals with DS. Indeed, alongside genetic factors, epigenetic factors such as cellular interactions also shape the phenotype of patients with DS, which accounts for the range of alterations observed, including congenital complications as well as auditory, visual, and cardiac involvement

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speedup of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10³ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype

    Evaluation of a quality improvement intervention to reduce anastomotic leak following right colectomy (EAGLE): pragmatic, batched stepped-wedge, cluster-randomized trial in 64 countries

    Background Anastomotic leak affects 8 per cent of patients after right colectomy, with a 10-fold increased risk of postoperative death. The EAGLE study aimed to develop and test whether an international, standardized quality improvement intervention could reduce anastomotic leaks. Methods The intervention protocol, iteratively co-developed by a multistage Delphi process, comprised an online educational module introducing risk stratification, an intraoperative checklist, and harmonized surgical techniques. Clusters (hospital teams) were randomized to one of three arms with varied sequences of intervention/data collection according to a stepped-wedge batch design (at least 18 hospital teams per batch). Patients were blinded to the study allocation. Low- and middle-income country enrolment was encouraged. The primary outcome (assessed by intention to treat) was anastomotic leak rate, and subgroup analyses by module completion (at least 80 per cent of surgeons, high engagement; less than 50 per cent, low engagement) were preplanned. Results A total of 355 hospital teams registered, with 332 from 64 countries (39.2 per cent low and middle income) included in the final analysis. The online modules were completed by half of the surgeons (2143 of 4411). The primary analysis included 3039 of the 3268 patients recruited (206 patients had no anastomosis and 23 were lost to follow-up); anastomotic leaks arose before and after the intervention in 10.1 and 9.6 per cent of patients respectively (adjusted OR 0.87, 95 per cent c.i. 0.59 to 1.30; P = 0.498). The proportion of surgeons completing the educational modules made a difference: the leak rate decreased from 12.2 per cent (61 of 500) before the intervention to 5.1 per cent (24 of 473) after it in high-engagement centres (adjusted OR 0.36, 0.20 to 0.64; P < 0.001), but this was not observed in low-engagement hospitals (8.3 per cent (59 of 714) and 13.8 per cent (61 of 443) respectively; adjusted OR 2.09, 1.31 to 3.31). Conclusion Completion of globally available digital training by engaged teams can alter anastomotic leak rates. Registration number: NCT04270721 (http://www.clinicaltrials.gov)

    Delayed colorectal cancer care during covid-19 pandemic (decor-19). Global perspective from an international survey

    Background The widespread nature of coronavirus disease 2019 (COVID-19) has been unprecedented. We sought to analyze its global impact with a survey on colorectal cancer (CRC) care during the pandemic. Methods The impact of COVID-19 on preoperative assessment, elective surgery, and postoperative management of CRC patients was explored by a 35-item survey, distributed worldwide to members of surgical societies with an interest in CRC care. Respondents were divided into two comparator groups: 1) ‘delay’ group: CRC care affected by the pandemic; 2) ‘no delay’ group: unaltered CRC practice. Results A total of 1,051 respondents from 84 countries completed the survey. No substantial differences in demographics were found between the ‘delay’ (745, 70.9%) and ‘no delay’ (306, 29.1%) groups. Suspension of multidisciplinary team meetings, staff members quarantined or relocated to COVID-19 units, units fully dedicated to COVID-19 care, and personal protective equipment not being readily available were factors significantly associated with delays in endoscopy, radiology, surgery, and histopathology, and with prolonged chemoradiation therapy-to-surgery intervals. In the ‘delay’ group, 48.9% of respondents reported a change in the initial surgical plan and 26.3% reported a shift from elective to urgent operations. Recovery of CRC care was associated with the status of the outbreak. Practicing in COVID-free units, no change in operative slots, and staff members not being relocated to COVID-19 units were statistically associated with unaltered CRC care in the ‘no delay’ group, while geographical distribution was not. Conclusions Global changes in diagnostic and therapeutic CRC practices were evident. Changes were associated with differences in health-care delivery systems, hospital preparedness, resource availability, and local COVID-19 prevalence rather than geographical factors. Strategic planning is required to optimize CRC care

    Search for Scalar Diphoton Resonances in the Mass Range 65–600 GeV with the ATLAS Detector in pp Collision Data at √s = 8 TeV

    A search for scalar particles decaying via narrow resonances into two photons in the mass range 65–600 GeV is performed using 20.3 fb⁻¹ of √s = 8 TeV pp collision data collected with the ATLAS detector at the Large Hadron Collider. The recently discovered Higgs boson is treated as a background. No significant evidence for an additional signal is observed. The results are presented as limits at the 95% confidence level on the production cross section of a scalar boson times branching ratio into two photons, in a fiducial volume where the reconstruction efficiency is approximately independent of the event topology. The upper limits set extend over a considerably wider mass range than previous searches

    Measurements of the Total and Differential Higgs Boson Production Cross Sections Combining the H → γγ and H → ZZ* → 4ℓ Decay Channels at √s = 8 TeV with the ATLAS Detector

    Measurements of the total and differential cross sections of Higgs boson production are performed using 20.3 fb⁻¹ of pp collisions produced by the Large Hadron Collider at a center-of-mass energy of √s = 8 TeV and recorded by the ATLAS detector. Cross sections are obtained from measured H → γγ and H → ZZ* → 4ℓ event yields, which are combined accounting for detector efficiencies, fiducial acceptances, and branching fractions. Differential cross sections are reported as a function of Higgs boson transverse momentum, Higgs boson rapidity, number of jets in the event, and transverse momentum of the leading jet. The total production cross section is determined to be σ(pp → H) = 33.0 ± 5.3 (stat) ± 1.6 (syst) pb. The measurements are compared to state-of-the-art predictions

    Search for Higgs and Z Boson Decays to J/ψ γ and Υ(nS) γ with the ATLAS Detector

    A search for the decays of the Higgs and Z bosons to J/ψ γ and Υ(nS) γ (n = 1, 2, 3) is performed with pp collision data samples corresponding to integrated luminosities of up to 20.3 fb⁻¹ collected at √s = 8 TeV with the ATLAS detector at the CERN Large Hadron Collider. No significant excess of events is observed above expected backgrounds, and 95% CL upper limits are placed on the branching fractions. In the J/ψ γ final state the limits are 1.5×10⁻³ and 2.6×10⁻⁶ for the Higgs and Z bosons, respectively, while in the Υ(1S, 2S, 3S) γ final states the limits are (1.3, 1.9, 1.3)×10⁻³ and (3.4, 6.5, 5.4)×10⁻⁶, respectively