15 research outputs found

    Gesture-recognition-based system for determining the symptomatology of deaf-mute patients during the anamnesis process

    To develop a gesture-recognition system for determining the symptomatology of deaf-mute patients during anamnesis, an analysis phase was carried out first. It consisted of investigating the anamnesis process with deaf-mute patients: what the process consists of, which questions are most relevant during it, and which answers deaf-mute patients most commonly give to the physician's questions.

    The second phase, development, began by establishing the functional requirements that the symptomatological gesture-recognition application had to integrate, followed by the design of the system architecture, which shows the elements present in the system. Based on the most common patient responses identified in the analysis phase, clips of each symptom were recorded with Kinect Studio V2, and the gesture databases were then built with Visual Gesture Builder (VGB): ¿Qué sientes? (What do you feel?), ¿Sientes algo más? (Do you feel anything else?), ¿Dónde te duele? (Where does it hurt?) and ¿Te duele algo más? (Does anything else hurt?), each containing two symptoms.

    Application development then proceeded in steps. First, the views (home, patient and doctor data, doctor module and patient module) were designed in Visual Studio, using Windows Forms for each window. Second, each window was coded to give functionality to its buttons, labels, panels and other controls. Third, the Microsoft Kinect library was added to the project to connect the application to the Kinect V2 device. Fourth, the previously created databases were integrated into the project, with the Microsoft Kinect Visual Gesture Builder library providing the connection between the application and each database. Fifth, the AdaBoostTech library was integrated; its function is to search for patterns and match what the Kinect captures against the symptoms stored in the databases. Finally, interaction aids were added: an introductory video with recommendations for carrying out the anamnesis process, and help screens (one per question) showing how to respond to each of the doctor's four questions.

    The final phase, gesture-recognition testing, evaluated whether the distance from the Kinect to the patient and the patient's height improve or worsen recognition. Four people with heights of 1.2 m, 1.46 m, 1.5 m and 1.63 m were each placed at distances of 1.5 m, 2.3 m and 3 m from the Kinect. The tests led to the conclusion that the patient should stand 2.3 m from the Kinect to achieve good recognition, whether the person is 1.2 m, 1.63 m or taller.
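    The recognition flow described above runs in a C# Windows Forms application against the Microsoft Kinect SDK and Visual Gesture Builder databases. As a language-neutral illustration of the per-frame decision logic only, here is a minimal Python sketch, assuming a classifier that reports a confidence per symptom; the symptom labels and threshold are hypothetical, and AdaBoostTech's actual interface is not shown:

```python
# Hypothetical sketch of the per-frame matching step; the real system is
# C# + Microsoft Kinect SDK + Visual Gesture Builder, not this code.

CONFIDENCE_THRESHOLD = 0.75  # assumed minimum classifier confidence

# One gesture database per question, two symptoms each (as in the project);
# the symptom labels here are invented for illustration.
GESTURE_DATABASES = {
    "¿Qué sientes?": ["fiebre", "mareo"],
    "¿Sientes algo más?": ["náuseas", "tos"],
    "¿Dónde te duele?": ["cabeza", "estómago"],
    "¿Te duele algo más?": ["pecho", "espalda"],
}

def classify_frame(question, frame_scores):
    """Pick the best-matching symptom for one Kinect body frame.

    frame_scores maps each symptom in the active question's database to
    the confidence reported by the (AdaBoost-style) gesture classifier.
    """
    scores = {s: frame_scores.get(s, 0.0) for s in GESTURE_DATABASES[question]}
    best = max(scores, key=scores.get)
    # Only report a symptom when the classifier is confident enough.
    return best if scores[best] >= CONFIDENCE_THRESHOLD else None

# Example: confidences as they might arrive from the detector.
print(classify_frame("¿Qué sientes?", {"fiebre": 0.91, "mareo": 0.22}))  # fiebre
```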

    36-month clinical outcomes of patients with venous thromboembolism: GARFIELD-VTE

    Background: Venous thromboembolism (VTE), encompassing both deep vein thrombosis (DVT) and pulmonary embolism (PE), is a leading cause of morbidity and mortality worldwide.

    Methods: GARFIELD-VTE is a prospective, non-interventional observational study of real-world treatment practices. We aimed to capture the 36-month clinical outcomes of 10,679 patients with objectively confirmed VTE enrolled between May 2014 and January 2017 at 415 sites in 28 countries.

    Findings: A total of 6582 (61.6%) patients had DVT alone and 4097 (38.4%) had PE with or without DVT. At baseline, 98.1% of patients received anticoagulation (AC) with or without other treatment modalities. The proportion of patients on AC therapy decreased over time: 87.6% at 3 months, 73.0% at 6 months, 54.2% at 12 months and 42.0% at 36 months. At 12 months' follow-up, the incidences (95% confidence interval [CI]) of all-cause mortality, recurrent VTE and major bleeding were 7.5 (7.0-8.1), 5.4 (4.9-5.9) and 2.7 (2.4-3.0) per 100 person-years, respectively. At 36 months, these had decreased to 4.4 (4.2-4.7), 3.5 (3.2-3.7) and 1.4 (1.3-1.6) per 100 person-years, respectively. Over 36 months, the rates of all-cause mortality and major bleeding were highest in patients treated with parenteral therapy (PAR) versus oral anticoagulants (OAC) and no OAC, and the rate of recurrent VTE was highest in patients on no OAC versus those on PAR or OAC. The most frequent cause of death over the 36-month follow-up was cancer (n = 565, 48.6%), followed by cardiac causes (n = 94, 8.1%) and VTE (n = 38, 3.2%). Most recurrent VTE events were DVT alone (n = 564, 63.3%), with the remainder PE (n = 236, 27.3%) or PE in combination with DVT (n = 63, 7.3%).

    Interpretation: GARFIELD-VTE provides a global perspective on anticoagulation patterns and highlights the accumulation of events within the first 12 months after diagnosis. These findings may help identify treatment gaps for subsequent interventions to improve outcomes in this patient population.
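    The incidences above are quoted per 100 person-years with 95% confidence intervals. A minimal sketch of how such figures are produced, assuming an exact (Garwood) Poisson interval, one common choice and not necessarily the study's method; the counts below are made up, since the abstract reports only the final rates:

```python
# Sketch: an incidence rate per 100 person-years with an exact (Garwood)
# Poisson 95% CI. Counts are hypothetical; only the final rates appear
# in the abstract, and the study's exact CI method is not stated there.
from scipy.stats import chi2

def rate_per_100py(events, person_years, alpha=0.05):
    """Return (rate, lower, upper) per 100 person-years."""
    rate = 100.0 * events / person_years
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return rate, 100.0 * lower / person_years, 100.0 * upper / person_years

# e.g. 540 events over 10,000 person-years (made-up numbers)
print(rate_per_100py(540, 10_000))  # approximately (5.4, 5.0, 5.9)
```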

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
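    The Python-to-CUDA workflow mentioned above can be illustrated with a minimal Numba kernel. This is a sketch of the idiom, not the experiment's simulator: a hypothetical kernel that assigns one GPU thread per ionization electron and accumulates its charge onto a pixel grid with an atomic add; the geometry, units and charges are invented for illustration.

```python
# Minimal Numba CUDA sketch of the pattern the abstract describes: Python
# code compiled to a GPU kernel. Not the experiment's simulator.
import numpy as np
from numba import cuda

@cuda.jit
def deposit_charge(xs, ys, charges, pixel_grid, pitch):
    """One thread per ionization electron: bin its charge onto a pixel."""
    i = cuda.grid(1)                       # absolute thread index
    if i < xs.shape[0]:
        col = int(xs[i] / pitch)
        row = int(ys[i] / pitch)
        if 0 <= row < pixel_grid.shape[0] and 0 <= col < pixel_grid.shape[1]:
            # Atomic add: many electrons can land on the same pixel.
            cuda.atomic.add(pixel_grid, (row, col), charges[i])

n = 1_000_000                              # number of drifting electrons
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 30.0, n).astype(np.float32)   # hypothetical extent [cm]
ys = rng.uniform(0.0, 30.0, n).astype(np.float32)
qs = np.ones(n, dtype=np.float32)          # unit charge per electron
grid = np.zeros((64, 64), dtype=np.float32)  # ~4k pixels, the 10^3 scale

threads = 256
blocks = (n + threads - 1) // threads
deposit_charge[blocks, threads](xs, ys, qs, grid, np.float32(30.0 / 64))
print(grid.sum())                          # total deposited charge: n
```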

    Impact of cross-section uncertainties on supernova neutrino spectral parameter fitting in the Deep Underground Neutrino Experiment

    A primary goal of the upcoming Deep Underground Neutrino Experiment (DUNE) is to measure the O(10) MeV neutrinos produced by a Galactic core-collapse supernova, should one occur during the lifetime of the experiment. The liquid-argon-based detectors planned for DUNE are expected to be uniquely sensitive to the νe component of the supernova flux, enabling a wide variety of physics and astrophysics measurements. A key requirement for a correct interpretation of these measurements is a good understanding of the energy-dependent total cross section σ(Eν) for charged-current νe absorption on argon. In the context of a simulated extraction of supernova νe spectral parameters from a toy analysis, we investigate for the first time the impact of σ(Eν) modeling uncertainties on DUNE's supernova neutrino physics sensitivity. We find that the currently large theoretical uncertainties on σ(Eν) must be substantially reduced before the νe flux parameters can be extracted reliably; in the absence of external constraints, a measurement of the integrated neutrino luminosity with less than 10% bias with DUNE requires σ(Eν) to be known to about 5%. The neutrino spectral shape parameters can be known to better than 10% for a 20% uncertainty on the cross-section scale, although they will be sensitive to uncertainties on the shape of σ(Eν). A direct measurement of low-energy νe-argon scattering would be invaluable for improving the theoretical precision to the needed level.
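    For context, analyses of this kind commonly parameterize the supernova νe flux with a pinched-thermal spectrum, φ(E) ∝ (E/⟨E⟩)^α exp[−(α+1)E/⟨E⟩]; that parameterization is assumed here, since the abstract does not state one. The sketch below, with a deliberately crude placeholder cross section, shows the degeneracy driving the quoted requirement: a rate-only measurement cannot separate a rescaling of σ(Eν) from a rescaling of the luminosity.

```python
# Sketch of the luminosity / cross-section-scale degeneracy. The
# pinched-thermal spectrum is assumed (standard in this field, but not
# specified in the abstract); sigma(E) ~ E^2 is a crude placeholder.
import numpy as np

E = np.linspace(1.0, 60.0, 600)            # neutrino energy [MeV]
dE = E[1] - E[0]

def pinched_spectrum(E, E_mean=12.0, alpha=2.5):
    """Pinched-thermal nu_e spectrum, normalized to unit integral."""
    phi = (E / E_mean) ** alpha * np.exp(-(alpha + 1.0) * E / E_mean)
    return phi / (phi.sum() * dE)

phi = pinched_spectrum(E)                  # illustrative <E> and alpha
sigma = E ** 2                             # toy cross section (arb. units)

L = 1.0                                    # integrated luminosity (arb.)
rate = L * np.sum(phi * sigma) * dE        # expected event rate

# Scale sigma up by 10% and the fitted luminosity down by 10%: the
# predicted rate is unchanged, so a rate-only fit cannot tell the two
# apart without external knowledge of sigma(E).
rate_alt = (L / 1.10) * np.sum(phi * 1.10 * sigma) * dE
print(np.isclose(rate, rate_alt))          # True
```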

    DUNE Offline Computing Conceptual Design Report

    This document describes Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), in particular the conceptual design of the offline computing needed to accomplish its physics goals. Our emphasis is on the development of the computing infrastructure needed to acquire, catalog, reconstruct, simulate and analyze the data from the DUNE experiment and its prototypes. In this effort, we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves and to provide computing that achieves the physics goals of the DUNE experiment.
