
    Methods of visualization and analysis of cardiac depolarization in three-dimensional space

    This master's thesis presents methods for the intelligent analysis and visualization of 3D ECG signals, aiming to increase the efficiency of ECG analysis by extracting additional information. Visualization is treated as part of the signal analysis task: the considered imaging techniques and their mathematical descriptions are presented. Algorithms for computing and visualizing signal attributes have been developed, and the mathematical methods and tools used for mining the signal are described. A pattern-search model was built to compare the accuracy of the methods, clustering and classification problems were solved, and a data visualization program was developed. This approach yields the highest accuracy in the intelligent-analysis task, as confirmed in this work. The visualization and analysis techniques considered are also applicable to multidimensional signals of other kinds.
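
    A minimal sketch of the kind of pipeline the thesis describes: compute per-beat signal attributes and cluster them to group similar beats. The feature choices and synthetic beats below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

def beat_features(beat: np.ndarray) -> np.ndarray:
    """Toy per-beat attributes: peak amplitude, energy, and relative peak position."""
    return np.array([beat.max(), np.sum(beat ** 2), np.argmax(beat) / len(beat)])

rng = np.random.default_rng(0)
beats = [rng.normal(size=200) for _ in range(50)]        # placeholder beats
X = np.vstack([beat_features(b) for b in beats])         # feature matrix (50 x 3)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)  # group similar beats
print(labels)
```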

    Variability of ventricular repolarization dispersion quantified by time-warping the morphology of the T-Waves

    Objective: We propose two electrocardiogram (ECG)-derived markers of T-wave morphological variability in the temporal, dw, and amplitude, da, domains. Two additional markers, dwNL and daNL, restricted to measure the nonlinear information present within dw and da, are also proposed. Methods: We evaluated the accuracy of the proposed markers in capturing T-wave time and amplitude variations in three situations: 1) in a simulated set-up with additive Laplacian noise, 2) when modifying the spatio-temporal distribution of electrical repolarization with an electro-physiological cardiac model, and 3) in ECG records from healthy subjects undergoing a tilt table test. Results: The metrics dw, da, dwNL, and daNL followed T-wave time- and amplitude-induced variations under different levels of noise, were strongly associated with changes in the spatio-temporal dispersion of repolarization, and were shown to provide information additional to differences in the heart rate, the QT and Tpe intervals, and the T-wave width and amplitude. Conclusion: The proposed ECG-derived markers robustly quantify T-wave morphological variability and are strongly associated with changes in the dispersion of repolarization. Significance: The proposed ECG-derived markers can help quantify the variability in the dispersion of ventricular repolarization, showing great potential to be used as arrhythmic risk predictors in clinical situations.
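
    The warping idea behind the temporal and amplitude markers can be sketched as follows. This hedged Python example uses plain dynamic time warping to align two toy T-waves and summarizes the alignment as a time marker (mean deviation of the warping path from the identity) and an amplitude marker (mean residual after alignment). The paper's actual markers are built on the square-root slope function framework, which this sketch does not implement.

```python
import numpy as np

def dtw_path(x, y):
    """Dynamic-programming DTW; returns the optimal alignment path."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(x[i - 1] - y[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m                      # backtrack from the corner
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0: i, j = i - 1, j - 1
        elif step == 1: i -= 1
        else: j -= 1
    return path[::-1]

t = np.linspace(0, 1, 100)
ref = np.exp(-((t - 0.50) / 0.10) ** 2)        # reference T-wave (toy Gaussian)
obs = 1.1 * np.exp(-((t - 0.55) / 0.12) ** 2)  # shifted and scaled T-wave
path = np.array(dtw_path(ref, obs))
d_w = np.mean(np.abs(t[path[:, 0]] - t[path[:, 1]]))      # temporal marker analog
d_a = np.mean(np.abs(ref[path[:, 0]] - obs[path[:, 1]]))  # amplitude marker analog
print(d_w, d_a)
```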

    Prediction of Cardiac Death Risk by Analysis of Ventricular Repolarization Restitution from the Electrocardiogram Signal

    Cardiovascular diseases remain the leading cause of death worldwide, and the number of cases is expected to grow progressively in the coming years as the population ages. Non-invasive markers with high predictive power for death are therefore needed to reduce the incidence of these fatal events. Chronic Heart Failure (CHF) describes the condition in which the heart is unable to pump enough blood to meet the body's demands. It has been shown that patients with CHF can experience a progressive worsening of symptoms, possibly leading to Pump Failure Death (PFD), or suffer malignant arrhythmic events leading to Sudden Cardiac Death (SCD). One of the electro-physiological factors with the greatest influence on the generation of malignant arrhythmias is an increase in the dispersion of repolarization, that is, the spatio-temporal variation in repolarization times. It has also been shown that the response of this dispersion to variations in heart rate, i.e., the dispersion of repolarization restitution, is related to higher arrhythmic and SCD risk. In addition, the worsening of CHF manifests as a reduced response of the ventricles to autonomic stimulation and an abnormal sympatho-vagal balance. With the arrival of Implantable Cardioverter Defibrillators (ICDs) and Cardiac Resynchronization Therapy (CRT), the two devices most commonly used in clinical practice to prevent SCD and PFD, respectively, risk stratification has become highly relevant. Specifically, being able to predict the potential event a CHF patient might suffer (SCD, PFD or other causes) is of great importance. The electrocardiogram (ECG) is a cheap, non-invasive method that contains important information about the electrical activity of the heart.
    The main objective of this thesis is to develop ECG-derived risk markers that characterize ventricular repolarization restitution, to improve the prediction of SCD and PFD in CHF patients. For this purpose we used, on the one hand, indices based on temporal intervals, such as the QT and Tpe intervals, since the dynamics of these intervals are associated with repolarization restitution and with its dispersion, respectively, and, on the other hand, indices based on T-wave morphology. To exploit the morphological information, an innovative methodology was developed that allows two different shapes to be compared and their differences quantified.
    In chapter 2, a fully automatic algorithm was developed to estimate the slope and curvature of the QT and Tpe interval dynamics from 24-hour Holter ECG recordings of 651 CHF patients. The modulation of the proposed estimates by the circadian pattern was then studied, and their predictive value for SCD and PFD was evaluated. Finally, the classification performance of the analyzed marker with the highest predictive value was studied, individually and in combination with two previously proposed ECG risk markers reflecting electro-physiological and autonomic mechanisms. The results showed that the dispersion of repolarization restitution, quantified from the slope of the Tpe interval dynamics, has predictive value for SCD and PFD, with steep slopes indicating an arrhythmic substrate predisposing to SCD and flat slopes indicating mechanical fatigue of the heart predisposing to PFD. However, the slope of repolarization restitution, quantified as the slope of the QT/RR relation, as well as the curvature parameters of the two relations, showed no association with either type of cardiac death. The circadian pattern modulated these parameters, with significantly higher values during the day than during the night. Finally, the classification results proved that combining ECG-derived risk markers that reflect complementary information improves the discrimination between SCD, PFD and other patients. Our results suggest that the slope of the Tpe interval dynamics could be included in clinical practice as a tool to stratify patients according to their risk of SCD or PFD and thus increase the benefit of ICD or CRT treatment.
    Considering these results, we then postulated that T-wave morphology contains additional information not captured when using only interval-based indices. Therefore, in chapter 3 we developed a methodology to compare the morphology of two T-waves, and we proposed and evaluated the ability of new ECG-derived markers to quantify variations in T-wave morphology. First, we compared the ability of two algorithms, Dynamic Time Warping (DTW) and the Square-root Slope Function (SRSF), to remove variability in the temporal domain. Then, morphological indices were proposed and their robustness in the presence of additive noise was evaluated with synthetically generated signals. Next, an electro-physiological cardiac model was used to investigate the relation between the T-wave morphological variability indices and morphological changes at the cellular level. Finally, the variations in T-wave morphology produced by a tilt table test were quantified in ECG recordings with the proposed markers, and their correlation with heart rate and other traditional markers was studied. Our results showed that SRSF was able to separate the time and amplitude variations of the T-wave. Moreover, the proposed morphological variability markers proved to be robust against additive Laplacian noise and were shown to reflect variations in the dispersion of repolarization at the cellular level, both in simulation and in real ECG recordings. In conclusion, the proposed indices quantifying T-wave morphological variations have shown great potential to be used as arrhythmic risk predictors.
    In chapter 4, ventricular repolarization restitution was explored using the morphological variability indices presented in chapter 3. Under the hypothesis that T-wave morphology reflects the dispersion of repolarization, we hypothesized that the restitution of T-wave morphology would reflect the dispersion of repolarization restitution. We therefore computed the slope of the T-wave morphology restitution and evaluated its predictive value for SCD and PFD. As in chapter 2, we also studied the circadian pattern modulation and the classification performance. The results showed that the dispersion of repolarization restitution, quantified through the slope of the T-wave morphology restitution, was specifically associated with SCD, with no relation to PFD. The circadian pattern also modulated the T-wave morphology restitution, with significantly higher values during the day than during the night. Finally, the classification results also improved when using a combination of ECG-derived risk markers. In conclusion, the slope of the T-wave morphology restitution could be used in clinical practice as a tool to define a population at high risk of SCD that could benefit from ICD implantation.
    Finally, although a single index with high predictive value is desirable, SCD and PFD events are the result of a chain of multiple mechanisms, so prediction could be further improved by using a marker that integrates several risk factors. In chapter 5, clinical models, ECG-based models and models combining both types of variables were proposed to specifically predict SCD and PFD risk, and their predictive values were compared. The clinical, ECG-based and combined models were shown to improve the prediction of SCD and PFD compared with individual markers. For SCD, combining clinical and ECG-derived variables substantially improved risk prediction compared with using either type of variable alone. However, PFD risk prediction proved to be optimal when using the ECG-derived model, since combining it with clinical variables added no PFD-predictive information. Our results confirm the need for a multi-factorial index, including information from complementary mechanisms, to optimize SCD and PFD risk stratification.
    In conclusion, this thesis proposes two ECG-derived indices that reflect the dispersion of repolarization restitution and demonstrates their predictive value for SCD and PFD. Each index exploits different information from the T-wave: one uses the Tpe interval and the other uses the full T-wave morphology. To quantify the differences in T-wave morphology, a robust methodology based on time re-parameterization was developed.
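
    As a rough illustration of the interval-dynamics markers of chapter 2, the sketch below regresses a synthetic Tpe series on the RR interval and takes the slope as the restitution marker. The thesis estimates these dynamics with a fully automatic algorithm on 24-hour Holter data, which this toy example does not reproduce; all values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
rr = np.linspace(0.6, 1.2, 200)                           # RR intervals (s)
tpe = 0.08 + 0.03 * rr + 0.002 * rng.normal(size=200)     # toy Tpe/RR relation
slope, intercept = np.polyfit(rr, tpe, 1)                 # restitution slope marker
print(f"Tpe/RR slope: {slope:.4f}")
```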

    Extraction and denoising of fetal ECG signals

    Congenital heart defects are the leading cause of birth defect-related deaths. The fetal electrocardiogram (fECG), which is believed to contain much more information than conventional sonographic methods, can be measured by placing electrodes on the mother's abdomen. However, it has very low power and is mixed with several sources of noise and interference, including the strong maternal ECG (mECG). In previous studies, several methods have been proposed for the extraction of fECG signals recorded from the maternal body surface. However, these methods require a large number of sensors, and are ineffective with only one or two sensors. In this study, three innovative approaches based on state-space, statistical and deterministic (algebraic) parameterizations are proposed for capturing weak traces of fetal cardiac signals. These three methods implement different models of the quasi-periodicity of the cardiac signal. In the first approach, the heart rate and its variability are modeled by a Kalman filter. In the second approach, the signal is divided into windows according to the beats; stacking the windows constructs a tensor that is then decomposed. In the third approach, the signal is not modeled directly, but is considered as a Gaussian process characterized by its second-order statistics. In all the proposed methods, unlike previous studies, the mECG and fECG(s) are explicitly modeled. The performance of the proposed methods, which use a minimal number of electrodes, is assessed on synthetic data and on real recordings, including cardiac signals from twin fetuses.
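
    A hedged sketch of the beat-stacking idea in the second approach: beat-synchronous windows are stacked (here as a single beats x samples matrix, one slice of the full channels x beats x samples tensor) and a low-rank decomposition separates the dominant maternal component. All signals below are synthetic placeholders, not the thesis's actual model.

```python
import numpy as np

fs, beat_len, n_beats = 500, 400, 60
t = np.arange(beat_len) / fs
mecg = np.exp(-((t - 0.4) ** 2) / 0.001)              # toy maternal beat shape
rng = np.random.default_rng(1)

beats = []
for k in range(n_beats):
    fetal_pos = (37 * k) % beat_len                   # fetal beat drifts vs maternal
    fetal = 0.1 * np.exp(-((np.arange(beat_len) - fetal_pos) ** 2) / 20.0)
    beats.append(mecg + fetal + 0.02 * rng.normal(size=beat_len))
beats = np.vstack(beats)                              # beats x samples matrix

U, s, Vt = np.linalg.svd(beats, full_matrices=False)
maternal = s[0] * np.outer(U[:, 0], Vt[0])            # rank-1 maternal estimate
residual = beats - maternal                           # fetal trace + noise remain
print(residual.std())
```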

    Characterization of physical phenomena by sparse analysis of transient signals

    Because of their uniqueness, transients are very difficult to characterize. They are encountered everywhere and are generally the result of very complex physical phenomena carrying a lot of information, such as the originating transient, the effect of propagation through the medium, and the effects induced by the transducers. They can correspond to communication between mammals as well as reflect a fault in an electrical or hydraulic network, for instance. Their study is therefore of great importance, even though it is quite complicated. Numerous signal processing methods have been developed in recent decades: they often rely on statistical approaches, linear projections of the signal onto dictionaries, and data-driven techniques. All these methods have pros and cons: they often provide good detection, but characterization for classification and discrimination purposes remains complicated. In this spirit, this thesis proposes new approaches to the study of transients. After a brief overview of existing methods, this work first focuses on the representation of signals with rapidly varying time-frequency components. Generalized complex-time distributions provide a suitable framework to study them, but remain limited to narrowband signals. In a first part, we propose to overcome this limitation for signals with a wide time-frequency variation. The method is based on compressing the signal's spectrum to a bandwidth that ensures the efficiency of the technique. A second part then focuses on the extraction of nonlinearly phase-modulated signals in a context of non-stationary noise and other coherent signals. This is performed with warping operators and compressive-sensing reconstruction techniques. The third chapter focuses on data-driven methods based on the representation of the signal in phase space. The main contribution takes advantage of lag diversity, which makes it possible to highlight time-scale transformations as well as amplitude modifications between transients; we develop several techniques to expose these properties. Finally, the work presented in the first chapters is developed in four applicative contexts: ECG segmentation, electrical transient characterization, a passive acoustic configuration, and the study of acoustic signals in an immersed environment. We end with some conclusions and perspectives for future work.
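
    The phase-space representation with lag diversity can be illustrated with a simple time-delay embedding. The transient below is a toy chirp and the lag values are arbitrary choices for illustration, not values from the thesis.

```python
import numpy as np

def delay_embed(x: np.ndarray, lag: int, dim: int = 2) -> np.ndarray:
    """Return the delay-embedded trajectory [x(n), x(n+lag), ..., x(n+(dim-1)*lag)]."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

t = np.linspace(0, 1, 1000)
transient = np.exp(-5 * t) * np.sin(2 * np.pi * 50 * t ** 2)  # toy chirp transient
for lag in (1, 5, 20):                                        # lag diversity
    traj = delay_embed(transient, lag)                        # one phase-space view per lag
    print(lag, traj.shape)
```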

    Face pose estimation with automatic 3D model creation for a driver inattention monitoring application

    Recent studies have identified inattention (including distraction and drowsiness) as the main cause of accidents, being responsible for at least 25% of them. Driving distraction has been studied less, since it is more diverse and exhibits a higher risk factor than fatigue. In addition, it is present in over half of the inattention-related crashes. The increasing presence of In-Vehicle Information Systems (IVIS) adds to the potential distraction risk and modifies driving behaviour, so research on this issue is of vital importance. Many researchers have been working on different approaches to deal with distraction during driving. Among them, Computer Vision is one of the most common, because it allows for cost-effective and non-invasive driver monitoring and sensing. Using Computer Vision techniques it is possible to evaluate facial movements that characterise the state of attention of a driver. This thesis presents methods to estimate the face pose and gaze direction of a person in real time, using a stereo camera, as a basis for assessing driver distraction. The methods are completely automatic and user-independent. A set of facial features is identified at initialisation and used to create a sparse 3D model of the face. These features are tracked from frame to frame, and the model is augmented to cover parts of the face that may have been occluded before. The algorithm is designed to work in a naturalistic driving simulator, which presents challenging low-light conditions. We evaluate several techniques to detect features on the face that can be matched between cameras and tracked successfully. Well-known methods such as SURF do not return good results, owing to the lack of salient points in the face and the low illumination of the images. We introduce a novel multisize technique, based on the Harris corner detector and patch correlation. This technique benefits from the better performance of small patches under rotations and illumination changes, and from the more robust correlation of the bigger patches under motion blur. The head rotates in a range of ±90º in the yaw angle, and the appearance of the features changes noticeably. To deal with these changes, we implement a new re-registering technique that captures new textures of the features as the face rotates. These new textures are incorporated into the model, which mixes the views of both cameras. The captures are taken at regular angle intervals for rotations in yaw, so that each texture is only used in a range of ±7.5º around the capture angle. Rotations in pitch and roll are handled using affine patch warping. The 3D model created at initialisation can only take features from the frontal part of the face, and some of these may become occluded during rotations. The accuracy and robustness of the face tracking depend on the number of visible points, so new points are added to the 3D model when new parts of the face are visible from both cameras. Bundle adjustment is used to reduce the accumulated drift of the 3D reconstruction. We estimate the pose from the position of the features in the images and the 3D model using POSIT or Levenberg-Marquardt (LM). A RANSAC process detects incorrectly tracked points, which are not considered for pose estimation. POSIT is faster, while LM obtains more accurate results. Using the model extension and the re-registering technique, we can accurately estimate the pose in the full head-rotation range, with error levels that improve on the state of the art. 
    A coarse eye direction is combined with the face pose estimation to obtain the gaze and the driver's fixation area, a parameter which gives much information about the distraction pattern of the driver. The resulting gaze estimation algorithm proposed in this thesis has been tested on a set of driving experiments directed by a team of psychologists in a naturalistic driving simulator. This simulator mimics conditions present in real driving, including weather changes, manoeuvring and distractions due to IVIS. Professional drivers participated in the tests. The driver's fixation statistics obtained with the proposed system show how the use of IVIS influences the distraction pattern of the drivers, increasing reaction times and affecting the fixation of attention on the road and the surroundings.
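
    A hedged sketch of the pose-from-features step: given tracked 2D features and their 3D model points, the pose can be estimated with a RANSAC-wrapped PnP solve. The example below uses OpenCV's solvePnPRansac with the iterative (Levenberg-Marquardt based) solver as a stand-in; the thesis evaluates both POSIT and LM, and all points and camera parameters here are synthetic placeholders.

```python
import numpy as np
import cv2

# Synthetic sparse 3D face model and a known ground-truth pose, used only to
# generate consistent 2D observations for this illustration.
model_3d = np.random.rand(20, 3).astype(np.float32)
rvec_true = np.array([0.1, 0.3, 0.0], dtype=np.float32)
tvec_true = np.array([0.0, 0.0, 5.0], dtype=np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

image_2d, _ = cv2.projectPoints(model_3d, rvec_true, tvec_true, K, None)
image_2d = image_2d.reshape(-1, 2)

# RANSAC rejects mistracked points; the iterative solver refines pose with LM.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    model_3d, image_2d, K, None,               # no lens distortion assumed
    flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)                 # rotation matrix -> yaw/pitch/roll
    print(R, tvec.ravel(), len(inliers))
```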

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and associated applications. The imaging data available to radiologists continue to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists to increase throughput while reducing human error and bias, without compromising the outcome of screening, diagnosis or disease assessment. More intelligent, yet simple, consistent and less time-consuming methods will become more widespread, reducing user variability while also revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enhancing and enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training and, despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their footprint often hampers their clinical use. Currently, the main challenge seems to be not the lack of tools and techniques for medical image processing, analysis, and computing, but rather the lack of clinically feasible solutions that leverage the already developed and existing tools and techniques, together with a demonstration of the potential clinical impact of such tools. Recently, more and more effort has been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that results only in incremental improvements over existing algorithms. In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that ensure the development of image processing tools (localization, segmentation and registration) and illustrate their use across several medical imaging modalities (X-ray, computed tomography, ultrasound and magnetic resonance imaging) and several clinical applications:
    - lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images;
    - automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for the assessment of long-limb mechanical axis and knee misalignment;
    - left and right ventricle localization, segmentation, reconstruction and ejection-fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment.
    When devising and evaluating the developed tools, we use clinical patient data to illustrate the inherent clinical challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that help achieve sufficiently reliable solutions; these not only have the potential to address the clinical needs, but are sufficiently streamlined to be translated into eventual clinical tools, provided proper implementation.
    G1: Reduce the number of degrees of freedom (DOF) of the designed tool; a plausible example is avoiding the use of inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and clearly aims at reducing complexity and the number of degrees of freedom.
    G2: Use shape-based features to represent the image content most efficiently, using edges instead of, or in addition to, intensities and motion where useful. Edges capture the most useful information in the image and can be used to identify the most important image features. As a result, this guideline ensures more robust performance when key image information is missing.
    G3: Implement the method efficiently. This guideline focuses on efficiency in terms of the minimum number of steps required, avoiding the recalculation of terms that only need to be computed once in an iterative process. An efficient implementation leads to reduced computational effort and improved performance; a small sketch of this idea follows the list.
    G4: Commence the workflow by establishing an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in consistent ways; it avoids convergence to local minima while gradually ensuring convergence to the global minimum solution.
    These guidelines lead to the development of interactive, semi-automated or fully automated approaches that still enable clinicians to perform final refinements, while reducing overall inter- and intra-observer variability, reducing ambiguity, increasing accuracy and precision, and having the potential to yield mechanisms that will aid in providing an overall more consistent diagnosis in a timely fashion.
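
    A tiny illustration of guideline G3: terms that are constant across iterations are hoisted out of the loop and computed once instead of being recalculated every pass. The function name and update rule below are illustrative only and are not taken from the thesis.

```python
import numpy as np

def register_iteratively(fixed: np.ndarray, moving: np.ndarray, n_iter: int = 50) -> float:
    """Toy 1D translation estimate by gradient-correlation updates."""
    grad_fixed = np.gradient(fixed)      # constant across iterations: compute once
    norm_fixed = np.linalg.norm(fixed)   # likewise hoisted out of the loop
    shift = 0.0
    for _ in range(n_iter):
        # only the moving-image terms need recomputing inside the loop
        residual = fixed - np.roll(moving, int(round(shift)))
        shift += 0.1 * np.sum(residual * grad_fixed) / (norm_fixed + 1e-9)
    return shift

x = np.exp(-((np.arange(200) - 100.0) ** 2) / 50.0)
print(register_iteratively(x, np.roll(x, 3)))
```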

    3D reconstruction of coronary arteries from angiographic sequences for interventional assistance

    Introduction -- Review of literature -- Research hypothesis and objectives -- Methodology -- Results and discussion -- Conclusion and future perspectives