80 research outputs found

    Manifold Learning for Natural Image Sets, Doctoral Dissertation August 2006

    The field of manifold learning provides powerful tools for parameterizing high-dimensional data points with a small number of parameters when the data lies on or near a manifold. Images can be thought of as points in a high-dimensional image space where each coordinate represents the intensity value of a single pixel. Manifold learning techniques have been successfully applied to simple image sets, such as handwriting data and images of a statue captured in a tightly controlled environment. However, they fail on natural image sets, even those that vary due to only a single degree of freedom, such as a person walking or a heart beating. Parameterizing such data sets will allow additional constraints to be placed on traditional computer vision problems such as segmentation and tracking. This dissertation explores the reasons why classical manifold learning algorithms fail on natural image sets and proposes new algorithms for parameterizing this type of data.
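The core idea can be sketched with classical multidimensional scaling (a close relative of Isomap) on a synthetic image set with a single degree of freedom. This is a generic illustration of manifold parameterization, not an algorithm from the dissertation; the bump "images" and all parameter values are invented for the sketch:

```python
import numpy as np

def classical_mds(D, n_components=2):
    """Classical MDS: embed points given only their pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]                # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0))

# Synthetic "image set" with one degree of freedom: a Gaussian bump translated
# across a 1-D pixel array (a stand-in for frames of a single-parameter motion).
pixels = np.linspace(0, 1, 100)
shifts = np.linspace(0.3, 0.7, 40)                # the single hidden parameter
images = np.exp(-((pixels[None, :] - shifts[:, None]) ** 2) / (2 * 0.15 ** 2))

D = np.linalg.norm(images[:, None, :] - images[None, :, :], axis=-1)
embedding = classical_mds(D, n_components=2)

# The first embedding coordinate recovers the hidden shift parameter
# up to sign and scale for this gently curved manifold.
corr = np.corrcoef(embedding[:, 0], shifts)[0, 1]
print(abs(corr))  # close to 1.0
```

With narrower bumps (a more sharply curved image manifold), this correlation degrades, which is exactly the failure mode on natural image sets that the dissertation targets.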

    Sensors for Vital Signs Monitoring

    Sensor technology for monitoring vital signs is an important topic for various service applications, such as entertainment and personalization platforms and Internet of Things (IoT) systems, as well as for traditional medical purposes, such as disease indication judgments and predictions. Vital signs to be monitored include respiration and heart rates, body temperature, blood pressure, oxygen saturation, the electrocardiogram, blood glucose concentration, brain waves, etc. Gait and walking length can also be regarded as vital signs because they indirectly indicate human activity and status. Sensing technologies include contact sensors such as the electrocardiogram (ECG), electroencephalogram (EEG), and photoplethysmogram (PPG); non-contact sensors such as ballistocardiography (BCG); and invasive/non-invasive sensors for diagnosing variations in blood characteristics or body fluids. Radar, vision, and infrared sensors can also be useful for detecting vital signs from the movement of humans or organs. Signal processing, extraction, and analysis techniques are important in industrial applications, along with hardware implementation techniques. Battery management and wireless power transmission technologies, the design and optimization of low-power circuits, and systems for continuous monitoring and data collection/transmission should also be considered alongside the sensor technologies. In addition, machine-learning-based diagnostic technology can be used to extract meaningful information from continuous monitoring data.
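As a minimal illustration of the signal-processing side, the sketch below estimates heart rate from a synthetic PPG-like waveform with naive peak picking. The waveform, sampling rate, and detector are illustrative stand-ins, not a production vital-signs pipeline:

```python
import numpy as np

def estimate_heart_rate(signal, fs):
    """Estimate heart rate in beats per minute from a pulse-like signal.

    A simple local-maximum detector: a sample counts as a peak if it exceeds
    its left neighbour, is at least its right neighbour, and lies above the
    signal mean. Heart rate is derived from the mean inter-peak interval.
    """
    s = np.asarray(signal, dtype=float)
    is_peak = (s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:]) & (s[1:-1] > s.mean())
    peaks = np.where(is_peak)[0] + 1
    if len(peaks) < 2:
        return None
    beat_interval = np.diff(peaks).mean() / fs     # seconds between beats
    return 60.0 / beat_interval

# Synthetic PPG-like waveform: a clean 1.2 Hz pulse (72 bpm) sampled at 100 Hz.
fs = 100
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)

bpm = estimate_heart_rate(ppg, fs)
print(round(bpm, 1))  # ≈ 72 bpm
```

Real PPG traces need band-pass filtering and artifact rejection before peak picking, but the interval-to-rate conversion is the same.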

    The anthropometric, environmental and genetic determinants of right ventricular structure and function

    BACKGROUND Measures of right ventricular (RV) structure and function have significant prognostic value. The right ventricle is currently assessed by global measures, or point surrogates, which are insensitive to regional and directional changes. We aim to create a high-resolution three-dimensional RV model to improve understanding of its structural and functional determinants. These may be particularly of interest in pulmonary hypertension (PH), a condition in which RV function and outcome are strongly linked. PURPOSE To investigate the feasibility and additional benefit of applying three-dimensional phenotyping and contemporary statistical and genetic approaches to large patient populations. METHODS Healthy subjects and incident PH patients were prospectively recruited. Using a semi-automated atlas-based segmentation algorithm, 3D models characterising RV wall position and displacement were developed, validated and compared with anthropometric, physiological and genetic influences. Statistical techniques were adapted from other high-dimensional approaches to deal with the problems of multiple testing, contiguity, sparsity and computational burden. RESULTS 1527 healthy subjects successfully completed high-resolution 3D CMR and automated segmentation. Of these, 927 subjects underwent next-generation sequencing of the sarcomeric gene titin and 947 subjects completed genotyping of common variants for genome-wide association study. 405 incident PH patients were recruited, of whom 256 completed phenotyping. 3D modelling demonstrated significant reductions in sample size compared to two-dimensional approaches. 3D analysis demonstrated that RV basal-freewall function reflects global functional changes most accurately and that a similar region in PH patients provides stronger survival prediction than all anthropometric, haemodynamic and functional markers. 
Vascular stiffness, titin truncating variants and common variants may also contribute to changes in RV structure and function. CONCLUSIONS High-resolution phenotyping coupled with computational analysis methods can improve insights into the determinants of RV structure and function in both healthy subjects and PH patients. Large, population-based approaches offer physiological insights relevant to clinical care in selected patient groups.

    Translating computational modelling tools for clinical practice in congenital heart disease

    Increasingly large numbers of medical centres worldwide are equipped with the means to acquire 3D images of patients by utilising magnetic resonance (MR) or computed tomography (CT) scanners. The interpretation of patient 3D image data has significant implications on clinical decision-making and treatment planning. In their raw form, MR and CT images have become critical in routine practice. However, in congenital heart disease (CHD), lesions are often anatomically and physiologically complex. In many cases, 3D imaging alone can fail to provide conclusive information for the clinical team. In the past 20-30 years, several image-derived modelling applications have shown major advancements. Tools such as computational fluid dynamics (CFD) and virtual reality (VR) have successfully demonstrated valuable uses in the management of CHD. However, due to current software limitations, these applications have remained largely isolated to research settings, and have yet to become part of clinical practice. The overall aim of this project was to explore new routes for making conventional computational modelling software more accessible for CHD clinics. The first objective was to create an automatic and fast pipeline for performing vascular CFD simulations. By leveraging machine learning, a solution was built using synthetically generated aortic anatomies, and was seen to be able to predict 3D aortic pressure and velocity flow fields with comparable accuracy to conventional CFD. The second objective was to design a virtual reality (VR) application tailored for supporting the surgical planning and teaching of CHD. The solution was a Unity-based application which included numerous specialised tools, such as mesh-editing features and online networking for group learning. 
Overall, the outcomes of this ongoing project give strong indications that the integration of VR and CFD into clinical settings is possible, and has the potential to extend 3D imaging and support the diagnosis, management and teaching of CHD.
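The surrogate-modelling idea behind the first objective can be sketched generically: fit a cheap regression model to input-output pairs produced by an expensive solver, then evaluate the fit instantly in place of new simulations. The "solver", parameters, and all numbers below are hypothetical stand-ins for the CFD setup described above, not values from the project:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for a CFD solver: pressure drop across a narrowed
# vessel segment grows sharply as the lumen diameter shrinks (illustrative
# functional form, not clinical data).
def cfd_pressure_drop(diameter_mm):
    return 800.0 / diameter_mm ** 2 + rng.normal(0, 0.5, np.shape(diameter_mm))

# "Synthetically generated anatomies": sampled diameters with solver labels.
train_d = rng.uniform(8, 25, 200)
train_p = cfd_pressure_drop(train_d)

# Surrogate: least-squares fit in powers of 1/d; trained once, then instant.
X = np.column_stack([np.ones_like(train_d), 1 / train_d, 1 / train_d ** 2])
coef, *_ = np.linalg.lstsq(X, train_p, rcond=None)

def surrogate(diameter_mm):
    d = np.asarray(diameter_mm, dtype=float)
    return coef[0] + coef[1] / d + coef[2] / d ** 2

# The surrogate reproduces the "solver" to within the label noise.
test_d = np.array([10.0, 15.0, 20.0])
print(surrogate(test_d) - 800.0 / test_d ** 2)  # small residuals
```

The project's actual surrogate maps full 3D anatomies to pressure and velocity fields with a neural network, but the train-once, evaluate-instantly trade-off is the same.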

    A Decade of Neural Networks: Practical Applications and Prospects

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

    The investigation of hippocampal and hippocampal subfield volumetry, morphology and metabolites using 3T MRI

    A detailed account of the hippocampal anatomy has been provided. This thesis will explore and exploit the use of 3T MRI and the latest developments in image processing techniques to measure hippocampal and hippocampal subfield volumes, hippocampal metabolites and morphology. In chapter two a protocol for segmenting the hippocampus was created. The protocol was assessed in two groups of subjects with differing socioeconomic status (SES). This was a novel, community-based sample in which hippocampal volumes had not previously been assessed in the literature. Manual and automated hippocampal segmentation measurements were compared on the two distinct SES groups. The mean volumes, and also the variance in these measurements, were comparable between the two methods. The Dice overlap metric comparing the two methods was 0.81. In chapter three voxel-based morphometry (VBM) was used to compare local differences in grey matter volume between the two SES groups. Two approaches to VBM were compared. DARTEL-VBM results were found to be superior to the earlier ’optimised’ VBM method. Following a small volume correction, the DARTEL-VBM results were suggestive of focal GM volume reductions in both the right and left hippocampi of the lower SES group. In chapter four an MR spectroscopy protocol was implemented to assess hippocampal metabolites in the two SES groups. Interpretable spectra were obtained in 73% of the 42 subjects. The poorer socioeconomic group was considered to have been exposed to chronic stress, and it was therefore anticipated that, via inflammatory processes, the NAA/Cr metabolite ratio would be reduced in this group compared with the more affluent group. However, neither the NAA/Cr nor the Cho/Cr hippocampal metabolite ratio differed significantly between the two groups. The aim of chapter five was to apply the protocol and methodology developed in chapter two to determine a normal range for hippocampal volumes at 3T MRI.
3D T1-weighted IR-FSPGR images were acquired in 39 healthy, normal volunteers aged 19 to 64. Following the automated procedure, hippocampal volumes were manually inspected and edited. The mean and standard deviation of the left and right hippocampal volumes were 3421 mm³ ± 399 mm³ and 3487 mm³ ± 431 mm³ respectively. After correcting for total intracranial volume (ICV), the volumes were 0.22% ± 0.03% and 0.23% ± 0.03% for the left and right hippocampi respectively. Thus, a normative database of hippocampal volumes was established. This normative data will in future act as a baseline against which other methods of determining hippocampal volumes may be compared. The utility of the normative dataset for comparisons with other groups of subjects will be limited by the lack of a comprehensive assessment of the IQ or education level of the normal volunteers, both of which may affect hippocampal volume. In chapter six, incomplete hippocampal inversion (IHI) was assessed. Few studies have assessed the normal incidence of IHI, and in those studies the analysis extended only to a radiological assessment. Here a comprehensive and quantitative assessment of IHI is presented. IHI was found in 31 of the 84 normal subjects assessed (37%). ICV-corrected IHI left-sided hippocampal volumes were compared against ICV-corrected normal left-sided hippocampal volumes (25 vs. 52 hippocampi). The IHI hippocampal volumes were smaller than the normal hippocampal volumes (p ≪ 0.05). However, on further inspection it was observed that the ICV of the IHI group was significantly smaller than that of the normal group, confounding the previous result. In chapter seven a pilot study was performed on patients with rheumatoid arthritis (RA). The aim was to exploit the improved image quality offered by 3T MRI to create a protocol for assessing the CA4/dentate volume and to compare the volume of this hippocampal subfield before and after treatment.
Two methodologies were implemented. In the first, a protocol was produced to manually segment the CA4/dentate region of the hippocampus from coronal T2-weighted FSE images. Given that few studies have assessed hippocampal subfields, an assessment of study power and sample size was conducted to inform future work. In the second, the DARTEL-VBM image processing pipeline was applied to the data. Statistical nonparametric mapping was applied in the final statistical interpretation of the VBM data. Following an FDR correction, a single GM voxel in the hippocampus was deemed statistically significant, suggestive of a small GM volume increase following anti-inflammatory treatment. Finally, in chapter eight, the manual segmentation protocol for the CA4/dentate hippocampal subfield developed in chapter seven was extended to include a complete set of hippocampal subfields. This is one of the first attempts to segment the entire hippocampus into its subfields using 3T MRI and, as such, it was important to assess the quality of the measurement procedure. Furthermore, given the subfield volumes and the variability in these measurements, power and sample size calculations were estimated to inform further work. Seventeen healthy volunteers were scanned using 3T MRI. A detailed manual segmentation protocol was created to guide two independent operators in measuring the hippocampal subfield volumes. Repeat measures were made by a single operator to assess intra-operator variability, and inter-operator variability was also assessed. The intra-operator comparison proved reasonably successful: values compared well but were typically slightly poorer than similar attempts in the literature. This is likely the result of the additional complication of segmenting subfields in the head and tail of the hippocampus, where previous studies have focused only on the body.
Inter-rater agreement measures for subfield volumes were generally poorer than would be acceptable if full exchangeability of the data between the raters were necessary, indicating that further refinements to the manual segmentation protocol are needed. Future work should seek to improve the methodology to reduce the variability and improve the reproducibility of these measures.
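The Dice overlap metric used in chapter two to compare manual and automated segmentations has a compact definition, 2|A∩B| / (|A| + |B|). A minimal sketch with a toy pair of masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D "segmentations": manual vs automated labels over 10 voxels.
manual    = np.array([0, 1, 1, 1, 1, 0, 0, 0, 0, 0])
automated = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 0])

print(dice(manual, automated))  # 2*3 / (4+4) = 0.75
```

The same formula applies voxel-wise to the 3D hippocampal masks; a value of 0.81, as reported here, means the two methods agree on 81% of the combined labelled volume in this harmonic-mean sense.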

    Fast and radiation-free high-resolution MR cranial bone imaging for pediatric patients

    Each year, 2.2 million pediatric head computed tomography (CT) scans are performed in the United States. Head trauma and craniosynostosis are two of the most common pediatric conditions requiring head CT scans. Head trauma is common in children, and one-third of patients presenting to the emergency room undergo head CT imaging. Craniosynostosis is a congenital condition defined by a prematurely fused cranial suture. Standard clinical care for pediatric patients with head trauma or craniosynostosis uses high-resolution head CT to identify cranial fractures or cranial sutures. Unfortunately, the ionizing radiation of CT imaging poses a risk to patients, particularly pediatric patients, who are vulnerable to radiation. Moreover, multiple CT scans are often performed during follow-up, exacerbating the cumulative risk. The National Cancer Institute has reported that radiation exposure from multiple head CT scans triples the risk of leukemia and brain cancer. Many medical centers have recently removed CT from the postoperative care of craniosynostosis, limiting postoperative evaluation and highlighting the urgent need for radiation-free imaging. Several “Black bone” magnetic resonance imaging (MRI) methods have been introduced as radiation-free alternatives. Despite initially encouraging results, these methods have not translated into clinical practice due to two main challenges: (1) subjective manual image processing and (2) long acquisition times. Because of the poor signal contrast between bone and its surrounding tissues in MR images, existing post-processing methods rely on extensive manual MR segmentation, which is subjective, prone to noise and artifacts, hard to reproduce, and time-consuming. As a result, they do not meet the needs of clinical diagnosis and have not been employed clinically.
A CT scan takes tens of seconds, whereas a high-resolution MR scan takes minutes, which is challenging for pediatric subject compliance and limits clinical adoption. The overall objective of this study is to develop rapid, radiation-free 3D high-resolution MRI methods that provide CT-equivalent information for diagnosing cranial fractures and assessing cranial suture patency in pediatric patients. Two specific aims are proposed to achieve this objective. Aim 1: develop a fully automated deep learning method to synthesize high-resolution pseudo-CT (pCT) of pediatric cranial bone from MR images. Aim 2: develop a deep learning image reconstruction method to reduce MR acquisition time. Aim 1 addresses the issue of subjective manual image processing. In this aim, we developed a robust and fully automated deep learning method to create pCT images from MRI, which facilitates translating MR cranial bone imaging into clinical practice for pediatric patients. Two 3D patch-based ResUNets were trained using paired MR and CT patches randomly selected from the whole head (NetWH) or from the vicinity of bone, fractures/sutures, or air (NetBA) to synthesize pCT. A third ResUNet was trained to generate a binary brain mask using only MRI. The pCT images from NetWH (pCTNetWH) in the brain area and from NetBA (pCTNetBA) in the non-brain area were combined to generate pCTCom. A manual processing method using inverted MR images (iMR) was also employed for comparison. pCTCom had significantly smaller mean absolute errors (MAE) than pCTNetWH and pCTNetBA in the whole head. The Dice similarity coefficient (DSC) of the segmented bone was significantly higher for pCTCom than for pCTNetWH, pCTNetBA, and iMR. The DSC of pCTCom also showed significantly reduced age dependence compared with iMR. Furthermore, pCTCom provided excellent suture and fracture visibility, comparable to CT. A fast MR acquisition is highly desirable for translating novel MR cranial bone imaging into clinical practice in place of CT.
However, fast MR acquisition usually yields data under-sampled below the Nyquist rate, leading to artifacts and high noise. Recently, numerous deep learning MR reconstruction methods have been employed to mitigate artifacts and minimize noise. Despite many successes, existing deep learning methods have not accounted for variations in MR k-space sampling density. In aim 2, we developed a self-supervised, physics-guided deep learning method that weights the k-space sampling Density in the network training Loss (wkDeLo). The proposed method uses an unrolled network with a data-consistency (DC) layer and a regularization (R) layer. A forward Fourier model transforms the reconstructed image into k-space, and consistency between the transformed k-space and the acquired k-space data is enforced in the DC layer. The unrolled network is regularized by a k-space deep-learning prior using a convolutional neural network. In total, 400 radial spokes were acquired with an acquisition time of 5 minutes. Two disjoint k-space data sets, the first 1 minute (80 radial spokes) and the remaining 4 minutes (320 radial spokes), were used as the network training input and target. A unique feature of the proposed method is the use of an L1 loss weighted by k-space sampling density in end-to-end training of the unrolled network. For comparison, we also reconstructed images using the same unrolled network structure but without accounting for k-space sampling density variations in the loss, i.e., with uniform weighting over k-space (un-wkDeLo). Furthermore, we implemented a well-accepted deep learning reconstruction method, Self-Supervision via Data Undersampling (SSDU), as a baseline reference.
Using the images reconstructed from the 5-min scan as the gold standard, we computed the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) for images reconstructed from the 1-min k-space data using SSDU, un-wkDeLo, and wkDeLo. The SSIM and PSNR of the wkDeLo images were significantly higher than those of both SSDU and un-wkDeLo. Moreover, the wkDeLo reconstructions had the highest sharpness and the least artifacts and noise. In aim 2, we demonstrated that high-quality MR images at a spatial resolution of 0.6 × 0.6 × 0.8 mm³ can be achieved with only 1 minute of acquisition time. Finally, we evaluated the clinical utility of the proposed MR cranial bone imaging for identifying cranial fractures and assessing cranial suture patency. Clinicians evaluated the MR-derived pCT images by consensus. Acceptable image quality was achieved in more than 90% of all MR scans, and diagnoses were 100% accurate in the subset of patients with acceptable image quality. We have demonstrated that the proposed 3D high-resolution MR cranial bone method provides CT-equivalent images for pediatric patients with head trauma or craniosynostosis. This work will have a profound impact on pediatric health by providing clinicians with a rapid diagnostic tool free of radiation safety concerns.
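The density-weighted loss idea can be sketched generically: radial trajectories oversample the centre of k-space (sampling density falls off roughly as 1/|k|), so weighting an L1 loss by a ramp ∝ |k| stops low frequencies from dominating training. The weights, array sizes, and data below are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def radial_density_weights(n_readout):
    """Ramp-style weights ∝ |k| along one radial readout (centre-out symmetric).

    Radial trajectories oversample the k-space centre, so an unweighted L1
    loss is dominated by low frequencies; weighting by |k| rebalances it.
    """
    k = np.abs(np.arange(n_readout) - n_readout // 2)
    k[k == 0] = 1                                 # avoid zeroing the DC sample
    return k / k.sum()                            # normalized weights

def density_weighted_l1(pred_kspace, target_kspace, weights):
    """L1 loss over complex k-space samples, weighted by sampling density."""
    return np.sum(weights * np.abs(pred_kspace - target_kspace))

n = 64
w = radial_density_weights(n)
rng = np.random.default_rng(0)
target = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The same unit error costs more at the sparsely sampled periphery than at
# the heavily oversampled centre.
err_centre = target.copy(); err_centre[n // 2] += 1.0
err_edge   = target.copy(); err_edge[0]      += 1.0
print(density_weighted_l1(err_centre, target, w) <
      density_weighted_l1(err_edge,   target, w))  # True
```

In the actual method this weighting enters the end-to-end training loss of the unrolled network rather than a standalone numpy computation, but the rebalancing principle is the same.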

    Contributions of machine learning techniques to cardiology: prediction of restenosis after coronary stent implantation

    Background: Few current topics compare with the possibility that today's technology can develop the same capabilities as a human being, even in medicine. This capacity of machines or computer systems to simulate human intelligence processes is what we know today as artificial intelligence. One of the fields of artificial intelligence with the greatest application in medicine today is prediction, recommendation, and diagnosis, where machine learning techniques are applied. There is also growing interest in precision medicine, where machine learning techniques can offer individualized medical care to each patient. Percutaneous coronary intervention (PCI) with stenting has become routine practice in the revascularization of coronary vessels with significant obstructive atherosclerotic disease. PCI is also the gold-standard treatment for patients with acute myocardial infarction, reducing rates of death and recurrent ischemia compared with medical treatment. The long-term success of the procedure is limited by in-stent restenosis, a pathological process that causes recurrent arterial narrowing at the PCI site. Identifying which patients will develop restenosis is an important clinical challenge, since restenosis can present as a new acute myocardial infarction or force repeat revascularization of the affected vessel, and recurrent restenosis represents a therapeutic challenge. Objectives: After reviewing artificial intelligence techniques applied to medicine and, in greater depth, machine learning techniques applied to cardiology, the main objective of this doctoral thesis was to develop a machine learning model to predict the occurrence of restenosis in patients with acute myocardial infarction undergoing PCI with stent implantation.
Secondary objectives were to compare the machine learning model with the classical restenosis risk scores used to date, and to develop software that makes it simple to bring this contribution into daily clinical practice. To build an easily applicable model, we made our predictions without any variables beyond those obtained in routine practice. Materials: The dataset, obtained from the GRACIA-3 trial, consisted of 263 patients with demographic, clinical, and angiographic characteristics; 23 of them presented restenosis 12 months after stent implantation. All development was carried out in Python using cloud computing, specifically AWS (Amazon Web Services). Methods: A methodology for working with small, imbalanced datasets was used, in which the nested cross-validation scheme and the use of precision-recall (PR) curves, in addition to ROC curves, were important for interpreting the models. The algorithms most common in the literature were trained in order to choose the one with the best performance. Results: The best-performing model was built with an extremely randomized trees classifier, which, with an area under the ROC curve of 0.77, significantly outperformed the three classical clinical scores: PRESTO-1 (0.58), PRESTO-2 (0.58) and TLR (0.62). The precision-recall curves offered a more accurate picture of the performance of the extremely randomized trees model, showing an efficient algorithm (0.96) for non-restenosis, with high precision and high recall. At a threshold considered optimal, out of 1,000 patients undergoing stent implantation our machine learning model would correctly predict 181 (18%) more cases than the best classical risk score (TLR).
The most important variables, ranked by their contribution to the predictions, were diabetes, coronary disease in two or more vessels, post-PCI TIMI flow, abnormal platelets, post-PCI thrombus, and abnormal cholesterol. Finally, a calculator was developed to bring the model into clinical practice. The calculator estimates each patient's individual risk and places the patient in a risk zone, helping the physician decide on the appropriate follow-up. Conclusions: Applied immediately after stent implantation, a machine learning model distinguishes patients who will or will not develop restenosis better than the current classical discriminators.
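The precision-recall analysis emphasized above rests on average precision, the area under the PR curve, which is more informative than ROC AUC when events are rare. A minimal sketch on a hypothetical, imbalanced toy cohort (the event rate loosely echoes the 23/263 reported for GRACIA-3; labels and scores are invented):

```python
import numpy as np

def average_precision(labels, scores):
    """Average precision: area under the precision-recall curve, computed
    as the mean of the precision values at each true-positive rank."""
    order = np.argsort(scores)[::-1]              # highest score first
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)                      # true positives so far
    ranks = np.arange(1, len(labels) + 1)
    precision_at_hits = hits[labels == 1] / ranks[labels == 1]
    return precision_at_hits.mean()

# Hypothetical restenosis predictions: 3 events among 12 patients.
labels = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1])
scores = np.array([.1, .2, .9, .3, .1, .2, .7, .4, .1, .3, .2, .5])

ap = average_precision(labels, scores)
print(ap)  # 1.0: every event outranks every non-event
```

Under heavy class imbalance a model can reach a high ROC AUC while its PR curve remains poor, which is why the thesis reads the extremely randomized trees model through both.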