2,674 research outputs found

    Modeling Stroke Diagnosis with the Use of Intelligent Techniques

    The purpose of this work is to test the efficiency of specific intelligent classification algorithms when dealing with the domain of stroke medical diagnosis. The dataset consists of patient records of the "Acute Stroke Unit", Alexandra Hospital, Athens, Greece, describing patients suffering from one of 5 different stroke types, diagnosed via 127 diagnostic attributes/symptoms collected during the first hours of the emergency stroke situation as well as during the hospitalization and recovery phase of the patients. Prior to the application of the intelligent classifier, the dimensionality of the dataset is reduced using a variety of classic and state-of-the-art dimensionality reduction techniques so as to capture the intrinsic dimensionality of the data. The results obtained indicate that the proposed methodology achieves prediction accuracy levels comparable to those obtained by intelligent classifiers trained on the original feature space.
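    A minimal sketch of the two-stage methodology the abstract describes: reduce the 127-dimensional symptom vectors with a dimensionality reduction step, then train a classifier and compare it against one trained on the original feature space. The synthetic data, the choice of PCA, the component count, and the random-forest classifier are illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 127))      # placeholder for 127 diagnostic attributes/symptoms
    y = rng.integers(0, 5, size=500)     # placeholder for the 5 stroke types

    # Classifier on the original 127-dimensional feature space.
    baseline = RandomForestClassifier(n_estimators=200, random_state=0)

    # Same classifier after a classic dimensionality reduction step (PCA here).
    reduced = make_pipeline(PCA(n_components=20),
                            RandomForestClassifier(n_estimators=200, random_state=0))

    print("original space:", cross_val_score(baseline, X, y, cv=5).mean())
    print("reduced space: ", cross_val_score(reduced, X, y, cv=5).mean())
    ```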

    Explainable Anatomical Shape Analysis through Deep Hierarchical Generative Models

    Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three dimensions of both global and regional anatomical features which better discriminate between the conditions under examination. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.
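    A minimal PyTorch sketch of the core idea: a generative latent-variable model whose lowest-dimensional latent space (two-dimensional here) is trained jointly on a reconstruction loss and a classification loss, so the classification space can be plotted directly and decoded back into the segmentation space. The single-level architecture, layer sizes, and loss weights are illustrative assumptions; the paper uses a deeper hierarchy of conditional latent variables.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DiscriminativeVAE(nn.Module):
        def __init__(self, in_dim=32 * 32, n_classes=2):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
            self.mu = nn.Linear(256, 2)          # 2-D latent space, directly plottable
            self.logvar = nn.Linear(256, 2)
            self.dec = nn.Sequential(nn.Linear(2, 256), nn.ReLU(), nn.Linear(256, in_dim))
            self.cls = nn.Linear(2, n_classes)   # classifier acting on the 2-D latent

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
            return self.dec(z), self.cls(z), mu, logvar

    def loss_fn(x, y, recon, logits, mu, logvar, beta=1.0, gamma=1.0):
        rec = F.binary_cross_entropy_with_logits(recon, x)            # segmentation reconstruction
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        clf = F.cross_entropy(logits, y)                              # makes the latent space discriminative
        return rec + beta * kld + gamma * clf
    ```

    Decoding a grid of points from the 2-D latent space back through `dec` is what makes the learned classification boundary visualisable as anatomy rather than as an opaque feature vector.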

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply in medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
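    Illustrative only: a small 1-D convolutional network of the kind the review surveys for cardiology signal data (e.g. ECG beat classification), showing the stacked non-linear layers the abstract refers to. The layer sizes and class count are assumptions chosen for brevity, not an architecture taken from any surveyed paper.

    ```python
    import torch.nn as nn

    ecg_net = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),   # low-level waveform features
        nn.MaxPool1d(4),
        nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),  # higher-level morphology features
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(32, 5),                                         # e.g. 5 hypothetical beat classes
    )
    # Each non-linear layer re-represents the signal, building the hierarchical
    # features that a rule-based expert system would have to hand-engineer.
    ```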

    Bayesian Active Learning for Personalization and Uncertainty Quantification in Cardiac Electrophysiological Model

    Cardiovascular disease is the leading cause of death worldwide. In recent years, high-fidelity personalized models of the heart have shown an increasing capability to supplement clinical cardiology for improved patient-specific diagnosis, prediction, and treatment planning. In addition, they have shown promise to improve scientific understanding of a variety of disease mechanisms. However, model personalization by estimating the patient-specific tissue properties that appear as parameters of a physiological model is challenging. This is because tissue properties, in general, cannot be directly measured and need to be estimated from measurements that are only indirectly related to them through a physiological model. Moreover, these unknown tissue properties are heterogeneous and spatially varying throughout the heart volume, presenting the difficulty of high-dimensional (HD) estimation from indirect and limited measurement data. The challenge in model personalization therefore reduces to solving an ill-posed inverse problem in which the unknown parameters are HD and the forward model is a non-linear and computationally expensive physiological model. In this dissertation, we address the above challenge with the following contributions. First, to address the concern of a complex forward model, we propose surrogate modeling of the complex target function containing the forward model (an objective function in deterministic estimation, or a posterior probability density function in probabilistic estimation) by actively selecting a set of training samples and performing a Bayesian update of the prior over the target function. The efficient and accurate surrogate of the expensive target function obtained in this manner is then utilized to accelerate either deterministic or probabilistic parameter estimation. Next, within the framework of Bayesian active learning we enable active surrogate learning over a HD parameter space with two novel approaches: 1) a multi-scale optimization that can adaptively allocate higher resolution to heterogeneous tissue regions and lower resolution to homogeneous tissue regions; and 2) a generative model from a low-dimensional (LD) latent code to HD tissue properties. Both of these approaches are independently developed and tested within a parameter optimization framework. Furthermore, we devise a novel method that utilizes the surrogate pdf learned on an estimated LD parameter space to improve the proposal distribution of Metropolis-Hastings for accelerated sampling of the exact posterior pdf. We evaluate the presented methods on estimating local tissue excitability of a cardiac electrophysiological model in both synthetic and real data experiments. Results demonstrate that the presented methods improve the accuracy and efficiency of patient-specific model parameter estimation in comparison to the existing approaches used for model personalization.
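    A hedged sketch of the surrogate idea described above: fit a Gaussian-process surrogate to an expensive target function (here a stand-in log-posterior over a low-dimensional parameter vector) and actively pick the next expensive evaluation with an uncertainty-based acquisition rule. The toy target, the RBF kernel, and the upper-confidence-bound acquisition are illustrative assumptions, not the specific choices made in the dissertation.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_log_posterior(theta):
        # Placeholder for running the electrophysiological model and evaluating the likelihood.
        return -np.sum((theta - 0.3) ** 2)

    rng = np.random.default_rng(0)
    dim = 2                                            # LD latent code standing in for HD tissue properties
    thetas = rng.uniform(0, 1, size=(5, dim))          # small initial design
    values = np.array([expensive_log_posterior(t) for t in thetas])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    for _ in range(20):                                # Bayesian active-learning loop
        gp.fit(thetas, values)
        cand = rng.uniform(0, 1, size=(256, dim))      # candidate parameter settings
        mean, std = gp.predict(cand, return_std=True)
        nxt = cand[np.argmax(mean + 1.5 * std)]        # upper-confidence-bound acquisition
        thetas = np.vstack([thetas, nxt])
        values = np.append(values, expensive_log_posterior(nxt))

    # gp now serves as a cheap surrogate of the target, e.g. to shape a
    # Metropolis-Hastings proposal instead of calling the full model at every step.
    ```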

    Manifold Learning for Natural Image Sets, Doctoral Dissertation August 2006

    The field of manifold learning provides powerful tools for parameterizing high-dimensional data points with a small number of parameters when this data lies on or near some manifold. Images can be thought of as points in some high-dimensional image space where each coordinate represents the intensity value of a single pixel. These manifold learning techniques have been successfully applied to simple image sets, such as handwriting data and a statue in a tightly controlled environment. However, they fail in the case of natural image sets, even those that only vary due to a single degree of freedom, such as a person walking or a heart beating. Parameterizing data sets such as these will allow for additional constraints on traditional computer vision problems such as segmentation and tracking. This dissertation explores the reasons why classical manifold learning algorithms fail on natural image sets and proposes new algorithms for parameterizing this type of data.
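    A short sketch of the classical setting the dissertation starts from: treating each image as a point in pixel space and asking a manifold-learning algorithm (Isomap here, as one representative classical method) to recover a small number of parameters from a toy image set with a single degree of freedom. The translating-bar images, neighbour count, and correlation check are assumptions for illustration; the dissertation's point is that such methods break down on natural image sets, which this controlled example does not capture.

    ```python
    import numpy as np
    from sklearn.manifold import Isomap

    # Toy image set: an 8-pixel-wide bar sliding horizontally, i.e. one degree of freedom.
    positions = np.arange(24)
    images = np.zeros((len(positions), 32, 32))
    for i, p in enumerate(positions):
        images[i, 12:20, p:p + 8] = 1.0

    X = images.reshape(len(positions), -1)          # each image is a point in R^1024

    # Ask Isomap for a 1-D parameterization of the image set.
    embedding = Isomap(n_neighbors=4, n_components=1).fit_transform(X)

    # For this toy set the recovered coordinate should track the bar position closely
    # (up to sign); natural image sets are where this recovery starts to fail.
    print(abs(np.corrcoef(embedding[:, 0], positions)[0, 1]))
    ```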