136 research outputs found

    Computer-Aided Diagnosis for Early Identification of Multi-Type Dementia using Deep Neural Networks

    With millions of people suffering from dementia worldwide, the condition has a significant impact on the global economy, as well as a negative impact on patients’ lives and on the physical and emotional states of their caregivers. Dementia can develop as a result of several risk factors, and it has many forms whose signs are sometimes similar. While there is currently no cure for dementia, effective early diagnosis is essential to managing it. Early diagnosis helps people find suitable therapies that reduce or even prevent further deterioration of cognitive abilities, take control of their condition, and plan for the future. It also facilitates research efforts to understand the causes and signs of dementia. Early diagnosis is based on the classification of features extracted from three-dimensional brain images. The features have to accurately capture the main dementia-related anatomical variations of brain structures, such as hippocampus size, gray- and white-matter tissue volumes, and overall brain volume. In recent years, numerous researchers have sought to develop new or improved Computer-Aided Diagnosis (CAD) technologies to accurately detect dementia. CAD approaches aim to assist radiologists in increasing diagnostic accuracy and reducing false positives. However, there are a number of limitations and open issues in the state of the art that need to be addressed. These include the fact that the literature to date has focused on differentiating stages of Alzheimer’s disease severity while ignoring other dementia types that can be as devastating or more so. Furthermore, the high dimensionality of neuroimages, as well as the complexity of dementia biomarkers, can hinder classification performance.
    Moreover, the augmentation of neuroimaging analysis with contextual information has received limited attention to date due to the discrepancies and irregularities of the various forms of data. This work focuses on addressing the need to differentiate between multiple types of dementia in their early stages. The objective of this thesis is to automatically discriminate normal controls from patients with various types of dementia in the early phases of the disease. The thesis proposes a novel CAD approach that integrates a stacked sparse auto-encoder (SSAE) with a two-dimensional convolutional neural network (CNN) for early identification of multiple types of dementia, based on discriminant features extracted from neuroimages and incorporated with contextual information. By applying the SSAE to intensities extracted from magnetic resonance (MR) neuroimages, it can reduce their high dimensionality and learn changes, exploiting important discriminative features for classification. This work also proposes to integrate features extracted from MR neuroimages with patients’ contextual information through multi-classifier fusion to enhance the early prediction of various types of dementia. The effectiveness of the proposed method is evaluated on the OASIS dataset using five relevant performance metrics: accuracy, F1-score, sensitivity, specificity, and the precision-recall curve. Across a cohort of 4000 MR neuroimages (176 × 176), together with contextual information and the clinical diagnoses of patients serving as the ground truth, the proposed CAD approach was shown to achieve an improved F-measure of 93% and an average area under the precision-recall curve of 94%. The proposed method provides a significant improvement in classification output, resulting in high and reproducible accuracy rates of 95%, with a sensitivity of 93% and a specificity of 88%.
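    The abstract above reports accuracy, F1-score, sensitivity, and specificity. As a minimal illustration of how these metrics relate to the confusion matrix (this is not the thesis's evaluation code, just a sketch of the standard definitions), the following computes them from binary labels, with 1 denoting a patient and 0 a normal control:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and F1 from binary labels (1 = patient)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall on the patient class
    specificity = tn / (tn + fp) if tn + fp else 0.0  # recall on the control class
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return accuracy, sensitivity, specificity, f1

# toy example: 6 controls (0) followed by 6 patients (1)
acc, sens, spec, f1 = binary_metrics(
    [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0],
)
```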

    Novel Deep Learning Models for Medical Imaging Analysis

    Deep learning is a sub-field of machine learning in which models are developed to imitate the workings of the human brain in processing data and creating patterns for decision making. This dissertation focuses on developing deep learning models for medical imaging analysis of different modalities and for different tasks, including detection, segmentation, and classification. The imaging modalities studied include digital mammography (DM), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT), across various medical applications. The first phase of the research develops a novel shallow-deep convolutional neural network (SD-CNN) model for improved breast cancer diagnosis. This model takes one type of medical image as input and synthesizes a different modality as an additional feature source; both the original and the synthetic image are used for feature generation. The proposed architecture is validated in the application of breast cancer diagnosis and shown to outperform competing models. Motivated by the success of the first phase, the second phase focuses on improving medical image synthesis performance with an advanced deep learning architecture. A new architecture named the deep residual inception encoder-decoder network (RIED-Net) is proposed. RIED-Net has the advantages of preserving pixel-level information and transferring cross-modality features. Its applicability is validated in breast cancer diagnosis and Alzheimer’s disease (AD) staging. Recognizing that medical imaging research often involves multiple inter-related tasks, namely detection, segmentation, and classification, the third phase of the research develops a multi-task deep learning model. Specifically, a feature-transfer-enabled multi-task deep learning model (FT-MTL-Net) is proposed to transfer high-resolution features from the segmentation task to the low-resolution feature-based classification task.
    The application of FT-MTL-Net to breast cancer detection, segmentation, and classification using DM images is studied. As a continuing effort to explore transfer learning in deep models for medical applications, the last phase develops a deep learning model that transfers both features and knowledge from a pre-training age-prediction task to the new domain of predicting conversion from mild cognitive impairment (MCI) to AD. It is validated in the application of predicting MCI patients’ conversion to AD with 3D MRI images. (Doctoral Dissertation, Industrial Engineering)

    Going Deep in Medical Image Analysis: Concepts, Methods, Challenges and Future Directions

    Medical Image Analysis is currently experiencing a paradigm shift due to Deep Learning. This technology has recently attracted so much interest from the Medical Imaging community that it led to a specialized conference, 'Medical Imaging with Deep Learning', in 2018. This article surveys the recent developments in this direction and provides a critical review of the related major aspects. We organize the reviewed literature according to the underlying Pattern Recognition tasks and further sub-categorize it following a taxonomy based on human anatomy. The article does not assume prior knowledge of Deep Learning and makes a significant contribution in explaining the core Deep Learning concepts to non-experts in the Medical community. Unique to this study is the Computer Vision/Machine Learning perspective taken on the advances of Deep Learning in Medical Imaging. This enables us to single out the 'lack of appropriately annotated large-scale datasets' as the core challenge (among others) in this research direction. We draw on insights from the sister research fields of Computer Vision, Pattern Recognition, and Machine Learning, where techniques for dealing with such challenges have already matured, to provide promising directions for the Medical Imaging community to fully harness Deep Learning in the future.

    Deep Artificial Neural Networks and Neuromorphic Chips for Big Data Analysis: Pharmaceutical and Bioinformatics Applications

    Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing, and many other tasks. This was made possible by advancements in Big Data, Deep Learning (DL), and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. This work presents an overview of the main architectures of DNNs and their usefulness in Pharmacology and Bioinformatics. The featured applications are: drug design, virtual screening (VS), Quantitative Structure–Activity Relationship (QSAR) research, protein structure prediction, and genomics (and other omics) data mining. The future need for neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons: DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell that contributes to information processing in the brain. Deep Artificial Neuron–Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process, and scalability of current ML methods. Funding: Galicia, Consellería de Cultura, Educación e Ordenación Universitaria (GRC2014/049, R2014/039); Instituto de Salud Carlos III (PI13/0028).

    Alzheimer Disease Detection Techniques and Methods: A Review

    Brain pathological changes linked with Alzheimer's disease (AD) can be measured with neuroimaging. In the past few years, these measures have been rapidly integrated into signatures of AD with the help of classification frameworks, which offer tools for diagnosis and prognosis. This is a review study of AD based on neuroimaging and cognitive-impairment classification, covering published work in the field of AD, especially computer-aided diagnosis. The imaging modalities include: 1) magnetic resonance imaging (MRI); 2) functional MRI (fMRI); 3) diffusion tensor imaging; 4) positron emission tomography (PET); and 5) amyloid-PET. The study found that classification based on these features shows promising results for diagnosing the disease and tracking clinical progression. The most widely used machine learning classifiers for AD diagnosis include Support Vector Machines, Bayesian classifiers, Linear Discriminant Analysis, and K-Nearest Neighbors, along with deep learning. The study found that deep learning techniques and support vector machines give the highest accuracies in the identification of Alzheimer’s disease. Possible challenges, along with future directions, are also discussed in the paper.
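    Among the classifiers the review lists, K-Nearest Neighbors is the simplest to state concretely. As an illustrative sketch only (toy 2-D features, not the reviewed studies' actual feature vectors), a minimal k-NN classifier by Euclidean distance looks like this:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points (Euclidean)."""
    dists = sorted((math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# hypothetical 2-D "features" for controls (0) and AD patients (1)
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
y = [0, 0, 0, 1, 1, 1]
pred = knn_predict(X, y, (0.95, 1.0))  # → 1
```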

    GAN-Based Super-Resolution And Segmentation Of Retinal Layers In Optical Coherence Tomography Scans

    Optical Coherence Tomography (OCT) has been identified as a noninvasive and cost-effective imaging modality for identifying potential biomarkers for Alzheimer's diagnosis and progress detection. Current hypotheses indicate that retinal layer thickness, which can be assessed via OCT scans, is an efficient biomarker for identifying Alzheimer's disease. Due to factors such as speckle noise, a small target region, and unfavorable imaging conditions, manual segmentation of retinal layers is a challenging task. Therefore, as a reasonable first step, this study focuses on automatically segmenting retinal layers to separate them for subsequent investigations. Another common challenge is the lack of clarity of the layer boundaries in retinal OCT scans, which motivates super-resolving the images for improved clarity. Deep learning pipelines have driven substantial progress on segmentation tasks. Generative adversarial networks (GANs) are a prominent branch of deep learning that has achieved astonishing performance in semantic segmentation. Conditional adversarial networks, as a general-purpose solution to image-to-image translation problems, not only learn the mapping from the input image to the output image but also learn a loss function to train this mapping. We propose a GAN-based segmentation model and evaluate incorporating popular networks, namely U-Net and ResNet, into the GAN architecture with additional blocks of transposed convolution and sub-pixel convolution for the task of upscaling OCT images from low to high resolution by a factor of four. We also incorporate the Dice loss as an additional reconstruction loss term to improve the performance of this joint optimization task. Our best model configuration empirically achieved a Dice coefficient of 0.867 and an mIoU of 0.765.
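    The Dice coefficient and IoU reported above are standard overlap measures between a predicted and a ground-truth segmentation mask (the Dice loss used in training is simply 1 minus the Dice coefficient). A minimal sketch of both, on flat binary masks rather than the study's actual OCT data:

```python
def dice_and_iou(pred, target):
    """Dice coefficient and IoU for binary segmentation masks given as flat 0/1 lists."""
    inter = sum(p & t for p, t in zip(pred, target))  # pixels marked 1 in both masks
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# toy masks: 2 of the 6 pixels overlap
dice, iou = dice_and_iou([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

    The two measures are monotonically related (IoU = Dice / (2 − Dice)), which is why papers often report both.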

    Multimodal machine learning in medical screenings

    The healthcare industry, with its high demand and standards, has long been considered a crucial area for technology-based innovation. However, the medical field often relies on experience-based evaluation, and limited resources, overloaded capacity, and a lack of accessibility can hinder timely delivery of medical care and diagnosis. In light of these challenges, automated medical screening as a decision-making aid is highly recommended. With the increasing availability of data and the need to explore the complementary effects among modalities, multimodal machine learning has emerged as a promising area of technology. Its impact has been witnessed across a wide range of domains, prompting the question of how far machine learning can be leveraged to automate processes in even more complex and high-risk sectors. This paper delves into multimodal machine learning for automated medical screening and evaluates its potential in mental disorder detection, a highly important area of healthcare. First, we conduct a scoping review targeted at high-impact papers to highlight the trends and directions of multimodal machine learning in screening prevalent mental disorders such as depression, stress, and bipolar disorder. The review provides a comprehensive list of popular datasets and extensively studied modalities, and proposes an end-to-end pipeline for multimodal machine learning applications, covering the essential steps from preprocessing, representation, and fusion to modelling and evaluation. While cross-modality interaction has been considered a promising factor for leveraging fusion among multiple modalities, the number of existing multimodal fusion methods employing this mechanism is rather limited. This study therefore investigates multimodal fusion in more detail through the proposal of Autofusion, an autoencoder-infused fusion technique that harnesses the cross-modality interaction among different modalities.
    The technique is evaluated on DementiaBank’s Pitt corpus to detect Alzheimer’s disease, leveraging the power of cross-modality interaction. Autofusion achieves a promising performance of 79.89% accuracy, 83.85% recall, 81.72% precision, and 82.47% F1. The technique consistently outperforms all unimodal methods by an average of 5.24% across all metrics, and it also consistently outperforms early fusion and late fusion; against the late-fusion hard-voting technique in particular, it outperforms by an average of 20% across all metrics. Further, empirical results show that the cross-modality interaction term enhances model performance by 2-3% across metrics. This research highlights the promising impact of cross-modality interaction in multimodal machine learning and calls for further research to unlock its full potential.
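    The early-fusion and late-fusion baselines that Autofusion is compared against are simple to state. As a hedged sketch of those baselines only (Autofusion itself, with its learned autoencoder representation and cross-modality interaction term, is not reproduced here): early fusion concatenates per-modality feature vectors before a single classifier, while late fusion with hard voting takes the majority class across independently trained unimodal classifiers.

```python
from collections import Counter

def early_fusion(features_a, features_b):
    """Early fusion: concatenate per-modality feature vectors before classification."""
    return list(features_a) + list(features_b)

def hard_vote(predictions):
    """Late fusion by hard voting: majority class across unimodal classifier outputs."""
    return Counter(predictions).most_common(1)[0][0]

# hypothetical per-modality predictions (e.g. audio model, transcript model) for one subject
label = hard_vote(["AD", "AD", "control"])  # → "AD"
```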

    Speech and natural language processing for the assessment of customer satisfaction and neuro-degenerative diseases

    Nowadays, interest in the automatic analysis of speech and text in different scenarios has been increasing. Currently, acoustic analysis is frequently used to extract non-verbal information related to para-linguistic aspects such as articulation and prosody. Linguistic analysis focuses on capturing verbal information from written sources, which is suitable for evaluating customer satisfaction, or in health-care applications for assessing the state of patients with depression or other cognitive conditions. In call centers, many of the collected speech recordings concern customers' opinions in different industry sectors. Only a small proportion of these calls are evaluated, so these processes can be automated using acoustic and linguistic analysis. In neuro-degenerative diseases such as Alzheimer's disease (AD) and Parkinson's disease (PD), the symptoms are progressive and directly linked to dementia, cognitive decline, and motor impairments. This calls for continuous evaluation of the neurological state, since patients become dependent and need intensive care, showing a decline in their ability to perform activities of daily living independently. This thesis proposes methodologies for acoustic and linguistic analyses in different scenarios related to customer satisfaction, cognitive disorders in AD, and depression in PD. The experiments include the evaluation of customer satisfaction, the assessment of genetic AD, linguistic analysis to discriminate PD, depression assessment in PD, and user-state modeling based on the arousal plane for the evaluation of customer satisfaction, AD, and depression in PD. The acoustic features are mainly focused on articulation and prosody analyses, while the linguistic features are based on natural language processing techniques. Deep learning approaches based on convolutional and recurrent neural networks are also considered in this thesis.
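    Articulation and prosody analyses typically start from frame-wise short-time measurements of the speech signal. As an illustrative sketch only (frame sizes and the toy signal are assumptions, not the thesis's feature set), the following computes short-time energy and zero-crossing rate, two basic descriptors from which prosodic and articulatory contours are often derived:

```python
def frame_features(signal, frame_len=160, hop=80):
    """Per-frame short-time energy and zero-crossing rate of a mono signal."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        # fraction of adjacent sample pairs whose sign flips
        zcr = sum(1 for a, b in zip(frame, frame[1:])
                  if (a >= 0) != (b >= 0)) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

# toy signal: a quiet alternating segment followed by a louder one
sig = [0.01 * (-1) ** i for i in range(160)] + [0.5 * (-1) ** i for i in range(160)]
feats = frame_features(sig)  # energy rises in the later frames
```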