27 research outputs found

    Pathological Brain Detection Using Weiner Filtering, 2D-Discrete Wavelet Transform, Probabilistic PCA, and Random Subspace Ensemble Classifier

    Accurate diagnosis of pathological brain images is important for patient care, particularly in the early phase of the disease. Although numerous studies have applied machine-learning techniques to the computer-aided diagnosis (CAD) of pathological brains, previous methods suffered from limited diagnostic efficiency owing to poor choices of filtering techniques and neuroimaging biomarkers and to limited learning models. Magnetic resonance imaging (MRI) provides enhanced information about soft tissue, and MR images are therefore used in the proposed approach. In this study, we propose a new model that combines Wiener filtering for noise reduction, the 2D discrete wavelet transform (2D-DWT) for feature extraction, probabilistic principal component analysis (PPCA) for dimensionality reduction, and a random subspace ensemble (RSE) classifier with the K-nearest neighbors (KNN) algorithm as its base classifier to label brain images as pathological or normal. The proposed method yields a significant improvement in classification results compared with other studies. Based on 5×5 cross-validation (CV), it outperforms 21 state-of-the-art algorithms in terms of classification accuracy, sensitivity, and specificity on all four datasets used in the study.
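    The following minimal sketch (Python; scipy, PyWavelets, and scikit-learn are used as illustrative stand-ins, and the input arrays `images` and `y`, the wavelet choice, and all hyperparameters are assumptions rather than the authors' settings) shows how such a Wiener-filter / 2D-DWT / PCA / random-subspace pipeline with a KNN base classifier could be assembled:

        # Hedged sketch of the pipeline described above; plain PCA stands in for
        # probabilistic PCA, and bagging over random feature subsets (bootstrap=False)
        # implements the random subspace method with KNN base learners.
        import numpy as np
        import pywt
        from scipy.signal import wiener
        from sklearn.decomposition import PCA
        from sklearn.ensemble import BaggingClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        def extract_features(img):
            denoised = wiener(img, mysize=3)          # Wiener filtering for noise reduction
            cA, _ = pywt.dwt2(denoised, "haar")       # 2D-DWT: keep the approximation sub-band
            return cA.ravel()

        X = np.array([extract_features(img) for img in images])   # images: equally sized MR slices
        y = np.asarray(y)                                         # 0 = normal, 1 = pathological

        rse = BaggingClassifier(KNeighborsClassifier(n_neighbors=5),
                                n_estimators=30, max_features=0.5,
                                bootstrap=False, bootstrap_features=True)
        model = make_pipeline(PCA(n_components=20), rse)
        print(cross_val_score(model, X, y, cv=5).mean())          # 5-fold CV accuracy

    Repeating the cross-validation five times with reshuffled folds would approximate the 5×5 CV protocol mentioned in the abstract.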

    Efficient Algorithm for Distinction Mild Cognitive Impairment from Alzheimer’s Disease Based on Specific View FCM White Matter Segmentation and Ensemble Learning

    Purpose: Alzheimer's Disease (AD) belongs to the dementia group and is one of the most prevalent neurodegenerative disorders. Among existing characteristics, White Matter (WM) is a known marker for AD tracking, and clustering-based WM segmentation in MRI can be used to reduce the volume of data. Many algorithms have been developed to predict AD, but most concentrate on distinguishing AD from Cognitively Normal (CN) subjects. In this study, we provide a new, simple, and efficient methodology for classifying patients into AD and MCI groups and evaluate the effect of the view dimension of Fuzzy C-Means (FCM) on prediction with ensemble classifiers. Materials and Methods: The proposed methodology has three steps. First, WM is segmented from T1 MRI with FCM according to two specific viewpoints (3D and 2D). Second, two groups of features are extracted: approximation coefficients of the Discrete Wavelet Transform (DWT) and statistical features (mean, variance, skewness). Finally, an ensemble classifier built from three classifiers, K-Nearest Neighbor (KNN), Decision Tree (DT), and Linear Discriminant Analysis (LDA), is used. Results: The proposed method was evaluated using 1280 slices (samples) from 64 patients of the ADNI dataset, 32 with MCI and 32 with AD. The best performance was obtained with the 3D viewpoint: the accuracy, precision, and F1-score of the methodology are 94.22%, 94.45%, and 94.21%, respectively, using a ten-fold Cross-Validation (CV) strategy. Conclusion: The experimental evaluation shows that WM segmentation increases the performance of the ensemble classifier and that the 3D-view FCM is better than the 2D view. According to the results, the proposed methodology has comparable performance for distinguishing MCI from AD. The low computational cost of the algorithm and the use of three classifiers for generalization make it suitable for practical application by physicians in pre-clinical settings.
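    A small illustrative sketch of the second and third steps is given below (Python; PyWavelets, SciPy, and scikit-learn stand in for the original implementation). The white-matter slices `wm_slices` and labels `y` are assumed to come from the FCM segmentation and ADNI data described above, and the wavelet and classifier settings are placeholders, not the paper's.

        # Feature extraction (DWT approximation coefficients + mean/variance/skewness)
        # followed by a majority-vote ensemble of KNN, DT, and LDA, evaluated with 10-fold CV.
        import numpy as np
        import pywt
        from scipy.stats import skew
        from sklearn.ensemble import VotingClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def slice_features(wm_slice):
            cA, _ = pywt.dwt2(wm_slice, "db2")                 # approximation coefficients of the DWT
            stats = [wm_slice.mean(), wm_slice.var(), skew(wm_slice.ravel())]
            return np.concatenate([cA.ravel(), stats])

        X = np.array([slice_features(s) for s in wm_slices])   # equally sized, segmented WM slices
        ensemble = VotingClassifier([("knn", KNeighborsClassifier()),
                                     ("dt", DecisionTreeClassifier()),
                                     ("lda", LinearDiscriminantAnalysis())], voting="hard")
        print(cross_val_score(ensemble, X, y, cv=10).mean())    # ten-fold cross-validation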

    Banknote Authentication and Medical Image Diagnosis Using Feature Descriptors and Deep Learning Methods

    Banknote recognition and medical image analysis have been focal points of image processing and pattern recognition research. Counterfeiters have exploited innovations in print media technology to reproduce fake currency, hence the need for systems that can reassure citizens of the authenticity of the banknotes in circulation. Similarly, many physicians must interpret medical images, but human image analysis is susceptible to error owing to wide variation across interpreters, fatigue, and subjectivity. Computer-aided diagnosis is vital to improving medical analysis, as it facilitates the identification of findings that need treatment and assists the expert's workflow. This thesis is therefore organized around three such problems related to banknote authentication and medical image diagnosis. In our first research problem, we proposed a new banknote recognition approach that classifies the principal components of extracted HOG features. We further experimented with computing HOG descriptors from cells created from image-patch vertices of SURF points and designed a feature-reduction approach based on a high-correlation and low-variance filter. In our second research problem, we developed a mobile app for banknote identification and counterfeit detection using the Unity 3D software and evaluated its performance with a cascaded ensemble approach. The algorithm was then extended to a client-server architecture using SIFT and SURF features reduced by Bag of Words, together with high-correlation-based HOG vectors. In our third research problem, experiments were conducted on a pre-trained mobile app for medical image diagnosis using three convolutional layers with an ensemble classifier comprising PCA and bagging of five base learners. We also implemented a Bidirectional Generative Adversarial Network to mitigate the effect of the binary cross-entropy loss, using a Deep Convolutional Generative Adversarial Network as the generator and encoder and a Capsule Network as the discriminator, while experimenting on images with random composition and translation inferences. Lastly, we proposed a variant of single-image super-resolution for medical analysis by redesigning the Super Resolution Generative Adversarial Network to increase the peak signal-to-noise ratio during image reconstruction, incorporating a loss function based on the mean square error of pixel space and Super Resolution Convolutional Neural Network layers.
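    As an illustration of the first idea (classifying principal components of HOG features), the sketch below uses scikit-image and scikit-learn; the banknote images, the SVM classifier, and every parameter are assumptions made for the example, not the thesis's actual configuration.

        # Hedged sketch: HOG descriptors -> PCA -> a generic classifier, scored with cross-validation.
        import numpy as np
        from skimage.feature import hog
        from skimage.transform import resize
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def hog_descriptor(img, size=(128, 256)):
            img = resize(img, size)                  # normalise banknote size before extracting HOG
            return hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

        X = np.array([hog_descriptor(img) for img in banknote_images])   # grayscale banknote images
        clf = make_pipeline(PCA(n_components=50), SVC())                 # classify HOG principal components
        print(cross_val_score(clf, X, labels, cv=5).mean())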

    Landmark Localization, Feature Matching and Biomarker Discovery from Magnetic Resonance Images

    The work presented in this thesis proposes several methods that can be roughly divided into three categories: I) landmark localization in medical images, II) feature matching for image registration, and III) biomarker discovery in neuroimaging. The first part deals with the identification of anatomical landmarks. The motivation stems from the fact that manual identification and labeling of these landmarks is very time consuming and prone to observer error, especially when large datasets must be analyzed. In this thesis we present three methods to tackle this challenge: a landmark descriptor based on local self-similarities (SS), a subspace-building framework based on manifold learning, and a sparse-coding landmark descriptor based on a data-specific learned dictionary basis. The second part of this thesis deals with finding matching features between a pair of images. These matches can be used to perform a registration between them. Registration is a powerful tool that allows mapping images into a common space in order to aid their analysis, but accurate registration can be challenging to achieve with intensity-based registration algorithms. Here, a framework is proposed for learning correspondences in pairs of images by matching SS features, and random sample consensus (RANSAC) is employed as a robust model estimator to learn a deformation model from the feature matches. Finally, the third part of the thesis deals with biomarker discovery using machine learning. In this part, a framework is proposed for feature extraction from learned low-dimensional subspaces that represent inter-subject variability. The manifold subspace is built using data-driven regions of interest (ROI), which are learned via sparse regression with stability selection. Probabilistic distribution models for different stages in the disease trajectory are also estimated for different class populations in the low-dimensional manifold and used to construct a probabilistic scoring function.
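    The feature-matching idea in the second part can be illustrated with the short sketch below (Python, scikit-image). ORB keypoints stand in for the thesis's self-similarity (SS) descriptors and an affine model stands in for the learned deformation model; the input images and thresholds are assumptions.

        # Match local features between two images and fit a transform robustly with RANSAC.
        from skimage.feature import ORB, match_descriptors
        from skimage.measure import ransac
        from skimage.transform import AffineTransform

        def detect(img, n=500):
            orb = ORB(n_keypoints=n)
            orb.detect_and_extract(img)              # img: 2D grayscale array
            return orb.keypoints, orb.descriptors

        kp1, desc1 = detect(fixed_image)
        kp2, desc2 = detect(moving_image)
        matches = match_descriptors(desc1, desc2, cross_check=True)

        src = kp2[matches[:, 1]][:, ::-1]            # (row, col) -> (x, y)
        dst = kp1[matches[:, 0]][:, ::-1]
        model, inliers = ransac((src, dst), AffineTransform,
                                min_samples=3, residual_threshold=2, max_trials=2000)
        print(inliers.sum(), "inlier matches; estimated parameters:\n", model.params)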

    Computational Intelligence in Healthcare

    This book is a printed edition of the Special Issue "Computational Intelligence in Healthcare" that was published in Electronics.

    A survey of the application of soft computing to investment and financial trading


    Computational Intelligence in Healthcare

    The volume of patient health data was estimated to reach 2,314 exabytes by 2020. Traditional data analysis techniques are unsuitable for extracting useful information from such vast quantities of data, so intelligent data analysis methods that combine human expertise and computational models for accurate and in-depth analysis are necessary. The technological revolution and medical advances made possible by combining vast quantities of available data, cloud computing services, and AI-based solutions can provide expert insight and analysis on a mass scale and at relatively low cost. Computational intelligence (CI) methods, such as fuzzy models, artificial neural networks, evolutionary algorithms, and probabilistic methods, have recently emerged as promising tools for the development and application of intelligent systems in healthcare practice. CI-based systems can learn from data and evolve according to changes in the environment by taking into account the uncertainty characterizing health data, including omics, clinical, sensor, and imaging data. The use of CI in healthcare can improve the processing of such data to develop intelligent solutions for prevention, diagnosis, treatment, and follow-up, as well as for the analysis of administrative processes. The present Special Issue on computational intelligence for healthcare is intended to show the potential and the practical impact of CI techniques in challenging healthcare applications.

    New algorithms for the analysis of live-cell images acquired in phase contrast microscopy

    Automated cell detection and characterization is important in many research fields, such as wound healing, embryo development, immune system studies, cancer research, parasite spreading, tissue engineering, stem cell research, and drug research and testing. Studying in vitro cellular behavior via live-cell imaging and high-throughput screening involves thousands of images and vast amounts of data, so automated analysis tools relying on machine vision and on non-intrusive methods such as phase contrast microscopy (PCM) are a necessity. There are still challenges to overcome, however, since PCM images are difficult to analyze because of the bright halo surrounding the cells and the blurry cell-cell boundaries when cells are touching. The goal of this project was to develop image processing algorithms capable of analyzing PCM images in an automated fashion and of processing large datasets of images to extract information related to cellular viability and morphology. To develop these algorithms, a large dataset of myoblast images acquired by live-cell imaging (in PCM) was created by growing the cells in either a serum-supplemented medium (SSM) or a serum-free medium (SFM) over several passages. As a result, algorithms capable of computing the cell-covered surface and cellular morphological features were programmed in Matlab®. The cell-covered surface was estimated using a range filter, a threshold, and a minimum cut size in order to study the cellular growth kinetics. Results showed that the cells grew at similar rates in both media, but that the growth rate decreased linearly with passage number. The undecimated wavelet transform multivariate image analysis (UWT-MIA) method was developed and used to estimate distributions of cellular morphological features (major axis, minor axis, orientation, and roundness) on a very large PCM image dataset using the Gabor continuous wavelet transform. Multivariate data analysis performed on the whole database (around 1 million PCM images) showed in a quantitative manner that myoblasts grown in SFM were more elongated and smaller than cells grown in SSM. The algorithms developed through this project could be used in the future on other cellular phenotypes for high-throughput screening and cell culture control applications.
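    A rough Python sketch of the cell-covered-surface estimation described above (range filter, threshold, minimum object size) is given below; the window size, threshold, and minimum size are illustrative values, not those tuned in the thesis, and `pcm_image` is assumed to be a single grayscale phase-contrast frame.

        # Estimate the fraction of the image covered by cells from the local intensity range.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.morphology import remove_small_objects

        def cell_covered_fraction(pcm_image, window=5, thresh=10.0, min_size=64):
            img = pcm_image.astype(float)
            local_range = (ndi.maximum_filter(img, size=window)
                           - ndi.minimum_filter(img, size=window))   # range filter: texture response
            mask = local_range > thresh                               # threshold the texture map
            mask = remove_small_objects(mask, min_size=min_size)      # enforce the minimum cut size
            return mask.mean()                                        # fraction of cell-covered pixels

        print(f"cell-covered surface: {100 * cell_covered_fraction(pcm_image):.1f}%")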

    Recognizing deviations from normalcy for brain tumor segmentation

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 180-189). A framework is proposed for the segmentation of brain tumors from MRI. Instead of training on pathology, the proposed method trains exclusively on healthy tissue. The algorithm attempts to recognize deviations from normalcy in order to compute a fitness map over the image associated with the presence of pathology. The resulting fitness map may then be used by conventional image segmentation techniques for homing in on boundary delineation. Such an approach is applicable to structures that are too irregular, in both shape and texture, to permit the construction of comprehensive training sets. We develop the method of diagonalized nearest neighbor pattern recognition, and we use it to demonstrate that recognizing deviations from normalcy requires a rich understanding of context. Therefore, we propose a framework for a Contextual Dependency Network (CDN) that incorporates context at multiple levels: voxel intensities, neighborhood coherence, intra-structure properties, inter-structure relationships, and user input. Information flows bi-directionally between the layers via multi-level Markov random fields or iterated Bayesian classification. A simple instantiation of the framework has been implemented to perform preliminary experiments on synthetic and MRI data. By David Thomas Gering. Ph.D.
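    A toy sketch of the "deviation from normalcy" idea is given below: each voxel is scored by its distance to the nearest exemplar drawn from healthy training tissue. This plain nearest-neighbour scoring is only a stand-in for the thesis's diagonalized nearest neighbor method and ignores the Contextual Dependency Network entirely; the feature arrays and shapes are assumptions.

        # Score voxels by distance to the closest healthy-tissue exemplar (an abnormality "fitness map").
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        # healthy_samples: (N, F) per-voxel feature vectors sampled from normal tissue only
        nn = NearestNeighbors(n_neighbors=1).fit(healthy_samples)

        def fitness_map(volume_features, volume_shape):
            """volume_features: (V, F) per-voxel features; returns an abnormality score per voxel."""
            dist, _ = nn.kneighbors(volume_features)   # distance to the closest healthy exemplar
            return dist[:, 0].reshape(volume_shape)    # large distance = likely pathology

        scores = fitness_map(test_features, test_shape)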