    Integrated Segmentation and Interpolation of Sparse Data

    We address the two inherently related problems of segmentation and interpolation of 3D and 4D sparse data and propose a new method to integrate these stages in a level set framework. The interpolation process uses segmentation information rather than pixel intensities for increased robustness and accuracy. The method supports any spatial configuration of sets of 2D slices having arbitrary positions and orientations. We achieve this by introducing a new level set scheme based on the interpolation of the level set function by radial basis functions. The proposed method is validated quantitatively and/or subjectively on artificial data and MRI and CT scans, and is compared against the traditional sequential approach, which interpolates the images first, using a state-of-the-art image interpolation method, and then segments the interpolated volume in 3D or 4D. In our experiments, the proposed framework yielded similar segmentation results to the sequential approach, but provided a more robust and accurate interpolation. In particular, the interpolation was more satisfactory in cases of large gaps, due to the method taking into account the global shape of the object, and it recovered better topologies at the extremities of the shapes, where the objects disappear from the image slices. As a result, the complete integrated framework provided more satisfactory shape reconstructions than the sequential approach.
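    The core numerical ingredient of this abstract, interpolating a level set (signed distance) function known only on sparse 2D slices with radial basis functions, can be sketched with off-the-shelf tools. The snippet below is a minimal illustration rather than the authors' implementation: it assumes a thin-plate-spline kernel, toy circular contours on three arbitrary slice positions, and a dense grid whose zero crossing stands in for the reconstructed surface.

```python
# Minimal sketch: RBF interpolation of a level set function from sparse slice samples.
# Toy data only; the paper embeds this interpolation inside a level set evolution.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
slice_z = [0.0, 0.5, 1.0]                       # three arbitrary slice positions
pts, phi = [], []
for z in slice_z:
    xy = rng.uniform(-1.0, 1.0, size=(200, 2))  # scattered sample points on the slice
    pts.append(np.column_stack([xy, np.full(len(xy), z)]))
    phi.append(np.linalg.norm(xy, axis=1) - 0.6)  # signed distance to a toy circle of radius 0.6
pts = np.vstack(pts)
phi = np.concatenate(phi)

# Fit a thin-plate-spline RBF to the sparse level set samples.
rbf = RBFInterpolator(pts, phi, kernel="thin_plate_spline", smoothing=0.0)

# Evaluate the interpolated level set on a dense volume; its zero level set
# is the reconstructed 3D surface between the slices.
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 32),
                            np.linspace(-1, 1, 32),
                            np.linspace(0, 1, 16), indexing="ij"), axis=-1)
phi_dense = rbf(grid.reshape(-1, 3)).reshape(grid.shape[:3])
inside = phi_dense < 0                           # voxels inside the interpolated object
print("interpolated object volume (voxels):", int(inside.sum()))
```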

    Segmentation of Brain Magnetic Resonance Images (MRIs): A Review

    The MR imaging modality has assumed an important position in studying the characteristics of soft tissues. Generally, images acquired with this modality are affected by noise, the partial volume effect (PVE) and intensity nonuniformity (INU). The presence of these factors degrades image quality, and as a result it becomes hard to precisely distinguish between the neighboring regions constituting an image. Various methods have been proposed to address this problem. To study the nature of the proposed state-of-the-art medical image segmentation methods, a review was carried out. This paper presents a brief summary of that review and attempts to analyze the strengths and weaknesses of the proposed methods. The review concludes that, unfortunately, none of the proposed methods has been able to independently address the problem of precise segmentation in its entirety. The paper strongly favors pairing an intensity-restoration module with the segmentation method to produce efficient results.
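    The pairing that the review recommends, intensity restoration followed by segmentation, can be illustrated with standard tools. The sketch below is one possible pairing chosen for illustration, not a method from the review: N4 bias-field correction followed by a simple Otsu multi-threshold segmentation in SimpleITK. The file name and the number of thresholds are placeholders.

```python
# Minimal sketch: restore intensities (N4 bias-field correction), then segment
# the restored brain MRI with Otsu multi-thresholding, using SimpleITK.
import SimpleITK as sitk

# Hypothetical input file name.
image = sitk.Cast(sitk.ReadImage("brain_mri.nii.gz"), sitk.sitkFloat32)

# Rough head/foreground mask so the bias field is estimated only inside the head.
mask = sitk.OtsuThreshold(image, 0, 1, 200)

# Step 1: intensity restoration -- remove the slowly varying bias field (INU).
corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrected = corrector.Execute(image, mask)

# Step 2: segmentation -- three thresholds give four intensity classes
# (background plus a naive CSF / grey matter / white matter split).
otsu = sitk.OtsuMultipleThresholdsImageFilter()
otsu.SetNumberOfThresholds(3)
labels = otsu.Execute(corrected)

sitk.WriteImage(labels, "brain_mri_labels.nii.gz")
```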

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, with a broad audience in mind, the basic evaluation criteria are also discussed. These criteria include the receiver operating characteristic curve (ROC curve), the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Since the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are called for. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to define better diagnostic tools; in particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning operates on medical imaging, i.e., pre-processing, image segmentation and post-processing, is provided in this study. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBM), stacked autoencoders (SAE), convolutional autoencoders (CAE), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the deep learning models successfully applied to different types of cancer. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with a from-scratch overview of the state-of-the-art achievements.
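    Since the review leans on the evaluation criteria listed above, a compact reference implementation may help the reader; the sketch below uses scikit-learn where a metric is available there and computes the Dice coefficient and Jaccard index directly from binary masks. The arrays are toy placeholders, not data from the review.

```python
# Minimal sketch of the listed evaluation criteria, computed on toy binary data.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, accuracy_score, precision_score, recall_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # ground-truth labels (toy)
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])   # classifier probabilities (toy)
y_pred  = (y_score >= 0.5).astype(int)

auc         = roc_auc_score(y_true, y_score)        # area under the ROC curve
f1          = f1_score(y_true, y_pred)
accuracy    = accuracy_score(y_true, y_pred)
precision   = precision_score(y_true, y_pred)
sensitivity = recall_score(y_true, y_pred)           # sensitivity == recall
specificity = recall_score(y_true, y_pred, pos_label=0)  # specificity == recall of the negative class

# Dice coefficient and Jaccard index on binary segmentation masks.
pred_mask = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt_mask   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
intersection = np.logical_and(pred_mask, gt_mask).sum()
dice    = 2 * intersection / (pred_mask.sum() + gt_mask.sum())
jaccard = intersection / np.logical_or(pred_mask, gt_mask).sum()

print(f"AUC={auc:.2f} F1={f1:.2f} acc={accuracy:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
print(f"Dice={dice:.2f} Jaccard={jaccard:.2f}")
```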

    Classification of Alzheimer's Disease and Mild Cognitive Impairment Using Longitudinal FDG-PET Images

    Alzheimer's disease (AD) is the most common cause of degenerative dementia, characterized by insidious onset, early memory loss, language and visuospatial deficits (associated with the destruction of the temporal and parietal lobes), a progressive course, and a lack of neurological signs early in the course of the disease. There is currently no cure for AD, but some treatments can slow down the progression of the disease in its early stages. The ability to diagnose AD at an early stage has a great impact on clinical intervention and treatment planning, and hence reduces the costs associated with long-term care. In addition, discriminating between different stages of dementia is crucial to slowing down the progression of AD. Distinguishing patients with AD, early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and normal controls (NC) is an extremely active research area that has garnered significant attention in the past decade. Positron emission tomography (PET) images are one of the most accessible ways to discriminate between these classes. From a neuroimaging point of view, PET images of fluorodeoxyglucose (FDG) for cerebral glucose metabolism and of amyloid plaques (AV45) are considered highly powerful diagnostic biomarkers, but few approaches have investigated the efficacy of focusing on localized PET-active areas for classification purposes. The main research question of this work is to show the ability of PET images to achieve accurate classification results and to compare the results of two PET imaging methods (FDG and AV45). To find the best scenario for classifying our subjects into AD, EMCI, LMCI, and NC using PET images exclusively, we propose a pipeline that uses features learned from semantically labelled PET images to perform group classification with four classifiers. Support vector machines (SVMs) are already applied in a wide variety of classification problems and are among the most popular techniques for neuroimaging-based classification, such as for AD. Linear SVMs and radial basis function (RBF) SVMs are the two kernels used in our classification. Principal component analysis (PCA) is used to reduce the dimensionality of the data, followed by a linear SVM, as another classification method. Random forests (RF) are also applied to make our SVM results comparable. The general objective of this work is to assemble a set of existing tools for classifying AD and the different stages of MCI.
Following the normalization and pre-processing steps, a multi-modal deformable PET-MRI registration method is proposed to fuse the Montreal Neurological Institute (MNI) atlas to the PET image of each patient, which is registered to its corresponding MRI scan, and a simple atlas-based segmentation method is developed to generate a fully labelled volume with 10 common regions of interest (ROIs). This pipeline can be used in two ways: (1) using voxel intensities from specific regions of interest (multi-region approach), and (2) using voxel intensities from the entire brain (whole-brain approach). The method was tested on 660 subjects from the Alzheimer's Disease Neuroimaging Initiative database and compared to the whole-brain approach. The classification accuracy of AD vs. NC was measured at 91.7% and 91.2% when using RBF-SVM and RF, respectively, on combined multi-region FDG-PET features from cross-sectional and follow-up exams. A considerable improvement over similar works was achieved for EMCI vs. LMCI classification, with an accuracy of 72.5%. The classification accuracy of AD vs. NC using AV45-PET on the combined data was measured at 90.8% and 87.9% using RBF-SVM and RF, respectively. The pipeline demonstrates the potential of exploiting longitudinal multi-region PET features to improve cognitive assessment. High accuracy can be achieved using PET images alone, which suggests that they are a rich source of discriminative information for this task, whereas other methods rely on combining multiple sources.
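    The classification stage described above (PCA for dimensionality reduction, linear and RBF-kernel SVMs, and random forests) maps directly onto standard scikit-learn components. The sketch below is a generic reconstruction under that assumption, with a synthetic feature matrix standing in for the multi-region PET features; it is not the authors' code, and the hyperparameters and feature count are placeholders.

```python
# Sketch of the classification stage: PCA + linear SVM, RBF-SVM, and random forest,
# evaluated with cross-validation on a synthetic stand-in for multi-region PET features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(660, 10))       # placeholder: 660 subjects x 10 ROI features
y = rng.integers(0, 2, size=660)     # placeholder labels, e.g. AD (1) vs NC (0)

classifiers = {
    "PCA + linear SVM": make_pipeline(StandardScaler(), PCA(n_components=5),
                                      SVC(kernel="linear", C=1.0)),
    "RBF-SVM":          make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale")),
    "Random forest":    RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```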

    Deep Learning in Computer-Assisted Maxillofacial Surgery
