
    Stroke Lesion Segmentation in FLAIR MRI Datasets Using Customized Markov Random Fields

    Robust and reliable stroke lesion segmentation is a crucial step toward employing lesion volume as an independent endpoint for randomized trials. The aim of this work was to develop and evaluate a novel method to segment sub-acute ischemic stroke lesions from fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI) datasets. After preprocessing of the datasets, a Bayesian technique based on Gabor textures extracted from the FLAIR signal intensities is utilized to generate a first estimate of the lesion segmentation. Using this initial segmentation, a customized voxel-level Markov random field model based on intensity as well as Gabor texture features is employed to refine the stroke lesion segmentation. The proposed method was developed and evaluated on 151 multi-center datasets from three different databases using a leave-one-patient-out validation approach. Comparison of the automatically segmented stroke lesions with manual ground truth segmentations revealed an average Dice coefficient of 0.582, which is in the upper range of previously presented lesion segmentation methods using multi-modal MRI datasets. Furthermore, the results obtained by the proposed technique are superior to those obtained by two methods, based on convolutional neural networks and three-phase level sets, respectively, which performed best in the ISLES 2015 challenge using multi-modal imaging datasets. The results of the quantitative evaluation suggest that the proposed method leads to robust lesion segmentation results using only FLAIR MRI datasets as a follow-up sequence.
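    The pipeline described in this abstract (a Gabor-texture-based first estimate, refined by a voxel-level MRF and evaluated with the Dice coefficient) can be illustrated with a minimal 2D sketch. This is not the authors' implementation: the thresholding rule, the Gaussian class models, the 4-neighbourhood Potts prior, and the weight `beta` are illustrative assumptions, and the paper operates on 3D voxels with a customized MRF.

```python
# Minimal sketch of a Gabor-texture initial estimate plus a simple MRF-style
# refinement and Dice evaluation. All parameters here are illustrative.
import numpy as np
from skimage.filters import gabor

def gabor_features(slice_2d, frequencies=(0.1, 0.2), thetas=(0.0, np.pi / 2)):
    """Stack Gabor magnitude responses as per-pixel texture features."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(slice_2d, frequency=f, theta=t)
            feats.append(np.hypot(real, imag))
    return np.stack(feats, axis=-1)

def initial_estimate(slice_2d, feats, z_thresh=2.0):
    """Crude first guess: flag pixels whose mean texture response lies far above
    the image-wide distribution (a stand-in for the paper's Bayesian estimate)."""
    score = feats.mean(axis=-1)
    return (score > score.mean() + z_thresh * score.std()).astype(np.uint8)

def icm_refine(slice_2d, labels, n_iter=5, beta=1.5):
    """Synchronous ICM-style refinement: trade a Gaussian intensity data term
    against a Potts smoothness prior over the 4-neighbourhood."""
    labels = labels.copy()
    for _ in range(n_iter):
        if labels.min() == labels.max():      # only one class left, nothing to refine
            break
        stats = [(slice_2d[labels == c].mean(), slice_2d[labels == c].std() + 1e-6)
                 for c in (0, 1)]
        pad = np.pad(labels, 1, mode="edge")
        neigh = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]])
        energies = []
        for c, (mu, sd) in enumerate(stats):
            data = 0.5 * ((slice_2d - mu) / sd) ** 2 + np.log(sd)   # -log Gaussian likelihood
            smooth = beta * (neigh != c).sum(axis=0)                # disagreeing neighbours
            energies.append(data + smooth)
        labels = np.argmin(np.stack(energies), axis=0).astype(np.uint8)
    return labels

def dice(seg, gt):
    """Dice coefficient between binary masks, the metric used in the evaluation."""
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum() + 1e-8)
```

    With a FLAIR slice `img` and a manual mask `gt`, `dice(icm_refine(img, initial_estimate(img, gabor_features(img))), gt)` produces the kind of per-case score that is averaged in a leave-one-patient-out evaluation such as the one reported above.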

    On Medical Image Segmentation and on Modeling Long Term Dependencies

    The delineation (segmentation) of malignant tumours in medical images is important for cancer diagnosis, the planning of targeted treatments, and the tracking of cancer progression and treatment response. However, although manual segmentation of medical images is accurate, it is time consuming, requires expert operators, and is often impractical with large datasets. This motivates the need for automated segmentation. However, automated segmentation of tumours is particularly challenging due to variability in tumour appearance, image acquisition equipment and acquisition parameters, and variability across patients. Tumours vary in type, size, location, and quantity; the rest of the image varies due to anatomical differences between patients, prior surgery or ablative therapy, differences in contrast enhancement of tissues, and image artefacts. Furthermore, scanner acquisition protocols vary considerably between clinical sites, and image characteristics vary according to the scanner model. Because of all of these variabilities, a segmentation model must be flexible enough to learn general features from the data. The advent of deep convolutional neural networks (CNNs) allowed for accurate and precise classification of highly variable images and, by extension, high-quality image segmentation. However, these models must be trained on enormous quantities of labeled data. This constraint is particularly challenging in the context of medical image segmentation because the number of segmentations that can be produced is limited in practice by the need to employ expert operators to do such labeling. Furthermore, the variabilities of interest in medical images appear to follow a long-tailed distribution, meaning that a particularly large amount of training data may be required to provide a sufficient sample of each type of variability to a CNN. This motivates the need to develop strategies for training these models with the limited ground truth segmentations available.
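    For concreteness, the kind of CNN-based segmentation this abstract refers to can be sketched as a tiny encoder-decoder network trained with a soft Dice loss. This is an assumed, generic setup, not the thesis's architecture or training strategy; the channel widths, depth, and loss choice are all illustrative.

```python
# Minimal sketch (assumed, not the thesis's model): a tiny encoder-decoder CNN
# for binary tumour segmentation with a soft Dice loss.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1),                 # per-pixel lesion logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def soft_dice_loss(logits, target, eps=1e-6):
    """Differentiable Dice loss; a common default when lesions are small
    relative to the background."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(2, 3))
    denom = prob.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

# Usage sketch: one optimisation step on a dummy batch of image/mask tensors.
model = TinySegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(2, 1, 64, 64)                   # stand-in images
y = (torch.rand(2, 1, 64, 64) > 0.9).float()    # stand-in masks
loss = soft_dice_loss(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```

    Any of the limited-label training strategies the thesis motivates would plug into a loop like this one; the sketch only fixes the general CNN-segmentation setup the abstract describes.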

    Machine Learning Methods for Brain Tumor Segmentation

    Malignant brain tumors are the second leading cause of cancer-related deaths in children under 20. There are nearly 700,000 people in the U.S. living with a brain tumor, and 17,000 people are likely to lose their lives to primary malignant brain and central nervous system tumors every year. To identify non-invasively whether a patient has a brain tumor, an MRI scan of the brain is acquired and then examined manually by an expert who looks for lesions (i.e., clusters of cells that deviate from healthy tissue). For treatment purposes, the tumor and its sub-regions are outlined in a procedure known as brain tumor segmentation. Brain tumor segmentation is primarily done manually, which is very time consuming and subject to variation both between observers and within the same observer. To address these issues, a number of automatic and semi-automatic methods have been proposed over the years to help physicians in the decision-making process. Methods based on machine learning have been the subject of great interest in brain tumor segmentation. With the advent of deep learning methods and their success in many computer vision applications such as image classification, these methods have also started to gain popularity in medical image analysis. In this thesis, we explore different machine learning and deep learning methods applied to brain tumor segmentation.

    Deformable models for adaptive radiotherapy planning

    Radiotherapy is the most widely used treatment for cancer, with 4 out of 10 cancer patients receiving radiotherapy as part of their treatment. The delineation of the gross tumour volume (GTV) is a crucial step in radiotherapy treatment planning. An automatic contouring system would be beneficial in radiotherapy planning in order to generate objective, accurate and reproducible GTV contours. Image-guided radiotherapy (IGRT) acquires patient images just before treatment delivery to allow any necessary positional correction. Consequently, a real-time contouring system provides an opportunity to adapt radiotherapy on the treatment day. In this thesis, freely deformable models (FDMs) and shape-constrained deformable models (SCDMs) were used to automatically delineate the GTV for brain cancer and prostate cancer. The level set method (LSM), a typical FDM, was used to contour gliomas on brain MRI. A series of low-level image segmentation methods were cascaded to form a case-wise, fully automatic initialisation pipeline for the level set function. Dice similarity coefficients (DSCs) were used to evaluate the contours. The results showed good agreement between clinical contours and LSM contours; in 93% of cases the DSC was between 60% and 80%. The second significant contribution is a novel extension of the active shape model (ASM): instead of conventional image intensity, a profile feature was selected from pre-computed texture features by minimising the Mahalanobis distance (MD) to obtain the most distinct feature for each landmark. A new group-wise registration scheme was applied to solve the correspondence problem within the training data. This ASM was used to delineate the prostate GTV on CT; DSCs ranged between 0.75 and 0.91, with a mean DSC of 0.81. The last contribution is a fully automatic active appearance model (AAM) which captures the image appearance near the GTV boundary. The appearance of the inner GTV was discarded to avoid the potential disruption caused by brachytherapy seeds or gold markers. This model outperforms the conventional AAM at the prostate base and apex regions by incorporating surrounding organs. The overall mean DSC for this case is 0.85.
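    One plausible reading of the Mahalanobis-distance feature selection described above is sketched below: for each landmark, fit a mean/covariance model to the training profiles of every pre-computed texture feature and keep the feature whose profiles are most compact under the Mahalanobis distance. The array layout, the pooling of per-profile distances, and the use of a pseudo-inverse are illustrative assumptions, not the thesis's code.

```python
# Minimal sketch of Mahalanobis-distance-based profile feature selection for
# one ASM landmark. Shapes and the scoring rule are illustrative assumptions.
import numpy as np

def select_profile_feature(profiles):
    """profiles: array of shape (n_features, n_training_shapes, profile_len)
    holding the sampled profile for one landmark, per candidate texture feature.
    Returns the index of the feature whose training profiles have the smallest
    mean Mahalanobis distance to their own mean/covariance model."""
    best_idx, best_score = None, np.inf
    for f in range(profiles.shape[0]):
        x = profiles[f]                               # (n_shapes, profile_len)
        mean = x.mean(axis=0)
        cov = np.cov(x, rowvar=False)
        cov_inv = np.linalg.pinv(cov)                 # pseudo-inverse for stability
        diffs = x - mean
        md2 = np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs)   # squared MD per profile
        score = np.sqrt(np.clip(md2, 0.0, None)).mean()
        if score < best_score:
            best_idx, best_score = f, score
    return best_idx

# Usage sketch with random stand-in data: 8 candidate texture features,
# 20 training shapes, profiles of 11 samples for a single landmark.
rng = np.random.default_rng(0)
profiles = rng.normal(size=(8, 20, 11))
print(select_profile_feature(profiles))
```

    The pseudo-inverse keeps the distance well defined when the profile covariance is rank-deficient, which is common when only a few training shapes are available.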