50 research outputs found

    Efficient extraction of semantic information from medical images in large datasets using random forests

    No full text
    Large datasets of unlabelled medical images are increasingly becoming available; however, only a small subset tends to be manually semantically labelled, as doing so for large datasets is a tedious and extremely time-consuming task. This thesis aims to tackle the problem of efficiently extracting semantic information, in the form of image segmentations and organ localisations, from large datasets of unlabelled medical images. To do so, we investigate the suitability of supervoxels and random classification forests for the task. The first contribution of this thesis is a novel method for efficiently estimating coarse correspondences between pairs of images that can handle difficult cases exhibiting large variations in fields of view. The proposed method adapts the random forest framework, a supervised learning algorithm, to work in an unsupervised manner by automatically generating training labels via the use of supervoxels. The second contribution extends the first so that it can be applied efficiently to a large dataset of images. The proposed method is efficient and can be used to obtain correspondences between a large number of object-like supervoxels that are representative of organ structures in the images. The method is evaluated on organ-based image retrieval and weakly-supervised image segmentation using minimal user input. While the method does not match the segmentation accuracy of current fully-supervised state-of-the-art methods for all organs in an abdominal CT dataset, it provides a promising way of efficiently extracting and parsing a large dataset of medical images for further processing.
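
    As a rough illustration of the unsupervised adaptation described above, the following sketch (assuming scikit-image's SLIC and scikit-learn's random forest; all names and parameters are illustrative, not the thesis implementation) uses supervoxel memberships computed on one image as automatically generated training labels, then lets a forest trained on simple intensity-plus-position features propose coarse correspondences on a second image:

```python
# Illustrative sketch, not the thesis code: supervoxels (here 2D SLIC
# superpixels for brevity) supply automatically generated labels for a forest.
import numpy as np
from skimage.segmentation import slic          # scikit-image >= 0.19
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Per-pixel features: intensity plus normalised spatial coordinates."""
    yy, xx = np.meshgrid(*(np.linspace(0, 1, s) for s in img.shape),
                         indexing="ij")
    return np.column_stack([img.ravel(), yy.ravel(), xx.ravel()])

def coarse_correspondence(img_a, img_b, n_segments=200):
    # Superpixels of image A act as the (automatically generated) class labels.
    labels_a = slic(img_a, n_segments=n_segments, compactness=0.1,
                    channel_axis=None)
    forest = RandomForestClassifier(n_estimators=50).fit(
        pixel_features(img_a), labels_a.ravel())
    # Each pixel of image B votes for a superpixel of image A, yielding a
    # coarse correspondence map between the two images.
    return forest.predict(pixel_features(img_b)).reshape(img_b.shape)
```

    The thesis works with 3D supervoxels and richer features; the sketch only conveys the label-generation trick that turns the supervised forest into an unsupervised tool.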

    Feature-sensitive and Adaptive Image Triangulation: A Super-pixel-based Scheme for Image Segmentation and Mesh Generation

    Get PDF
    With the increasing utilization of various imaging techniques (such as CT, MRI and PET) in medical fields, there is often a great need to computationally extract the boundaries of objects of interest, a process commonly known as image segmentation. While numerous approaches to automatic/semi-automatic image segmentation have been proposed in the literature, most of them are based on image pixels. The number of pixels in an image can be huge, especially for 3D imaging volumes, which renders pixel-based image segmentation inevitably slow. On the other hand, 3D mesh generation from imaging data has become important not only for visualization and quantification but, more critically, for finite-element-based numerical simulation. Traditionally, image-based mesh generation follows a procedure of: (1) image boundary segmentation, (2) surface mesh generation from segmented boundaries, and (3) volumetric (e.g., tetrahedral) mesh generation from surface meshes. These three major steps have commonly been treated as separate algorithms, so image information, once segmented, is no longer considered during mesh generation. In this thesis, we investigate a super-pixel-based scheme that integrates image segmentation and mesh generation into a single method, making mesh generation a truly image-incorporated approach. Our method, called image content-aware mesh generation, consists of several main steps. First, we generate a set of feature-sensitive, adaptively distributed points from 2D grayscale images or 3D volumes. A novel image edge enhancement method via randomized shortest paths is introduced as an optional way to generate the feature boundary map in the mesh-node generation step. Second, a Delaunay triangulation generator (2D) or tetrahedral mesh generator (3D) is used to generate a 2D triangulation or 3D tetrahedral mesh. The generated triangulation (or tetrahedralization) provides an adaptive partitioning of the given image (or volume). Each cluster of pixels within a triangle (or voxels within a tetrahedron) is called a super-pixel; each super-pixel forms a node of a graph, and adjacent super-pixels define its edges. A graph-cut method is then applied to the graph to define the boundary between two subsets of the graph, resulting in good boundary segmentations with high-quality meshes. Because the number of elements (super-pixels) is drastically smaller than the number of pixels in an image, the super-pixel-based segmentation method tremendously improves segmentation speed, making real-time feature detection feasible. In addition, incorporating image segmentation into mesh generation makes the generated mesh well adapted to image features, a desired property known as feature-preserving mesh generation.
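
    The abstract outlines the pipeline but not the code; the sketch below (Python with NumPy/SciPy; the sampling rule and all parameters are assumptions for illustration) shows the 2D core of the idea: sample mesh nodes with density proportional to gradient magnitude, Delaunay-triangulate them, and read each triangle off as a super-pixel ready for graph construction:

```python
# Illustrative 2D sketch of feature-adaptive triangulation: node density
# follows gradient magnitude, and each Delaunay triangle becomes a super-pixel.
import numpy as np
from scipy.ndimage import sobel
from scipy.spatial import Delaunay

def adaptive_superpixels(img, n_nodes=500, seed=0):
    rng = np.random.default_rng(seed)
    # Feature-sensitive sampling: probability proportional to edge strength.
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    prob = grad.ravel() + 1e-6
    idx = rng.choice(img.size, size=n_nodes, replace=False, p=prob / prob.sum())
    nodes = np.column_stack(np.unravel_index(idx, img.shape)).astype(float)
    tri = Delaunay(nodes)
    # Assign every pixel to its containing triangle; each triangle is one
    # super-pixel (-1 marks pixels outside the convex hull of the nodes).
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    pixels = np.column_stack([ys.ravel(), xs.ravel()])
    return tri.find_simplex(pixels).reshape(img.shape), tri
```

    A graph with one node per triangle and edges between triangles sharing a side would then carry the graph cut described in the abstract.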

    Brain MR Image Segmentation: From Multi-Atlas Method To Deep Learning Models

    Get PDF
    Quantitative analysis of brain structures on magnetic resonance (MR) images plays a crucial role in examining brain development and abnormality, as well as in aiding treatment planning. Although manual delineation is commonly considered the gold standard, it suffers from low efficiency and inter-rater variability. Therefore, developing automatic anatomical segmentation of the human brain is important for providing a tool for quantitative analysis (e.g., volume measurement, shape analysis, cortical surface mapping). Despite a large number of existing techniques, automatic segmentation of brain MR images remains challenging due to the complexity of the brain's anatomical structures and the great inter- and intra-individual variability among them. To address these challenges, four methods are proposed in this thesis. The first work proposes a novel label fusion scheme for multi-atlas segmentation: a two-stage majority voting scheme is developed to address the over-segmentation problem in hippocampus segmentation of brain MR images. The second work develops a supervoxel graphical model for whole-brain segmentation, in order to relieve the dependence of multi-atlas segmentation methods on complicated pairwise registration. Based on the assumption that voxels within a supervoxel share the same label, the proposed method converts the voxel labeling problem into a supervoxel labeling problem, solved by maximum-a-posteriori (MAP) inference in a Markov random field (MRF) defined on supervoxels. The third work incorporates an attention mechanism into convolutional neural networks (CNNs), aiming to learn the spatial dependencies between the shallow and deep layers of a CNN and to aggregate the attended local features with high-level features for more precise segmentation results. The fourth method takes advantage of the success of CNNs in computer vision, combines the strength of graphical models with CNNs, and integrates them into an end-to-end training network. The proposed methods are evaluated on public MR image datasets such as MICCAI2012, LPBA40, and IBSR. Extensive experiments demonstrate the effectiveness and superior performance of the proposed methods compared with other state-of-the-art methods.
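
    As a concrete, hedged reading of the second work's supervoxel MRF, the sketch below performs approximate MAP labelling on a supervoxel adjacency graph with a Potts pairwise term. It uses iterated conditional modes (ICM) purely for brevity; the thesis does not state which solver is used:

```python
# Approximate MAP labelling of a supervoxel graph with a Potts MRF, solved
# here by iterated conditional modes (ICM) for brevity (an assumption, not
# the thesis's stated inference method).
import numpy as np

def icm_supervoxel_map(unary, edges, beta=1.0, n_iters=10):
    """unary: (n_supervoxels, n_labels) negative log-likelihoods.
    edges: iterable of (i, j) index pairs of adjacent supervoxels."""
    n, k = unary.shape
    neighbours = [[] for _ in range(n)]
    for i, j in edges:
        neighbours[i].append(j)
        neighbours[j].append(i)
    labels = unary.argmin(axis=1)              # initialise from unaries alone
    for _ in range(n_iters):
        for i in range(n):
            cost = unary[i].astype(float)
            for j in neighbours[i]:
                # Potts pairwise term: pay beta whenever neighbours disagree.
                cost += beta * (np.arange(k) != labels[j])
            labels[i] = cost.argmin()
    return labels
```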

    Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies

    Get PDF
    Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common to many existing approaches and formulate its solution as a Markov random field (MRF) energy minimisation problem on a graph connecting the atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications to the graph configuration of the proposed framework enable the use of partially annotated atlas images, and we investigate different partial annotation strategies. The proposed method was evaluated on two magnetic resonance imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed that (1) recreate existing segmentation techniques within the proposed framework and (2) demonstrate the potential of employing sparsely annotated atlas data for multi-atlas segmentation.
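
    The abstract does not spell out the energy; a generic form of the kind referred to, with illustrative notation (a labelling l over graph nodes V spanning atlas and target voxels, edges E, and a weight λ introduced here for illustration), is:

```latex
E(\mathbf{l}) \;=\; \sum_{p \in \mathcal{V}} \psi_p(l_p)
            \;+\; \lambda \sum_{(p,q) \in \mathcal{E}} \psi_{pq}(l_p, l_q)
```

    where the unary term ψ_p measures agreement of a node's label with the image or atlas data, and the pairwise term ψ_pq propagates labels along graph edges. Under such a formulation, partial annotation plausibly amounts to dropping the unary terms at unlabelled atlas nodes.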

    Unsupervised brain anomaly detection in MR images

    Get PDF
    Brain disorders are characterized by morphological deformations in the shape and size of (sub)cortical structures in one or both hemispheres. These deformations cause deviations from the normal pattern of brain asymmetries, resulting in asymmetric lesions that directly affect the patient's condition. Unsupervised methods aim to learn a model from unlabeled healthy images, so that an unseen image that violates the priors of this model, i.e., an outlier, is considered an anomaly. Consequently, they are generic in detecting any lesions, e.g., those arising from multiple diseases, as long as these differ notably from the healthy training images. This thesis addresses the development of solutions that leverage unsupervised machine learning for the detection and analysis of abnormal brain asymmetries related to anomalies in magnetic resonance (MR) images. First, we propose an automatic probabilistic-atlas-based approach for anomalous brain image segmentation. Second, we explore an automatic method for detecting abnormal hippocampi from abnormal asymmetries, based on deep generative networks and a one-class classifier. Third, we present a more generic framework to detect abnormal asymmetries across the entire brain hemispheres. Our approach extracts pairs of symmetric regions, called supervoxels, in both hemispheres of a test image under study; one-class classifiers then analyze the asymmetries present in each pair. Experimental results on 3D MR-T1 images from healthy subjects and patients with a variety of lesions show the effectiveness and robustness of the proposed unsupervised approaches for brain anomaly detection.
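
    To make the final stage concrete, here is a minimal sketch (using scikit-learn's one-class SVM; the features and names are illustrative assumptions, not the thesis's descriptors) of training a one-class classifier on asymmetry features from healthy subjects and flagging anomalous test pairs:

```python
# Illustrative sketch: one-class classification of supervoxel-pair asymmetry
# features (the features below are toy statistics, not the thesis descriptors).
import numpy as np
from sklearn.svm import OneClassSVM

def asymmetry_features(left, right):
    """Difference of simple intensity statistics between a region and its
    mirrored counterpart in the opposite hemisphere."""
    stats = lambda r: np.array([r.mean(), r.std(), np.median(r)])
    return stats(left) - stats(right)

def fit_and_score(healthy_pairs, test_pairs, nu=0.05):
    # healthy_pairs / test_pairs: lists of (left, right) intensity arrays.
    X = np.array([asymmetry_features(l, r) for l, r in healthy_pairs])
    clf = OneClassSVM(nu=nu, kernel="rbf").fit(X)
    Y = np.array([asymmetry_features(l, r) for l, r in test_pairs])
    return clf.predict(Y)   # -1 flags an abnormal (anomalous) asymmetry
```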

    Automatic Segmentation of the Lumbar Spine from Medical Images

    Get PDF
    Segmentation of the lumbar spine in 3D is a necessary step in numerous medical applications, but it remains a challenging problem for computational methods due to the complex and varied shape of the anatomy and the noise and other artefacts often present in the images. While manual annotation of anatomical objects such as vertebrae is often carried out with the aid of specialised software, obtaining even a single example can be extremely time-consuming. Automating the segmentation process is the only feasible way to obtain accurate and reliable segmentations on any large scale. This thesis describes an approach for automatic segmentation of the lumbar spine from medical images, specifically those acquired using magnetic resonance imaging (MRI) and computed tomography (CT). The segmentation problem is formulated as one of assigning class labels to clustered local regions of an image (called superpixels in 2D or supervoxels in 3D). Features are introduced in 2D and 3D which can be used to train a classifier for estimating the class labels of the superpixels or supervoxels. Spatial context is introduced by incorporating the class estimates into a conditional random field along with a learned pairwise metric. Inference over the resulting model can be carried out very efficiently, enabling an accurate pixel- or voxel-level segmentation to be recovered from the labelled regions. In contrast to most previous work in the literature, the approach does not rely on explicit prior shape information. It therefore avoids many of the problems associated with such methods, including the need to construct a representative prior model of anatomical shape from training data and the approximate nature of the optimisation. The general-purpose nature of the proposed method means that it can accurately segment both vertebrae and intervertebral discs from medical images without fundamental changes to the model. Evaluation shows the approach to be accurate and robust in the presence of significant anatomical variation. The median average symmetric surface distances for 2D vertebra segmentation were 0.27 mm on MRI data and 0.02 mm on CT data; for 3D vertebra segmentation they were 0.90 mm on MRI data and 0.20 mm on CT data; and for 3D intervertebral disc segmentation a median surface distance of 0.54 mm was obtained on MRI data.
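
    The surface-distance figures above can, in principle, be reproduced with a standard metric; a minimal sketch of the average symmetric surface distance (ASSD) for binary masks, assuming isotropic voxel spacing, is:

```python
# Minimal sketch of the average symmetric surface distance (ASSD) between two
# binary masks, assuming isotropic voxel spacing (in mm).
import numpy as np
from scipy import ndimage

def surface(mask):
    """Boundary voxels: mask voxels with at least one background neighbour."""
    return mask & ~ndimage.binary_erosion(mask)

def assd(pred, ref, spacing=1.0):
    sp, sr = surface(pred.astype(bool)), surface(ref.astype(bool))
    # Distance of every voxel to the nearest surface voxel of the other mask.
    d_to_ref = ndimage.distance_transform_edt(~sr) * spacing
    d_to_pred = ndimage.distance_transform_edt(~sp) * spacing
    return (d_to_ref[sp].sum() + d_to_pred[sr].sum()) / (sp.sum() + sr.sum())
```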

    Combining Shape and Learning for Medical Image Analysis

    Get PDF
    Automatic methods with the ability to make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods meet these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities: pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound, and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.

    Liver segmentation using 3D CT scans.

    Get PDF
    Master of Science in Computer Science. University of KwaZulu-Natal, Durban, 2018. Abstract available in PDF file.

    Research on Statistical Methods for the Segmentation of Multiple Objects in Abdominal CT Images

    Get PDF
    Computer-aided diagnosis (CAD) is the use of computer-generated output as an auxiliary tool to assist efficient interpretation and accurate diagnosis. Medical image segmentation plays an essential role in CAD in clinical applications. Generally, the task of medical image segmentation involves multiple objects, such as organs or diffuse tumor regions. Moreover, segmenting these regions from abdominal computed tomography (CT) images is very difficult because of the overlap in intensity and the variability in position and shape of soft tissues. In this thesis, a progressive segmentation framework is proposed to extract liver and tumor regions from CT images more efficiently; it comprises the steps of coarse segmentation of multiple organs, fine segmentation, and liver tumor segmentation. Benefiting from prior knowledge of the shape and its deformation, a statistical shape model (SSM) method is first utilized to segment multiple organ regions robustly. In the process of building an SSM, the correspondence of landmarks is crucial to the quality of the model. To generate a more representative prototype of the organ surface, a k-means clustering method is proposed; the quality of the SSMs, measured by generalization ability, specificity, and compactness, was improved. We further extend the shape correspondence to multiple objects: a non-rigid iterative-closest-point surface registration process is proposed to find more properly corresponding landmarks across the multi-organ surfaces, improving both the accuracy of surface registration and the model quality. Moreover, to localize the abdominal organs simultaneously, we propose a random forest regressor operating on intensity features to predict the positions of multiple organs in a CT image; the organ regions are then substantially constrained using the trained shape models. The accuracy of coarse segmentation using SSMs was increased by this initial information on organ positions. Subsequently, a voxel-wise fine segmentation of multiple organs is performed by classifying supervoxels: intensity and spatial features are extracted from each supervoxel and classified by a trained random forest, and the resulting supervoxel boundaries lie closer to the real organs than those of the preceding coarse segmentation. Finally, we developed a hybrid framework for liver tumor segmentation in multi-phase images. To deal with the issues of distinguishing and delineating tumor regions and peripheral tissues, this task is accomplished in two steps: a cascade region-based convolutional neural network (R-CNN) with a refined head is trained to locate the bounding boxes that contain tumors, and a phase-sensitive noise filtering step refines the subsequent segmentation of tumor regions conducted by a level-set-based framework. The tumor detection results show that adjacent tumors are successfully separated by the improved cascade R-CNN, and the accuracy of tumor segmentation is also improved by the proposed method. Twenty-six cases of multi-phase CT images were used to validate the proposed method for liver tumor segmentation. The average precision and recall rates for tumor detection are 76.8% and 84.4%, respectively. The intersection over union, true positive rate, and false positive rate for tumor segmentation are 72.7%, 76.2%, and 4.75%, respectively.
    Kyushu Institute of Technology doctoral dissertation (degree number: 工博甲第546号; conferred March 25, 2022). Contents: 1 Introduction | 2 Literature Review | 3 Statistical Shape Model Building | 4 Multi-organ Segmentation | 5 Liver Tumors Segmentation | 6 Summary and Outlook.
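
    The SSM-building step can be illustrated with the textbook point-distribution-model recipe; the sketch below (NumPy; shapes assumed already aligned and corresponded, e.g. by Procrustes analysis, and all names illustrative) extracts a mean shape and principal modes of variation via PCA:

```python
# Textbook point-distribution-model sketch of SSM building: PCA over
# corresponded landmark sets (assumed already aligned; not the thesis code).
import numpy as np

def build_ssm(shapes, var_kept=0.95):
    """shapes: (n_shapes, n_landmarks * 3) flattened corresponded landmarks."""
    mean = shapes.mean(axis=0)
    # Principal modes of shape variation via SVD of the centred data.
    _, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    var = s**2 / (len(shapes) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
    return mean, vt[:k], np.sqrt(var[:k])   # mean shape, modes, mode std devs

def synthesize(mean, modes, stds, b):
    """New plausible shape from coefficients b (in units of std deviations)."""
    return mean + (b * stds) @ modes
```

    Generalization ability, specificity, and compactness, the quality measures named above, are then evaluated over the retained modes.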

    The Liver Tumor Segmentation Benchmark (LiTS)

    Full text link
    In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2016 and the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2017. Twenty-four valid state-of-the-art liver and liver tumor segmentation algorithms were applied to a set of 131 computed tomography (CT) volumes with different types of tumor contrast (hyper-/hypo-intense), tissue abnormalities (e.g., after metastasectomy), sizes, and varying numbers of lesions. The submitted algorithms were tested on 70 undisclosed volumes. The dataset was created in collaboration with seven hospitals and research institutions and manually reviewed by three independent radiologists. We found that no single algorithm performed best for both the liver and tumors. The best liver segmentation algorithm achieved a Dice score of 0.96 (MICCAI), whereas the best tumor segmentation algorithms achieved 0.67 (ISBI) and 0.70 (MICCAI). The LiTS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
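
    For reference, the Dice score used to rank entries is straightforward to compute for binary masks; a minimal sketch:

```python
# Minimal sketch of the Dice score for two binary masks of identical shape.
import numpy as np

def dice(pred, ref, eps=1e-8):
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)
```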