
    Combining Shape and Learning for Medical Image Analysis

    Automatic methods with the ability to make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods meet these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities: pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.

    Semi-supervised learning with natural language processing for right ventricle classification in echocardiography - a scalable approach

    We created a deep learning model, trained on text classified by natural language processing (NLP), to assess right ventricular (RV) size and function from echocardiographic images. We included 12,684 examinations with corresponding written reports for text classification. After manual annotation of 1489 reports, we trained an NLP model to classify the remaining 10,651 reports. A view classifier was developed to select the 4-chamber or RV-focused view from an echocardiographic examination (n = 539). The final models were two image classification models trained on the predicted labels from the combined manual annotation and NLP models and the corresponding echocardiographic view to assess RV function (training set n = 11,008) and size (training set n = 9951). The text classifier identified impaired RV function with 99% sensitivity and 98% specificity and RV enlargement with 98% sensitivity and 98% specificity. The view classification model identified the 4-chamber view with 92% accuracy and the RV-focused view with 73% accuracy. The image classification models identified impaired RV function with 93% sensitivity and 72% specificity and an enlarged RV with 80% sensitivity and 85% specificity; agreement with the written reports was substantial (both κ = 0.65). Our findings show that models for automatic image assessment can be trained to classify RV size and function by using model-annotated data from written echocardiography reports. This pipeline for auto-annotation of echocardiographic images, using an NLP model with medical reports as input, can be used to train an image-assessment model without manual annotation of images and enables fast and inexpensive expansion of the training dataset when needed.
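    The auto-annotation idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the tiny word-level naive Bayes text classifier and the miniature report set are hypothetical stand-ins for the NLP model and the 1489 manually annotated reports; the predicted labels would then serve as training targets for the image model.

```python
from collections import Counter, defaultdict
import math

def train_nb(reports, labels):
    """Train a minimal bag-of-words naive Bayes text classifier."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter(labels)
    for text, label in zip(reports, labels):
        word_counts[label].update(text.lower().split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, label_counts, vocab

def predict_nb(model, text):
    """Pick the label with the highest Laplace-smoothed log-probability."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical miniature report set standing in for the annotated reports.
reports = [
    "right ventricle dilated with impaired function",
    "normal right ventricular size and function",
    "severely impaired rv function rv enlarged",
    "rv size normal function normal",
]
labels = ["impaired", "normal", "impaired", "normal"]
model = train_nb(reports, labels)

# Model-predicted labels for "unlabeled" reports become training targets
# for the downstream image classification model.
auto_labels = [predict_nb(model, r) for r in
               ["rv dilated and impaired", "normal rv function"]]
```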

    Development of a novel method to measure bone marrow fat fraction in older women using high-resolution peripheral quantitative computed tomography

    Bone marrow adipose tissue (BMAT) has been implicated in a number of conditions associated with bone deterioration and osteoporosis. Several studies have found an inverse relationship between BMAT and bone mineral density (BMD), and higher levels of BMAT in those with prevalent fracture. Magnetic resonance imaging (MRI) is the gold standard for measuring BMAT, but its use is limited by high costs and low availability. We hypothesized that BMAT could also be accurately quantified using high-resolution peripheral quantitative computed tomography (HR-pQCT). Methods: In the present study, a novel method to quantify the tibia bone marrow fat fraction, defined by MRI, using HR-pQCT was developed. In total, 38 postmenopausal women (mean [standard deviation] age 75.9 [3.1] years) were included and measured at the same site at the distal (n = 38) and ultradistal (n = 18) tibia using both MRI and HR-pQCT. To adjust for partial volume effects, the HR-pQCT images underwent 0 to 10 layers of voxel peeling to remove voxels adjacent to the bone. Linear regression equations were then tested for different degrees of voxel peeling, using the MRI-derived fat fractions as the dependent variable and the HR-pQCT-derived radiodensity as the independent variable. Results: The optimal HR-pQCT-derived model, which applied a minimum of four layers of voxel peeling and more than 1% remaining marrow volume, was able to explain 76% of the variation in the ultradistal tibia bone marrow fat fraction measured with MRI (p < 0.001). Conclusion: The novel HR-pQCT method, developed to estimate BMAT, was able to explain a substantial part of the variation in the bone marrow fat fraction and can be used in future studies investigating the role of BMAT in osteoporosis and fracture prediction.
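    The model-selection step described above, regressing MRI-derived fat fraction on HR-pQCT radiodensity at increasing voxel-peeling depths, can be sketched as follows. All data here are synthetic and the noise model (partial-volume noise shrinking with peeling depth) is an assumption; the sketch only shows how one would pick the peeling depth whose linear model explains the most variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MRI-derived fat fractions (dependent variable) for n subjects.
n = 38
fat_fraction = rng.uniform(0.4, 0.9, n)

def radiodensity(depth):
    """Hypothetical HR-pQCT radiodensity after `depth` layers of voxel
    peeling: deeper peeling removes voxels adjacent to bone, shrinking the
    partial-volume noise term. Fat lowers radiodensity (negative slope)."""
    noise = rng.normal(0, 0.15 / (depth + 1), n)
    return -1.0 * fat_fraction + 0.5 + noise

def r_squared(x, y):
    """Coefficient of determination of a simple linear regression y ~ x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

# Fit one linear model per peeling depth (0..10 layers, as in the study)
# and keep the depth whose model best explains the MRI fat fraction.
scores = {depth: r_squared(radiodensity(depth), fat_fraction)
          for depth in range(11)}
best_depth = max(scores, key=scores.get)
```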

    Improving Multi-Atlas Segmentation Methods for Medical Images

    Semantic segmentation of organs or tissues, i.e. delineating anatomically or physiologically meaningful boundaries, is an essential task in medical image analysis. One particular class of automatic segmentation algorithms has proved to excel at a diverse set of medical applications, namely multi-atlas segmentation. However, these multi-atlas methods exhibit several issues recognized in the literature. Firstly, multi-atlas segmentation requires several computationally expensive image registrations. In addition, the registration procedure needs to be executed with high accuracy in order to enable competitive segmentation results. Secondly, current multi-atlas frameworks require large sets of labelled data to model all possible anatomical variations. Unfortunately, acquisition of manually annotated medical data is time-consuming, which limits their applicability. Finally, standard multi-atlas approaches pose no explicit constraints on the output shape and thus allow for implausibly segmented anatomies. This thesis includes four papers addressing the difficulties associated with multi-atlas segmentation in several ways: by speeding up and increasing the accuracy of feature-based registration methods, by incorporating explicit shape models into the label fusion framework using robust optimization techniques, and by refining the solutions by means of machine learning algorithms, such as random decision forests and convolutional neural networks, taking both performance and data-efficiency into account. The proposed improvements are evaluated on three medical segmentation tasks with vastly different characteristics: pericardium segmentation in cardiac CTA images, region parcellation in brain MRI and multi-organ segmentation in whole-body CT images. Extensive experimental comparisons to previously published methods show promising results on par with or better than the current state of the art.

    Shape-aware label fusion for multi-atlas frameworks

    Despite having no explicit shape model, multi-atlas approaches to image segmentation have proved to be top performers for several diverse datasets and imaging modalities. In this paper, we show how one can directly incorporate shape regularization into the multi-atlas framework. Unlike traditional multi-atlas methods, our proposed approach does not rely on label fusion on the voxel level. Instead, each registered atlas is viewed as an estimate of the position of a shape model. We evaluate and compare our method on two public benchmarks: (i) the VISCERAL Grand Challenge on multi-organ segmentation of whole-body CT images and (ii) the Hammers brain atlas of MR images for segmenting the hippocampus and the amygdala. For this wide spectrum of both easy and hard segmentation tasks, our experimental quantitative results are on par with or better than the state of the art. More importantly, we obtain qualitatively better segmentation boundaries, for instance, preserving topology and fine structures.
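    The core idea, fusing on the shape level rather than the voxel level, can be illustrated with a toy example. The landmark coordinates, the single badly registered atlas, and the choice of a coordinate-wise median as the robust fusion rule are illustrative assumptions, not the paper's exact optimization formulation.

```python
import numpy as np

# Hypothetical landmark estimates: each of 5 registered atlases places the
# same 4 shape-model landmarks in target coordinates (atlas, landmark, xyz).
rng = np.random.default_rng(1)
true_shape = np.array([[10., 20., 30.],
                       [15., 22., 31.],
                       [12., 28., 29.],
                       [11., 25., 33.]])
estimates = true_shape + rng.normal(0, 0.3, (5, 4, 3))
estimates[0] += 8.0  # one badly registered atlas acts as a gross outlier

# Robust fusion on the shape level: the coordinate-wise median across
# atlases ignores the outlier atlas, unlike a mean (or unlike voxel-wise
# majority voting, which degrades near ambiguous boundaries).
fused = np.median(estimates, axis=0)
error = np.abs(fused - true_shape).max()
```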

    Überatlas: Fast and robust registration for multi-atlas segmentation

    Multi-atlas segmentation has become a frequently used tool for medical image segmentation due to its outstanding performance. A computational bottleneck is that all atlas images need to be registered to a new target image. In this paper, we propose an intermediate representation of the whole atlas set – an überatlas – that can be used to speed up the registration process. The representation consists of feature points that are similar and detected consistently throughout the atlas set. A novel feature-based registration method is presented which uses the überatlas to simultaneously and robustly find correspondences and affine transformations to all atlas images. The method is evaluated on 20 CT images of the heart and 30 MR images of the brain with corresponding ground truth. Our approach succeeds in producing better and more robust segmentation results compared to three baseline methods, two intensity-based and one feature-based, and significantly reduces the running times.
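    The correspondence-to-affine step at the heart of such feature-based registration can be sketched as a least-squares fit. The point sets below are synthetic, and the überatlas construction and robust matching are not reproduced here; the sketch only shows how matched feature points determine an affine transformation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3D affine transform mapping src points onto dst points.

    Solves for the 3x4 matrix [A | t] in dst ≈ A @ src + t by running
    lstsq on homogeneous source coordinates."""
    src_h = np.hstack([src, np.ones((len(src), 1))])      # (n, 4)
    params, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # (4, 3)
    return params.T  # (3, 4): linear part A and translation column t

# Hypothetical atlas feature points and their matches in a target image,
# related by a known affine transform (so the recovery can be checked).
rng = np.random.default_rng(2)
src = rng.uniform(0, 100, (50, 3))
A_true = np.array([[1.10, 0.05, 0.00],
                   [0.00, 0.95, 0.10],
                   [0.02, 0.00, 1.05]])
t_true = np.array([5.0, -3.0, 2.0])
dst = src @ A_true.T + t_true

M = fit_affine(src, dst)  # recovers [A_true | t_true]
```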

    Shape-aware multi-atlas segmentation

    Despite having no explicit shape model, multi-atlas approaches to image segmentation have proved to be top performers for several diverse datasets and imaging modalities. In this paper, we show how one can directly incorporate shape regularization into the multi-atlas framework. Unlike traditional methods, our proposed approach does not rely on label fusion on the voxel level. Instead, each registered atlas is viewed as an estimate of the position of a shape model. We evaluate and compare our method on two public benchmarks: (i) the VISCERAL Grand Challenge on multi-organ segmentation of whole-body CT images and (ii) the Hammers brain atlas of MR images for segmenting the hippocampus and the amygdala. For this wide spectrum of both easy and hard segmentation tasks, our experimental quantitative results are on par with or better than the state of the art. More importantly, we obtain qualitatively better segmentation boundaries, for instance, preserving fine structures.

    Good Features for Reliable Registration in Multi-Atlas Segmentation

    This work presents a method for multi-organ segmentation in whole-body CT images based on a multi-atlas approach. A robust and efficient feature-based registration technique is developed which uses sparse organ-specific features that are learnt based on their ability to register different organ types accurately. The best-fitted feature points are used in RANSAC to estimate an affine transformation, followed by a thin-plate spline refinement. This yields an accurate and reliable non-rigid transformation for each organ, which is independent of initialization and hence does not suffer from the local minima problem. Further, this is accomplished at a fraction of the time required by intensity-based methods. The technique is embedded into a standard multi-atlas framework using label transfer and fusion, followed by a random forest classifier which produces the data term for the final graph cut segmentation. For a majority of the classes our approach outperforms the competitors at the VISCERAL Anatomy Grand Challenge on segmentation at ISBI 2015.
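    The RANSAC affine-estimation step described above can be sketched as follows, using 2D points for brevity. The minimal-sample size, iteration count, inlier tolerance and simulated outlier fraction are illustrative choices, not the paper's settings.

```python
import numpy as np

def ransac_affine(src, dst, iters=200, tol=1.0, rng=None):
    """Minimal RANSAC estimate of a 2D affine transform from noisy matches."""
    rng = rng or np.random.default_rng(0)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])
    best_inliers = np.zeros(n, bool)
    for _ in range(iters):
        sample = rng.choice(n, 3, replace=False)  # 3 points fix a 2D affine
        M, *_ = np.linalg.lstsq(src_h[sample], dst[sample], rcond=None)
        resid = np.linalg.norm(src_h @ M - dst, axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the full consensus set of the best hypothesis.
    M, *_ = np.linalg.lstsq(src_h[best_inliers], dst[best_inliers], rcond=None)
    return M.T, best_inliers  # (2, 3) matrix [A | t], inlier mask

# Synthetic feature matches: a known affine transform plus 25% gross
# outliers standing in for mismatched feature points.
rng = np.random.default_rng(3)
src = rng.uniform(0, 100, (60, 2))
A = np.array([[1.05, 0.10], [-0.10, 0.95]])
t = np.array([10.0, -5.0])
dst = src @ A.T + t
dst[:15] += rng.uniform(-50, 50, (15, 2))  # corrupt 15 matches

M, inliers = ransac_affine(src, dst)
```

The least-squares affine alone would be dragged off by the corrupted matches; the consensus step makes the estimate independent of them, which is the property the abstract relies on for initialization-free registration.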