
    NiftyNet: a deep-learning platform for medical imaging

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. Thus, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. NiftyNet enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.
    Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; Update includes additional applications, updated author list and formatting for journal submission.
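    The abstract mentions loss functions tailored to medical image analysis; a soft Dice loss is one such loss of the kind NiftyNet provides. Below is a minimal numpy sketch of the idea, not NiftyNet's actual API; the function name and tensor shapes are illustrative.

```python
import numpy as np

def soft_dice_loss(probs, labels, eps=1e-6):
    """Soft Dice loss for multi-class segmentation.

    probs:  (batch, *spatial, classes) predicted class probabilities
    labels: same shape, one-hot ground truth
    """
    spatial_axes = tuple(range(1, probs.ndim - 1))     # sum over spatial dims only
    intersection = np.sum(probs * labels, axis=spatial_axes)
    denom = np.sum(probs, axis=spatial_axes) + np.sum(labels, axis=spatial_axes)
    dice = (2.0 * intersection + eps) / (denom + eps)  # per batch item and class
    return 1.0 - dice.mean()                           # minimize 1 - mean Dice
```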

    Combining Shape and Learning for Medical Image Analysis

    Automatic methods with the ability to make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods succeed in meeting these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities, i.e. pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.
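    As a concrete anchor for the shape-modelling side, a common building block of multi-atlas segmentation is label fusion over atlases already registered to the target image. A minimal sketch, assuming simple per-voxel majority voting (the thesis may use a more sophisticated fusion rule):

```python
import numpy as np

def majority_vote_fusion(warped_atlas_labels):
    """Fuse integer label volumes from several atlases, all already
    registered (warped) into the target image space, by per-voxel vote."""
    stacked = np.stack(warped_atlas_labels, axis=0)    # (n_atlases, *volume)
    n_classes = int(stacked.max()) + 1
    # count, for each voxel, how many atlases vote for each class
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)                        # fused label volume
```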

    DeepATLAS: One-Shot Localization for Biomedical Data

    This paper introduces the DeepATLAS foundational model for localization tasks in the domain of high-dimensional biomedical data. Upon convergence of the proposed self-supervised objective, a pretrained model maps an input to an anatomically consistent embedding from which any point or set of points (e.g., boxes or segmentations) may be identified in a one-shot or few-shot approach. As a representative benchmark, a DeepATLAS model pretrained on a comprehensive cohort of 51,000+ unlabeled 3D computed tomography exams yields high one-shot segmentation performance on over 50 anatomic structures across four different external test sets, either matching or exceeding the performance of a standard supervised learning model. Further improvements in accuracy can be achieved by adding a small amount of labeled data using either a semisupervised or more conventional fine-tuning strategy.
    Comment: 18 pages.
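    The abstract does not spell out the one-shot inference rule; one plausible reading is that labels from a single annotated exemplar are transferred to a new scan via nearest-neighbour matching in the shared anatomical embedding space. A minimal sketch under that assumption (array shapes and the brute-force search are illustrative only):

```python
import numpy as np

def one_shot_segment(embed_new, embed_ref, labels_ref):
    """Transfer labels from one annotated exemplar to a new scan.

    embed_new:  (V_new, D) per-voxel embeddings of the unlabeled scan
    embed_ref:  (V_ref, D) per-voxel embeddings of the labeled exemplar
    labels_ref: (V_ref,)   exemplar labels, one per reference voxel
    """
    # Brute-force nearest neighbour in embedding space; a KD-tree or
    # chunked search would be needed at realistic volume sizes.
    d2 = ((embed_new[:, None, :] - embed_ref[None, :, :]) ** 2).sum(axis=-1)
    return labels_ref[d2.argmin(axis=1)]
```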

    Improving patient-specific assessments of regional aortic mechanics via quantitative magnetic resonance imaging with early applications in patients at elevated risk for thoracic aortopathy

    Unstable aortic aneurysms and dissections are serious cardiovascular conditions associated with high mortality. The current gold standards for assessment of stability, however, rely on simple geometric measurements, like cross-sectional area or increased diameter between follow-up scans, and fail to incorporate information about underlying aortic mechanics. Displacement encoding with stimulated echoes (DENSE) magnetic resonance imaging (MRI) has been used previously to determine heterogeneous circumferential strain patterns in the aortas of healthy volunteers. Here, I introduce technical improvements to DENSE aortic analysis and an early pilot application in patients at higher risk for the development of aortopathies. Modifications to the DENSE aortic post-processing method, involving the separate spatial smoothing of the inner and outer layers of the aortic wall, allowed for the preservation of radial and shear strains without impacting circumferential strain calculations. The implementation of a semiautomatic segmentation approach utilizing the intrinsic kinematic information provided by DENSE MRI reduced lengthy post-processing times while generating circumferential strain distributions with good agreement to a manually generated benchmark. Finally, a new analysis pipeline for the combined use and spatial correlation of 4D phase-contrast MRI alongside DENSE MRI to quantify both regional fluid and solid mechanics in the descending aorta is explored in a limited pilot study.
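    The abstract does not reproduce the strain formulas. A common route from a DENSE displacement field to circumferential strain is to form the Green-Lagrange strain tensor and project it onto the local circumferential direction about the lumen centroid; the sketch below assumes that route, a regular 2D grid, and illustrative names throughout (it is not necessarily the thesis' pipeline):

```python
import numpy as np

def green_strain_2d(ux, uy, spacing=1.0):
    """Green-Lagrange strain tensor field from a 2D displacement field.

    ux, uy: displacement components on a regular grid (rows = y, cols = x)
    """
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    F = np.empty(ux.shape + (2, 2))                 # F = I + grad(u)
    F[..., 0, 0] = 1.0 + dux_dx
    F[..., 0, 1] = dux_dy
    F[..., 1, 0] = duy_dx
    F[..., 1, 1] = 1.0 + duy_dy
    C = np.einsum('...ki,...kj->...ij', F, F)       # right Cauchy-Green, F^T F
    return 0.5 * (C - np.eye(2))                    # E = (C - I) / 2

def circumferential_strain(E, cx, cy):
    """Project the strain tensor onto the circumferential direction,
    tangent to circles about the lumen centroid at (cx, cy)."""
    yy, xx = np.mgrid[0:E.shape[0], 0:E.shape[1]]
    theta = np.arctan2(yy - cy, xx - cx)
    ec = np.stack([-np.sin(theta), np.cos(theta)], axis=-1)  # (e_x, e_y)
    return np.einsum('...i,...ij,...j->...', ec, E, ec)
```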

    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    The medical image analysis field has traditionally been focused on the development of organ- and disease-specific methods. Recently, the interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art on multi-organ analysis and associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multi-organ and multi-anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare.
    Comment: Paper under review.
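    The review's taxonomy starts at point distribution models. For readers unfamiliar with the technique, a minimal sketch of fitting one by PCA over landmark sets (assuming the shapes have already been aligned, e.g. by Procrustes analysis; names and the number of modes are illustrative):

```python
import numpy as np

def fit_pdm(aligned_shapes, n_modes=5):
    """Point distribution model: PCA over pre-aligned landmark sets.

    aligned_shapes: (n_subjects, n_landmarks * dim) flattened coordinates,
    already aligned, e.g. by Procrustes analysis.
    """
    mean = aligned_shapes.mean(axis=0)
    _, s, vt = np.linalg.svd(aligned_shapes - mean, full_matrices=False)
    modes = vt[:n_modes]                                   # principal shape modes
    variances = s[:n_modes] ** 2 / (len(aligned_shapes) - 1)
    return mean, modes, variances

def synthesize(mean, modes, variances, b):
    """Plausible new shape: mean plus modes weighted by b (in std devs)."""
    return mean + (b * np.sqrt(variances)) @ modes
```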

    Multi-Contrast Computed Tomography Atlas of Healthy Pancreas

    With the substantial diversity in population demographics, such as differences in age and body composition, the volumetric morphology of the pancreas varies greatly, resulting in distinctive variations in shape and appearance. Such variations increase the difficulty of generalizing population-wide pancreas features. A volumetric spatial reference is needed to adapt the morphological variability for organ-specific analysis. Here, we proposed a high-resolution computed tomography (CT) atlas framework specifically optimized for the pancreas organ across multi-contrast CT. We introduce a deep learning-based pre-processing technique to extract the abdominal regions of interest (ROIs) and leverage a hierarchical registration pipeline to align the pancreas anatomy across populations. Briefly, DEEDS affine and non-rigid registration are performed to transfer patient abdominal volumes to a fixed high-resolution atlas template. To generate and evaluate the pancreas atlas template, multi-contrast modality CT scans of 443 subjects (without reported history of pancreatic disease, age: 15-50 years old) are processed. Compared with different state-of-the-art registration tools, the combination of DEEDS affine and non-rigid registration achieves the best performance for the pancreas label transfer across all contrast phases. We further perform external evaluation with another research cohort of 100 de-identified portal venous scans with 13 organs labeled, achieving the best label transfer performance of 0.504 Dice score in an unsupervised setting. The qualitative representation (e.g., average mapping) of each phase creates a clear boundary of the pancreas and its distinctive contrast appearance. The deformation surface renderings across scales (e.g., small to large volume) further illustrate the generalizability of the proposed atlas template.
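    Label transfer here is scored with the Dice overlap between the transferred and reference organ masks. A minimal sketch of that metric (function names are illustrative):

```python
import numpy as np

def dice_score(pred, truth, label):
    """Dice overlap of one organ label between transferred and reference masks."""
    p, t = pred == label, truth == label
    denom = p.sum() + t.sum()
    return 2.0 * (p & t).sum() / denom if denom else float('nan')

def mean_dice(pred, truth, labels):
    """Average Dice across the labeled organs (e.g., the 13 organs here)."""
    return float(np.nanmean([dice_score(pred, truth, l) for l in labels]))
```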

    Segmentation and Fracture Detection in CT Images for Traumatic Pelvic Injuries

    In recent decades, more types and quantities of medical data have been collected due to advanced technology. A large amount of significant and critical information is contained in these medical data. Highly efficient and automated computational methods are urgently needed to process and analyze all available medical data in order to provide physicians with recommendations and predictions on diagnostic decisions and treatment planning. Traumatic pelvic injury is a severe yet common injury in the United States, often caused by motor vehicle accidents or falls. Information contained in pelvic Computed Tomography (CT) images is very important for assessing the severity and prognosis of traumatic pelvic injuries. Each pelvic CT scan includes a large number of slices, and each slice contains a large quantity of data that cannot be thoroughly analyzed via simple visual inspection with the desired accuracy and speed. Hence, a computer-assisted pelvic trauma decision-making system is needed to assist physicians in making accurate diagnostic decisions and determining treatment planning in a short period of time. Pelvic bone segmentation is a vital step in analyzing pelvic CT images and assisting physicians with diagnostic decisions in traumatic pelvic injuries. In this study, a new hierarchical segmentation algorithm is proposed to automatically extract multiple-level bone structures using a combination of anatomical knowledge and computational techniques. First, morphological operations, image enhancement, and edge detection are performed for preliminary bone segmentation. The proposed algorithm then uses a template-based best shape matching method that provides an entirely automated segmentation process. This is followed by the proposed Registered Active Shape Model (RASM) algorithm that extracts pelvic bone tissues using more robust training models than the standard ASM algorithm. In addition, a novel hierarchical initialization process for RASM is proposed in order to address the shortcoming of the standard ASM, i.e. its high sensitivity to initialization. Two suitable measures, Mean Distance and Mis-segmented Area, are defined to quantify segmentation accuracy. Successful segmentation results indicate the effectiveness and robustness of the proposed algorithm. A comparison of segmentation performance is also conducted between the proposed method and the Snake method. A cross-validation process is designed to demonstrate the effectiveness of the training models. 3D pelvic bone models are built after pelvic bone structures are segmented from consecutive 2D CT slices. Automatic and accurate detection of fractures from segmented bones in traumatic pelvic injuries can help physicians assess the severity of injuries in patients. The extraction of fracture features (such as the presence and location of fractures), as well as fracture displacement measurement, is vital for assisting physicians in making faster and more accurate decisions. In this project, after bone segmentation, fracture detection is performed using a hierarchical algorithm based on wavelet transformation, adaptive windowing, boundary tracing and masking. Also, a quantitative measure of fracture severity based on pelvic CT scans is defined and explored. The results are promising, demonstrating that the proposed method is not only capable of automatically detecting both major and minor fractures, but also has the potential to be used in clinical applications.
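    The exact definitions of Mean Distance and Mis-segmented Area are not given in the abstract; plausible forms are a mean boundary-to-boundary distance and a mask-disagreement count. A sketch under those assumptions (not necessarily the thesis' definitions):

```python
import numpy as np
from scipy import ndimage

def mean_boundary_distance(seg, ref):
    """Mean distance from segmentation-boundary voxels to the reference
    boundary; assumes both binary masks are non-empty."""
    def boundary(mask):
        return mask & ~ndimage.binary_erosion(mask)

    bs, br = boundary(seg.astype(bool)), boundary(ref.astype(bool))
    dist_to_ref = ndimage.distance_transform_edt(~br)  # distance to ref boundary
    return float(dist_to_ref[bs].mean())

def missegmented_area(seg, ref):
    """Voxels where the two masks disagree (symmetric difference)."""
    return int(np.logical_xor(seg.astype(bool), ref.astype(bool)).sum())
```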

    Machine Learning in Medical Image Analysis

    Machine learning is playing a pivotal role in medical image analysis. Many algorithms based on machine learning have been applied in medical imaging to solve classification, detection, and segmentation problems. Particularly, with the wide application of deep learning approaches, the performance of medical image analysis has been significantly improved. In this thesis, we investigate machine learning methods for two key challenges in medical image analysis: the first is segmentation of medical images; the second is learning with weak supervision in the context of medical imaging. The first main contribution of the thesis is a series of novel approaches for image segmentation. First, we propose a framework based on multi-scale image patches and random forests to segment small vessel disease (SVD) lesions on computed tomography (CT) images. This framework is validated in terms of spatial similarity, estimated lesion volumes, and visual score ratings, and was compared with human experts. The results showed that the proposed framework performs as well as human experts. Second, we propose a generic convolutional neural network (CNN) architecture called the DRINet for medical image segmentation. The DRINet approach is robust in three different types of segmentation tasks: multi-class cerebrospinal fluid (CSF) segmentation on brain CT images, multi-organ segmentation on abdominal CT images, and multi-class tumour segmentation on brain magnetic resonance (MR) images. Finally, we propose a CNN-based framework to segment acute ischemic lesions on diffusion-weighted (DW) MR images, where the lesions are highly variable in terms of position, shape, and size. Promising results were achieved on a large clinical dataset. The second main contribution of the thesis is two novel strategies for learning with weak supervision. First, we propose a novel strategy called context restoration to make use of images without annotations. The context restoration strategy is a proxy learning process based on the CNN, which extracts semantic features from images without using annotations. It was validated on classification, localization, and segmentation problems and was superior to existing strategies. Second, we propose a patch-based framework using multi-instance learning to distinguish normal and abnormal SVD on CT images, where only coarse-grained labels are available. Our framework was observed to work better than classic methods and clinical practice.
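    The context restoration idea corrupts an image by swapping patch pairs and trains a CNN to restore the original, so the network learns semantic features without annotations. A minimal sketch of the corruption step (patch size, swap count, and overlap handling are simplifications of the thesis' scheme):

```python
import numpy as np

def swap_patches(image, n_swaps=10, patch=8, rng=None):
    """Corrupt an image by swapping random patch pairs; a CNN trained to
    map the corrupted image back to the original learns semantic features
    without annotations. Patches here may overlap, a simplification."""
    rng = rng or np.random.default_rng()
    corrupted = image.copy()
    h, w = image.shape[:2]
    for _ in range(n_swaps):
        y1, y2 = rng.integers(0, h - patch, size=2)
        x1, x2 = rng.integers(0, w - patch, size=2)
        a = corrupted[y1:y1 + patch, x1:x1 + patch].copy()
        corrupted[y1:y1 + patch, x1:x1 + patch] = corrupted[y2:y2 + patch, x2:x2 + patch]
        corrupted[y2:y2 + patch, x2:x2 + patch] = a
    return corrupted  # training pair: (corrupted, image)
```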