1,329 research outputs found

    Automated motion analysis of bony joint structures from dynamic computer tomography images: A multi-atlas approach

    Get PDF
    Dynamic computed tomography (CT) is an emerging modality for analyzing in vivo joint kinematics at the bone level, but it requires manual bone segmentation and, in some instances, landmark identification. The objective of this study is to present an automated workflow for the assessment of three-dimensional in vivo joint kinematics from dynamic musculoskeletal CT images. The proposed method relies on a multi-atlas, multi-label segmentation and landmark propagation framework to extract bony structures and detect anatomical landmarks in the CT dataset. The segmented structures serve as regions of interest for the subsequent motion estimation across the dynamic sequence. The landmarks are propagated across the dynamic sequence to construct bone-embedded reference frames from which kinematic parameters are estimated. We applied our workflow to dynamic CT images obtained from 15 healthy subjects on two different joints: thumb base (n = 5) and knee (n = 10). The proposed method yielded segmentation accuracies of 0.90 ± 0.01 for the thumb dataset and 0.94 ± 0.02 for the knee, as measured by the Dice similarity coefficient. In terms of motion estimation, mean differences in Cardan angles between the automated algorithm and expert manual segmentation and landmark identification were below 1°. Intraclass correlation coefficients (ICC) between Cardan angles from the algorithm and from expert manual landmarks ranged from 0.72 to 0.99 for all joints across all axes. The proposed automated method yielded reproducible and reliable measurements, enabling the assessment of joint kinematics using 4DCT in clinical routine.
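
    The kinematic step described above, estimating Cardan angles from bone-embedded reference frames, can be illustrated with a minimal numpy sketch. The intrinsic x-y'-z'' sequence and the round-trip test matrix below are assumptions for illustration; the abstract does not specify the authors' angle convention.

```python
import numpy as np

def cardan_xyz(R):
    """Cardan angles (degrees) for an intrinsic x-y'-z'' sequence,
    assuming |ry| < 90 degrees (no gimbal lock)."""
    ry = np.arcsin(R[0, 2])
    rx = np.arctan2(-R[1, 2], R[2, 2])
    rz = np.arctan2(-R[0, 1], R[0, 0])
    return np.degrees([rx, ry, rz])

# Build a hypothetical relative rotation between two bone-embedded frames
# from known angles, then check that the extraction recovers them.
a, b, c = np.radians([2.0, -1.5, 3.0])
Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
Rz = np.array([[np.cos(c), -np.sin(c), 0], [np.sin(c), np.cos(c), 0], [0, 0, 1]])
R_rel = Rx @ Ry @ Rz
print(cardan_xyz(R_rel))   # recovers [2.0, -1.5, 3.0]
```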

    Three-dimensional reconstruction of the tissue-specific multielemental distribution within Ceriodaphnia dubia via multimodal registration using laser ablation ICP-mass spectrometry and X-ray spectroscopic techniques

    Get PDF
    In this work, the three-dimensional elemental distribution profile within the freshwater crustacean Ceriodaphnia dubia was constructed at a spatial resolution down to 5 μm via a data fusion approach employing state-of-the-art laser ablation inductively coupled plasma time-of-flight mass spectrometry (LA-ICP-TOFMS) and laboratory-based absorption micro-computed tomography (μ-CT). C. dubia was exposed to elevated Cu, Ni, and Zn concentrations, chemically fixed, dehydrated, stained, and embedded prior to μ-CT analysis. Subsequently, the sample was cut into 5 μm thin sections that were subjected to LA-ICP-TOFMS imaging. Multimodal image registration was performed to spatially align the 2D LA-ICP-TOFMS images relative to the corresponding slices of the 3D μ-CT reconstruction. Mass channels corresponding to the isotopes of a single element were merged to improve the signal-to-noise ratios within the elemental images. To aid the visual interpretation of the data, LA-ICP-TOFMS data were projected onto the μ-CT voxels representing tissue. Additionally, the image resolution and elemental sensitivity were compared to those obtained with synchrotron-radiation-based 3D confocal μ-X-ray fluorescence imaging of a chemically fixed and air-dried C. dubia specimen.
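
    The channel-merging step admits a one-line illustration: summing the mass channels of one element's isotopes adds counts linearly while (approximately Poisson) noise adds in quadrature, improving the signal-to-noise ratio. A minimal Python sketch with synthetic stand-in data; the isotope names and array sizes are illustrative, not the authors' data.

```python
import numpy as np

# Hypothetical per-isotope intensity maps from LA-ICP-TOFMS imaging
# (2D arrays of counts per pixel; Poisson noise as a stand-in).
channels = {
    "Zn64": np.random.poisson(5.0, (256, 256)).astype(float),
    "Zn66": np.random.poisson(3.0, (256, 256)).astype(float),
    "Zn68": np.random.poisson(2.0, (256, 256)).astype(float),
}

# Merge isotope channels of a single element: signal adds linearly,
# independent noise adds in quadrature, so per-pixel SNR improves.
zn_merged = sum(channels.values())
```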

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Get PDF
    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and its applications. The imaging data available to radiologists continue to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists in increasing throughput while reducing human error and bias, without compromising the outcome of screening, diagnosis or disease assessment. More intelligent, yet simple, consistent and less time-consuming methods will become more widespread, reducing user variability while revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training, and despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their complexity often hampers their clinical use. Currently, the main challenge is not a lack of tools and techniques for medical image processing, analysis, and computing, but rather a lack of clinically feasible solutions that leverage the tools and techniques already developed, together with a demonstration of the potential clinical impact of such tools. Recently, more and more effort has been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that yields only incremental improvements over existing algorithms. In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that guide the development of image processing tools (localization, segmentation and registration) and illustrate their use across several medical imaging modalities (X-ray, computed tomography, ultrasound and magnetic resonance imaging) and several clinical applications:
    - Lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images.
    - Automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for assessment of the long-limb mechanical axis and knee misalignment.
    - Left and right ventricle localization, segmentation, reconstruction and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment.
    When devising and evaluating the developed tools, we use clinical patient data to illustrate the inherent challenges associated with highly variable imaging data that must be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that achieve sufficiently reliable solutions: they not only have the potential to address the clinical needs, but are sufficiently streamlined to be translated into eventual clinical tools given proper implementation.
    G1: Reduce the number of degrees of freedom (DOF) of the designed tool, for example by avoiding inefficient non-rigid image registration methods where a rigid transform suffices. This guideline addresses the risk of artificial deformation during registration and aims at reducing complexity.
    G2: Use shape-based features to represent the image content most efficiently, relying on edges instead of, or in addition to, intensities and motion where useful. Edges capture the most informative content of the image and can be used to identify its most important features; as a result, this guideline ensures more robust performance when key image information is missing.
    G3: Implement methods efficiently. This guideline focuses on using the minimum number of steps required and avoiding the recalculation of terms that only need to be computed once in an iterative process; an efficient implementation reduces computational effort and improves performance.
    G4: Commence the workflow with an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in a consistent way: it avoids convergence to local minima while gradually ensuring convergence to the global minimum (G1 and G4 are illustrated in the sketch following this abstract).
    These guidelines lead to interactive, semi-automated or fully-automated approaches that still enable clinicians to perform final refinements, while reducing overall inter- and intra-observer variability and ambiguity, increasing accuracy and precision, and yielding mechanisms that help provide a more consistent diagnosis in a timely fashion.
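
    As a concrete, hedged illustration of G1 and G4, the sketch below sets up a rigid (6-DOF), geometry-initialized, coarse-to-fine registration with SimpleITK. The library choice, metric, optimizer settings and file names are assumptions for illustration, not the thesis implementation.

```python
import SimpleITK as sitk

# Hypothetical input volumes.
fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

# G4: start from a geometry-based initialization rather than identity.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

# G4: coarse-to-fine pyramid to avoid local minima.
reg.SetShrinkFactorsPerLevel([4, 2, 1])
reg.SetSmoothingSigmasPerLevel([2.0, 1.0, 0.0])
reg.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()

# G1: a 6-DOF rigid transform instead of a non-rigid model.
transform = reg.Execute(fixed, moving)
```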

    Registration and Modeling from Spaced and Misaligned Image Volumes

    Get PDF
    We present an integrated registration, segmentation, and shape interpolation framework to model objects from 3D and 4D volumes made up of spaced and misaligned slices having arbitrary relative positions. The framework was validated on artificial data and tested on real MRI and CT scans. The complete framework performed significantly better than the sequential approach of registration followed by segmentation and shape interpolation.

    A Deep Learning Framework for Unsupervised Affine and Deformable Image Registration

    Full text link
    Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far, training of ConvNets for registration was supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby to make training ConvNets for image registration more convenient, we propose the Deep Learning Image Registration (DLIR) framework for unsupervised affine and deformable image registration. In the DLIR framework, ConvNets are trained for image registration by exploiting image similarity, analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can register pairs of unseen images in one shot. We propose flexible ConvNet designs for affine image registration and for deformable image registration. By stacking multiple of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. We show, for registration of cardiac cine MRI and registration of chest CT, that the performance of the DLIR framework is comparable to conventional image registration while being several orders of magnitude faster.
    Comment: Accepted, Medical Image Analysis (Elsevier)
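
    A minimal sketch of the core idea, not the authors' DLIR implementation: a small 2D ConvNet predicts affine parameters, a spatial transformer warps the moving image, and an image-similarity loss alone supervises training. PyTorch, an MSE similarity term (papers in this family typically use NCC), and the toy network and data are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegNet(nn.Module):
    """Predicts a 2D affine transform that warps a moving image to a fixed one."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 6)
        # Initialize to the identity transform so training starts stably.
        self.head.weight.data.zero_()
        self.head.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, fixed, moving):
        x = self.features(torch.cat([fixed, moving], dim=1)).flatten(1)
        theta = self.head(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, fixed.shape, align_corners=False)
        return F.grid_sample(moving, grid, align_corners=False)

net = AffineRegNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
fixed, moving = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)  # stand-ins
for _ in range(100):
    warped = net(fixed, moving)
    loss = F.mse_loss(warped, fixed)   # image similarity alone drives training
    opt.zero_grad(); loss.backward(); opt.step()
```

    Once trained on a population of image pairs, such a network registers unseen pairs in a single forward pass, which is the source of the speedup the abstract reports.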

    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    Full text link
    The medical image analysis field has traditionally been focused on the development of organ- and disease-specific methods. Recently, interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art in multi-organ analysis and the associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multiple organs and anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare.
    Comment: Paper under review
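
    For the point distribution models mentioned above, here is a minimal numpy sketch of the classical construction: PCA over aligned landmark configurations yields a mean shape and modes of variation, and new shapes are synthesized as the mean plus a bounded combination of modes. The random stand-in data replaces real, Procrustes-aligned landmarks.

```python
import numpy as np

# Hypothetical training set: n_shapes configurations of n_landmarks 2D points,
# assumed already aligned (e.g., by Procrustes analysis), flattened per shape.
rng = np.random.default_rng(0)
n_shapes, n_landmarks = 40, 30
shapes = rng.normal(size=(n_shapes, n_landmarks * 2))

mean_shape = shapes.mean(axis=0)
# PCA via SVD of the centered data: rows of Vt are the shape modes.
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
eigenvalues = s**2 / (n_shapes - 1)

# New shapes: mean + P @ b, with each b_i bounded by +/- 3 sqrt(lambda_i).
k = 5                                   # number of retained modes
P = Vt[:k].T                            # (2*n_landmarks, k) mode matrix
b = rng.uniform(-3, 3, k) * np.sqrt(eigenvalues[:k])
new_shape = (mean_shape + P @ b).reshape(n_landmarks, 2)
```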

    Integrated Segmentation and Interpolation of Sparse Data

    Get PDF
    We address the two inherently related problems of segmentation and interpolation of 3D and 4D sparse data and propose a new method to integrate these stages in a level set framework. The interpolation process uses segmentation information rather than pixel intensities for increased robustness and accuracy. The method supports any spatial configuration of sets of 2D slices having arbitrary positions and orientations. We achieve this by introducing a new level set scheme based on the interpolation of the level set function by radial basis functions. The proposed method is validated quantitatively and/or subjectively on artificial data and MRI and CT scans, and is compared against the traditional sequential approach, which interpolates the images first, using a state-of-the-art image interpolation method, and then segments the interpolated volume in 3D or 4D. In our experiments, the proposed framework yielded segmentation results similar to the sequential approach, but provided a more robust and accurate interpolation. In particular, the interpolation was more satisfactory in cases of large gaps, as the method takes into account the global shape of the object, and it recovered better topologies at the extremities of the shapes, where the objects disappear from the image slices. As a result, the complete integrated framework provided more satisfactory shape reconstructions than the sequential approach.
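
    A hedged sketch of the central ingredient: interpolating a level set (signed-distance) function between sparse slices with radial basis functions. It uses SciPy's generic RBFInterpolator rather than the authors' level set scheme; the slice positions and the toy per-slice contour are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical sparse data: signed-distance samples on a few 2D slices
# with known 3D positions (here, axial slices at arbitrary z locations).
rng = np.random.default_rng(1)
slice_z = [0.0, 3.7, 9.2]                         # arbitrary slice positions
pts, sdf = [], []
for z in slice_z:
    xy = rng.uniform(-1, 1, size=(200, 2))
    pts.append(np.column_stack([xy, np.full(len(xy), z)]))
    sdf.append(np.linalg.norm(xy, axis=1) - 0.5)  # toy circular contour per slice
pts, sdf = np.vstack(pts), np.concatenate(sdf)

# Thin-plate-spline RBF interpolation of the level set function in 3D.
phi = RBFInterpolator(pts, sdf, kernel="thin_plate_spline")

# Evaluate on a dense grid; the zero level set is the reconstructed surface.
g = np.linspace(-1, 1, 32)
X, Y, Z = np.meshgrid(g, g, np.linspace(0, 9.2, 32), indexing="ij")
grid = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
phi_grid = phi(grid).reshape(X.shape)             # surface where phi_grid == 0
```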

    Generation of annotated multimodal ground truth datasets for abdominal medical image registration

    Full text link
    Sparsity of annotated data is a major limitation in medical image processing tasks such as registration. Registered multimodal image data are essential for the diagnosis of medical conditions and the success of interventional medical procedures. To overcome the shortage of data, we present a method that allows the generation of annotated multimodal 4D datasets. We use a CycleGAN network architecture to generate multimodal synthetic data from the 4D extended cardiac-torso (XCAT) phantom and real patient data. Organ masks are provided by the XCAT phantom; therefore, the generated dataset can serve as ground truth for image segmentation and registration. Realistic simulation of respiration and heartbeat is possible within the XCAT framework. To underline the usability as a registration ground truth, a proof-of-principle registration is performed. Compared to real patient data, the synthetic data showed good agreement regarding the image voxel intensity distribution and the noise characteristics. The generated T1-weighted magnetic resonance imaging (MRI), computed tomography (CT), and cone-beam CT (CBCT) images are inherently co-registered. Thus, the synthetic dataset allowed us to optimize the registration parameters of a multimodal non-rigid registration, utilizing liver organ masks for evaluation. Our proposed framework provides not only annotated but also multimodal synthetic data which can serve as a ground truth for various tasks in medical image processing. We demonstrated the applicability of synthetic data to the development of multimodal medical image registration algorithms.
    Comment: 12 pages, 5 figures. This work has been published in the International Journal of Computer Assisted Radiology and Surgery.
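
    Since the XCAT organ masks are inherently co-registered with the synthetic images, registration quality can be scored directly as mask overlap, which is how such masks would typically be used to tune registration parameters. A minimal sketch of a Dice-based evaluation with stand-in arrays; the mask shapes are illustrative, not the paper's data.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical liver masks: one from the fixed modality, one warped from the
# moving modality by a candidate registration (synthetic cubes as stand-ins).
fixed_mask = np.zeros((64, 64, 64), bool)
fixed_mask[20:44, 20:44, 20:44] = True
warped_mask = np.zeros((64, 64, 64), bool)
warped_mask[22:46, 20:44, 20:44] = True          # residual 2-voxel misalignment

print(f"Dice after registration: {dice(fixed_mask, warped_mask):.3f}")
```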