
    HARDWARE-ACCELERATED AUTOMATIC 3D NONRIGID IMAGE REGISTRATION

    Software implementations of 3D nonrigid image registration, an essential tool in medical applications such as radiotherapy and image-guided surgery, run excessively slowly on traditional computers. These algorithms can be accelerated in hardware by exploiting parallelism at different levels of the algorithm. We present an implementation of a free-form deformation-based algorithm on a field-programmable gate array (FPGA) with a customized, parallel, and pipelined architecture. We overcome the performance bottlenecks and gain speedups of up to 40x over traditional computers while achieving accuracy comparable to software implementations. In this work, we also present a method to optimize the deformation field using a gradient descent-based optimization scheme and to solve the problem of mesh folding, commonly encountered during registration with free-form deformations, using a set of linear constraints. Finally, we present the use of novel dataflow modeling tools to automatically map registration algorithms to hardware such as FPGAs while allowing for dynamic reconfiguration.
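The free-form deformation described above is driven by a grid of control points blended with cubic B-spline basis functions. The sketch below is a minimal 1D illustration of that interpolation (function names are ours; the paper's actual FPGA pipeline and 3D addressing are not reproduced here):

```python
import numpy as np

def bspline_basis(u):
    """Cubic B-spline basis functions B0..B3 evaluated at local coordinate u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])

def ffd_displacement_1d(x, control_pts, spacing):
    """Displacement at position x from a 1D grid of control-point offsets."""
    i = int(np.floor(x / spacing))   # index of the spanning control cell
    u = x / spacing - i              # local coordinate within the cell
    B = bspline_basis(u)
    # gather the 4 supporting control points (clamped at the borders)
    idx = np.clip(np.arange(i - 1, i + 3), 0, len(control_pts) - 1)
    return float(B @ control_pts[idx])
```

Because the four basis functions sum to one at every local coordinate, a uniform shift of all control points shifts every sample by the same amount, which is a quick sanity check for any implementation.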

    Research on Statistical Methods for the Segmentation of Multiple Objects in Abdominal CT Images

    Computer-aided diagnosis (CAD) is the use of computer-generated output as an auxiliary tool to assist efficient interpretation and accurate diagnosis. Medical image segmentation plays an essential role in CAD in clinical applications. Generally, the task of medical image segmentation involves multiple objects, such as organs or diffuse tumor regions. Moreover, segmenting these regions from abdominal computed tomography (CT) images is very difficult because of the overlap in intensity and the variability in position and shape of soft tissues. In this thesis, a progressive segmentation framework is proposed to extract liver and tumor regions from CT images more efficiently; it comprises the steps of coarse segmentation of multiple organs, fine segmentation, and liver tumor segmentation. Benefiting from prior knowledge of organ shape and its deformation, a statistical shape model (SSM) is first used to segment multiple organ regions robustly. In building an SSM, the correspondence of landmarks is crucial to the quality of the model. To generate a more representative prototype of the organ surface, a k-means clustering method is proposed. The quality of the SSMs, measured by generalization ability, specificity, and compactness, was improved. We further extend shape correspondence to multiple objects: a non-rigid iterative closest point surface registration process is proposed to find better-corresponding landmarks across the multi-organ surfaces. The accuracy of surface registration was improved, as was the model quality. Moreover, to localize the abdominal organs simultaneously, we propose a random forest regressor operating on intensity features to predict the positions of multiple organs in the CT image. The organ regions are substantially constrained using the trained shape models.
    The accuracy of coarse segmentation using SSMs was increased by the initial information on organ positions. Subsequently, pixel-wise segmentation based on the classification of supervoxels is applied for the fine segmentation of multiple organs. Intensity and spatial features are extracted from each supervoxel and classified by a trained random forest. The resulting boundaries follow the real organs more closely than the previous coarse segmentation. Finally, we developed a hybrid framework for liver tumor segmentation in multiphase images. To distinguish and delineate tumor regions from peripheral tissues, this task is accomplished in two steps: a cascade region-based convolutional neural network (R-CNN) with a refined head is trained to locate the bounding boxes that contain tumors, and phase-sensitive noise filtering is introduced to refine the subsequent segmentation of tumor regions conducted by a level-set-based framework. The tumor detection results show that adjacent tumors are successfully separated by the improved cascade R-CNN. The accuracy of tumor segmentation is also improved by our proposed method. Twenty-six cases of multiphase CT images were used to validate the proposed method for liver tumor segmentation. The average precision and recall rates for tumor detection are 76.8% and 84.4%, respectively. The intersection over union, true positive rate, and false positive rate for tumor segmentation are 72.7%, 76.2%, and 4.75%, respectively.
    Kyushu Institute of Technology doctoral dissertation. Degree number: 工博甲第546号; degree conferred March 25, 2022. Contents: 1 Introduction | 2 Literature Review | 3 Statistical Shape Model Building | 4 Multi-organ Segmentation | 5 Liver Tumors Segmentation | 6 Summary and Outlook. Kyushu Institute of Technology, 2021.
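A point-distribution statistical shape model of the kind described above is, at its core, PCA over aligned landmark vectors. The following sketch (hypothetical function names, assuming shapes are already aligned and landmark correspondence is established) shows how the mean shape and modes of variation are obtained and how new shape instances are generated:

```python
import numpy as np

def build_ssm(shapes, n_modes=2):
    """Build a point-distribution SSM from aligned landmark sets.

    shapes: (n_samples, n_landmarks * dim) array of pre-aligned shapes.
    Returns the mean shape, the leading modes, and the variance per mode.
    """
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # eigen-decomposition of the landmark covariance via SVD
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    modes = vt[:n_modes]                   # principal modes of variation
    var = (s ** 2) / (len(shapes) - 1)     # variance captured by each mode
    return mean, modes, var[:n_modes]

def reconstruct(mean, modes, b):
    """Instantiate a shape from mode weights b."""
    return mean + b @ modes
```

Constraining the weights `b` to a few standard deviations of each mode's variance is what keeps segmented surfaces within the space of plausible organ shapes.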

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important technology that has been used intensively over the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide an amount of valuable information far too large to be exploited by radiologists and physicians alone. Therefore, the design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect, at an early stage, and hence prevent lung injury would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy.
    This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of radiotherapy through motion estimation analysis before and after the therapy dose.
    Finally, radiation-induced lung injury detection is introduced, which combines the previous two medical image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
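The ventilation descriptor mentioned above follows from the Jacobian determinant of the deformation field: det J > 1 indicates local expansion (inhalation), det J < 1 local compression. A minimal NumPy version (our own helper, not the dissertation's code) is:

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a 3D displacement field.

    disp: (3, Z, Y, X) array of displacement components in voxel units.
    """
    # grads[c][axis] approximates the partial derivative of component c
    # along that axis via central differences
    grads = np.stack([np.gradient(disp[c]) for c in range(3)])  # (3, 3, Z, Y, X)
    # Jacobian of the mapping x -> x + u(x): identity plus displacement gradient
    J = grads + np.eye(3).reshape(3, 3, 1, 1, 1)
    return (J[0, 0] * (J[1, 1] * J[2, 2] - J[1, 2] * J[2, 1])
          - J[0, 1] * (J[1, 0] * J[2, 2] - J[1, 2] * J[2, 0])
          + J[0, 2] * (J[1, 0] * J[2, 1] - J[1, 1] * J[2, 0]))
```

A zero displacement field yields a determinant of exactly 1 everywhere, and a uniform 10% expansion along all three axes yields 1.1³ ≈ 1.331, which makes both easy unit tests.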

    Feature-based Non-rigid Registration between Pre- and Post-Contrast Lung CT Images

    In this paper, a feature-based registration technique is proposed for pre-contrast and post-contrast lung CT images. It utilizes three-dimensional (3-D) features with their descriptors and estimates feature correspondences by nearest-neighbor matching in the feature space. We design a transformation model between the input image pairs using a free-form deformation (FFD) based on B-splines. Registration is achieved by minimizing an energy function incorporating the smoothness of the FFD and the correspondence information through a nonlinear conjugate gradient method. To deal with outliers in feature matching, our energy model integrates a robust estimator that discards outliers effectively by iteratively reducing a radius of confidence during minimization. Performance was evaluated in terms of accuracy and efficiency on seven pairs of lung CT images from clinical practice. For quantitative assessment, a radiologist specialized in the thorax manually placed landmarks on each CT image pair. In a comparative evaluation against a conventional feature-based registration method, our algorithm showed improved performance in both accuracy and efficiency.
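The robust estimator with an iteratively shrinking confidence radius can be sketched as follows (the shrink factor, floor radius, and stopping rule are our own assumptions; the paper's actual schedule may differ):

```python
import numpy as np

def robust_inliers(dists, shrink=0.8, min_radius=1.0, n_iter=10):
    """Iteratively shrink a confidence radius to discard outlier matches.

    dists: residual distances of tentative feature correspondences.
    Starts from a radius covering all matches and tightens it around
    the inlier population each iteration.
    """
    inliers = np.ones(len(dists), dtype=bool)
    radius = dists.max()
    for _ in range(n_iter):
        radius = max(shrink * radius, min_radius)   # never collapse to zero
        new = dists <= radius
        if new.sum() == 0 or np.array_equal(new, inliers):
            break                                    # converged or emptied
        inliers = new
    return inliers
```

Gross mismatches fall outside the radius within a few iterations, while the floor radius keeps legitimate correspondences with moderate residuals from being rejected.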

    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    The medical image analysis field has traditionally been focused on the development of organ- and disease-specific methods. Recently, interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art in multi-organ analysis and the associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multiple organs and multiple anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare. Comment: Paper under review.

    Automated analysis and visualization of preclinical whole-body microCT data

    In this thesis, several strategies are presented that aim to facilitate the analysis and visualization of whole-body in vivo data of small animals. Based on the particular challenges for image processing when dealing with whole-body follow-up data, we addressed several aspects in this thesis. The developed methods are tailored to handle data of subjects with significantly varying posture and address the large tissue heterogeneity of entire animals. In addition, we compensate for lacking tissue contrast by relying on an approximation of organs based on an animal atlas. Beyond that, we provide a solution to automate the combination of multimodal, multidimensional data.
    Supported by: Advanced School for Computing and Imaging (ASCI), Delft, NL; Bontius Stichting inz Doelfonds Beeldverwerking, Leiden, NL; Caliper Life Sciences, Hopkinton, USA; Foundation Imago, Oegstgeest, NL.

    3D Kidney Segmentation from Abdominal Images Using Spatial-Appearance Models

    Kidney segmentation is an essential step in developing any noninvasive computer-assisted diagnostic system for renal function assessment. This paper introduces an automated framework for 3D kidney segmentation from dynamic computed tomography (CT) images that integrates discriminative features from the current and prior CT appearance into a random forest classification approach. To account for the inhomogeneities of CT images, we employ discriminative features extracted from a higher-order spatial model and an adaptive shape model in addition to the first-order CT appearance. To model the interactions between CT data voxels, we employ a higher-order spatial model that adds the triple and quad clique families to the traditional pairwise clique family. The kidney shape prior model is built from a set of training CT data and is updated during segmentation using not only region labels but also voxel appearances at neighboring spatial locations. The performance of our framework was evaluated on in vivo dynamic CT data collected from 20 subjects, comprising multiple 3D scans acquired before and after contrast medium administration. Quantitative comparison between manually and automatically segmented kidney contours using Dice similarity, percentage volume differences, and 95th-percentile bidirectional Hausdorff distances confirms the high accuracy of our approach.
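The three evaluation metrics named in the abstract (Dice similarity, percentage volume difference, and the 95th-percentile bidirectional Hausdorff distance) can be computed as below. This is a brute-force sketch suitable for small contours, not the study's own evaluation code:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def pct_volume_diff(a, b):
    """Percentage volume difference relative to the reference mask a."""
    return 100.0 * abs(int(a.sum()) - int(b.sum())) / a.sum()

def hausdorff95(pts_a, pts_b):
    """95th-percentile bidirectional Hausdorff distance between point sets."""
    # full pairwise distance matrix (O(n*m); fine for contour-sized sets)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    fwd = d.min(axis=1)   # each point in A to its nearest point in B
    bwd = d.min(axis=0)   # each point in B to its nearest point in A
    return max(np.percentile(fwd, 95), np.percentile(bwd, 95))
```

Using the 95th percentile instead of the maximum makes the Hausdorff measure robust to a few stray contour points, which is why it is the common choice for segmentation evaluation.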

    Surface Reconstruction from Noisy and Sparse Data

    We introduce a set of algorithms for registering, filtering, and measuring the similarity of unorganized 3D point clouds, usually obtained from multiple views. We contribute a method for computing the similarity between point clouds that represent closed surfaces, specifically segmented tumors from CT scans. We obtain watertight surfaces and use volumetric overlap to determine similarity in a volumetric way. This similarity measure is used to quantify treatment variability based on target volume segmentation both prior to and following radiotherapy planning stages. We also contribute an algorithm for the drift-free registration of thin, non-rigid scans, where drift is the build-up of error caused by sequential pairwise registration, i.e., the alignment of each scan to its neighbor. We construct an average scan using mutual nearest neighbors; each scan is registered to this average scan, after which we update the average scan and continue this process until convergence. The use cases herein are merging scans of plants from multiple views and registering vascular scans together. Our final contribution is a method for filtering noisy point clouds, specifically those constructed from merged depth maps obtained from a range scanner or multiple-view stereo (MVS), applying techniques that have been used to find outliers in clustered data but not in MVS. We use kernel density estimation to obtain a probability density function over the space of observed points, with variable bandwidths based on the nature of the neighboring points and with Mahalanobis and reachability distances, which is more discriminative than a classical Mahalanobis distance-based metric.
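Variable-bandwidth kernel density estimation for outlier filtering can be sketched as follows. This is a simplified Euclidean version: the thesis additionally uses Mahalanobis and reachability distances, which are omitted here, and the median-based threshold rule is our own assumption:

```python
import numpy as np

def kde_filter(points, k=3, factor=2.0):
    """Flag low-density points as outliers via kernel density estimation.

    The bandwidth varies per point: it is the distance to that point's
    k-th nearest neighbor, so sparse regions get wider kernels.
    Returns a boolean mask of points to keep.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    h = np.sort(d, axis=1)[:, k]          # per-point adaptive bandwidth
    h = np.maximum(h, 1e-12)              # guard against duplicate points
    # Gaussian-kernel density estimate at each point (sample-point estimator)
    dens = np.exp(-(d / h[None, :]) ** 2).sum(axis=1)
    # keep points whose density is not far below the typical density
    return dens >= np.median(dens) / factor
```

Points inside a cluster accumulate contributions from many nearby kernels, while an isolated point sees essentially only its own kernel and falls below the threshold.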

    Medical Image Segmentation by Deep Convolutional Neural Networks

    Medical image segmentation is a fundamental and critical step in medical image analysis. Due to the complexity and diversity of medical images, their segmentation continues to be a challenging problem. Recently, deep learning techniques, especially Convolutional Neural Networks (CNNs), have received extensive research attention and achieved great success in many vision tasks. Specifically, with the advent of Fully Convolutional Networks (FCNs), automatic medical image segmentation based on FCNs is a promising research field. This thesis focuses on two medical image segmentation tasks: lung segmentation in chest X-ray images and nuclei segmentation in histopathological images. For the lung segmentation task, we investigate several FCNs that have been successful in semantic and medical image segmentation, and evaluate their performance on three publicly available chest X-ray image datasets. For the nuclei segmentation task, the challenges are the difficulty of segmenting small, overlapping, and touching nuclei and the limited ability to generalize to nuclei in different organs and tissue types; we therefore propose a novel nuclei segmentation approach based on a two-stage learning framework and Deep Layer Aggregation (DLA). We convert the original binary segmentation task into a two-step task by adding nuclei-boundary prediction (three classes) as an intermediate step. To solve this two-step task, we design a two-stage learning framework by stacking two U-Nets: the first stage estimates nuclei and their coarse boundaries, while the second stage outputs the final fine-grained segmentation map. Furthermore, we extend the U-Nets with DLA by iteratively merging features across different levels. We evaluate our proposed method on two diverse public nuclei datasets.
The experimental results show that our proposed approach outperforms many standard segmentation architectures and recently proposed nuclei segmentation methods, and generalizes easily across different cell types in various organs.
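The intermediate three-class (background / nucleus interior / nucleus boundary) relabeling that the two-stage framework relies on can be derived from a binary mask with a one-pixel erosion, for example as below (a minimal sketch using a 4-neighborhood; the thesis's exact boundary width may differ):

```python
import numpy as np

def to_three_class(mask):
    """Convert a binary nuclei mask to 3 classes:
    0 = background, 1 = nucleus interior, 2 = nucleus boundary (one-pixel ring).

    Interior pixels are those whose entire 4-neighborhood lies inside the mask.
    """
    m = mask.astype(bool)
    padded = np.pad(m, 1, mode="constant")    # zero border so edges erode
    interior = (m
                & padded[:-2, 1:-1] & padded[2:, 1:-1]    # up / down neighbors
                & padded[1:-1, :-2] & padded[1:-1, 2:])   # left / right neighbors
    labels = np.zeros(m.shape, dtype=np.uint8)
    labels[m] = 2           # every foreground pixel starts as boundary
    labels[interior] = 1    # interior pixels overwrite the boundary label
    return labels
```

Predicting this explicit boundary class is what lets a network separate touching nuclei that a plain binary target would merge into one blob.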