Regmentation: A New View of Image Segmentation and Registration
Image segmentation and registration have been, and remain, two major areas of research in the medical imaging community for decades. In the context of radiation oncology, segmentation and registration methods are widely used for target structure definition, such as the prostate or head and neck lymph node areas. In the past two years, 45% of all articles published in the most important medical imaging journals and conferences have presented either segmentation or registration methods. In the literature, the two categories are treated rather separately even though they have much in common: registration techniques are used to solve segmentation tasks (e.g. atlas-based methods) and vice versa (e.g. segmentation of structures used in landmark-based registration). This article reviews the literature on image segmentation methods by introducing a novel taxonomy based on the amount of shape knowledge incorporated in the segmentation process. Based on that, we argue that all global shape prior segmentation methods are identical to image registration methods, and that such methods thus cannot be characterized as either image segmentation or registration methods. We therefore propose a new class of methods able to solve both segmentation and registration tasks, which we call regmentation. In a survey of the current state-of-the-art medical imaging literature, 25% of the methods turn out to be pure registration methods, 46% pure segmentation methods, and 29% regmentation methods. This new view of image segmentation and registration provides a consistent taxonomy in this context and emphasizes the importance of regmentation in current medical image processing research and in image-guided radiation oncology applications.
Visual Quality Enhancement in Optoacoustic Tomography using Active Contour Segmentation Priors
Segmentation of biomedical images is essential for studying and
characterizing anatomical structures and for detecting and evaluating
pathological tissues. Segmentation has been further shown to enhance the reconstruction
performance in many tomographic imaging modalities by accounting for
heterogeneities of the excitation field and tissue properties in the imaged
region. This is particularly relevant in optoacoustic tomography, where
discontinuities in the optical and acoustic tissue properties, if not properly
accounted for, may result in deterioration of the imaging performance.
Efficient segmentation of optoacoustic images is often hampered by the
relatively low intrinsic contrast of large anatomical structures, which is
further impaired by the limited angular coverage of some commonly employed
tomographic imaging configurations. Herein, we analyze the performance of
active contour models for boundary segmentation in cross-sectional optoacoustic
tomography. The segmented mask is employed to construct a two compartment model
for the acoustic and optical parameters of the imaged tissues, which is
subsequently used to improve accuracy of the image reconstruction routines. The
performance of the suggested segmentation and modeling approach is showcased
in tissue-mimicking phantoms and small animal imaging experiments. Comment: Accepted for publication in IEEE Transactions on Medical Imaging.
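The two-compartment modelling step described above — assigning one set of acoustic/optical parameter values inside the segmented mask and another set outside it — can be sketched in a few lines of NumPy. This is a minimal illustration; the circular mask and the speed-of-sound values below are hypothetical, not taken from the paper:

```python
import numpy as np

def two_compartment_map(mask, inside_val, outside_val):
    """Assign one parameter value inside the segmented region and
    another outside it, as in a two-compartment tissue model.

    mask        : boolean array from the boundary segmentation
    inside_val  : parameter value (e.g. speed of sound) inside the tissue
    outside_val : parameter value in the surrounding medium
    """
    return np.where(mask, inside_val, outside_val).astype(float)

# Toy example: a circular "tissue" region in a 64x64 grid.
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2

# Hypothetical speed-of-sound values (m/s) for tissue vs. coupling medium.
sos_map = two_compartment_map(mask, inside_val=1540.0, outside_val=1480.0)
```

The resulting parameter map can then be fed to the acoustic/optical forward model used by the reconstruction routine.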
Liver segmentation using 3D CT scans.
Master of Science in Computer Science. University of KwaZulu-Natal, Durban, 2018. Abstract available in PDF file.
2D and 3D segmentation of medical images.
"Cardiovascular disease is one of the leading causes of morbidity and mortality in the western world today. Many different imaging modalities are in place today to diagnose and investigate cardiovascular diseases. Each of these, however, has strengths and weaknesses. There are different forms of noise and artifacts in each image modality that combine to make the field of medical image analysis both important and challenging. The aim of this thesis is to develop a reliable method for segmentation of vessel structures in medical imaging, incorporating the expert knowledge of the user in such a way as to maintain efficiency whilst overcoming the inherent noise and artifacts present in the images. We present results from 2D segmentation techniques using different methodologies, before developing 3D techniques for segmenting vessel shape from a series of images. The main drive of the work involves the investigation of medical images obtained using catheter-based techniques, namely Intra Vascular Ultrasound (IVUS) and Optical Coherence Tomography (OCT). We present a robust segmentation paradigm, combining both edge and region information, to segment the media-adventitia and luminal borders in those modalities respectively. We use a semi-interactive method that utilizes "soft" constraints, allowing imprecise user input, which provides a balance between exploiting the user's expert knowledge and maintaining efficiency. In the later part of the work, we develop automatic methods for segmenting the walls of lymph vessels. These methods are employed on sequential images in order to obtain data to reconstruct the vessel walls in the region of the lymph valves. We investigated methods to segment the vessel walls both individually and simultaneously, and compared the results both quantitatively and qualitatively in order to obtain the most appropriate for the 3D reconstruction of the vessel wall.
Lastly, we adapt the semi-interactive method used on vessels earlier to 3D to help segment the lymph valve. The user-interactive method provides guidance in segmenting the boundary of the lymph vessel, after which we apply a minimal surface segmentation methodology to segment the valve.
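The idea of combining edge and region information in a single segmentation cost can be illustrated with a minimal per-pixel energy in NumPy. This is a generic sketch — a Chan-Vese-style region term plus an inverse-gradient edge term — not the thesis's actual formulation; `lam` is a hypothetical balancing weight:

```python
import numpy as np

def combined_cost(image, mean_in, mean_out, lam=0.5):
    """Per-pixel cost mixing a region term (distance to the expected
    inside/outside intensities) with an edge term (inverse gradient
    magnitude). Low cost marks pixels that look like the target region
    or sit on a strong boundary; lam balances the two cues.
    """
    image = image.astype(float)

    # Region term: negative where the pixel resembles the inside mean.
    region = (image - mean_in) ** 2 - (image - mean_out) ** 2

    # Edge term: low cost where the gradient is strong.
    gy, gx = np.gradient(image)
    edge = 1.0 / (1.0 + np.hypot(gx, gy))

    return lam * region + (1.0 - lam) * edge
```

A contour-evolution or graph-based optimizer would then minimize this cost, with the "soft" user constraints entering as additional penalty terms.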
Segmentation and Deformable Modelling Techniques for a Virtual Reality Surgical Simulator in Hepatic Oncology
Liver surgical resection is one of the most frequently used curative therapies. However,
resectability is problematic. There is a need for a computer-assisted surgical planning and
simulation system which can accurately and efficiently simulate the liver, vessels and
tumours in actual patients. The present project describes the development of these core
segmentation and deformable modelling techniques.
For precise detection of irregularly shaped areas with indistinct boundaries, the
segmentation incorporated active contours - gradient vector flow (GVF) snakes and level sets.
To improve efficiency, a chessboard distance transform was used to replace part of the GVF
computation. To automatically initialize the liver volume detection process, a rotating template was
introduced to locate the starting slice. For shape maintenance during the segmentation
process, a simplified object shape learning step was introduced to avoid occasional
significant errors. Skeletonization with fuzzy connectedness was used for vessel
segmentation.
To achieve real-time interactivity, the deformation regime of this system was based
on a single-organ mass-spring system (MSS), which introduced an on-the-fly local mesh
refinement to raise the deformation accuracy and the mesh control quality. This method was
now extended to a multiple soft-tissue constraint system, by supplementing it with an
adaptive constraint mesh generation. A mesh quality measure was tailored based on a wide
comparison of classic measures. Adjustable feature and parameter settings were thus
provided, to make tissues of interest distinct from adjacent structures, keeping the mesh
suitable for on-line topological transformation and deformation.
More than 20 actual patient CT and 2 magnetic resonance imaging (MRI) liver
datasets were tested to evaluate the performance of the segmentation method. Instrument
manipulations of probing, grasping, and simple cutting were successfully simulated on
deformable constraint liver tissue models. This project was implemented in conjunction with
the Division of Surgery, Hammersmith Hospital, London; the preliminary reality effect was
judged satisfactory by the consultant hepatic surgeon.
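A mass-spring system of the kind the deformation regime is built on can be sketched in a few lines of NumPy. This is a generic symplectic-Euler MSS step with unit masses and illustrative stiffness and damping values — a sketch of the technique, not the project's implementation:

```python
import numpy as np

def mss_step(pos, vel, springs, rest_len, k=50.0, damping=2.0, dt=0.005):
    """One integration step of a simple mass-spring system.

    pos, vel : (n, 2) vertex positions and velocities (unit masses)
    springs  : (m, 2) array of vertex index pairs
    rest_len : rest length of each spring
    """
    force = np.zeros_like(pos)
    for (i, j), L0 in zip(springs, rest_len):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        if length < 1e-12:
            continue
        # Hooke's law along the spring direction.
        f = k * (length - L0) * d / length
        force[i] += f
        force[j] -= f
    force -= damping * vel            # simple velocity damping
    vel = vel + dt * force            # unit mass: acceleration = force
    pos = pos + dt * vel
    return pos, vel

# Toy example: a single stretched spring relaxes toward its rest length.
pos = np.array([[0.0, 0.0], [2.0, 0.0]])    # rest length 1.0, so stretched
vel = np.zeros_like(pos)
springs = np.array([[0, 1]])
for _ in range(2000):
    pos, vel = mss_step(pos, vel, springs, rest_len=[1.0])
```

The on-the-fly local refinement described above would, in such a framework, insert extra vertices and springs near the instrument contact point between steps.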
Segmentation and classification of lung nodules from thoracic CT scans: methods based on dictionary learning and deep convolutional neural networks.
Lung cancer is a leading cause of cancer death in the world. Key to the survival of patients is early diagnosis. Studies have demonstrated that screening high-risk patients with Low-dose Computed Tomography (CT) is invaluable for reducing morbidity and mortality. Computer Aided Diagnosis (CADx) systems can assist radiologists and care providers in reading and analyzing lung CT images to segment, classify, and keep track of nodules for signs of cancer. In this thesis, we propose a CADx system for this purpose. To predict lung nodule malignancy, we propose a new deep learning framework that combines Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to learn the best in-plane and inter-slice visual features for diagnostic nodule classification. Since a nodule's volumetric growth and shape variation over a period of time may reveal information regarding the malignancy of the nodule, separately, a dictionary learning based approach is proposed to segment the nodule's shape at two time points from two scans, one year apart. The output of a CNN classifier trained to learn the visual appearance of malignant nodules is then combined with the derived measures of shape change and volumetric growth in assigning a probability of malignancy to the nodule. Due to the limited number of available CT scans of benign and malignant nodules in the image database from the National Lung Screening Trial (NLST), we chose to initially train a deep neural network on the larger LUNA16 Challenge database, which was built for the purpose of eliminating false positives from detected nodules in thoracic CT scans. Discriminative features that were learned in this application were transferred to predict malignancy. The algorithm for segmenting nodule shapes in serial CT scans utilizes a sparse combination of training shapes (SCoTS). This algorithm captures a sparse representation of a shape in input data through a linear span of previously delineated shapes in a training repository.
The model updates the shape prior over level set iterations and captures variability in shapes by a sparse combination of the training data. The level set evolution is therefore driven by a data term as well as a term capturing valid prior shapes. During evolution, the shape prior influence is adjusted based on shape reconstruction, with the assigned weight determined from the degree of sparsity of the representation. The discriminative nature of sparse representation affords us the opportunity to compare nodules' variations at consecutive time points and to predict malignancy. Experimental validation of the proposed segmentation algorithm has been demonstrated on 542 3-D lung nodules from the LIDC-IDRI database, which includes radiologist-delineated nodule boundaries. The effectiveness of the proposed deep learning and dictionary learning architectures for malignancy prediction has been demonstrated on CT data from 370 biopsied subjects collected from the NLST database. Each subject in this database had at least two serial CT scans at two separate time points one year apart. The proposed RNN CAD system achieved an ROC Area Under the Curve (AUC) of 0.87 when validated on CT data from nodules at the second sequential time point, and 0.83 based on the dictionary learning method; however, when nodule shape change and appearance were combined, the classifier performance improved to AUC = 0.89.
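The sparse-combination idea behind the shape model — expressing a shape vector through a few training shapes — can be illustrated with a greedy orthogonal-matching-pursuit fit in NumPy. This is a simplified sketch of sparse representation in general, not the SCoTS algorithm or its level-set coupling:

```python
import numpy as np

def sparse_shape_fit(target, shapes, n_atoms=2):
    """Approximate a target shape vector as a sparse linear combination
    of training shapes via greedy orthogonal matching pursuit.

    target : (d,) shape vector to represent
    shapes : (d, n) dictionary with one training shape per column
    """
    residual = target.astype(float).copy()
    selected = []
    coef = np.zeros(shapes.shape[1])
    for _ in range(n_atoms):
        # Pick the training shape most correlated with the residual.
        scores = np.abs(shapes.T @ residual)
        scores[selected] = -np.inf
        selected.append(int(np.argmax(scores)))
        # Re-fit coefficients jointly on all selected shapes.
        sub = shapes[:, selected]
        c, *_ = np.linalg.lstsq(sub, target, rcond=None)
        coef[:] = 0.0
        coef[selected] = c
        residual = target - sub @ c
    return coef, residual
```

The norm of the residual gives the reconstruction error, which — in the spirit of the abstract — could drive the weight given to the shape prior during evolution.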
Computational methods to predict and enhance decision-making with biomedical data.
The proposed research applies machine learning techniques to healthcare applications. The core idea is to use intelligent techniques to develop automatic methods for analyzing healthcare data. Different classification and feature extraction techniques are applied to various clinical datasets, including brain MR images, breathing curves from vessels around tumor cells over time, breathing curves extracted from patients with successful or rejected lung transplants, and lung cancer patients diagnosed in the US in 2004-2009 extracted from the SEER database. The novel idea in brain MR image segmentation is a multi-scale technique to separate blood vessel tissue from similar tissues in the brain. By analyzing the vascularization of the cancer tissue over time and the behavior of the vessels (arteries and veins), a new feature extraction technique was developed, and classification techniques were used to rank the vascularization of each tumor type. Lung transplantation is a critical surgery for which predicting acceptance or rejection of the transplant would be very important. Classification techniques were reviewed on the SEER database to analyze the survival rates of lung cancer patients, and the best feature vector for predicting the most similar patients is analyzed.
Trans-Rectal Optical Tomography Reconstruction Using 3-Dimensional Spatial Prior Extracted From Sparse 2-Dimensional Trans-Rectal Ultrasound Imagery
Accurate prostate segmentation in trans-rectal ultrasound (TRUS) imagery is an important step in different clinical applications, and it is particularly necessary for providing a 3-dimensional spatial prior to guide the image reconstruction of trans-rectal optical tomography for prostate cancer detection. Utilizing the US prior to guide near-infrared tomography reconstruction can be accomplished by direct segmentation of the US image. 2-dimensional segmentation of axial TRUS images has therefore been performed extensively; 2-dimensional segmentation of sagittal TRUS images, however, is challenging, due to greater complexity in contrast, morphological features and image artifacts, as well as significant inter-subject variation in prostate shape and size. We develop a routine for segmenting 2-dimensional TRUS images obtained from a canine prostate, based on the combination of a snakes algorithm and selected manual segmentation. The segmentations obtained from a sparse set of axial and sagittal images are aligned to form the 3-dimensional contour of a prostate. The resulting prostate profile is implemented as the spatial prior to constrain image reconstruction of trans-rectal optical tomography. The trans-rectal optical tomography images reconstructed with the prostate-profile prior are compared with those reconstructed without any spatial prior by monitoring oxygen saturation (StO2) and total hemoglobin concentration ([HbT]) in lesions of a canine prostate. Electrical Engineering
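Forming a dense 3-dimensional prior from a sparse set of 2-dimensional slice segmentations can be sketched by linear interpolation along the slice axis. This is a simplified illustration of the general idea (slice alignment is assumed already done), not the paper's actual routine:

```python
import numpy as np

def volume_from_sparse_slices(slices, z_positions, n_z):
    """Build a dense 3-D binary prior from a sparse set of aligned 2-D
    slice masks by linearly interpolating between known slices along z.

    slices      : list of (h, w) binary masks at the known positions
    z_positions : increasing z index of each known slice
    n_z         : number of slices in the output volume
    """
    h, w = slices[0].shape
    vol = np.zeros((n_z, h, w))
    for z in range(n_z):
        if z <= z_positions[0]:
            vol[z] = slices[0]            # clamp below the first slice
        elif z >= z_positions[-1]:
            vol[z] = slices[-1]           # clamp above the last slice
        else:
            i = np.searchsorted(z_positions, z, side='right') - 1
            z0, z1 = z_positions[i], z_positions[i + 1]
            t = (z - z0) / (z1 - z0)      # fractional position between slices
            vol[z] = (1 - t) * slices[i] + t * slices[i + 1]
    return vol >= 0.5                     # back to a binary mask
```

A signed-distance interpolation between slices would give smoother in-between contours; simple mask averaging is used here to keep the sketch short.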
Development of computer-based algorithms for unsupervised assessment of radiotherapy contouring
INTRODUCTION: Despite the advances in radiotherapy treatment delivery, target volume
delineation remains one of the greatest sources of error in the radiotherapy delivery process,
which can lead to poor tumour control probability and impact clinical outcome. Contouring
assessments are performed to ensure high quality of target volume definition in clinical trials
but this can be subjective and labour-intensive.
This project addresses the hypothesis that computational segmentation techniques, with a given
prior, can be used to develop an image-based tumour delineation process for contour
assessments. This thesis focuses on the exploration of the segmentation techniques to develop
an automated method for generating reference delineations in the setting of advanced lung
cancer. The novelty of this project is in the use of the initial clinician outline as a prior for
image segmentation.
METHODS: Automated segmentation processes were developed for stage II and III non-small
cell lung cancer using the IDEAL-CRT clinical trial dataset. Marker-controlled watershed
segmentation, two active contour approaches (edge- and region-based) and graph-cut applied
on superpixels were explored. k-nearest neighbour (k-NN) classification of tumour from
normal tissues based on texture features was also investigated.
RESULTS: 63 cases were used for development and training. Segmentation and classification
performance were evaluated on an independent test set of 16 cases. Edge-based active contour
segmentation achieved the highest Dice similarity coefficient of 0.80 ± 0.06, followed by graph-cut
at 0.76 ± 0.06, watershed at 0.72 ± 0.08 and region-based active contour at 0.71 ± 0.07,
with mean computational times of 192 ± 102 sec, 834 ± 438 sec, 21 ± 5 sec and 45 ± 18 sec
per case respectively. Errors in accuracy of irregularly shaped lesions and segmentation
leakages at the mediastinum were observed.
In the distinction of tumour and non-tumour regions, misclassification errors of 14.5% and
15.5% were achieved using 16- and 8-pixel regions of interest (ROIs) respectively. Higher
misclassification errors of 24.7% and 26.9% for 16- and 8-pixel ROIs were obtained in the
analysis of the tumour boundary.
CONCLUSIONS: Conventional image-based segmentation techniques with the application of
priors are useful in automatic segmentation of tumours, although further developments are
required to improve their performance. Texture classification can be useful in distinguishing
tumour from non-tumour tissue, but the segmentation task at the tumour boundary is more
difficult. Future work with deep-learning segmentation approaches needs to be explored. Funded by the National Radiotherapy Trials Quality Assurance (RTTQA) group.
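The Dice similarity coefficient used to evaluate the segmentations above is straightforward to compute from two binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), equal to 1.0 for perfect agreement
    and 0.0 for no overlap.
    """
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0      # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Comparing an automatic segmentation against the clinician reference with this measure gives scores directly comparable to the 0.71-0.80 range reported above.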