Segmentation of ultrasound images of thyroid nodule for assisting fine needle aspiration cytology
The incidence of thyroid nodules is high and generally increases with age. A thyroid nodule may presage the emergence of thyroid cancer, yet the disease can be completely cured if detected early. Fine needle aspiration cytology is a recognized method for early diagnosis of thyroid nodules, but it still has limitations, and ultrasound has become the first choice for auxiliary examination of thyroid nodular disease. Combining medical imaging technology with fine needle aspiration cytology could therefore improve the diagnostic rate of thyroid nodules significantly. However, the physical properties of ultrasound degrade image quality, making it difficult for physicians to recognize nodule edges. Image segmentation based on graph theory is currently a research hotspot; the normalized cut (Ncut) is a representative method, well suited to segmenting anatomical structures in medical images. Solving the normalized cut is nonetheless problematic: it requires large memory capacity and heavy computation of the weight matrix, and it often produces over- or under-segmentation, reducing accuracy. Moreover, speckle noise in B-mode ultrasound images of thyroid tumors deteriorates image quality. In light of these characteristics, this paper combines an anisotropic diffusion model with the normalized cut. Anisotropic diffusion removes noise from the B-mode ultrasound image while preserving important edges and local details. This reduces the computation needed to construct the weight matrix of the improved normalized cut and improves the accuracy of the final segmentation. Experimental results demonstrate the feasibility of the method. Comment: 15 pages, 13 figures
Segmentation of skin lesions in 2D and 3D ultrasound images using a spatially coherent generalized Rayleigh mixture model
This paper addresses the problem of jointly estimating the statistical distribution and segmenting lesions in multiple-tissue high-frequency skin ultrasound images. The distribution of multiple-tissue images is modeled as a spatially coherent finite mixture of heavy-tailed Rayleigh distributions. Spatial coherence inherent to biological tissues is modeled by enforcing local dependence between the mixture components. An original Bayesian algorithm combined with a Markov chain Monte Carlo method is then proposed to jointly estimate the mixture parameters and a label vector associating each voxel with a tissue. More precisely, a hybrid Metropolis-within-Gibbs sampler is used to draw samples that are asymptotically distributed according to the posterior distribution of the Bayesian model. The Bayesian estimators of the model parameters are then computed from the generated samples. Simulations on synthetic data illustrate the performance of the proposed estimation strategy. The method is then successfully applied to the segmentation of in vivo skin tumors in high-frequency 2-D and 3-D ultrasound images
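The mixture-labeling idea can be sketched with a much simpler stand-in: classical (non-heavy-tailed) Rayleigh components and direct posterior responsibilities, instead of the paper's spatially coherent model and Metropolis-within-Gibbs sampler. Function names and parameters here are illustrative assumptions:

```python
import numpy as np

def rayleigh_pdf(x, sigma):
    # Rayleigh density: (x / sigma^2) * exp(-x^2 / (2 sigma^2))
    return (x / sigma**2) * np.exp(-(x**2) / (2 * sigma**2))

def responsibilities(x, weights, sigmas):
    """Posterior probability of each mixture component for each sample."""
    dens = np.stack([w * rayleigh_pdf(x, s) for w, s in zip(weights, sigmas)])
    return dens / dens.sum(axis=0)
```

Assigning each sample to the component with the largest responsibility is the non-Bayesian, non-spatial analogue of the paper's voxel-to-tissue label vector; the MCMC machinery additionally integrates over parameter uncertainty and neighbourhood dependence.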
Deep Networks Based Energy Models for Object Recognition from Multimodality Images
Object recognition has been extensively investigated in the computer vision field, since it is a fundamental and essential technique in many important applications, such as robotics, auto-driving, automated manufacturing, and security surveillance. According to the selection criteria, object recognition mechanisms can be broadly categorized into object proposal and classification, eye fixation prediction, and saliency object detection. Object proposal tends to capture all potential objects in natural images and then classify them into predefined groups for image description and interpretation. For a given natural image, human perception is normally attracted to the most visually important regions/objects; eye fixation prediction therefore attempts to localize interesting points or small regions according to the human visual system (HVS). Based on these interesting points and small regions, saliency object detection algorithms propagate the extracted information to achieve a refined segmentation of whole salient objects. In addition to natural images, object recognition also plays a critical role in clinical practice. The informative insights into the anatomy and function of the human body obtained from multimodality biomedical images, such as magnetic resonance imaging (MRI), transrectal ultrasound (TRUS), computed tomography (CT) and positron emission tomography (PET), facilitate precision medicine. Automated object recognition from biomedical images enables non-invasive diagnosis and treatment via automated tissue segmentation, tumor detection and cancer staging. Conventional recognition methods normally utilize handcrafted features (such as oriented gradients, curvature, Haar features, Haralick texture features, Laws energy features, etc.) depending on the image modality and object characteristics, and it is challenging to build a general model for object recognition.
Superior to handcrafted features, deep neural networks (DNN) can extract self-adaptive features for a specific task and can hence be employed in general object recognition models. These DNN features are adjusted semantically and cognitively by tens of millions of parameters, corresponding to the mechanism of the human brain, and therefore lead to more accurate and robust results. Motivated by this, in this thesis we propose DNN-based energy models to recognize objects in multimodality images. The major contributions of this thesis can be summarized as follows: 1. We first proposed a new comprehensive autoencoder model to recognize the position and shape of the prostate from magnetic resonance images. Unlike most autoencoder-based methods, we trained the model on positive samples only, so that the extracted features all come from the prostate. After that, an image energy minimization scheme was applied to further improve recognition accuracy. The proposed model was compared with three classic classifiers (i.e., support vector machine with radial basis function kernel, random forest, and naive Bayes) and demonstrated significant superiority for prostate recognition on magnetic resonance images. We further extended the proposed autoencoder model to saliency object detection on natural images, and experimental validation confirmed the accurate and robust detection results of our model. 2. A general multi-context combined deep neural network (MCDN) model was then proposed for object recognition from natural and biomedical images. Under one uniform framework, our model operates in a multi-scale manner. It was applied to saliency object detection from natural images as well as prostate recognition from magnetic resonance images, and experimental validation demonstrated that it is competitive with current state-of-the-art methods. 3. We designed a novel saliency image energy to finely segment salient objects on the basis of our MCDN model. Region priors were taken into account in the energy function to avoid trivial errors. Our method outperformed state-of-the-art algorithms on five benchmark datasets. In the experiments, we also demonstrated that our proposed saliency image energy can boost the results of other conventional saliency detection methods
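The "train only on positive samples, then flag high reconstruction error" idea behind contribution 1 can be sketched with a linear autoencoder (equivalent to PCA) as a deliberately simplified stand-in for the thesis's comprehensive deep autoencoder; all names and parameters here are illustrative assumptions:

```python
import numpy as np

def fit_linear_autoencoder(X_pos, k=2):
    """Fit a linear autoencoder (top-k PCA basis) on positive samples only."""
    mu = X_pos.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_pos - mu, full_matrices=False)
    return mu, Vt[:k]  # mean and top-k principal directions

def reconstruction_error(X, mu, components):
    """Per-sample mean squared reconstruction error under the learned code."""
    Z = (X - mu) @ components.T   # encode
    R = Z @ components + mu       # decode
    return np.mean((R - X) ** 2, axis=1)
```

Samples resembling the (positive) training class reconstruct with low error, while everything else reconstructs poorly, so thresholding the error acts as a recognizer; the deep, nonlinear version in the thesis follows the same logic with a far richer code.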
Contributions of Continuous Max-Flow Theory to Medical Image Processing
Discrete graph cuts and continuous max-flow theory have created a paradigm shift in many areas of medical image processing. Whereas previous methods limited themselves to analytically solvable optimization problems, or guaranteed only local optimality for increasingly complex and non-convex functionals, current methods rely on describing an optimization problem as a series of general yet simple functionals with global, but non-analytic, solution algorithms. This shift has been increasingly spurred on by the availability of these general-purpose algorithms in an open-source context. Thus, graph cuts and max-flow have changed every aspect of medical image processing, from reconstruction to enhancement to segmentation and registration.
To wax philosophical, continuous max-flow theory in particular has the potential to bring a high degree of mathematical elegance to the field, bridging the conceptual gap between the discrete and continuous domains in which we describe different imaging problems, properties and processes. In Chapter 1, we use the notion of infinitely dense and infinitely densely connected graphs to transfer between the discrete and continuous domains; this has a certain sense of mathematical pedantry to it, but the resulting variational energy equations have a sense of elegance and charm. As with any application of the principle of duality, the variational equations have an enigmatic side that can only be decoded with time and patience.
The goal of this thesis is to show the contributions of max-flow theory to image enhancement and segmentation, with increasing incorporation of topological considerations and an increasing role for user knowledge and interactivity. These methods are rigorously grounded in the calculus of variations, guaranteeing fuzzy optimality and providing multiple solution approaches to each individual problem
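The discrete side of the discrete/continuous bridge described above can be made concrete with a minimal sketch: binary segmentation of a 1-D signal posed as a min-cut/max-flow problem, solved with a plain Edmonds-Karp algorithm. This is a generic textbook construction, not the thesis's continuous formulation; the data term, the smoothness weight `lam`, and all names are illustrative assumptions:

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; cap is a dict-of-dicts of residual capacities."""
    flow = 0.0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow, parent  # parent's keys = source side of the min cut
        b, v = float("inf"), t   # bottleneck capacity along the path
        while parent[v] is not None:
            b = min(b, cap[parent[v]][v]); v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= b       # push flow forward
            cap[v][u] += b       # add residual capacity backward
            v = u
        flow += b

def segment_1d(signal, fg=1.0, bg=0.0, lam=0.6):
    """Binary graph-cut segmentation of a 1-D signal: unary data terms as
    terminal links, a smoothness penalty lam between neighbours."""
    n, S, T = len(signal), "s", "t"
    cap = defaultdict(lambda: defaultdict(float))
    for i, x in enumerate(signal):
        cap[S][i] += abs(x - bg)   # cost paid if i is labelled background
        cap[i][T] += abs(x - fg)   # cost paid if i is labelled foreground
        if i + 1 < n:
            cap[i][i + 1] += lam   # penalize label changes between neighbours
            cap[i + 1][i] += lam
    _, cut = max_flow(cap, S, T)
    return [1 if i in cut else 0 for i in range(n)]
```

The min cut globally minimizes the sum of data and smoothness terms; the continuous max-flow formulations studied in the thesis recover exactly this kind of solution in the limit of infinitely dense graphs, with duality playing the role of the cut/flow correspondence.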
Vascular Segmentation Algorithms for Generating 3D Atherosclerotic Measurements
Atherosclerosis manifests as plaques within the large arteries of the body and remains a leading cause of mortality and morbidity in the world. Major cardiovascular events may occur in patients without known preexisting symptoms, so it is important to monitor progression and regression of the plaque burden in the arteries to evaluate a patient's response to therapy. In this dissertation, our main focus is quantification of plaque burden in the carotid and femoral arteries, which are major sites of plaque formation and are straightforward to image noninvasively due to their superficial location. Recently, 3D measurements of plaque burden have been shown to be more sensitive to changes in plaque burden than one-/two-dimensional measurements. However, despite the advancement of 3D noninvasive imaging technology with rapid acquisition capabilities and the high sensitivity of 3D measurements, such measurements are still not widely used due to the inordinate amount of time and effort required to delineate artery wall and plaque boundaries in the images. Therefore, the objective of this dissertation is to develop novel semi-automated segmentation methods that alleviate the measurement burden on the observer for segmentation of the outer wall and lumen boundaries from: (1) 3D carotid ultrasound (US) images, (2) 3D carotid black-blood magnetic resonance (MR) images, and (3) 3D femoral black-blood MR images.
Segmentation of the carotid lumen and outer wall from 3D US images is a challenging task due to low image contrast, and no method had previously been reported for it. Initially, we developed a 2D slice-wise segmentation algorithm based on the level set method, which was then extended to 3D. The 3D algorithm required fewer user interactions than manual delineation and the 2D method, reducing user time by ≈79% (1.72 vs. 8.3 min) compared to manual segmentation for generating 3D-based measurements with high accuracy (Dice similarity coefficient (DSC) > 90%). Secondly, we developed a novel 3D multi-region segmentation algorithm that simultaneously delineates both the carotid lumen and outer wall surfaces from MR images by evolving two coupled surfaces using a convex max-flow-based technique. The algorithm requires user interaction on only a single transverse slice of the 3D image to generate 3D surfaces of the lumen and outer wall. It was parallelized on graphics processing units (GPU) to increase computational speed, reducing user time by 93% (0.78 vs. 12 min) compared to manual segmentation, while yielding high accuracy (DSC > 90%) and high precision (intra-observer CV < 5.6% and inter-observer CV < 6.6%). Finally, we developed and validated an algorithm based on a convex max-flow formulation to segment the femoral arteries, which enforces a tubular shape prior and inter-surface consistency between the outer wall and lumen to maintain a minimum separation distance between the two surfaces. The algorithm requires the observer to choose only about 11 points on the medial axis of the artery to yield the 3D surfaces of the lumen and outer wall, reducing operator time by 97% (1.8 vs. 70-80 min) compared to manual segmentation. Furthermore, the proposed algorithm yielded a DSC greater than 85% and small intra-observer variability (CV ≈ 6.69%).
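The Dice similarity coefficient (DSC) used throughout these validations is a standard overlap measure, 2|A∩B| / (|A|+|B|) for two binary masks A and B. A minimal implementation (the function name and the empty-mask convention are our choices, not the dissertation's):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]; 1.0 means perfect overlap."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks: define as 1
```

A DSC above 90%, as reported for the carotid algorithms, means the algorithmic and manual masks share well over nine-tenths of their combined area.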
In conclusion, the development of robust semi-automated algorithms for generating 3D measurements of plaque burden may accelerate translation of 3D measurements to clinical trials and subsequently to clinical care