Chan-Vese Reformulation for Selective Image Segmentation.
Selective segmentation involves incorporating user input to partition an image into foreground and background, discriminating between objects of a similar type. Typically, such methods introduce additional constraints into generic segmentation approaches. However, we show that this is often inconsistent with common assumptions about the image. The proposed method introduces a new fitting term that is more useful in practice than the Chan-Vese framework. In particular, the idea is to define a term that allows the background to consist of multiple regions of inhomogeneity. We provide comparative experimental results against alternative approaches to demonstrate the advantages of the proposed method, broadening the possible application of these methods.
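For reference, the classical Chan-Vese model that this abstract builds on minimises the following energy (the textbook formulation, not the thesis's new fitting term):

```latex
E_{CV}(\Gamma, c_1, c_2) = \mu\,\mathrm{Length}(\Gamma)
  + \lambda_1 \int_{\mathrm{inside}(\Gamma)} |I(\mathbf{x}) - c_1|^2 \, d\mathbf{x}
  + \lambda_2 \int_{\mathrm{outside}(\Gamma)} |I(\mathbf{x}) - c_2|^2 \, d\mathbf{x}
```

The single constant $c_2$ models the whole background as homogeneous; the reformulation described above targets precisely the case where that assumption fails, i.e. where the background consists of multiple inhomogeneous regions.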
A discrete graph Laplacian for signal processing
In this thesis we exploit diffusion processes on graphs to effect two fundamental problems of image processing: denoising and segmentation. We treat these two low-level vision problems on the pixel-wise level under a unified framework: a graph embedding. Using this framework opens us up to the possibilities of exploiting recently introduced algorithms from the semi-supervised machine learning literature.
We contribute two novel edge-preserving smoothing algorithms to the literature and apply them to computational photography tasks. Many recent computational photography tasks require the decomposition of an image into a smooth base layer containing large-scale intensity variations and a residual layer capturing fine details. Edge-preserving smoothing is the main computational mechanism for producing these multi-scale image representations. We, in effect, introduce a new approach to edge-preserving multi-scale image decompositions. Whereas prior approaches such as the bilateral filter and weighted least squares methods require multiple parameters to tune the response of the filters, our method requires only one, which can be interpreted as a scale parameter. We demonstrate the utility of our approach by applying the method to computational photography tasks that utilise multi-scale image decompositions.
With minimal modification to these edge-preserving smoothing algorithms, we show that we can extend them to perform interactive image segmentation. As a result, the operations of segmentation and denoising are conducted under a unified framework. Moreover, we discuss how our method is related to region-based active contours. We benchmark our proposed interactive segmentation algorithms against those based upon energy minimisation, specifically graph-cut methods, and demonstrate that we achieve competitive performance.
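As an illustration of the graph-embedding idea underlying the thesis, here is a minimal, hypothetical Python sketch (not the author's code) of edge-preserving smoothing via an implicit diffusion step with a graph Laplacian whose weights decay across intensity edges:

```python
import numpy as np

def laplacian_from_weights(W):
    """Combinatorial graph Laplacian L = D - W for a symmetric weight matrix."""
    return np.diag(W.sum(axis=1)) - W

def diffuse(signal, W, t=1.0):
    """One implicit diffusion step: solve (I + t*L) u = f.
    Diffusion is edge-aware because W carries small weights across edges."""
    L = laplacian_from_weights(W)
    return np.linalg.solve(np.eye(len(signal)) + t * L, signal)

# Toy 1D "image": a step edge plus noise.
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(8), np.ones(8)]) + 0.1 * rng.standard_normal(16)

# Neighbouring pixels are linked with weights that decay with the intensity gap,
# so the step edge (gap ~1) gets near-zero weight and is preserved.
n = len(f)
W = np.zeros((n, n))
for i in range(n - 1):
    w = np.exp(-(f[i] - f[i + 1]) ** 2 / 0.02)
    W[i, i + 1] = W[i + 1, i] = w

u = diffuse(f, W, t=5.0)
```

The same machinery extends to 2D images by connecting each pixel to its grid neighbours; the thesis's actual weight and parameter choices are not specified here.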
Statistical Shape Modelling and Segmentation of the Respiratory Airway
The human respiratory airway consists of the upper (nasal cavity, pharynx) and the lower (trachea, bronchi) respiratory tracts. Accurate segmentation of these two airway tracts can lead to better diagnosis and interpretation of airway-specific diseases, and improve the localization of abnormal metabolic or pathological sites found within and/or surrounding the respiratory regions. Due to the complexity and variability of the anatomical structure of the upper respiratory airway, along with the challenge of distinguishing the nasal cavity from non-respiratory regions such as the paranasal sinuses, it is difficult for existing algorithms to accurately segment the upper airway without manual intervention. This thesis presents an implicit non-parametric framework for constructing a statistical shape model (SSM) of the upper and lower respiratory tract, capable of generating distinct shapes and of being adapted for segmentation. An SSM of the nasal cavity was successfully constructed using 50 nasal CT scans. The performance of the SSM was evaluated for compactness, specificity and generality; an average distance error of 1.47 mm was measured for the generality assessment. The constructed SSM was further adapted with a modified locally constrained random walk algorithm to segment the nasal cavity. The proposed algorithm was evaluated on 30 CT images and outperformed comparative state-of-the-art and conventional algorithms. For the lower airway, a separate algorithm was proposed to automatically segment the trachea and bronchi, designed to tolerate the image characteristics inherent in low-contrast CT. The algorithm was evaluated on 20 clinical low-contrast CT images from PET-CT patient studies and demonstrated better segmentation performance (87.1 ± 2.8 DSC and distance error of 0.37 ± 0.08 mm) than comparative state-of-the-art algorithms.
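The DSC figures above are Dice similarity coefficients; for reference, a minimal Python sketch (with toy masks, not data from the thesis) of how DSC is computed between two binary segmentations:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1], 1 for identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two 4x4 squares offset by one pixel: 16 px each, 9 px overlap.
auto = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True
ref  = np.zeros((8, 8), dtype=bool); ref[3:7, 3:7] = True
print(dice(auto, ref))  # 2*9/(16+16) = 0.5625
```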
RECURSIVE PATH FOLLOWING IN LOG POLAR SPACE FOR AUTONOMOUS LEAF CONTOUR EXTRACTION
Image segmentation has advanced agriculture in species identification, chlorophyll measurement, plant growth monitoring and disease detection. Most methods require some level of manual segmentation, as autonomous image segmentation is a difficult task. The methods with the highest segmentation precision use a priori knowledge obtained from user input, which is time-consuming and subjective. This research provides current segmentation methods with a pre-processing model that autonomously extracts an internal and an external contour of the leaf. The model converts uniform Cartesian images to non-uniformly sampled images in log polar space. A recursive path following algorithm was designed to map out the leaf's edge boundary. This boundary is shifted inward and outward to create two contours: one that lies within the foreground and one within the background. The image database consists of 918 leaves from multiple plants and different background mediums. The model successfully created contours for 714 of the leaves. Results of the autonomously created contours being used in lieu of user-input contours for a current segmentation algorithm are presented.
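The Cartesian-to-log-polar conversion the model relies on can be sketched as follows (an illustrative nearest-neighbour resampler, not the paper's implementation; the grid sizes and centre are arbitrary assumptions):

```python
import numpy as np

def sample_log_polar(img, cx, cy, n_rho=32, n_theta=64):
    """Resample an image onto a log-polar grid about a centre (cx, cy).
    Radial samples are spaced exponentially, so the sampling is non-uniform:
    finest near the centre, coarsest at the periphery."""
    h, w = img.shape
    r_max = min(cx, cy, w - 1 - cx, h - 1 - cy)
    rhos = np.exp(np.linspace(0.0, np.log(r_max), n_rho))     # radii 1..r_max
    thetas = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    out = np.empty((n_rho, n_theta), dtype=img.dtype)
    for i, r in enumerate(rhos):
        for j, t in enumerate(thetas):
            x = int(round(cx + r * np.cos(t)))                # nearest neighbour
            y = int(round(cy + r * np.sin(t)))
            out[i, j] = img[y, x]
    return out

# A vertical edge through the centre maps to constant columns in log-polar space.
img = np.zeros((65, 65)); img[:, 32:] = 1.0
lp = sample_log_polar(img, cx=32, cy=32)
```

In this representation a radial path from the centre becomes a single column, which is what makes edge-boundary path following tractable.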
Graph Spectral Image Processing
The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph and apply GSP tools to process and analyse the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering and image segmentation.
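The core operation behind these graph spectral techniques is the graph Fourier transform: projecting a signal onto the eigenvectors of a graph Laplacian, ordered by eigenvalue ("graph frequency"). A minimal sketch for a 1D row of pixels (illustrative only, not code from the article):

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian L = D - W of an unweighted path graph,
    the graph of a 1D pixel row with nearest-neighbour connections."""
    W = np.zeros((n, n))
    idx = np.arange(n - 1)
    W[idx, idx + 1] = W[idx + 1, idx] = 1.0
    return np.diag(W.sum(axis=1)) - W

def gft(signal):
    """Graph Fourier transform: coefficients of the signal in the
    Laplacian eigenbasis, sorted by eigenvalue (graph frequency)."""
    L = path_laplacian(len(signal))
    lam, U = np.linalg.eigh(L)     # eigh returns ascending eigenvalues
    return lam, U.T @ signal

# A smooth ramp concentrates its energy in the low graph frequencies,
# which is what makes graph-spectral compression and filtering effective.
x = np.linspace(0.0, 1.0, 16)
lam, coeffs = gft(x)
low_energy = np.sum(coeffs[:4] ** 2) / np.sum(coeffs ** 2)
```

For images, the same transform applies with a 2D grid graph (or an image-adaptive weighted graph) in place of the path graph.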
Scribble2Label: Self-labeling via Consistency for Scribble-supervised Cell Segmentation
Cell segmentation yields important findings in medical image analysis. Through cell analysis, various tasks such as cancer diagnosis, reconstruction of synaptic connectivity maps and measurement of drug response become possible.
With recent advances in deep learning, more accurate and high-throughput cell segmentation has become feasible. However, deep learning-based cell segmentation faces cost and scalability problems in dataset construction. Supervised learning methods require fully annotated ground-truth labels, which may contain as many as hundreds of cells per image. Consequently, annotation is time-consuming and labor-intensive.
This thesis proposes Scribble2Label, a novel weakly-supervised cell segmentation framework that exploits only a handful of scribble annotations without full segmentation labels. The core idea is to combine pseudo-labeling and label filtering to generate reliable labels from weak supervision. For this, we leverage the consistency of predictions by iteratively averaging the predictions to improve pseudo labels.
The performance of Scribble2Label is demonstrated by comparing it to several state-of-the-art cell segmentation methods on various cell image modalities, including bright-field, fluorescence, histopathology and electron microscopy (EM), where it outperforms previous related work. Furthermore, the proposed method works consistently well at different scribble-annotation levels.
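The pseudo-labeling-plus-filtering idea can be sketched as follows (a simplified, hypothetical illustration with hand-made probability maps in place of network predictions; `alpha` and `tau` are assumed parameters, not the thesis's values):

```python
import numpy as np

def update_ema(ema, pred, alpha=0.5):
    """Exponential moving average of per-pixel foreground probabilities,
    accumulated over training iterations for prediction consistency."""
    return alpha * ema + (1 - alpha) * pred if ema is not None else pred

def make_pseudo_labels(ema, scribble, tau=0.8):
    """Label filtering: trust the averaged prediction only where it is confident.
    scribble: 1 = foreground scribble, 0 = background scribble, -1 = unlabelled.
    Returns labels in {1, 0, -1}; -1 pixels would be ignored by the loss."""
    labels = np.full(ema.shape, -1, dtype=int)
    labels[ema >= tau] = 1          # confidently foreground
    labels[ema <= 1 - tau] = 0      # confidently background
    labels[scribble >= 0] = scribble[scribble >= 0]   # scribbles always win
    return labels

# Toy example: two prediction rounds on a 4-pixel image, one scribbled pixel.
scribble = np.array([1, -1, -1, -1])
ema = None
for pred in (np.array([0.9, 0.85, 0.55, 0.1]),
             np.array([0.95, 0.9, 0.45, 0.05])):
    ema = update_ema(ema, pred)
labels = make_pseudo_labels(ema, scribble)
```

The averaged probabilities are [0.925, 0.875, 0.5, 0.075], so the third pixel remains unlabelled while the others receive reliable pseudo labels.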
Development of computer-based algorithms for unsupervised assessment of radiotherapy contouring
INTRODUCTION: Despite the advances in radiotherapy treatment delivery, target volume delineation remains one of the greatest sources of error in the radiotherapy delivery process, which can lead to poor tumour control probability and impact clinical outcome. Contouring assessments are performed to ensure high quality of target volume definition in clinical trials, but this can be subjective and labour-intensive.
This project addresses the hypothesis that computational segmentation techniques, with a given prior, can be used to develop an image-based tumour delineation process for contour assessments. This thesis focuses on the exploration of segmentation techniques to develop an automated method for generating reference delineations in the setting of advanced lung cancer. The novelty of this project is in the use of the initial clinician outline as a prior for image segmentation.
METHODS: Automated segmentation processes were developed for stage II and III non-small cell lung cancer using the IDEAL-CRT clinical trial dataset. Marker-controlled watershed segmentation, two active contour approaches (edge- and region-based) and graph-cut applied on superpixels were explored. k-nearest neighbour (k-NN) classification of tumour from normal tissues based on texture features was also investigated.
RESULTS: 63 cases were used for development and training. Segmentation and classification performance were evaluated on an independent test set of 16 cases. Edge-based active contour segmentation achieved the highest Dice similarity coefficient of 0.80 ± 0.06, followed by graph-cut at 0.76 ± 0.06, watershed at 0.72 ± 0.08 and region-based active contour at 0.71 ± 0.07, with mean computational times of 192 ± 102 sec, 834 ± 438 sec, 21 ± 5 sec and 45 ± 18 sec per case respectively. Reduced accuracy on irregularly shaped lesions and segmentation leakage at the mediastinum were observed.
In the distinction of tumour and non-tumour regions, misclassification errors of 14.5% and 15.5% were achieved using 16- and 8-pixel regions of interest (ROIs) respectively. Higher misclassification errors of 24.7% and 26.9% for 16- and 8-pixel ROIs were obtained in the analysis of the tumour boundary.
CONCLUSIONS: Conventional image-based segmentation techniques with the application of priors are useful for automatic segmentation of tumours, although further developments are required to improve their performance. Texture classification can be useful in distinguishing tumour from non-tumour tissue, but the segmentation task at the tumour boundary is more difficult. Future work with deep-learning segmentation approaches needs to be explored.
Funded by the National Radiotherapy Trials Quality Assurance (RTTQA) group
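The k-NN texture classification step described in METHODS can be sketched as follows (synthetic ROIs, and simple first-order features standing in for the thesis's unspecified texture descriptors):

```python
import numpy as np

def roi_features(roi):
    """Two first-order texture features per ROI (mean and standard deviation);
    stand-ins for the texture features actually used in the thesis."""
    return np.array([roi.mean(), roi.std()])

def knn_predict(train_X, train_y, x, k=3):
    """Classify an ROI feature vector by majority vote of its k nearest neighbours."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

rng = np.random.default_rng(1)
# Synthetic 8x8-pixel ROIs: "tumour" brighter and more heterogeneous than "normal".
tumour = [rng.normal(0.7, 0.2, (8, 8)) for _ in range(10)]
normal = [rng.normal(0.3, 0.05, (8, 8)) for _ in range(10)]
X = np.array([roi_features(r) for r in tumour + normal])
y = np.array([1] * 10 + [0] * 10)   # 1 = tumour, 0 = normal

test_roi = rng.normal(0.7, 0.2, (8, 8))
pred = knn_predict(X, y, roi_features(test_roi))
```

The ROI size plays the same role as the 8- and 16-pixel ROIs in the results above: larger windows give more stable texture statistics but blur the tumour boundary.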
Segmentation of ultrasound images of thyroid nodule for assisting fine needle aspiration cytology
The incidence of thyroid nodules is very high and generally increases with age. A thyroid nodule may presage the emergence of thyroid cancer, and can be completely cured if detected early. Fine needle aspiration cytology is a recognized method for early diagnosis of thyroid nodules, but it still has some limitations, and ultrasound has become the first choice for auxiliary examination of thyroid nodular disease. Combining medical imaging technology with fine needle aspiration cytology would significantly improve the diagnostic rate of thyroid nodules. However, the properties of ultrasound degrade image quality, which makes it difficult for physicians to recognize edges. Image segmentation based on graph theory is currently a research hotspot, and normalized cut (Ncut) is a representative method suitable for segmenting feature parts of medical images. However, solving the normalized cut requires large memory capacity and heavy computation of the weight matrix, and it tends to produce over- or under-segmentation, leading to inaccurate results. The speckle noise in B-mode ultrasound images of thyroid tumours further deteriorates image quality. In light of this characteristic, we combine an anisotropic diffusion model with the normalized cut in this paper. The anisotropic diffusion enhancement removes noise in the B-mode ultrasound image while preserving important edges and local details. This reduces the amount of computation in constructing the weight matrix of the improved normalized cut and improves the accuracy of the final segmentation results. The feasibility of the method is demonstrated by the experimental results.
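The anisotropic diffusion pre-processing step can be sketched in its classical Perona-Malik form (illustrative parameters and a toy image, not those of the paper):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Perona-Malik anisotropic diffusion: smooth within regions while the
    conductance g = exp(-(|grad|/kappa)^2) suppresses diffusion across edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # One-sided differences to the four neighbours (no flux at the borders).
        dn = np.roll(u, 1, axis=0) - u;  dn[0, :] = 0
        ds = np.roll(u, -1, axis=0) - u; ds[-1, :] = 0
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, axis=1) - u;  dw[:, 0] = 0
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Step edge with speckle-like noise: diffusion removes the noise in each
# homogeneous region but leaves the edge (gradient ~1 >> kappa) intact.
rng = np.random.default_rng(2)
img = np.zeros((16, 16)); img[:, 8:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)
smooth = perona_malik(noisy)
```

A cleaner image then yields a sparser, more reliable weight matrix for the normalized cut, which is the computational saving the abstract describes.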
Selective Image Segmentation Models and Fast Multigrid Methods
This thesis is concerned with developing robust and accurate variational selective image segmentation models along with fast multigrid methods to solve non-linear partial differential equations (PDEs). The first two major contributions are the development of new distance terms and new intensity fitting terms for selective image segmentation models. These give state-of-the-art segmentation results, with high robustness to the main parameters and to the user input. Therefore, these models are highly applicable to real-world applications such as segmenting single organs from medical scans. The final major contribution is the development of novel non-standard smoothers for the non-linear full approximation scheme multigrid framework. Multigrid is an optimal O(N) iterative scheme when it converges; however, a multigrid solver applied directly to a non-linear problem will typically not converge, principally due to the ineffectiveness of standard smoothing schemes such as Jacobi or Gauss-Seidel. We identify, using local Fourier analysis, the true reason these smoothers are ineffective, and develop a smoother which is guaranteed to be effective. Experiments show that the smoother is effective and the algorithm converges as desired. These new non-standard smoothing schemes can be used to solve a whole class of non-linear PDEs quickly. This work also lays the groundwork for the development of a "black-box" non-linear multigrid solver which doesn't require the degree of tuning that current multigrid algorithms do.
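The smoothing property that local Fourier analysis examines can be seen even for the standard weighted Jacobi smoother on a linear model problem: it damps oscillatory error modes rapidly while barely touching smooth ones, which is why multigrid must handle smooth error on coarser grids. This is a sketch of that standard analysis, not the thesis's new smoother:

```python
import numpy as np

def weighted_jacobi(A, b, x, n_iter=10, omega=2/3):
    """Weighted Jacobi iterations x <- x + omega * D^{-1} (b - A x).
    With omega = 2/3, high-frequency error components are damped quickly,
    which is the 'smoothing' property multigrid relies on."""
    Dinv = 1.0 / np.diag(A)
    for _ in range(n_iter):
        x = x + omega * Dinv * (b - A @ x)
    return x

# 1D Poisson matrix with zero right-hand side: the iterate IS the error.
n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
i = np.arange(1, n + 1)
low  = np.sin(1 * np.pi * i / (n + 1))    # smooth error mode (k = 1)
high = np.sin(28 * np.pi * i / (n + 1))   # oscillatory error mode (k = 28)
e_low  = weighted_jacobi(A, np.zeros(n), low)
e_high = weighted_jacobi(A, np.zeros(n), high)
```

After ten sweeps the oscillatory mode is reduced by several orders of magnitude while the smooth mode is nearly unchanged; for non-linear PDEs, the corresponding analysis of why standard smoothers fail is exactly what the thesis carries out.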