Active Contours and Image Segmentation: The Current State of the Art
Image segmentation is a fundamental task in image analysis responsible for partitioning an image into multiple sub-regions based on a desired feature. Active contours have been widely used as attractive image segmentation methods because they always produce sub-regions with continuous boundaries, whereas kernel-based edge detection methods, e.g. Sobel edge detectors, often produce discontinuous boundaries. The use of level set theory has provided more flexibility and convenience in the implementation of active contours. However, traditional edge-based active contour models have been applicable only to relatively simple images whose sub-regions are uniform and free of internal edges. In this paper we briefly review the taxonomy and current state of the art of image segmentation and the use of active contours.
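The region-based active contours surveyed above can be illustrated with a minimal Chan-Vese-style sketch: a level set function is pushed toward pixels that better match the mean intensity inside versus outside the contour. This is a simplified illustration, not any specific published implementation; the function names and the omission of the curvature/length regularization term are our own choices.

```python
import numpy as np

def chan_vese_step(phi, img, dt=0.5):
    """One region-force step of a Chan-Vese-style active contour.

    phi : level-set function (the contour is its zero crossing)
    img : grayscale image (float)
    The curvature/length regularization term is omitted for brevity.
    """
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[~inside].mean() if (~inside).any() else 0.0
    # Each pixel is pushed toward the region whose mean it matches better.
    force = (img - c2) ** 2 - (img - c1) ** 2
    return phi + dt * force

def segment(img, n_iter=50):
    """Evolve a centred circular initial contour; region-based forces make
    the result insensitive to this rough initialization."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    phi = (min(h, w) / 4) - np.hypot(y - h / 2, x - w / 2)
    img = img.astype(float)
    for _ in range(n_iter):
        phi = chan_vese_step(phi, img)
    return phi > 0  # final binary segmentation
```

Because the force depends only on region statistics rather than image gradients, the sketch recovers objects even when edges are weak, which is the property the abstract contrasts against Sobel-style edge detectors.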
Statistical Region Based Segmentation of Ultrasound Images
Segmentation of ultrasound images is a challenging problem due to speckle, which corrupts the image and can result in weak or missing image boundaries, poor signal-to-noise ratio, and diminished contrast resolution. Speckle is a random interference pattern that is characterized by an asymmetric distribution as well as significant spatial correlation. These attributes of speckle are challenging to model in a segmentation approach, so many previous ultrasound segmentation methods simplify the problem by assuming that the speckle is white and/or Gaussian distributed. Unlike these methods, in this paper we present an ultrasound-specific segmentation approach that addresses both the spatial correlation of the data and its intensity distribution. We first decorrelate the image and then apply a region-based active contour whose motion is derived from an appropriate parametric distribution for maximum likelihood image segmentation. We consider zero-mean complex Gaussian, Rayleigh, and Fisher-Tippett flows, which are designed to model fully formed speckle in the in-phase/quadrature (IQ), envelope-detected, and display (log-compressed) images, respectively. We present experimental results demonstrating the effectiveness of our method, and compare the results to other parametric and non-parametric active contours.
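The Rayleigh flow mentioned for envelope-detected speckle can be sketched concretely: the maximum-likelihood Rayleigh scale in each region is sigma^2 = E[x^2]/2, and the contour is driven by the log-likelihood ratio between the inside and outside models. This is a minimal sketch of the statistical machinery only, with hypothetical function names; it is not the authors' implementation.

```python
import numpy as np

def rayleigh_sigma2_ml(x):
    """Maximum-likelihood Rayleigh scale for envelope-detected speckle:
    sigma^2 = E[x^2] / 2."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** 2) / 2.0

def rayleigh_loglik(x, sigma2):
    """Pointwise Rayleigh log-density: log(x / sigma^2) - x^2 / (2 sigma^2),
    valid for x > 0 (envelope data is positive)."""
    x = np.asarray(x, dtype=float)
    return np.log(x / sigma2) - x ** 2 / (2.0 * sigma2)

def region_speed(img, mask):
    """Log-likelihood-ratio 'speed' for a region-based contour:
    positive where a pixel fits the inside model better than the outside one."""
    s_in = rayleigh_sigma2_ml(img[mask])
    s_out = rayleigh_sigma2_ml(img[~mask])
    return rayleigh_loglik(img, s_in) - rayleigh_loglik(img, s_out)
```

The zero-mean complex Gaussian and Fisher-Tippett flows for IQ and log-compressed data would follow the same pattern with their respective densities substituted.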
An Automatic Level Set Based Liver Segmentation from MRI Data Sets
Fast and accurate liver segmentation is a challenging task in medical image analysis. Liver segmentation is an important process for computer-assisted diagnosis, pre-evaluation of liver transplantation, and therapy planning of liver tumors. Magnetic resonance imaging has several advantages, such as freedom from ionizing radiation and good contrast visualization of soft tissue. Moreover, innovations in recent technology and image acquisition techniques have made magnetic resonance imaging a major tool in modern medicine. However, the adoption of magnetic resonance images for liver segmentation has been slow compared with applications involving the central nervous and musculoskeletal systems. The reasons include the irregular shape, size, and position of the liver, contrast agent effects, and the similarity of the gray values of neighboring organs. Therefore, in this study, we present a fully automatic liver segmentation method using an approximation of level set based contour evolution on T2-weighted magnetic resonance data sets. The method avoids solving partial differential equations and applies only integer operations with a two-cycle segmentation algorithm. The efficiency of the proposed approach is achieved by applying the algorithm to all slices with a constant number of iterations and performing the contour evolution without any user-defined initial contour. The obtained results are evaluated with four different similarity measures and show that the automatic segmentation approach gives successful results.
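The PDE-free evolution mentioned above can be illustrated loosely: instead of integrating a level set PDE, boundary pixels are switched between inside and outside labels according to the sign of a region-based speed, using only integer label storage. This is a much-simplified illustration of the general idea, not the published two-cycle algorithm, and the function name and region-mean speed are our own choices.

```python
import numpy as np

def evolve_integer(labels, img, n_iter=40):
    """PDE-free contour evolution sketch: only boundary pixels may flip
    between inside (1) and outside (0), driven by the sign of a
    region-mean speed. Labels are stored as integers throughout."""
    img = img.astype(float)
    lab = labels.astype(np.int8).copy()
    for _ in range(n_iter):
        c1 = img[lab == 1].mean()  # mean intensity inside
        c0 = img[lab == 0].mean()  # mean intensity outside
        # Boundary pixels: those with a 4-neighbour of the other label.
        pad = np.pad(lab, 1, mode="edge")
        boundary = ((pad[:-2, 1:-1] != lab) | (pad[2:, 1:-1] != lab) |
                    (pad[1:-1, :-2] != lab) | (pad[1:-1, 2:] != lab))
        speed = (img - c0) ** 2 - (img - c1) ** 2  # > 0: fits inside better
        lab[boundary & (speed > 0)] = 1
        lab[boundary & (speed < 0)] = 0
    return lab
```

Restricting updates to the current boundary is what keeps the per-iteration cost low; the contour advances at most one pixel per pass.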
Brain MRI Segmentation with Multiphase Minimal Partitioning: A Comparative Study
This paper presents the implementation and quantitative evaluation of a multiphase three-dimensional deformable model in a level set framework for automated segmentation of brain MRIs. The segmentation algorithm performs an optimal partitioning of three-dimensional data based on homogeneity measures that naturally evolves to the extraction of different tissue types in the brain. Random seed initialization was used to minimize the sensitivity of the method to initial conditions while avoiding the need for a priori information. This random initialization ensures robustness of the method with respect to the initialization and the minimization set-up. Postprocessing corrections with morphological operators were applied to refine the details of the global segmentation method. A clinical study was performed on a database of 10 adult brain MRI volumes to compare the level set segmentation to three other methods: “idealized” intensity thresholding, fuzzy connectedness, and an expectation-maximization classification using hidden Markov random fields. Quantitative evaluation of segmentation accuracy was performed against manual segmentation, computing true positive and false positive volume fractions. A statistical comparison of the segmentation methods was performed through a Wilcoxon analysis of these error rates, and the results showed very high quality and stability of the multiphase three-dimensional level set method.
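The true positive and false positive volume fractions used for evaluation against manual segmentation can be sketched directly. We assume the common voxel-counting definitions (TPVF relative to the reference volume, FPVF relative to the background volume); the abstract does not spell out the exact denominators, so this is an illustrative convention.

```python
import numpy as np

def volume_fractions(seg, ref):
    """Voxel-wise accuracy of a binary segmentation against a manual reference.

    TPVF = |seg AND ref| / |ref|        (fraction of reference recovered)
    FPVF = |seg AND NOT ref| / |NOT ref| (fraction of background mislabeled)
    """
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    tpvf = np.logical_and(seg, ref).sum() / ref.sum()
    fpvf = np.logical_and(seg, ~ref).sum() / (~ref).sum()
    return float(tpvf), float(fpvf)
```

A perfect segmentation yields TPVF = 1 and FPVF = 0; the paper's Wilcoxon analysis would then be run over these per-volume error rates across the 10 MRI volumes.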
Preprocessing Solar Images while Preserving their Latent Structure
Telescopes such as the Atmospheric Imaging Assembly aboard the Solar Dynamics
Observatory, a NASA satellite, collect massive streams of high resolution
images of the Sun through multiple wavelength filters. Reconstructing
pixel-by-pixel thermal properties based on these images can be framed as an
ill-posed inverse problem with Poisson noise, but this reconstruction is
computationally expensive and there is disagreement among researchers about
what regularization or prior assumptions are most appropriate. This article
presents an image segmentation framework for preprocessing such images in order
to reduce the data volume while preserving as much thermal information as
possible for later downstream analyses. The resulting segmented images reflect
thermal properties but do not depend on solving the ill-posed inverse problem.
This allows users to avoid the Poisson inverse problem altogether or to tackle
it on each of 10 segments rather than on each of 10 pixels,
reducing computing time by a factor of 10. We employ a parametric
class of dissimilarities that can be expressed as cosine dissimilarity
functions or Hellinger distances between nonlinearly transformed vectors of
multi-passband observations in each pixel. We develop a decision theoretic
framework for choosing the dissimilarity that minimizes the expected loss that
arises when estimating identifiable thermal properties based on segmented
images rather than on a pixel-by-pixel basis. We also examine the efficacy of
different dissimilarities for recovering clusters in the underlying thermal
properties. The expected losses are computed under scientifically motivated
prior distributions. Two simulation studies guide our choices of dissimilarity
function. We illustrate our method by segmenting images of a coronal hole
observed on 26 February 2015
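The two dissimilarity families named above have simple closed forms. As a minimal sketch (the nonlinear transforms applied to the multi-passband vectors are paper-specific and omitted here), cosine dissimilarity and the Hellinger distance between per-pixel observation vectors can be computed as:

```python
import numpy as np

def cosine_dissimilarity(u, v):
    """1 - cos(angle) between two pixel observation vectors:
    0 for parallel vectors, 1 for orthogonal ones."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def hellinger(p, q):
    """Hellinger distance between nonnegative vectors, each normalised to
    sum to 1: H = sqrt(1/2) * || sqrt(p) - sqrt(q) ||_2, in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(0.5) * np.linalg.norm(np.sqrt(p) - np.sqrt(q))
```

Both are invariant to the overall scale of the vectors, which is consistent with comparing spectral shape rather than absolute brightness across passbands.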
Comparative Evaluation of Electrical Resistance Tomography, Positron Emission Particle Tracking and High-Speed Imaging for Analysing Horizontal Particle-Liquid Flow in a Pipe
We evaluate three experimental techniques, electrical resistance tomography (ERT), positron emission particle tracking (PEPT), and high-speed imaging (HSI), for analysing the local particle velocity field and spatial distribution in a horizontal particle-liquid pipe flow under varying conditions of solid concentration. A new ERT methodology is devised for estimating particle velocity, circumventing the limitations of the conventional cross-correlation technique. Furthermore, an enhanced HSI approach is introduced and systematically compared with PEPT and ERT. Results show that, under all conditions, PEPT provides the most accurate particle velocity field followed by HSI, whilst ERT yields the most accurate concentration field, followed by HSI. The enhanced HSI emerges as a simple, cost-effective option compared to PEPT and ERT. A combined measurement approach using PEPT for local particle velocity and ERT for local concentration, however, delivers the best comprehensive two-phase flow characterisation, highlighting potential synergies between these methods for complex flow studies.
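The conventional cross-correlation technique whose limitations the new ERT methodology circumvents works by cross-correlating signals from two axially separated sensor planes: the lag at the correlation peak gives the transit time, and velocity follows as spacing over transit time. A minimal sketch of that baseline (not the paper's new method; the function name and arguments are our own):

```python
import numpy as np

def crosscorr_velocity(s1, s2, plane_spacing, dt):
    """Conventional cross-correlation transit-time velocity estimate.

    s1, s2        : time series from the upstream and downstream sensor planes
    plane_spacing : axial distance between the planes (m)
    dt            : sampling interval (s)
    The lag maximising the cross-correlation is taken as the transit time.
    """
    s1 = s1 - s1.mean()
    s2 = s2 - s2.mean()
    corr = np.correlate(s2, s1, mode="full")
    lag = int(np.argmax(corr)) - (len(s1) - 1)  # samples of delay of s2 vs s1
    return plane_spacing / (lag * dt)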