Color Image Segmentation Using Generalized Inverted Finite Mixture Models By Integrating Spatial Information
In computer vision, image segmentation plays a foundational role. Innumerable techniques, such as active contour, graph-cut-based, model-based, machine learning, and clustering-based methods, have been proposed for tackling the image segmentation problem, but none of them is universally applicable. Thus, the search for optimized and robust image segmentation models remains an open problem. The main challenges in image segmentation are integrating spatial information, finding the exact number of clusters (M), and segmenting the image accurately, especially in the presence of noise, complex backgrounds, low contrast, and inhomogeneous intensity. The use of finite mixture models (FMMs) for image segmentation is a very popular approach in computer vision. Applications of FMM-based image segmentation range from automatic number plate recognition and content-based image retrieval to texture recognition, facial recognition, and satellite imagery. However, FMM-based image segmentation suffers from some problems: it considers neither the spatial correlation among neighboring pixels nor the prior knowledge that adjacent pixels most likely belong to the same cluster. Color images are also sensitive to illumination and noise. To overcome these limitations, we have used three different methods for integrating spatial information with FMMs. The first method uses prior knowledge of M; the second uses a Markov random field (MRF); and the third uses a weighted geometric and arithmetic mean template. We have implemented these methods with the inverted Dirichlet mixture model (IDMM), the generalized inverted Dirichlet mixture model (GIDMM), and the inverted Beta-Liouville mixture model (IBLMM). For experimentation, the Berkeley Segmentation Dataset (BSD500) and MIT's Computational Visual Cognition Laboratory (CVCL) dataset are employed.
Furthermore, the segmentation outputs of IDMM, GIDMM, and IBLMM are compared with one another using segmentation performance evaluation metrics.
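To make the mean-template idea concrete, here is a minimal sketch of how an arithmetic mean template can inject spatial information into mixture-model posteriors. This is an illustration under our own assumptions, not the paper's exact formulation: the function name, the 3x3 neighborhood, and the `weight_center` mixing parameter are all hypothetical choices.

```python
import numpy as np

def spatially_smooth_posteriors(post, weight_center=0.5):
    """Arithmetic-mean template (illustrative): mix each pixel's cluster
    posterior with the average posterior of its 3x3 neighborhood, so that
    adjacent pixels are encouraged to share the same cluster.

    post: (H, W, K) array of per-pixel cluster responsibilities that
    sum to 1 along the last axis."""
    H, W, K = post.shape
    padded = np.pad(post, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # Sum the 3x3 neighborhood (center included), then subtract the
    # center to obtain the average over the 8 neighbors.
    neigh_sum = np.zeros_like(post)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            neigh_sum += padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    neigh_avg = (neigh_sum - post) / 8.0
    smoothed = weight_center * post + (1 - weight_center) * neigh_avg
    # Renormalize so each pixel's posteriors still sum to 1.
    return smoothed / smoothed.sum(axis=2, keepdims=True)
```

In an EM-style fitting loop, a step like this would sit between the E-step (computing responsibilities) and the M-step (updating mixture parameters); a geometric-mean variant would multiply neighbor posteriors instead of averaging them.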
Pilot investigation of remote sensing for intertidal oyster mapping in coastal South Carolina: a methods comparison
South Carolina’s oyster reefs are a major component of the coastal landscape. Eastern oysters Crassostrea virginica are an important economic resource to the state and serve many essential functions in the environment, including water filtration, creek bank stabilization and habitat for
other plants and animals. Effective conservation and management of oyster reefs are dependent on an understanding of their abundance, distribution, condition, and change over time. In South Carolina, over 95% of the state's oyster habitat is intertidal. The current intertidal oyster reef database for South Carolina was developed by field assessment over several years. This database was completed in the early 1980s and is in need of an update to assess resource/habitat status and trends across the state. Anthropogenic factors such as coastal development and
associated waterway usage (e.g., boat wakes) are suspected of significantly altering the extent and health of the state’s oyster resources.
In 2002, the NOAA Coastal Services Center's (Center) Coastal Remote Sensing Program (CRS) worked with the Marine Resources Division of the South Carolina Department of Natural Resources (SCDNR) to develop methods for mapping intertidal oyster reefs along the South Carolina coast using remote sensing technology. The objective of this project was to provide SCDNR with potential methodologies and approaches for assessing oyster resources more efficiently than could be accomplished through field digitizing. The project focused on the utility of high-resolution aerial imagery and on documenting the effectiveness of various analysis techniques for accomplishing the update. (PDF contains 32 pages.)
Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies
In motion analysis and understanding it is important to be able to fit a
suitable model or structure to the temporal series of observed data, in order
to describe motion patterns in a compact way, and to discriminate between them.
In an unsupervised context, i.e., no prior model of the moving object(s) is
available, such a structure has to be learned from the data in a bottom-up
fashion. In recent times, volumetric approaches in which the motion is captured
from a number of cameras and a voxel-set representation of the body is built
from the camera views, have gained ground due to attractive features such as
inherent view-invariance and robustness to occlusions. Automatic, unsupervised
segmentation of moving bodies along entire sequences, in a temporally-coherent
and robust way, has the potential to provide a means of constructing a
bottom-up model of the moving body, and track motion cues that may be later
exploited for motion classification. Spectral methods such as locally linear
embedding (LLE) can be useful in this context, as they preserve the
"protrusions" of articulated shapes, i.e., high-curvature regions of the 3D
volume, while improving their separation in a lower-dimensional space, which
makes them easier to cluster. In this paper we therefore propose a spectral approach
to unsupervised and temporally-coherent body-protrusion segmentation along time
sequences. Volumetric shapes are clustered in an embedding space, clusters are
propagated in time to ensure coherence, and merged or split to accommodate
changes in the body's topology. Experiments on both synthetic and real
sequences of dense voxel-set data are shown. This supports the ability of the
proposed method to cluster body-parts consistently over time in a totally
unsupervised fashion, its robustness to sampling density and shape quality, and
its potential for bottom-up model construction.

Comment: 31 pages, 26 figures
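As a rough illustration of the spectral step the abstract describes, the sketch below implements a minimal LLE from scratch: the embedded points could then be grouped with any clustering method (e.g. k-means) to obtain protrusion segments. This is an assumption-laden sketch of standard LLE, not the paper's pipeline; the function name, neighborhood size, and regularization constant are hypothetical, and the temporal propagation and split/merge steps are not shown.

```python
import numpy as np

def lle_embed(X, n_neighbors=8, n_components=2, reg=1e-3):
    """Minimal locally linear embedding (LLE) of a point cloud
    X: (n, d), e.g. the voxel centers of a 3D body.
    Returns an (n, n_components) embedding."""
    n = X.shape[0]
    # 1) k-nearest neighbors by squared Euclidean distance.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point itself
    knn = np.argsort(d2, axis=1)[:, :n_neighbors]
    # 2) Reconstruction weights: for each point, solve a small
    # regularized Gram system so its neighbors reconstruct it.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[knn[i]] - X[i]              # neighbors centered on x_i
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(n_neighbors)
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, knn[i]] = w / w.sum()        # weights sum to 1
    # 3) Embedding: bottom eigenvectors of M = (I - W)^T (I - W),
    # skipping the trivial constant eigenvector.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:1 + n_components]
```

Because high-curvature protrusions (head, limbs) map to well-separated regions of the embedding, a simple clustering of the embedded points tends to recover body parts more reliably than clustering the raw voxels.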