Model-based learning of local image features for unsupervised texture segmentation
Features that accurately capture the textural patterns of a given class of images
are crucial to the performance of texture segmentation methods. Manually
selecting features, or designing new ones, can be tedious, so it is desirable
to adapt the features automatically to a particular image or class of images.
Typically, this requires a large set of training images with similar textures
together with ground-truth segmentations. In this work, we propose a framework to
learn features for texture segmentation when no such training data is
available. The cost function for our learning process is constructed to match a
commonly used segmentation model, the piecewise constant Mumford-Shah model.
This means that the features are learned such that they provide an
approximately piecewise constant feature image with a small jump set.
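For context, the piecewise constant Mumford-Shah (Potts) cost for a partition {Ω_i} of the image domain with region constants c_i, applied to a feature image f, can be written as (generic textbook notation, not taken from the paper):

$$ E\big(\{\Omega_i\},\{c_i\}\big) \;=\; \sum_i \int_{\Omega_i} \lVert f(x) - c_i \rVert^2 \,\mathrm{d}x \;+\; \lambda \sum_i \mathrm{length}\big(\partial \Omega_i\big), $$

so a feature image that is nearly constant inside regions and changes only across a short boundary set drives this cost down.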
Based on this idea, we develop a two-stage algorithm that first learns suitable
convolutional features and then performs the segmentation. We note that the
features can be learned from a small set of images, from a single image, or
even from image patches. The proposed method achieves a competitive rank on the
Prague texture segmentation benchmark and is effective for segmenting
histological images.
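To make the two-stage idea concrete, here is a minimal sketch (not the authors' implementation): it learns a single unit-norm convolutional filter by minimizing a total-variation proxy for the jump-set penalty, normalized by the feature variance so the trivial flat feature image is not optimal, and then thresholds the resulting feature image as a crude stand-in for the second (segmentation) stage. The synthetic two-texture image, the filter size, and the Nelder-Mead optimizer are all assumptions.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)

    # Synthetic two-texture image: fine noise on the left, smooth stripes on the right.
    img = np.zeros((64, 64))
    img[:, :32] = rng.normal(0.0, 1.0, (64, 32))
    img[:, 32:] = np.sin(np.arange(32) / 2.0)[None, :]

    K = 5  # filter size (an assumption)

    def cost(w):
        # Stage 1 cost: total variation of the squared filter response, a proxy
        # for the length/strength of the jump set of the feature image,
        # divided by the feature variance so the filter stays informative.
        f = w.reshape(K, K)
        f = f / (np.linalg.norm(f) + 1e-12)           # keep the filter unit-norm
        feat = convolve2d(img, f, mode="valid") ** 2  # rectified feature image
        tv = (np.abs(np.diff(feat, axis=0)).sum()
              + np.abs(np.diff(feat, axis=1)).sum())
        return tv / (feat.var() + 1e-12)

    res = minimize(cost, rng.normal(size=K * K), method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-6})
    f = res.x.reshape(K, K)
    f /= np.linalg.norm(f)
    feat = convolve2d(img, f, mode="valid") ** 2

    # Stage 2, crude stand-in: threshold the feature image into two segments.
    seg = feat > np.median(feat)
    print("fraction labeled 1, left half:", seg[:, :20].mean().round(2),
          "| right half:", seg[:, -20:].mean().round(2))

A real second stage would replace the threshold with a proper piecewise constant segmentation of the (multichannel) feature image, but the sketch shows how the learning cost and the segmentation model share the same piecewise-constancy objective.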
Two and three dimensional segmentation of multimodal imagery
The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has grown significantly in recent years, driven by rapid advances in the acquisition of image data. This low-level analysis step is critical to numerous applications, its primary goal being to expedite and improve the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of the image information. In this research, we propose a novel unsupervised segmentation framework for partitioning 2-D/3-D image data across multiple modalities (color, remote-sensing, and biomedical imaging) into non-overlapping regions using several spatial-spectral attributes. Initially, our framework exploits the edge information inherent in the data: using a vector gradient detection technique, edge-free pixels are grouped and individually labeled to partition an initial portion of the input image content. Pixels with higher gradient densities are then incorporated through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information, together with the initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics to yield the final segmentation. Experimental results, compared against published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to improve computational efficiency, we extend the aforementioned methodology to a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
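As a rough illustration of the initial-partition step described above, the sketch below groups low-gradient pixels of a multichannel image into labeled seed regions and then assigns the remaining high-gradient pixels to the nearest seed. The per-channel Sobel operator, the percentile threshold, and the nearest-region assignment are assumptions standing in for the paper's vector gradient detection and dynamic segment generation; the texture modeling and multivariate refinement stages are not reproduced.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)

    # Synthetic 3-channel "color" image: two flat regions plus mild noise.
    img = np.zeros((64, 64, 3))
    img[:, :32] = [0.2, 0.6, 0.3]
    img[:, 32:] = [0.8, 0.1, 0.5]
    img += rng.normal(0.0, 0.02, img.shape)

    # Vector gradient magnitude: combine per-channel Sobel responses
    # (a simple stand-in for a vector gradient detection technique).
    gmag = np.zeros(img.shape[:2])
    for c in range(img.shape[2]):
        gx = ndimage.sobel(img[..., c], axis=1)
        gy = ndimage.sobel(img[..., c], axis=0)
        gmag += gx ** 2 + gy ** 2
    gmag = np.sqrt(gmag)

    # Group edge-free (low-gradient) pixels into connected, labeled seed regions.
    edge_free = gmag < np.percentile(gmag, 70)  # threshold is an assumption
    seeds, n_regions = ndimage.label(edge_free)

    # Assign remaining high-gradient pixels to the nearest seed region, giving
    # the initial region map that later refinement stages would fuse.
    _, nearest = ndimage.distance_transform_edt(seeds == 0, return_indices=True)
    region_map = seeds[tuple(nearest)]
    print("seed regions:", n_regions, "| labels in initial map:",
          np.unique(region_map).size)

The distance-transform assignment here is one simple choice; the abstract's dynamic segment generation during the algorithm's progression could equally be realized by iterative region growing from the seeds.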