    Unsupervised morphological segmentation for images

    This paper deals with a morphological approach to unsupervised image segmentation. The proposed technique relies on a multiscale top-down approach that processes the data hierarchically, from the most global scale to the most detailed one. At each scale, the algorithm consists of four steps: image simplification, feature extraction, contour localization and quality estimation. The main emphasis of this paper is the selection of a simplification filter for segmentation; morphological filters based on reconstruction proved very efficient for this purpose. The resulting unsupervised algorithm is robust and can deal with very different types of images.
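
    As a minimal sketch of the simplification step emphasized above, the snippet below applies an opening-by-reconstruction filter using scikit-image; it illustrates the technique rather than reproducing the authors' code, and the structuring-element radius is an arbitrary choice.

```python
# Opening-by-reconstruction: a minimal sketch, assuming scikit-image.
# It flattens bright details smaller than the structuring element while
# preserving the contours of the structures that survive.
from skimage import data
from skimage.morphology import disk, erosion, reconstruction

image = data.camera().astype(float)

# Erode to suppress bright structures smaller than the disk (the radius
# is an illustrative scale parameter, not a value from the paper)...
seed = erosion(image, disk(5))
# ...then reconstruct by dilation under the original image, restoring the
# shape of the remaining objects without re-introducing removed detail.
simplified = reconstruction(seed, image, method='dilation')
```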

    Stochastic Multiscale Segmentation Constrained by Image Content

    We introduce a noise-tolerant segmentation algorithm that is efficient on 3D multiscale granular materials. The approach uses a graph-based version of the stochastic watershed and relies on the morphological granulometry of the image to achieve a content-driven unsupervised segmentation. We present results on both a virtual material and a real X-ray microtomographic image of solid propellant.
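
    A hedged sketch of the stochastic watershed underlying this approach, in its basic pixel-grid (rather than graph-based) form: run many marker-controlled watersheds from random seeds and accumulate how often each pixel lands on a region boundary. All parameters are illustrative; the paper's granulometry-driven seeding is only noted in a comment.

```python
# Stochastic watershed sketch, assuming scikit-image and NumPy.
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed, find_boundaries

def stochastic_watershed(image, n_realizations=50, n_seeds=30, rng=None):
    rng = np.random.default_rng(rng)
    gradient = sobel(image)
    boundary_freq = np.zeros(image.shape, dtype=float)
    for _ in range(n_realizations):
        # Drop uniformly random point markers; a content-driven version,
        # as in the paper, would bias seed density using the granulometry.
        markers = np.zeros(image.shape, dtype=int)
        rows = rng.integers(0, image.shape[0], n_seeds)
        cols = rng.integers(0, image.shape[1], n_seeds)
        markers[rows, cols] = np.arange(1, n_seeds + 1)
        labels = watershed(gradient, markers)
        # Boundaries that recur across realizations are the salient ones
        boundary_freq += find_boundaries(labels, mode='inner')
    return boundary_freq / n_realizations  # probabilistic boundary map
```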

    Unsupervised Morphological Multiscale Segmentation of Scanning Electron Microscopy Images

    This paper deals with unsupervised multiscale segmentation in the domain of scanning electron microscopy, tackled with mathematical morphology techniques. The proposed approach proceeds in several steps. First, the image is decomposed into compact scales of representation, where the objects at each scale are homogeneous in size; this multiscale decomposition is based on a morphological scale-space followed by scale merging using hierarchical clustering and the earth mover's distance. The compact scales are then segmented independently using the watershed transform. Finally, the segmented scales are combined using a tree of objects to obtain a multiscale segmentation.
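
    The scale-merging step can be sketched as follows: compare scales by the earth mover's distance between their object-size histograms and group them by hierarchical clustering. The histogram inputs are assumed placeholders, and scipy's 1-D wasserstein_distance stands in for the EMD; this is not the authors' implementation.

```python
# Scale merging via EMD + hierarchical clustering, assuming SciPy.
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def merge_scales(size_histograms, n_compact_scales=3):
    """Group fine scales into compact scales of similar object size.

    size_histograms: list of equal-length object-size histograms, one per
    scale of the morphological scale-space (assumed precomputed).
    """
    n = len(size_histograms)
    dist = np.zeros((n, n))
    support = np.arange(size_histograms[0].size)  # common size bins
    for i in range(n):
        for j in range(i + 1, n):
            # 1-D earth mover's distance between the size distributions
            d = wasserstein_distance(support, support,
                                     size_histograms[i], size_histograms[j])
            dist[i, j] = dist[j, i] = d
    # Average-linkage hierarchical clustering on the EMD matrix
    tree = linkage(squareform(dist), method='average')
    return fcluster(tree, t=n_compact_scales, criterion='maxclust')
```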

    A robust nonlinear scale space change detection approach for SAR images

    In this paper, we propose a change detection approach based on nonlinear scale space analysis of change images, for robust detection of changes incurred by natural phenomena and/or human activities in Synthetic Aperture Radar (SAR) images using Maximally Stable Extremal Regions (MSERs). To achieve this, a variant of the log-ratio image of the multitemporal images is calculated and then processed with Feature Preserving Despeckling (FPD) to generate nonlinear scale space images exhibiting different trade-offs between speckle reduction and preservation of shape detail. The MSERs of each scale space image are found and then combined through a decision-level fusion strategy, namely "selective scale fusion" (SSF), in which the contrast and boundary curvature of each MSER are considered. The performance of the proposed method is evaluated on real multitemporal high-resolution TerraSAR-X images and on synthetically generated multitemporal images composed of shapes with several orientations, sizes, and backscatter amplitude levels, representing a variety of possible signatures of change. One of the main outcomes of this approach is that objects with different sizes and levels of contrast with their surroundings appear as stable regions at different scales; fusing the results across scale space images therefore yields good overall performance.
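
    A simplified sketch of the detection core: form a log-ratio change image from two co-registered SAR acquisitions and extract MSERs with OpenCV. The paper's FPD despeckling and selective scale fusion are not reproduced here; a median filter stands in as a crude speckle-reduction placeholder.

```python
# Log-ratio + MSER change detection sketch, assuming OpenCV and NumPy.
import numpy as np
import cv2

def change_regions(img_t1, img_t2, eps=1e-6):
    # The log-ratio emphasizes multiplicative change in SAR intensities
    log_ratio = np.log((img_t2.astype(float) + eps) /
                       (img_t1.astype(float) + eps))
    # Normalize to 8-bit for OpenCV's MSER detector
    lr = cv2.normalize(log_ratio, None, 0, 255, cv2.NORM_MINMAX)
    lr = lr.astype(np.uint8)
    # Placeholder for the paper's Feature Preserving Despeckling step
    lr = cv2.medianBlur(lr, 5)
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(lr)  # pixel-coordinate lists per MSER
    return regions
```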

    Learned versus Hand-Designed Feature Representations for 3d Agglomeration

    For image recognition and labeling tasks, recent results suggest that machine learning methods that rely on manually specified feature representations may be outperformed by methods that automatically derive feature representations from the data. Yet for problems that involve analysis of 3d objects, such as mesh segmentation, shape retrieval, or neuron fragment agglomeration, there remains a strong reliance on hand-designed feature descriptors. In this paper, we evaluate a large set of hand-designed 3d feature descriptors alongside features learned from the raw data using both end-to-end and unsupervised learning techniques, in the context of agglomeration of 3d neuron fragments. By combining unsupervised learning techniques with a novel dynamic pooling scheme, we show how pure learning-based methods are for the first time competitive with hand-designed 3d shape descriptors. We investigate data augmentation strategies for dramatically increasing the size of the training set, and show how combining both learned and hand-designed features leads to the highest accuracy.
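
    The reported combination of learned and hand-designed features can be sketched as a concatenation feeding a binary merge/no-merge classifier over candidate fragment pairs; feature extraction and the dynamic pooling scheme are elided, and the classifier choice here (a random forest) is an assumption, not taken from the paper.

```python
# Combined-feature agglomeration classifier sketch, assuming scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_agglomeration_classifier(learned_feats, hand_feats, merge_labels):
    """learned_feats, hand_feats: (n_pairs, d1) / (n_pairs, d2) arrays of
    per-pair descriptors (assumed precomputed); merge_labels: 0/1 targets."""
    X = np.hstack([learned_feats, hand_feats])  # combined representation
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, merge_labels)
    return clf  # scores candidate merges during agglomeration
```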