25 research outputs found

    Perceptual Grouping for Contour Extraction

    This paper describes an algorithm that efficiently groups line segments into perceptually salient contours in complex images. A measure of affinity between pairs of lines is used to guide group formation and limit the branching factor of the contour search procedure. The extracted contours are ranked and presented as a contour hierarchy. Our algorithm is able to extract salient contours in the presence of texture, clutter, and repetitive or ambiguous image structure. We show experimental results on a complex line set.
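
    The abstract does not give implementation details, so the following is only a minimal sketch of the general idea of affinity-guided grouping, assuming segments are stored as endpoint pairs; the affinity formula, its scale constants, and the shortlist size k are illustrative assumptions rather than the paper's definitions.

```python
import math

def affinity(a, b):
    """Affinity between two segments, each given as ((x1, y1), (x2, y2)).
    Combines endpoint proximity and orientation similarity."""
    # Smallest endpoint-to-endpoint gap between the two segments.
    gap = min(math.dist(p, q) for p in a for q in b)
    # Orientation difference, folded into [0, pi/2].
    (ax1, ay1), (ax2, ay2) = a
    (bx1, by1), (bx2, by2) = b
    ta = math.atan2(ay2 - ay1, ax2 - ax1)
    tb = math.atan2(by2 - by1, bx2 - bx1)
    dtheta = abs(ta - tb) % math.pi
    dtheta = min(dtheta, math.pi - dtheta)
    # High affinity for nearby, nearly collinear segments (scales are assumed).
    return math.exp(-gap / 20.0) * math.exp(-dtheta / 0.3)

def grow_contour(seed, segments, k=3, min_affinity=0.2):
    """Greedily extend a contour from a seed segment. The k-best shortlist
    stands in for the paper's limited branching factor; here we simply
    follow the single best candidate at each step."""
    contour, used = [seed], {id(seed)}
    current = seed
    while True:
        candidates = sorted(
            (s for s in segments if id(s) not in used),
            key=lambda s: affinity(current, s), reverse=True)[:k]
        if not candidates or affinity(current, candidates[0]) < min_affinity:
            break
        current = candidates[0]
        contour.append(current)
        used.add(id(current))
    return contour

# Two nearly collinear pieces and one unrelated segment.
segments = [((0, 0), (10, 1)), ((11, 1), (21, 3)), ((5, 40), (6, 60))]
print(grow_contour(segments[0], segments))   # groups the first two segments
```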

    An Ant Colony Algorithm for Roads Extraction in High Resolution SAR Images

    This paper presents a method for the detection of roads in high-resolution Synthetic Aperture Radar (SAR) images using an Ant Colony Algorithm (ACA). Roads in a high-resolution SAR image can be modeled as continuous, nearly straight roadside line segments bounding a road of finite width. In our method, line segments representing candidate roadside positions are first extracted from the image with a line-segment extractor, and the roadsides are then detected by grouping those segments. For this grouping step we develop an ACA-based method that incorporates perceptual grouping factors and reduces the overall computational cost with a region-growing strategy: a selected initial seed is grown into a final grouped segment by iterating the ACA, which considers only segments within a search region. Finally, to recover the roadsides as smooth curves, we introduce the photometric constraints of the ant colony algorithm as an external energy in a modified snake model and extract a geometric roadside model. We applied our method to parts of TerraSAR-X images with a resolution of about 1 m. The experimental results show that our method can accurately detect roadsides in high-resolution SAR images.
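
    As a rough illustration of how an ant colony step could chain candidate roadside segments within a search region, here is a hedged sketch; the pheromone update rule, the distance heuristic, and all parameter values (n_ants, alpha, beta, rho, radius) are assumptions for illustration, not the parameters used in the paper.

```python
import math, random

def heuristic(a, b):
    """Desirability of continuing from point a to point b: closer candidates
    are preferred, since roadsides are locally continuous."""
    return 1.0 / (1.0 + math.dist(a, b))

def ant_colony_group(points, start, n_ants=20, n_iter=50,
                     alpha=1.0, beta=2.0, rho=0.1, radius=30.0):
    """Grow a chain of segment midpoints from index `start` using pheromone
    trails; only candidates within `radius` of the current point are
    considered (a crude stand-in for the paper's search region)."""
    n = len(points)
    pher = {(i, j): 1.0 for i in range(n) for j in range(n) if i != j}
    best_path = [start]
    for _ in range(n_iter):
        paths = []
        for _ in range(n_ants):
            path, current, visited = [start], start, {start}
            while True:
                cands = [j for j in range(n)
                         if j not in visited
                         and math.dist(points[current], points[j]) < radius]
                if not cands:
                    break
                weights = [pher[(current, j)] ** alpha *
                           heuristic(points[current], points[j]) ** beta
                           for j in cands]
                current = random.choices(cands, weights=weights)[0]
                path.append(current)
                visited.add(current)
            paths.append(path)
        # Evaporate all trails, then reinforce the longest path of this round.
        for key in pher:
            pher[key] *= 1.0 - rho
        best_round = max(paths, key=len)
        for i, j in zip(best_round, best_round[1:]):
            pher[(i, j)] += 1.0
        if len(best_round) > len(best_path):
            best_path = best_round
    return [points[i] for i in best_path]

# Midpoints of candidate roadside segments; the last one is an outlier.
midpoints = [(0, 0), (10, 2), (21, 3), (33, 5), (200, 200)]
print(ant_colony_group(midpoints, start=0))
```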

    Analyse d'images aériennes haute résolution pour l'extraction du bâti

    We present in this article a method for detecting and localizing buildings in a pair of high-resolution aerial images. The method relies on multiresolution stereo matching and on a perceptual grouping of line segments that integrates 3-D information.
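
    The abstract gives no algorithmic detail, so the sketch below only illustrates the general idea of grouping line segments with 3-D information: two segments are accepted as parts of the same structure when they are close and nearly collinear in the image and their stereo-derived heights agree. The data structure, the height field, and all thresholds are assumptions, not taken from the paper.

```python
import math

def compatible(seg_a, seg_b, max_gap=15.0, max_angle=0.2, max_dz=2.0):
    """Each segment is a dict with 2-D endpoints 'p1', 'p2' and a 'height'
    estimated from the stereo pair. Returns True if the segments can be
    grouped into the same building outline."""
    gap = min(math.dist(p, q)
              for p in (seg_a["p1"], seg_a["p2"])
              for q in (seg_b["p1"], seg_b["p2"]))
    def angle(s):
        (x1, y1), (x2, y2) = s["p1"], s["p2"]
        return math.atan2(y2 - y1, x2 - x1)
    dtheta = abs(angle(seg_a) - angle(seg_b)) % math.pi
    dtheta = min(dtheta, math.pi - dtheta)
    dz = abs(seg_a["height"] - seg_b["height"])
    # Group only when image geometry AND 3-D height agree.
    return gap < max_gap and dtheta < max_angle and dz < max_dz

roof_edge = {"p1": (0, 0), "p2": (40, 2), "height": 9.5}
roof_edge2 = {"p1": (45, 2), "p2": (80, 4), "height": 9.8}
ground_mark = {"p1": (46, 3), "p2": (81, 5), "height": 0.2}
print(compatible(roof_edge, roof_edge2))   # True: same building outline
print(compatible(roof_edge, ground_mark))  # False: heights disagree
```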

    Segmentation of Structured Objects in Image

    Detection of foreground structured objects in images is an essential task in many image processing applications. This paper presents a region-merging and region-growing approach for automatic detection of the foreground objects in an image. The proposed approach identifies objects in the given image based on general properties of objects, without depending on prior knowledge about specific objects. Region contrast information is used to separate the regions of the structured objects from the background regions. Perceptual organization laws are used in the region-merging process to group the various regions, i.e., the parts of an object. The system is adaptive to the image content. Experimental results show that the proposed scheme can efficiently extract object boundaries from the background.
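
    A minimal sketch of contrast-driven region merging in this spirit follows; the over-segmented label input, the mean-intensity contrast measure, and the threshold value are assumptions standing in for the paper's perceptual-organization criteria.

```python
import numpy as np
from scipy import ndimage

def merge_low_contrast_regions(labels, image, threshold=15.0):
    """labels: integer label image from an initial over-segmentation.
    image: grayscale image. Adjacent regions whose mean intensities differ
    by less than `threshold` are merged, so that a high-contrast structured
    object stays separate from the background."""
    labels = labels.copy()
    changed = True
    while changed:
        changed = False
        means = {l: image[labels == l].mean() for l in np.unique(labels)}
        for l in np.unique(labels):
            mask = labels == l
            # Labels touching region l: dilate its mask by one pixel.
            border = ndimage.binary_dilation(mask) & ~mask
            for n in np.unique(labels[border]):
                if abs(means[l] - means[n]) < threshold:
                    labels[labels == n] = l      # merge region n into l
                    changed = True
                    break
            if changed:
                break                            # recompute means and restart
    return labels

# Toy example: a bright object on a dark background, over-segmented into
# three initial regions.
img = np.full((20, 20), 10.0)
img[5:15, 5:15] = 200.0
initial = np.ones((20, 20), dtype=int)
initial[:, 10:] = 2                              # background split in two
initial[5:15, 5:15] = 3                          # the object region
merged = merge_low_contrast_regions(initial, img)
print(np.unique(merged))                         # background merged, object kept
```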

    A framework for performance characterization of intermediate-level grouping modules


    A summary of image segmentation techniques

    Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low-level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher-level vision tasks. There is no general theory of image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized in a number of different groups, including local vs. global, parallel vs. sequential, contextual vs. noncontextual, and interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because there are a number of survey papers available, we will not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview. We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available, yet present enough details to facilitate implementation and experimentation.
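
    To make the region-based idea concrete, here is a small sketch of seeded region growing, assuming a grayscale image, 4-connectivity, and a fixed intensity tolerance; the tolerance value and the toy image are illustrative choices, not taken from the paper.

```python
import numpy as np

def region_grow(image, seed, tol=10.0):
    """Return a boolean mask of the region grown from `seed` (row, col):
    4-connected neighbors are absorbed while their gray level stays within
    `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    stack = [seed]
    total, count = 0.0, 0
    while stack:
        r, c = stack.pop()
        if mask[r, c]:
            continue
        mean = total / count if count else float(image[r, c])
        if abs(float(image[r, c]) - mean) > tol:
            continue                      # pixel breaks region homogeneity
        mask[r, c] = True
        total += float(image[r, c])
        count += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                stack.append((nr, nc))
    return mask

img = np.full((8, 8), 50, dtype=np.uint8)
img[2:6, 2:6] = 200                        # homogeneous bright patch
print(region_grow(img, seed=(3, 3)).sum()) # grows over the 4x4 patch -> 16
```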

    Disambiguating Multi-Modal Scene Representations Using Perceptual Grouping Constraints

    In its early stages, the visual system suffers from considerable ambiguity and noise, which severely limits the performance of early vision algorithms. This article presents feedback mechanisms between early visual processes, such as perceptual grouping, stereopsis, and depth reconstruction, that allow the system to reduce this ambiguity and improve the early representation of visual information. In the first part, the article proposes a local perceptual grouping algorithm that, in addition to commonly used geometric information, makes use of a novel multi-modal measure between local edge/line features. The grouping information is then used to: 1) disambiguate stereopsis by enforcing that stereo matches preserve groups; and 2) correct the reconstruction error due to image pixel sampling using a linear interpolation over the groups. The integration of mutual feedback between early vision processes is shown to considerably reduce ambiguity and noise without the need for global constraints.
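
    The sketch below illustrates only the group-preservation idea for stereo disambiguation, assuming each feature already has a list of candidate disparities and a group assignment; the median consensus rule and the data layout are assumptions, not the article's multi-modal measure.

```python
from statistics import median

def disambiguate(groups, candidates):
    """groups: list of lists of feature ids forming one perceptual group.
    candidates: dict feature id -> list of candidate disparities.
    Returns dict feature id -> chosen disparity."""
    chosen = {}
    for group in groups:
        # Robust group-level disparity estimate from all candidates.
        pool = [d for f in group for d in candidates[f]]
        group_disp = median(pool)
        for f in group:
            # Pick the candidate closest to the group consensus, so that
            # the chosen stereo matches preserve the group.
            chosen[f] = min(candidates[f], key=lambda d: abs(d - group_disp))
    return chosen

groups = [["e1", "e2", "e3"]]
candidates = {"e1": [12.0, 3.5], "e2": [11.5], "e3": [12.2, 30.0]}
print(disambiguate(groups, candidates))   # e1 -> 12.0, e3 -> 12.2
```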