    Probabilistic Search for Object Segmentation and Recognition

    The problem of searching for a model-based scene interpretation is analyzed within a probabilistic framework. Object models are formulated as generative models for range data of the scene. A new statistical criterion, the truncated object probability, is introduced to infer an optimal sequence of object hypotheses to be evaluated for their match to the data. The truncated probability is partly determined by prior knowledge of the objects and partly learned from data. Experiments on sequence quality and on object segmentation and recognition from stereo data are presented. The article recovers classic concepts from object recognition (grouping, geometric hashing, alignment) from the probabilistic perspective and adds insight into the optimal ordering of object hypotheses for evaluation. Moreover, it introduces point-relation densities, a key component of the truncated probability, as statistical models of local surface shape.
    Comment: 18 pages, 5 figures
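
    To illustrate the hypothesis-ordering idea, the sketch below ranks candidate object hypotheses by a cheap probability-style score and evaluates them in that order. The Hypothesis class, the scoring field, and the evaluate callback are placeholders for illustration, not the paper's truncated object probability.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Hypothesis:
    """A candidate object identity/pose to be tested against the range data."""
    label: str
    score: float  # cheap prior- and data-driven estimate of being correct

def ordered_search(hypotheses: Iterable[Hypothesis],
                   evaluate: Callable[[Hypothesis], float],
                   accept_threshold: float = 0.8) -> List[Hypothesis]:
    """Evaluate hypotheses in decreasing order of their estimated score.

    `evaluate` stands for the expensive match against the data; ordering by
    the cheap score aims to reach a good interpretation with few evaluations.
    """
    accepted = []
    for h in sorted(hypotheses, key=lambda h: h.score, reverse=True):
        if evaluate(h) >= accept_threshold:
            accepted.append(h)
    return accepted
```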

    Deep Learning for Semantic Part Segmentation with High-Level Guidance

    In this work we address the task of segmenting an object into its parts, or semantic part segmentation. We start by adapting a state-of-the-art semantic segmentation system to this task, and show that a fully convolutional deep CNN coupled with Dense CRF labelling provides excellent results for a broad range of object categories. Still, this approach remains agnostic to high-level constraints between object parts. We introduce such prior information by means of a Restricted Boltzmann Machine adapted to our task, and train our model in a discriminative fashion, as a hidden CRF, demonstrating that this prior information can yield additional improvements. We also investigate the performance of our approach "in the wild", without information concerning the objects' bounding boxes, using an object detector to guide a multi-scale segmentation scheme. We evaluate our approach on the Penn-Fudan and LFW datasets for the tasks of pedestrian parsing and face labelling, respectively. We show superior performance with respect to competitive methods that have been extensively engineered on these benchmarks, as well as realistic qualitative results on part segmentation, even for occluded or deformable objects. We also provide quantitative and extensive qualitative results on three classes from the PASCAL Parts dataset. Finally, we show that our multi-scale segmentation scheme can boost accuracy, recovering segmentations for finer parts.
    Comment: 11 pages (including references), 3 figures, 2 tables
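
    A toy stand-in for the CNN-plus-CRF coupling, assuming only NumPy: per-pixel class probabilities from a segmentation network are smoothed with a Potts-like neighbourhood term, loosely in the spirit of (but not equivalent to) dense-CRF inference or the hidden CRF described above.

```python
import numpy as np

def refine_with_potts(unary_probs: np.ndarray, pairwise_weight: float = 1.0,
                      n_iters: int = 5) -> np.ndarray:
    """Crude mean-field-style refinement of (H, W, K) class probabilities.

    Each iteration adds to every pixel's log-unaries a bonus proportional
    to how strongly its 4-neighbours believe in the same class, then
    renormalises. np.roll wraps at image borders, which is acceptable
    for a sketch.
    """
    q = unary_probs.copy()
    log_unary = np.log(unary_probs + 1e-8)
    for _ in range(n_iters):
        neighbour_belief = (np.roll(q, 1, axis=0) + np.roll(q, -1, axis=0) +
                            np.roll(q, 1, axis=1) + np.roll(q, -1, axis=1)) / 4.0
        logits = log_unary + pairwise_weight * neighbour_belief
        q = np.exp(logits - logits.max(axis=2, keepdims=True))
        q /= q.sum(axis=2, keepdims=True)
    return q  # refined probabilities; argmax over the last axis gives labels
```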

    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and in applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven methods is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, by reviewing the literature and relating existing works through both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.
    Comment: 10 pages, 19 figures
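
    As a minimal, hypothetical example of aggregating information from a shape collection, the snippet below transfers a category label to a query shape by majority vote over its nearest neighbours in descriptor space; the descriptors and labels are assumed to be precomputed and are not tied to any particular method from the survey.

```python
import numpy as np
from collections import Counter

def label_from_collection(query_desc: np.ndarray,
                          collection_descs: np.ndarray,
                          collection_labels: list,
                          k: int = 5) -> str:
    """Label a query shape by majority vote over its k nearest shapes.

    collection_descs: (N, D) precomputed shape descriptors of a collection;
    collection_labels: the N corresponding category labels.
    """
    dists = np.linalg.norm(collection_descs - query_desc, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(collection_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```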

    A Stochastic Modeling Approach to Region- and Edge-Based Image Segmentation

    The purpose of image segmentation is to isolate objects in a scene from the background. This is a very important step in any computer vision system, since various tasks such as shape analysis and object recognition require accurate image segmentation. Image segmentation can also produce tremendous data reduction. Edge-based and region-based segmentation have been examined, and two new algorithms based on recent results in random field theory have been developed.

    The edge-based segmentation algorithm uses pixel gray-level intensity information to allocate object boundaries in two stages: edge enhancement, followed by edge linking. Edge enhancement is accomplished by maximum energy filters used in one-dimensional bandlimited signal analysis. The issue of optimum filter spatial support is analyzed for ideal edge models. Edge linking is performed by quantitative sequential search using the Stack algorithm. Two probabilistic search metrics are introduced, and their optimality is proven and demonstrated on test as well as real scenes. Compared to other methods, this algorithm is shown to produce more accurate allocation of object boundaries.

    Region-based segmentation was modeled as a MAP estimation problem in which the actual (unknown) objects were estimated from the observed (known) image by a recursive classification algorithm. The observed image was modeled by an Autoregressive (AR) model whose parameters were estimated locally, and a Gibbs-Markov random field (GMRF) model was used to model the unknown scene. A computational study was conducted on images containing various types of textures. The issues of parameter estimation, neighborhood selection, and model order were examined. It is concluded that the MAP approach to region segmentation generally works well on images with a large content of microtextures, which can be properly modeled by both AR and GMRF models. On these texture images, second-order AR and GMRF models were shown to be adequate.
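
    The MAP formulation can be pictured with a small ICM-style sketch: each pixel label is updated to minimise a squared-error data term plus a Potts-style smoothness penalty over its 4-neighbourhood. The per-class means and the simple penalty are stand-ins for the locally estimated AR likelihood and GMRF prior used in the work, so this is an illustrative approximation rather than the described algorithm.

```python
import numpy as np

def icm_segment(image: np.ndarray, class_means: np.ndarray,
                beta: float = 1.0, n_iters: int = 10) -> np.ndarray:
    """Greedy MAP labelling (ICM) with a squared-error data term and a
    Potts-style smoothness prior over the 4-neighbourhood."""
    h, w = image.shape
    # initialise from the data term alone
    labels = np.abs(image[..., None] - class_means).argmin(axis=2)
    for _ in range(n_iters):
        for y in range(h):
            for x in range(w):
                neigh = [labels[yy, xx]
                         for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= yy < h and 0 <= xx < w]
                # cost = squared data misfit + beta * number of disagreeing neighbours
                costs = [(image[y, x] - m) ** 2 +
                         beta * sum(1 for n in neigh if n != k)
                         for k, m in enumerate(class_means)]
                labels[y, x] = int(np.argmin(costs))
    return labels
```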