
    Understanding Real World Indoor Scenes With Synthetic Data

    Scene understanding is a prerequisite to many high-level tasks for any automated intelligent machine operating in real-world environments. Recent attempts with supervised learning have shown promise in this direction but have also highlighted the need for enormous quantities of supervised data: performance increases in proportion to the amount of data used. However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth-based semantic per-pixel labelling as a scene-understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models, we show performance comparable to state-of-the-art RGBD systems on the NYUv2 dataset despite using only depth data as input, and we set a benchmark for depth-based segmentation on the SUN RGB-D dataset.
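    The "appropriate noise models" step can be illustrated with a minimal sketch: perturbing a clean synthetic depth map with depth-dependent Gaussian axial noise plus quantisation, in the style of published Kinect noise fits. The function name and coefficients here are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def add_depth_noise(depth, sigma0=0.0012, sigma_slope=0.0019, seed=0):
    """Apply a simple Kinect-style noise model to a clean synthetic
    depth map (metres): axial Gaussian noise whose standard deviation
    grows quadratically with depth, followed by ~1 mm quantisation.
    Coefficients are illustrative, not the paper's exact values."""
    rng = np.random.default_rng(seed)
    # Axial noise: sigma(z) grows quadratically away from ~0.4 m
    sigma = sigma0 + sigma_slope * (depth - 0.4) ** 2
    noisy = depth + rng.normal(0.0, 1.0, depth.shape) * sigma
    # Simulate sensor quantisation (1 mm steps)
    return np.round(noisy / 0.001) * 0.001

clean = np.full((4, 4), 2.0)        # flat wall at 2 m
noisy = add_depth_noise(clean)      # small, depth-dependent perturbation
```

Training a per-pixel labeller on depth maps corrupted this way narrows the domain gap between rendered and real sensor data.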

    Bubble tag identification using an invariant-under-perspective signature

    We have at our disposal a large database containing images of various configurations of coplanar circles, randomly laid out, called "Bubble Tags". The images are taken from different viewpoints. Given a new (query) image, the goal is to find in the database the image containing the same bubble tag as the query image. We propose representing the images through projective-invariant signatures, which allow identifying the bubble tag without passing through a Euclidean reconstruction step. This is justified by the size of the database, which imposes the use of queries in 1D/vectorial form, i.e. not in 2D/matrix form. The experiments carried out confirm the efficiency of our approach in terms of precision and complexity. © 2010 IEEE
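    The abstract does not specify the signature itself, but the classic example of a quantity invariant under perspective projection is the cross-ratio of four collinear points; a sketch of that idea (the exact signature used by the paper is assumed to be more elaborate):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear 2D points; unchanged by any
    projective transformation of the line they lie on."""
    def dist(p, q):
        return np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return (dist(a, c) * dist(b, d)) / (dist(b, c) * dist(a, d))

# Four collinear points and their image under the perspective
# (fractional linear) map x -> 1/(x + 2)
pts  = [0.0, 1.0, 2.0, 4.0]
proj = [1.0 / (x + 2.0) for x in pts]
r1 = cross_ratio(*[(x, 0.0) for x in pts])
r2 = cross_ratio(*[(x, 0.0) for x in proj])
# r1 == r2 == 1.5: the value survives the viewpoint change
```

Concatenating such invariants over the circle configuration would yield exactly the kind of 1D/vectorial query the abstract calls for.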

    A parameterless line segment and elliptical arc detector with enhanced ellipse fitting

    We propose a combined line segment and elliptical arc detector, which formally guarantees the control of the number of false positives and requires no parameter tuning. The accuracy of the detected elliptical features is improved by using a novel non-iterative ellipse fitting technique, which merges the algebraic distance with the gradient orientation. The performance of the detector is evaluated on computer-generated images and on natural images. © 2012 Springer-Verlag
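    The baseline that the paper's "enhanced" fitting improves on is the plain algebraic-distance least-squares conic fit; a minimal sketch of that baseline (the gradient-orientation merging is not reproduced here):

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares algebraic fit of a conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.
    Plain SVD solution, not the paper's gradient-augmented variant:
    the coefficients are the right singular vector of the design
    matrix with the smallest singular value."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]  # unit-norm coefficient vector

# Noiseless points on the unit circle x^2 + y^2 = 1
t = np.linspace(0.0, 2 * np.pi, 20, endpoint=False)
coeffs = fit_conic(np.cos(t), np.sin(t))
# coeffs is proportional to (1, 0, 1, 0, 0, -1), the unit circle
```

Minimising only this algebraic distance is known to bias fits toward smaller, more eccentric ellipses on noisy arcs, which is what motivates augmenting it with gradient-orientation information.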

    Detection of Walls, Floors, and Ceilings in Point Cloud Data

    The successful implementation of Building Information Models (BIMs) for facility management, maintenance, and operation is highly dependent on the ability to generate such models for existing assets. Generating such BIMs typically requires laser scanning to acquire point clouds and significant post-processing to register the clouds, replace the points with BIM objects, assign semantic relationships, and add any additional properties, such as materials. Several research efforts have attempted to reduce the manual post-processing effort by classifying the structural elements and clutter in isolated rooms; however, they have not examined the complexity of a whole building. In this paper, we propose a robust framework that automatically processes the point cloud of an entire building, possibly with multiple floors, and classifies the points belonging to floors, walls, and ceilings. We first extract planar surfaces by segmenting the point cloud, and then use contextual reasoning, such as height, orientation, and relation to other objects, together with local statistics such as point density, to classify them into objects. Experiments were conducted on a registered point cloud of an office building. The results indicate that almost all of the walls and floors/ceilings were correctly clustered in the point cloud.
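    The orientation-plus-height reasoning over extracted planar segments can be sketched as a simple rule: near-vertical normals indicate walls, near-horizontal surfaces are split into floor, ceiling, or clutter by centroid height. The thresholds below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def classify_plane(normal, centroid_z, floor_z, ceiling_z, ang_tol_deg=10.0):
    """Label a planar segment as 'floor', 'ceiling', 'wall', or 'clutter'
    from its unit normal and centroid height (metres). A minimal version
    of the contextual rules (orientation + height) with made-up thresholds."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    vertical = abs(n[2])                             # cosine of angle with +Z
    if vertical > np.cos(np.radians(ang_tol_deg)):   # horizontal surface
        if abs(centroid_z - floor_z) < 0.2:
            return "floor"
        if abs(centroid_z - ceiling_z) < 0.2:
            return "ceiling"
        return "clutter"                             # e.g. table tops
    if vertical < np.sin(np.radians(ang_tol_deg)):   # vertical surface
        return "wall"
    return "clutter"                                 # oblique surfaces

labels = [
    classify_plane([0, 0, 1], 0.02, floor_z=0.0, ceiling_z=2.7),  # floor
    classify_plane([1, 0, 0], 1.30, floor_z=0.0, ceiling_z=2.7),  # wall
]
```

In a multi-floor building the floor/ceiling reference heights would themselves be estimated per storey, e.g. from peaks in the height histogram of horizontal segments.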

    Shape matching via quotient spaces

    We introduce a novel method for non-rigid shape matching, designed to address the symmetric ambiguity problem present when matching shapes with intrinsic symmetries. Unlike the majority of existing methods which try to overcome this ambiguity by sampling a set of landmark correspondences, we address this problem directly by performing shape matching in an appropriate quotient space, where the symmetry has been identified and factored out. This allows us to both simplify the shape matching problem by matching between subspaces, and to return multiple solutions with equally good dense correspondences. Remarkably, both symmetry detection and shape matching are done without establishing any landmark correspondences between either points or parts of the shapes. This allows us to avoid an expensive combinatorial search present in most intrinsic symmetry detection and shape matching methods. We compare our technique with state-of-the-art methods and show that superior performance can be achieved both when the symmetry on each shape is known and when it needs to be estimated. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd
