
    PIMs and invariant parts for shape recognition

    We present new, powerful solutions to two fundamental problems central to computer vision. 1. Given data sets representing C objects to be stored in a database, and given a new data set for an object, determine the object in the database that is most like the measured object. We solve this problem through the use of PIMs ("Polynomial Interpolated Measures"), a new representation integrating implicit polynomial curves and surfaces, explicit polynomials, and discrete data sets which may be sparse. The method provides high accuracy at low computational cost. 2. Given noisy 2D data along a curve (or 3D data along a surface), decompose the data into patches such that new data taken along affine or Euclidean transformations of the curve (or surface) can be decomposed into corresponding patches. Recognition of complex or partially occluded objects can then be done in terms of invariantly determined patches. We briefly outline a low-cost image-database indexing system based on this representation for objects having complex shape geometry.
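The abstract does not spell out how the implicit-polynomial side of the representation is fitted to data. As a rough, generic illustration only (not the PIM construction itself), the sketch below fits an implicit polynomial f(x, y) ≈ 0 to 2D curve samples by linear least squares with a unit-norm constraint on the coefficients; the function names and the SVD-based normalization are assumptions.

```python
import numpy as np

def monomials(x, y, degree):
    """All monomials x^i * y^j with i + j <= degree, one column per monomial."""
    cols = [x**i * y**j for i in range(degree + 1) for j in range(degree + 1 - i)]
    return np.column_stack(cols)

def fit_implicit_polynomial(points, degree=2):
    """Least-squares fit of an implicit polynomial f(x, y) ~ 0 to 2D curve samples.

    The coefficient vector is the right singular vector of the monomial matrix
    associated with its smallest singular value (unit-norm normalization).
    """
    M = monomials(points[:, 0], points[:, 1], degree)
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return vt[-1]

def residuals(coeffs, points, degree=2):
    """|f(x, y)| at query points: small values mean the model explains the data."""
    return np.abs(monomials(points[:, 0], points[:, 1], degree) @ coeffs)

# Toy usage: noisy samples of an ellipse fitted with a degree-2 implicit polynomial.
t = np.linspace(0.0, 2.0 * np.pi, 200)
pts = np.column_stack([2.0 * np.cos(t), np.sin(t)]) + 0.01 * np.random.randn(200, 2)
c = fit_implicit_polynomial(pts, degree=2)
print(residuals(c, pts, degree=2).mean())
```

The mean residual of a stored model evaluated on a newly measured data set gives one simple notion of "most like", in the spirit of the database comparison the abstract describes; the actual PIM measure is richer than this.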

    Contribution towards a fast stereo dense matching.

    Stereo matching is important in computer vision as it is the basis of the reconstruction process. Many applications, such as view synthesis and robotics, require 3D reconstruction. The main task of matching uncalibrated images is to determine the corresponding pixels and other features when the motion between the images and the camera parameters are unknown. Although many methods have been proposed for the matching problem over the past two decades, most of them are impractical and difficult to implement. Our approach relies on reliable image edge features in order to develop a fast and practical method. We therefore propose a fast stereo matching algorithm that combines two different matching approaches, segmenting the image into two sets of regions: edge regions and non-edge regions. We use an algebraic method that preserves disparity continuity over continuous object surfaces. Our results demonstrate that we obtain fast dense matching while the implementation remains simple and straightforward.
    Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .Z42. Source: Masters Abstracts International, Volume: 44-03, page: 1420. Thesis (M.Sc.)--University of Windsor (Canada), 2005.
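The thesis's algebraic method is not detailed in the abstract. As a hedged sketch of the general edge/non-edge split it describes, the code below detects edge pixels with a Sobel gradient, computes winner-take-all SAD disparities at those pixels only, and propagates the nearest edge disparity into non-edge regions. The function names, the threshold (which assumes 0-255 grayscale images), and the use of scipy.ndimage are assumptions, and the fill step is only a crude stand-in for the disparity-continuity constraint mentioned above.

```python
import numpy as np
from scipy import ndimage

def sad_disparity(left, right, x, y, max_disp=32, win=3):
    """Winner-take-all SAD block matching for one pixel of the left image."""
    h, w = left.shape
    if y - win < 0 or y + win >= h or x - win < 0 or x + win >= w:
        return 0
    patch_l = left[y - win:y + win + 1, x - win:x + win + 1]
    best_cost, best_d = np.inf, 0
    for d in range(min(max_disp, x - win) + 1):
        patch_r = right[y - win:y + win + 1, x - win - d:x + win + 1 - d]
        cost = np.abs(patch_l - patch_r).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

def edge_guided_disparity(left, right, edge_thresh=50.0):
    """Match edge pixels densely with SAD, then fill non-edge pixels from the
    nearest edge pixel (a crude stand-in for a disparity-continuity constraint)."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    grad = np.hypot(ndimage.sobel(left, axis=0), ndimage.sobel(left, axis=1))
    edges = grad > edge_thresh
    disp = np.zeros(left.shape)
    for y, x in zip(*np.nonzero(edges)):
        disp[y, x] = sad_disparity(left, right, x, y)
    # Index of the nearest edge pixel for every pixel of the image.
    idx = ndimage.distance_transform_edt(~edges, return_distances=False,
                                         return_indices=True)
    return disp[tuple(idx)]
```

Restricting the expensive per-pixel search to edge pixels is what makes this family of methods fast; the quality of the final dense map then hinges on how non-edge regions are interpolated, which is where the thesis's algebraic continuity method comes in.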

    Topologically faithful fitting of simple closed curves


    State of research in automatic as-built modelling

    This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.aei.2015.01.001
    Building Information Models (BIMs) are becoming the official standard in the construction industry for encoding, reusing, and exchanging information about structural assets. Automatically generating such representations for existing assets stirs up the interest of various industrial, academic, and governmental parties, as it is expected to have a high economic impact. The purpose of this paper is to provide a general overview of the as-built modelling process, with a focus on the geometric modelling side. Relevant works from the Computer Vision, Geometry Processing, and Civil Engineering communities are presented and compared in terms of their potential to lead to automatic as-built modelling.
    We acknowledge the support of EPSRC Grant NMZJ/114, DARPA UPSIDE Grant A13–0895-S002, NSF CAREER Grant N. 1054127, and European Grant Agreements No. 247586 and 334241. We would also like to thank NSERC Canada, Aecon, and SNC-Lavalin for financially supporting some parts of this research.

    Analysis of 3D objects at multiple scales (application to shape matching)

    Over the last decades, the evolution of acquisition techniques has led to the widespread use of highly detailed 3D objects, represented as huge point sets composed of millions of vertices. The complexity of these data often requires analyzing them to extract and characterize the pertinent structures, which are potentially defined at multiple scales. Among the wide variety of methods proposed to analyze digital signals, scale-space analysis is today a standard for the study of 2D curves and images. However, its adaptation to 3D data leads to instabilities and requires connectivity information, which is not directly available when dealing with point sets. In this thesis, we present a new multi-scale analysis framework that we call the Growing Least Squares (GLS). It consists of a robust local geometric descriptor that can be evaluated on point sets at multiple scales using an efficient second-order fitting procedure. We propose to differentiate this descriptor analytically to extract the pertinent structures continuously in scale-space. We show that this representation and the associated toolbox define an efficient way to analyze 3D objects represented as point sets at multiple scales, and we demonstrate its relevance in various application scenarios.
    A challenging application is the analysis of acquired 3D objects coming from the Cultural Heritage field. In this thesis, we study a real-world dataset composed of the fragments of the statues that once surrounded the Alexandria Lighthouse, one of the Seven Wonders of the World. In particular, we focus on the problem of fractured object reassembly, involving few fragments (up to about ten) but with parts missing or strongly degraded by erosion and deterioration. We propose a semi-automatic formalism that combines the archaeologists' knowledge with the accuracy of geometric matching algorithms during the reassembly process. We use it to design two systems, and we show their efficiency in concrete cases.
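The GLS descriptor itself (a fit of an algebraic point-set surface using points and normals, reparameterized so that it can be differentiated in scale) is not reproduced here. The sketch below only illustrates the underlying idea of a second-order least-squares fit evaluated over neighbourhoods of growing radius, tracking the fitted curvature 1/r across scales; the normal-free sphere fit and all function names are assumptions.

```python
import numpy as np

def algebraic_sphere_fit(pts):
    """Linear least-squares fit of a sphere |x|^2 + b.x + c ~ 0 to a 3D point set.

    From b and c we recover center = -b/2 and radius^2 = |center|^2 - c.
    """
    A = np.hstack([pts, np.ones((len(pts), 1))])
    rhs = -np.sum(pts**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    b, c = sol[:3], sol[3]
    center = -0.5 * b
    radius = np.sqrt(max(center @ center - c, 0.0))
    return center, radius

def multiscale_curvature(cloud, query, scales):
    """Curvature 1/radius of the sphere fitted in balls of growing radius around `query`.

    Tracking how this quantity varies with the scale parameter is the kind of
    scale-space signature that a GLS-like analysis differentiates and compares.
    """
    d = np.linalg.norm(cloud - query, axis=1)
    out = []
    for s in scales:
        nbrs = cloud[d < s]
        if len(nbrs) < 5:
            out.append(np.nan)
            continue
        _, r = algebraic_sphere_fit(nbrs)
        out.append(1.0 / r if r > 0 else np.nan)
    return np.array(out)
```

Because the fit works directly on raw point neighbourhoods, no mesh connectivity is needed, which is the practical advantage over image-style scale-space pipelines that the abstract points out.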

    Seventh Biennial Report : June 2003 - March 2005


    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? To reply to these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching, and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.

    From small to large baseline multiview stereo: dealing with blur, clutter and occlusions

    This thesis addresses the problem of reconstructing the three-dimensional (3D) digital model of a scene from a collection of two-dimensional (2D) images taken of it. To address this fundamental computer vision problem, we propose three algorithms, which are the main contributions of this thesis. First, we solve multiview stereo with the off-axis aperture camera. This system has a very small baseline, as images are captured from viewpoints close to each other. The key idea is to change the size or the 3D location of the aperture of the camera so as to extract selected portions of the scene. Our imaging model takes both defocus and stereo information into account and allows us to solve shape reconstruction and image restoration in one go. The off-axis aperture camera can be used in small-scale spaces where the camera motion is constrained by the surrounding environment, such as in 3D endoscopy. Second, to solve multiview stereo with a large baseline, we present a framework that poses the problem of recovering a 3D surface in the scene as a regularized minimal partition problem of a visibility function. The formulation is convex and hence guarantees that the solution converges to the global minimum. Our formulation is robust to view-varying extensive occlusions, clutter, and image noise. At no stage during the estimation does the method rely on the visual hull, 2D silhouettes, approximate depth maps, or knowledge of which views are dependent (i.e., overlapping) and which are independent (i.e., non-overlapping). Furthermore, the degenerate solution, the null surface, is not included as a global solution in this formulation. One limitation of this algorithm is that its computational complexity grows with the number of views combined simultaneously. To address this limitation, we propose a third formulation, in which the visibility functions are integrated within a narrow band around the estimated surface by assigning weights to each point along the optical rays. This thesis presents technical descriptions of each algorithm and detailed analyses showing how these algorithms improve on existing reconstruction techniques.
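The convex visibility-based formulation summarized above is not reproduced here. As a generic point of comparison for large-baseline multiview matching, the sketch below implements a plain plane-sweep photo-consistency search: the reference camera is assumed canonical (P_ref = K [I | 0]), source images are sampled with nearest-neighbour lookups, and there is no occlusion reasoning or regularization, which is exactly the kind of baseline whose failures under clutter and occlusion motivate the thesis. All names and parameters are assumptions.

```python
import numpy as np

def plane_sweep_depth(ref_img, K, src_imgs, src_Ps, depths):
    """Winner-take-all plane sweep: back-project every reference pixel at each
    candidate depth, reproject into the source views, and keep the depth that
    minimises the colour variance across views (no occlusion handling)."""
    h, w = ref_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    rays = np.linalg.inv(K) @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    best_cost = np.full(h * w, np.inf)
    best_depth = np.zeros(h * w)
    for d in depths:
        X = np.vstack([d * rays, np.ones(h * w)])            # 4 x N points, reference frame
        samples = [ref_img.ravel()]
        for img, P in zip(src_imgs, src_Ps):                 # assumes same-sized views
            uvw = P @ X
            z = np.maximum(uvw[2], 1e-9)
            u = np.clip(np.round(uvw[0] / z).astype(int), 0, w - 1)
            v = np.clip(np.round(uvw[1] / z).astype(int), 0, h - 1)
            samples.append(img[v, u])
        cost = np.var(np.stack(samples), axis=0)             # photo-consistency score
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = d
    return best_depth.reshape(h, w)
```

Averaging over all views like this silently assumes every point is visible everywhere; handling which views actually see each surface point, without knowing it in advance, is precisely what the visibility function in the second and third formulations is designed to capture.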