
    In Perfect Shape: Certifiably Optimal 3D Shape Reconstruction from 2D Landmarks

    We study the problem of 3D shape reconstruction from 2D landmarks extracted from a single image. We adopt the 3D deformable shape model and formulate the reconstruction as a joint optimization of the camera pose and the linear shape parameters. Our first contribution is to apply Lasserre's hierarchy of convex Sums-of-Squares (SOS) relaxations to solve the shape reconstruction problem and show that the SOS relaxation of minimum order 2 empirically solves the original non-convex problem exactly. Our second contribution is to exploit the structure of the polynomial in the objective function and find a reduced set of basis monomials for the SOS relaxation that significantly decreases the size of the resulting semidefinite program (SDP) without compromising its accuracy. These two contributions, to the best of our knowledge, lead to the first certifiably optimal solver for 3D shape reconstruction, which we name Shape*. Our third contribution is to add an outlier rejection layer to Shape* using a truncated least squares (TLS) robust cost function and leveraging graduated non-convexity to solve TLS without initialization. The result is a robust reconstruction algorithm, named Shape#, that tolerates a large number of outlier measurements. We evaluate the performance of Shape* and Shape# in both simulated and real experiments, showing that Shape* outperforms local optimization and previous convex relaxation techniques, while Shape# achieves state-of-the-art performance and is robust against 70% outliers in the FG3DCar dataset. Comment: Camera-ready, CVPR 2020. 18 pages, 5 figures, 1 table.
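    The robust layer described above combines a truncated least squares (TLS) cost with graduated non-convexity (GNC). The following minimal sketch illustrates that general idea on a toy robust line-fitting problem rather than the paper's shape-reconstruction objective; the threshold c_bar, the mu schedule, and all function names are assumptions made for the example, not the authors' Shape# implementation.

```python
# Minimal sketch of graduated non-convexity (GNC) applied to a truncated
# least squares (TLS) cost, illustrated on robust line fitting rather than
# the paper's shape-reconstruction objective. Names, the threshold c_bar,
# and the mu-update schedule are illustrative choices, not the authors' code.
import numpy as np

def gnc_tls_line_fit(x, y, c_bar=0.1, mu_factor=1.4, iters=50):
    """Fit y ~ a*x + b robustly; c_bar is the inlier residual threshold."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    w = np.ones_like(y)                                   # start with all points as inliers
    theta = np.linalg.lstsq(A, y, rcond=None)[0]
    r2 = (A @ theta - y) ** 2
    mu = c_bar**2 / max(2 * r2.max() - c_bar**2, 1e-9)    # very convex surrogate first
    for _ in range(iters):
        # weighted least squares with the current weights
        sw = np.sqrt(w)
        theta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        r2 = (A @ theta - y) ** 2
        # closed-form weight update for the TLS surrogate at this mu
        lo, hi = mu / (mu + 1) * c_bar**2, (mu + 1) / mu * c_bar**2
        w = np.clip(c_bar * np.sqrt(mu * (mu + 1)) / np.sqrt(np.maximum(r2, 1e-12)) - mu, 0.0, 1.0)
        w[r2 <= lo] = 1.0
        w[r2 >= hi] = 0.0
        mu *= mu_factor                                    # gradually recover the true TLS cost
    return theta, w

# toy usage: fit a line when 30% of the points are gross outliers
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 2.0 * x + 0.5 + 0.01 * rng.standard_normal(200)
y[:60] += rng.uniform(-5, 5, 60)
theta, w = gnc_tls_line_fit(x, y)
print(theta, int((w > 0.5).sum()), "estimated inliers")
```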

    Large Area 3-D Reconstructions from Underwater Optical Surveys

    Robotic underwater vehicles regularly perform vast optical surveys of the ocean floor. Scientists value these surveys because optical images offer high levels of detail and are easily interpreted by humans. Unfortunately, the coverage of a single image is limited by absorption and backscatter, while what is generally desired is an overall view of the survey area. Recent work on underwater mosaics assumes planar scenes and is applicable only to situations without much relief. We present a complete and validated system for processing optical images acquired from an underwater robotic vehicle to form a 3D reconstruction of the ocean floor. Our approach is designed for the most general conditions of wide-baseline imagery (low overlap and presence of significant 3D structure) and scales to hundreds or thousands of images. We assume only a calibrated camera system and a vehicle with uncertain and possibly drifting pose information (e.g., a compass, depth sensor, and a Doppler velocity log). Our approach is based on a combination of techniques from computer vision, photogrammetry, and robotics. We use a local-to-global approach to structure from motion, aided by the navigation sensors on the vehicle, to generate 3D sub-maps. These sub-maps are then placed in a common reference frame that is refined by matching overlapping sub-maps. The final stage of processing is a bundle adjustment that provides the 3D structure, camera poses, and uncertainty estimates in a consistent reference frame. We present results with ground truth for structure as well as results from an oceanographic survey over a coral reef. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86036/1/opizarro-12.pd
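    As a rough illustration of the final refinement stage described above, the sketch below runs a generic bundle adjustment: camera poses (angle-axis plus translation) and 3D points are refined jointly by minimizing reprojection error with a nonlinear least-squares solver. The pinhole model, fixed focal length, and Huber loss are assumptions for the example, not the authors' pipeline.

```python
# Generic bundle-adjustment sketch: jointly refine camera poses and 3D points
# by minimizing reprojection error. This illustrates the refinement stage in
# general terms; the camera model and parameterization are assumptions.
import numpy as np
from scipy.optimize import least_squares

def rotate(points, rvecs):
    """Rotate points by angle-axis vectors (Rodrigues' formula)."""
    theta = np.linalg.norm(rvecs, axis=1, keepdims=True)
    with np.errstate(invalid="ignore"):
        v = np.where(theta > 0, rvecs / theta, 0.0)
    cos, sin = np.cos(theta), np.sin(theta)
    dot = np.sum(points * v, axis=1, keepdims=True)
    return cos * points + sin * np.cross(v, points) + (1 - cos) * dot * v

def project(points, cam_params, focal=800.0):
    """Pinhole projection of world points into each observing camera."""
    p = rotate(points, cam_params[:, :3]) + cam_params[:, 3:6]
    return focal * p[:, :2] / p[:, 2:3]

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed_uv):
    """Stacked reprojection residuals for all (camera, point) observations."""
    cams = params[: n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6 :].reshape(n_pts, 3)
    uv = project(pts[pt_idx], cams[cam_idx])
    return (uv - observed_uv).ravel()

def bundle_adjust(cams0, pts0, cam_idx, pt_idx, observed_uv):
    """Refine initial poses cams0 (N,6) and points pts0 (M,3) given 2D observations."""
    x0 = np.hstack([cams0.ravel(), pts0.ravel()])
    sol = least_squares(residuals, x0, method="trf", loss="huber",
                        args=(len(cams0), len(pts0), cam_idx, pt_idx, observed_uv))
    n = len(cams0) * 6
    return sol.x[:n].reshape(-1, 6), sol.x[n:].reshape(-1, 3)
```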

    A Survey of Surface Reconstruction from Point Clouds

    The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work focused on reconstructing a piecewise-smooth representation of the original shape, recent work has adopted more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations, not necessarily the explicit geometry. We survey the field of surface reconstruction and provide a categorization with respect to priors, data imperfections, and reconstruction output. By taking a holistic view of surface reconstruction, we give a detailed characterization of the field, highlight similarities between diverse reconstruction techniques, and provide directions for future work in surface reconstruction.

    Traffic Scene Perception for Automated Driving with Top-View Grid Maps

    An automated vehicle must make safe, sensible, and fast decisions based on its environment. This requires an accurate and computationally efficient model of the traffic environment. This environment model should fuse and filter measurements from different sensors and provide them to downstream subsystems as compact yet expressive information. This work addresses modeling the traffic scene on the basis of top-view grid maps. Compared to other environment models, they enable early fusion of range measurements from different sources at low computational cost as well as explicit modeling of free space. After introducing a ground-surface estimation method that forms the basis of the top-view modeling, methods for occupancy and elevation mapping of grid maps from multiple noisy, partly contradictory, or missing range measurements are discussed. On the resulting sensor-independent representation, models for detecting traffic participants and for estimating scene flow, odometry, and tracking features are then investigated. Experiments on publicly available datasets and a real vehicle show that top-view grid maps can be estimated from on-board LiDAR sensors and that safety-critical environment information such as observability and drivability can be derived reliably. Finally, traffic participants are determined as oriented bounding boxes with semantic classes, velocities, and tracking features from a joint model for object detection and flow estimation on the basis of the top-view grid maps.
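    A minimal sketch of the occupancy-mapping step described above is given below: range measurements are fused into a top-view grid by updating per-cell log-odds along each beam. The inverse sensor model, grid size, and probability values are illustrative assumptions rather than the parameters used in the thesis.

```python
# Minimal sketch of a top-view occupancy grid updated in log-odds form from
# 2D range measurements. The inverse sensor model and all constants are
# illustrative assumptions, not the thesis' parameters.
import numpy as np

L_FREE, L_OCC = np.log(0.3 / 0.7), np.log(0.9 / 0.1)     # log-odds increments

def update_grid(log_odds, origin, hit, resolution=0.1):
    """Mark cells along the ray origin->hit as free and the endpoint as occupied."""
    o = (np.asarray(origin) / resolution).astype(int)
    h = (np.asarray(hit) / resolution).astype(int)
    n = int(np.max(np.abs(h - o))) + 1
    cells = np.linspace(o, h, n).round().astype(int)      # coarse ray traversal
    for cx, cy in cells[:-1]:
        log_odds[cx, cy] += L_FREE                         # space traversed by the beam
    log_odds[h[0], h[1]] += L_OCC                          # beam endpoint: obstacle
    return log_odds

# usage: fuse a few simulated range returns into a 200 x 200 cell grid
grid = np.zeros((200, 200))
for endpoint in [(5.0, 7.5), (5.2, 7.4), (8.0, 3.0)]:
    grid = update_grid(grid, origin=(10.0, 10.0), hit=endpoint)
prob = 1.0 / (1.0 + np.exp(-grid))                         # back to occupancy probability
```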

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data that are being, and will continue to be, generated by newly developed sensors, automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

    Multimodal Three Dimensional Scene Reconstruction, The Gaussian Fields Framework

    The focus of this research is on building 3D representations of real-world scenes and objects using different imaging sensors: primarily range acquisition devices (such as laser scanners and stereo systems) that allow the recovery of 3D geometry, and multi-spectral image sequences, including visual and thermal IR images, that provide additional scene characteristics. The crucial technical challenge we addressed is the automatic point-set registration task. In this context, our main contribution is the development of an optimization-based method at the core of which lies a unified criterion that solves simultaneously for the dense point correspondence and transformation recovery problems. The new criterion has a straightforward expression in terms of the datasets and the alignment parameters and was used primarily for 3D rigid registration of point-sets; however, it also proved useful for feature-based multimodal image alignment. We derived our method from simple Boolean matching principles by approximation and relaxation. One of the main advantages of the proposed approach, compared to the widely used class of Iterative Closest Point (ICP) algorithms, is convexity in the neighborhood of the registration parameters and continuous differentiability, allowing for the use of standard gradient-based optimization techniques. Physically, the criterion is interpreted in terms of a Gaussian force field exerted by one point-set on the other. This formulation proved useful for controlling and increasing the region of convergence, and hence allowing more autonomy in correspondence tasks. Furthermore, the criterion can be computed with linear complexity using recently developed Fast Gauss Transform numerical techniques. In addition, we introduced a new local feature descriptor derived from visual saliency principles, which significantly enhanced the performance of the registration algorithm. The resulting technique was subjected to a thorough experimental analysis that highlighted its strengths and showed its limitations. Our current applications are in the field of 3D modeling for inspection, surveillance, and biometrics. However, since this matching framework can be applied to any type of data that can be represented as N-dimensional point-sets, the scope of the method reaches many more pattern analysis applications.
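    The Gaussian force field criterion described above can be written as a sum of Gaussians over all point pairs, which is smooth and differentiable, so a standard gradient-based optimizer can be applied directly. The sketch below evaluates this criterion naively in O(NM) for a 2D rigid alignment (the thesis uses the Fast Gauss Transform to reach linear complexity); sigma, the optimizer, and the rigid parameterization are assumptions for the example.

```python
# Rough sketch of a Gaussian-field registration criterion for 2D rigid
# alignment: the energy is a sum of Gaussians between all point pairs,
# maximized here with an off-the-shelf gradient-based optimizer. Naive
# O(N*M) evaluation stands in for the Fast Gauss Transform; sigma and the
# optimizer are assumed values.
import numpy as np
from scipy.optimize import minimize

def transform(points, params):
    """Apply a 2D rigid transform (angle, tx, ty) to an (N, 2) point set."""
    a, tx, ty = params
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return points @ R.T + np.array([tx, ty])

def gaussian_field_energy(params, moving, fixed, sigma=0.2):
    """Negative sum of pairwise Gaussians (negated so we can minimize)."""
    d2 = np.sum((transform(moving, params)[:, None, :] - fixed[None, :, :]) ** 2, axis=2)
    return -np.exp(-d2 / sigma**2).sum()

def register(moving, fixed):
    res = minimize(gaussian_field_energy, x0=np.zeros(3),
                   args=(moving, fixed), method="BFGS")
    return res.x

# usage: recover a known rotation + translation on a noisy copy of a point set
rng = np.random.default_rng(1)
fixed = rng.uniform(-1, 1, size=(300, 2))
moving = transform(fixed, [-0.3, 0.1, -0.05]) + 0.01 * rng.standard_normal((300, 2))
print(register(moving, fixed))   # approximately the inverse of the applied transform
```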

    Generalized least squares-based parametric motion estimation and segmentation

    Motion analysis is one of the most important fields in computer vision. This is because the real world is in continuous motion, and it is clear that much more information can be obtained from moving scenes than from static ones. This thesis has focused mainly on developing motion estimation algorithms and applying them to image registration and motion segmentation problems. One of the main goals of this work is to develop a highly accurate image registration technique that is tolerant to outliers and capable of working even in the presence of large deformations such as translations, rotations, scale changes, and global as well as spatially non-uniform illumination changes. Another goal of this thesis is to address motion estimation and segmentation in two-image sequences almost simultaneously and without a priori knowledge of the number of motion models present. The experiments reported in this work show that the algorithms proposed in this thesis achieve highly accurate results.

    This thesis proposes several techniques related to the motion estimation problem. In particular, it deals with global motion estimation for image registration and with motion segmentation. In the first case, we assume that the majority of the pixels in the image follow the same motion model, although the possibility of a large number of outliers is also considered. In the motion segmentation problem, the presence of more than one motion model is considered. In both cases, sequences of two consecutive grey-level images are used. A new generalized least squares-based motion estimator is proposed. The proposed formulation of the motion estimation problem provides an additional constraint that helps to match the pixels using image gradient information. This is achieved by assigning a weight to each observation, giving high weights to observations considered inliers and low weights to those considered outliers. To avoid falling into a local minimum, the proposed motion estimator uses a feature-based (SIFT) method to obtain good initial motion parameters. It can therefore deal with large motions such as translations, rotations, scale changes, viewpoint changes, etc. The accuracy of the approach has been tested on challenging real images using both affine and projective motion models. Two motion estimation techniques that use M-estimators within an iteratively reweighted least squares strategy to deal with outliers were selected to compare the accuracy of our approach. The results show that the proposed motion estimator obtains results as accurate as M-estimator-based techniques, and even better in most cases. The problem of accurately estimating motion under non-uniform illumination changes is also considered. A modification of the global motion estimator is introduced to deal with this kind of illumination change. In particular, a dynamic image model in which the illumination factors are functions of location replaces the brightness constancy assumption, allowing for a more general and accurate image model. Experiments using challenging images show that the combination of both techniques is feasible and provides accurate estimates of the motion parameters even in the presence of strong illumination changes between the images. The last part of the thesis deals with the joint motion estimation and segmentation problem. The proposed algorithm uses temporal information, through the generalized least squares motion estimation process, and spatial information, through an iterative region-growing algorithm that classifies regions of pixels into the different motion models present in the sequence. In addition, it can extract the different moving regions of the scene while estimating their motion quasi-simultaneously and without a priori information about the number of moving objects in the scene. The performance of the algorithm is evaluated on synthetic and real images with multiple objects undergoing different types of motion.
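    For illustration, the sketch below implements the kind of iteratively reweighted least-squares (IRLS) affine motion estimator used above as a comparison baseline: linearized brightness constancy gives one observation per pixel, and robust weights down-weight pixels with large residuals. It is not the thesis' generalized least squares formulation; the Huber constant, the scale estimate, and all function names are assumptions.

```python
# Simplified sketch of parametric (affine) motion estimation between two
# grey-level frames: linearized brightness constancy gives one observation
# per pixel, and iteratively reweighted least squares down-weights pixels
# with large residuals. This is a generic IRLS baseline in the spirit of the
# comparison methods, not the thesis' generalized least-squares estimator.
import numpy as np

def estimate_affine_motion(I1, I2, iters=10, k=1.345):
    """Return affine parameters (a1..a6) with u = a1 + a2*x + a3*y, v = a4 + a5*x + a6*y."""
    Iy, Ix = np.gradient(I1.astype(float))                # image gradients
    It = I2.astype(float) - I1.astype(float)              # temporal difference
    ys, xs = np.mgrid[0:I1.shape[0], 0:I1.shape[1]]
    x, y = xs.ravel(), ys.ravel()
    gx, gy, gt = Ix.ravel(), Iy.ravel(), It.ravel()
    # each row: gx*(a1 + a2*x + a3*y) + gy*(a4 + a5*x + a6*y) = -gt
    A = np.stack([gx, gx * x, gx * y, gy, gy * x, gy * y], axis=1)
    b = -gt
    w = np.ones_like(b)
    for _ in range(iters):
        sw = np.sqrt(w)
        params, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        r = A @ params - b
        s = 1.4826 * np.median(np.abs(r)) + 1e-12          # robust scale (MAD)
        w = np.minimum(1.0, k * s / (np.abs(r) + 1e-12))   # Huber-style weights
    return params
```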
