443 research outputs found

    Development of a Visual Odometry System as a Location Aid for Self-Driving Cars

    Knowing the exact position of a robot and the trajectory it describes is essential in the automotive field. The sensors and techniques developed over the years for this purpose are surveyed in this work. In this project, two cameras on board the vehicle are used as environment-perception sensors. The proposed algorithm is based solely on visual odometry: by analyzing the sequence of images captured by the cameras, without prior knowledge of the environment and without the use of other sensors, it estimates the position and orientation of the vehicle. The proposal has been validated on the KITTI dataset and compared with other state-of-the-art visual odometry techniques.
    Grado en Ingeniería en Electrónica y Automática Industrial
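    As a rough illustration of the frame-to-frame step such a pipeline performs, the following Python sketch estimates the relative pose between two consecutive images with OpenCV. It is a monocular simplification of the stereo setup described above, and the function name and all parameter values are illustrative, not taken from the thesis:

    import cv2
    import numpy as np

    def relative_pose(img_prev, img_curr, K):
        """Relative rotation R and (up-to-scale) translation t between two frames."""
        orb = cv2.ORB_create(nfeatures=2000)  # illustrative feature budget
        kp1, des1 = orb.detectAndCompute(img_prev, None)
        kp2, des2 = orb.detectAndCompute(img_curr, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        # RANSAC on the essential matrix discards mismatched correspondences.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t  # stereo depth would be needed to recover metric scale

    Chaining these relative poses frame by frame yields the trajectory estimate that is then compared against the KITTI ground truth.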

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advances in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, and real-time operation ensured, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach declaredly demands more computation, we show its superior performance in terms of accuracy.
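    Schematically, such a visual–inertial cost stacks weighted reprojection residuals e_r with inertial residuals e_s between consecutive keyframes (the notation below is ours, not necessarily the paper's exact indexing):

        J(\mathbf{x}) \;=\; \sum_{k} \sum_{j \in \mathcal{J}(k)}
            {\mathbf{e}_r^{j,k}}^{\!\top} \mathbf{W}_r^{j,k}\, \mathbf{e}_r^{j,k}
        \;+\; \sum_{k} {\mathbf{e}_s^{k}}^{\!\top} \mathbf{W}_s^{k}\, \mathbf{e}_s^{k}

    where J(k) indexes the landmarks visible in keyframe k and the information matrices W weight each residual. Minimizing J over a bounded keyframe window, with older states marginalized out, is the nonlinear optimization referred to above.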

    NICP: Dense normal based point cloud registration

    In this paper we present a novel online method to recursively align point clouds. By considering each point together with the local features of the surface (normal and curvature), our method takes advantage of the 3D structure around the points when determining the data association between two clouds. The algorithm relies on a least-squares formulation of the alignment problem that minimizes an error metric depending on these surface characteristics. We named the approach Normal Iterative Closest Point (NICP for short). Extensive experiments on publicly available benchmark data show that NICP outperforms other state-of-the-art approaches.
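    A least-squares objective in the spirit of this abstract (our notation; the paper's exact weighting may differ) stacks the point and normal residuals of each correspondence, with points p_i, q_i and normals n_i, m_i:

        J(\mathbf{R}, \mathbf{t}) \;=\; \sum_i
        \begin{pmatrix} \mathbf{R}\mathbf{p}_i + \mathbf{t} - \mathbf{q}_i \\ \mathbf{R}\mathbf{n}_i - \mathbf{m}_i \end{pmatrix}^{\!\top}
        \boldsymbol{\Omega}_i
        \begin{pmatrix} \mathbf{R}\mathbf{p}_i + \mathbf{t} - \mathbf{q}_i \\ \mathbf{R}\mathbf{n}_i - \mathbf{m}_i \end{pmatrix}

    where the information matrix Ω_i can encode the local surface curvature, so that flat, well-constrained regions contribute more to the alignment than noisy ones.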

    Visual Odometry Estimation Using Selective Features

    The rapid growth in computational power and technology has enabled the automotive industry to do extensive research into autonomous vehicles. So-called self-driving cars are now being developed by many companies, including Google, Mercedes-Benz, Delphi, Tesla, and Uber. One of the challenging tasks for these vehicles is to track their incremental motion at runtime and to analyze the surroundings for accurate localization. This crucial information is used by many internal systems, such as active suspension control, autonomous steering, and lane-change assist, all of which rely on incremental motion to infer logical conclusions. Measuring the incremental change in pose, in other words the motion, from visual information alone is called visual odometry. This thesis proposes an approach to the visual odometry problem that uses stereo-camera vision to incrementally estimate the pose of a vehicle by examining the changes that motion induces on the background of the frames captured by the stereo cameras. The approach uses a selective feature-based motion-tracking method that tracks the motion of the vehicle by analyzing the motion of its static surroundings and discarding the motion induced by the dynamic background (outliers). It accounts for the fact that the surroundings may contain moving objects, such as a truck, a car, or a pedestrian, whose motion differs from the vehicle's. The stereo camera adds depth information, which is crucial for detecting and rejecting outliers. Refining the interest-point locations using sinusoidal interpolation further increases the accuracy of the motion estimation. The results show that by choosing features only on the static background and tracking them accurately, robust semantic information can be obtained.
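    The outlier-rejection idea can be sketched in a few lines of Python: given 3-D points triangulated from the previous stereo pair and their 2-D matches in the current frame, RANSAC over a perspective-n-point model keeps only features consistent with a single rigid ego-motion. The function name, thresholds, and the use of solvePnPRansac are our assumptions, not the thesis implementation:

    import cv2
    import numpy as np

    def ego_motion_from_static_features(pts3d_prev, pts2d_curr, K):
        # pts3d_prev: Nx3 float32 points triangulated from the previous stereo pair
        # pts2d_curr: Nx2 float32 matched locations in the current left image
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d_prev, pts2d_curr, K, None,
            iterationsCount=200, reprojectionError=2.0)  # illustrative values
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)
        # Inliers index the features consistent with one rigid motion, i.e. the
        # static background; moving trucks, cars, and pedestrians fall out as outliers.
        return R, tvec, inliers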

    Semantic Visual Localization

    Full text link
    Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.
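    Independently of how the descriptors are learned, the retrieval step such a system performs can be illustrated generically in Python. This is a plain nearest-neighbour match between query and map descriptors, not the paper's model; names and shapes are assumptions:

    import numpy as np

    def localize(query_desc, map_desc):
        # query_desc: QxD descriptors computed from the current observation
        # map_desc:   MxD descriptors stored for known map locations
        d = np.linalg.norm(query_desc[:, None, :] - map_desc[None, :, :], axis=-1)
        nearest = d.argmin(axis=1)  # best map match per query descriptor
        return nearest, d.min(axis=1)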