    Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization

    The fusion of visual and inertial cues has become popular in robotics due to the complementary nature of the two sensing modalities. While most fusion strategies to date rely on filtering schemes, the visual robotics community has recently turned to non-linear optimization approaches for tasks such as visual Simultaneous Localization And Mapping (SLAM), following the discovery that this comes with significant advantages in quality of performance and computational complexity. Following this trend, we present a novel approach to tightly integrate visual measurements with readings from an Inertial Measurement Unit (IMU) in SLAM. An IMU error term is integrated with the landmark reprojection error in a fully probabilistic manner, resulting in a joint non-linear cost function to be optimized. Employing the powerful concept of 'keyframes', we partially marginalize old states to maintain a bounded-size optimization window, ensuring real-time operation. Comparing against both vision-only and loosely-coupled visual-inertial algorithms, our experiments confirm the benefits of tight fusion in terms of accuracy and robustness.
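    As a rough sketch of the kind of joint cost such a tightly-coupled formulation minimizes (the notation below is illustrative, not necessarily the paper's exact symbols), the weighted visual reprojection errors and IMU error terms are summed into a single objective:

        J(x) = \sum_{i}\sum_{j} e_r^{i,j\,\top} W_r^{i,j} e_r^{i,j} + \sum_{k} e_s^{k\,\top} W_s^{k} e_s^{k}

    where e_r^{i,j} is the reprojection error of landmark j in keyframe i, e_s^{k} is the IMU error term between successive states, and the W matrices are the corresponding information (inverse covariance) weights.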

    Continuous-Time Estimation of Attitude Using B-Splines on Lie Groups

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/140656/1/1.g001149.pd

    3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection

    Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field-of-view around the car. In this way, we avoid blind spots which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, as well as detect obstacles based on real-time depth map extraction.
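    As a concrete illustration of why fisheye images need special handling, here is a minimal Python sketch of the ideal equidistant projection (r = f * theta), one common model for fisheye lenses; the calibration model actually used in the V-Charge pipeline may differ:

        import numpy as np

        def project_equidistant(p_cam, fx, fy, cx, cy):
            # Ideal equidistant fisheye model: the image radius grows
            # linearly with the angle theta between the ray and the
            # optical axis, so fields of view near 180 degrees stay
            # finite (a pinhole model diverges as theta nears 90 degrees).
            x, y, z = p_cam
            theta = np.arctan2(np.hypot(x, y), z)   # angle off the optical axis
            phi = np.arctan2(y, x)                  # azimuth around the axis
            return (cx + fx * theta * np.cos(phi),
                    cy + fy * theta * np.sin(phi))

        # Example: a ray 30 degrees off-axis still lands at a finite pixel radius.
        u, v = project_equidistant((0.5, 0.0, 0.866), fx=300, fy=300, cx=640, cy=480)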

    Extensions to the Visual Odometry Pipeline for the Exploration of Planetary Surfaces

    Mars represents one of the most important targets for space exploration in the next 10 to 30 years, particularly because of evidence of liquid water in the planet's past. Current environmental conditions dictate that any existing water reserves will be in the form of ice; finding and sampling these ice deposits would further the study of the planet's climate history, further the search for evidence of life, and facilitate in-situ resource utilization during future manned exploration missions. This thesis presents a suite of algorithms to help enable a robotic ice-prospecting mission to Mars. Starting from visual odometry, the estimation of a rover's motion using a stereo camera as the primary sensor, we develop the following extensions: (i) a coupled surface/subsurface modelling system that provides novel data products to scientists working remotely, (ii) an autonomous retrotraverse system that allows a rover to return to previously visited places along a route for sampling, or to return a sample to an ascent vehicle, and (iii) the extension of the appearance-based visual odometry pipeline to an actively illuminated light detection and ranging sensor that provides data similar to a stereo camera but is not reliant on consistent ambient lighting, thereby enabling appearance-based vision techniques to be used in environments that are not conducive to passive cameras, such as underground mines or permanently shadowed craters on the moon. All algorithms are evaluated on real data collected using our field robot at the University of Toronto Institute for Aerospace Studies, or at a planetary analogue site on Devon Island, in the Canadian High Arctic.
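    For context, the core of the kind of stereo visual odometry pipeline extended here can be sketched in a few lines of Python with OpenCV; this is a generic frame-to-frame illustration (the function and variable names are ours), not the thesis's implementation:

        import cv2
        import numpy as np

        def frame_to_frame_pose(img_prev, img_cur, pts3d_prev, pts2d_prev, K):
            # Track the previous frame's features into the current frame...
            pts2d_cur, status, _err = cv2.calcOpticalFlowPyrLK(
                img_prev, img_cur, pts2d_prev, None)
            good = status.ravel() == 1
            # ...then solve for the camera pose against the 3D points
            # previously triangulated from the stereo pair, with RANSAC
            # rejecting bad tracks.
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                pts3d_prev[good], pts2d_cur[good], K, distCoeffs=None)
            R, _ = cv2.Rodrigues(rvec)
            return R, tvec   # relative motion between the two frames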

    OpenGV: A Unified and Generalized Approach to Calibrated Geometric Vision

    OpenGV is a new C++ library for calibrated real-time 3D geometric vision. It unifies both central and non-central absolute and relative camera pose computation algorithms within a single library. Each problem type comes with minimal and non-minimal closed-form solvers, as well as non-linear iterative optimization and robust sample consensus methods. OpenGV therefore contains an unprecedented level of completeness with regard to calibrated geometric vision algorithms, and it is the first library with a dedicated focus on a unified real-time usage of non-central multi-camera systems, which are increasingly popular in robotics and in the automotive industry. This paper introduces OpenGV's flexible interface and abstraction for multi-camera systems, and outlines the performance of all contained algorithms. It is our hope that the introduction of this open-source platform will motivate people to use it and potentially also contribute more algorithms, which would further improve the general accessibility of geometric vision algorithms and build a common playground for the fair comparison of different solutions.
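    The key abstraction enabling this unification is to treat every image measurement, central or not, as a spatial ray in the viewpoint frame: a unit bearing vector plus the position of the camera that observed it. A small Python illustration of that representation (this mirrors the idea, not OpenGV's actual C++ interface):

        import numpy as np
        from dataclasses import dataclass

        @dataclass
        class RayMeasurement:
            bearing: np.ndarray   # unit direction of the observation, rig frame
            offset: np.ndarray    # observing camera's center in the rig frame

        def central_ray(bearing):
            # A central (single-viewpoint) camera is the special case
            # where every ray passes through the origin.
            b = np.asarray(bearing, float)
            return RayMeasurement(b / np.linalg.norm(b), np.zeros(3))

        def noncentral_ray(bearing_in_cam, R_rig_cam, t_rig_cam):
            # For a multi-camera rig, rotate each camera's bearing into the
            # shared rig frame and keep that camera's center as the ray origin.
            b = R_rig_cam @ np.asarray(bearing_in_cam, float)
            return RayMeasurement(b / np.linalg.norm(b), np.asarray(t_rig_cam, float))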

    Using Multi-Camera Systems in Robotics: Efficient Solutions to the NPnP Problem

    This paper introduces two novel solutions to the generalized-camera exterior orientation problem, which has a vast number of potential applications in robotics: (i) a minimal solution requiring only three point correspondences, and (ii) gPnP, an efficient, non-iterative n-point solution with linear complexity in the number of points. Existing minimal solutions require exhaustive algebraic derivations. In contrast, our novel minimal solution is solved in a straightforward manner using the Gröbner basis method. Existing n-point solutions are mostly based on iterative optimization schemes; our n-point solution is non-iterative and outperforms existing algorithms in terms of computational efficiency. We present an evaluation against state-of-the-art single-camera algorithms and a comparison of different multi-camera setups, demonstrating the superior noise resilience achieved with multi-camera configurations and the efficiency of our algorithms. As a further contribution, we illustrate a possible robotic use-case of our non-perspective orientation computation algorithms by presenting visual odometry results on real data with a non-overlapping multi-camera configuration, including a comparison to a loosely coupled alternative.
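    Stated formally (in our notation, not necessarily the paper's), the generalized-camera exterior orientation problem is: given n world points p_i and their observation rays in the rig frame, each with origin c_i and unit direction f_i, find the rig rotation R, translation t, and depths lambda_i such that

        p_i = R (c_i + \lambda_i f_i) + t, \qquad \lambda_i > 0, \qquad i = 1, \dots, n

    The minimal solver handles n = 3, while gPnP solves the general case non-iteratively in time linear in n.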

    Self-Supervised Calibration for Robotic Systems

    Rolling Shutter Camera Calibration

    Rolling Shutter (RS) cameras are used across a wide range of consumer electronic devices, from smartphones to high-end cameras. It is well known that significant image distortions are introduced when an RS camera or the scene is moving.
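    The distortions stem from the sensor exposing image rows sequentially rather than all at once, so each row has its own capture time. A common model (our notation) assigns row v of an H-row image the timestamp

        t(v) = t_0 + \frac{v}{H} \, t_r, \qquad v = 0, \dots, H - 1

    where t_0 is the start of readout and t_r is the total frame readout time; calibration and geometric vision with RS cameras must then evaluate the camera pose at t(v) for each row.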