
    Minimal Solvers for Monocular Rolling Shutter Compensation under Ackermann Motion

    Modern automotive vehicles are often equipped with a budget commercial rolling shutter camera. These devices often produce distorted images due to the inter-row delay of the camera while capturing the image. Recent methods for monocular rolling shutter motion compensation utilize blur kernels and the straightness property of line segments. However, these methods are limited to handling rotational motion and are also not fast enough to operate in real time. In this paper, we propose a minimal solver for rolling shutter motion compensation which assumes a known vertical direction of the camera. Thanks to the Ackermann motion model of vehicles, which consists of only two motion parameters, together with two parameters for a simplified depth assumption, we arrive at a 4-line algorithm. The proposed minimal solver estimates the rolling shutter camera motion efficiently and accurately. Extensive experiments on real and simulated datasets demonstrate the benefits of our approach in terms of qualitative and quantitative results.
    Comment: Submitted to WACV 201
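
    To give intuition for why this parameterization is so compact, here is a minimal NumPy sketch, assuming planar circular-arc (Ackermann) motion described by just a yaw angle theta and a turning radius rho, with a linear rolling shutter readout interpolating the per-frame motion across image rows. The function names and the constant-velocity interpolation are illustrative assumptions, not the paper's actual solver.

        import numpy as np

        def ackermann_pose(theta, rho):
            # Relative camera pose after driving a circular arc of radius rho
            # through yaw angle theta (planar motion, y pointing up): the two
            # motion parameters of the Ackermann model.
            c, s = np.cos(theta), np.sin(theta)
            R = np.array([[c, 0.0, s],
                          [0.0, 1.0, 0.0],
                          [-s, 0.0, c]])                   # yaw about the vertical axis
            t = np.array([rho * (1.0 - c), 0.0, rho * s])  # chord of the arc
            return R, t

        def row_pose(row, num_rows, theta, rho):
            # Rolling shutter: row i is exposed i * (inter-row delay) later than
            # row 0, so it observes a fraction of the per-frame Ackermann motion.
            frac = row / float(num_rows)
            return ackermann_pose(frac * theta, rho)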

    Urban Environment Navigation with Real-Time Data Utilizing Computer Vision, Inertial, and GPS Sensors

    The purpose of this research was to obtain a navigation solution that used real data, in a degraded or denied global positioning system (GPS) environment, from low-cost commercial off-the-shelf sensors. The sensors that were integrated together were a commercial inertial measurement unit (IMU), a monocular camera computer vision algorithm, and GPS. Furthermore, the monocular camera computer vision algorithm had to be robust enough to handle any camera orientation that was presented to it. This research develops a visual odometry 2-D zero velocity measurement that is derived from both the feature points extracted from a monocular camera and the rotation values given by an IMU. By presenting measurements as 2-D zero velocity measurements, errors associated with scale, which is unobservable by a monocular camera, can be removed from the measurements. The 2-D zero velocity measurements are represented as two normalized velocity vectors that are orthogonal to the vehicle's direction of travel, and are used to determine the error in the INS's measured velocity vector. This error is produced by knowing which directions the vehicle is not moving in, given by the 2-D zero velocity measurements, and comparing them to the direction of travel the vehicle is thought to be moving in. The performance was evaluated by comparing results that were obtained when different sensor pairings of a commercial IMU, GPS, and monocular computer vision algorithm were used to obtain the vehicle's trajectory. Three separate monocular cameras, each pointed in a different direction, were tested independently. Finally, the solutions provided by the GPS were degraded (i.e., the number of satellites available from the GPS was limited) to determine the effectiveness of adding a monocular computer vision algorithm to a system operating with a degraded GPS solution.
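
    The core measurement lends itself to a compact sketch. Below is a hedged NumPy illustration of the idea as described in the abstract: project the INS-measured velocity onto two unit vectors orthogonal to the vision-derived direction of travel, so that any nonzero component is an observation of INS velocity error that is independent of the unobservable monocular scale. The helper-axis construction and all names are assumptions for illustration, not taken from the thesis.

        import numpy as np

        def zero_velocity_residual(v_ins, direction_of_travel):
            # Unit vector along the direction of travel recovered from the
            # monocular feature points plus IMU rotation (scale-free).
            d = direction_of_travel / np.linalg.norm(direction_of_travel)
            # Build two unit vectors orthogonal to d; the helper axis is an
            # arbitrary (hypothetical) choice that avoids degeneracy.
            helper = (np.array([0.0, 0.0, 1.0]) if abs(d[2]) < 0.9
                      else np.array([1.0, 0.0, 0.0]))
            u1 = np.cross(d, helper)
            u1 /= np.linalg.norm(u1)
            u2 = np.cross(d, u1)                 # already unit length
            # The vehicle should have zero velocity along u1 and u2, so these
            # projections directly measure the INS velocity error.
            return np.array([u1 @ v_ins, u2 @ v_ins])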

    Classification and Pose Estimation of Vehicles in Videos by 3D Modeling within Discrete-Continuous Optimization

    Faster than FAST: GPU-Accelerated Frontend for High-Speed VIO

    The recent introduction of powerful embedded graphics processing units (GPUs) has allowed for unforeseen improvements in real-time computer vision applications. It has enabled algorithms to run onboard, well above standard video rates, yielding not only higher information processing capability but also reduced latency. This work focuses on the applicability of efficient low-level, GPU hardware-specific instructions to improve on existing computer vision algorithms in the field of visual-inertial odometry (VIO). While most steps of a VIO pipeline work on visual features, they rely on image data for detection and tracking, both of which are well suited for parallelization. Non-maxima suppression and the subsequent feature selection, in particular, are prominent contributors to the overall image processing latency. Our work first revisits the problem of non-maxima suppression for feature detection, specifically on GPUs, and proposes a solution that selects local response maxima, imposes spatial feature distribution, and extracts features simultaneously. Our second contribution introduces an enhanced FAST feature detector that applies the aforementioned non-maxima suppression method. Finally, we compare our method to other state-of-the-art CPU and GPU implementations, where we always outperform all of them in feature tracking and detection, resulting in over 1000 fps throughput on an embedded Jetson TX2 platform. Additionally, we demonstrate our work integrated in a VIO pipeline achieving metric state estimation at ~200 fps.
    Comment: IEEE International Conference on Intelligent Robots and Systems (IROS), 2020. Open-source implementation available at https://github.com/uzh-rpg/vili
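
    For intuition, here is a minimal CPU-side NumPy sketch of the kind of grid-based non-maxima suppression that couples maxima selection with spatial feature distribution: keeping at most one feature per cell both suppresses non-maxima and spreads features over the image in a single pass. The paper's contribution is a GPU-parallel formulation; this sequential cell-based variant is only an assumed simplification of it.

        import numpy as np

        def grid_nms(response, cell=32):
            # Keep at most one feature per grid cell: the pixel with the
            # strongest detector response. This performs non-maxima
            # suppression and spatial feature selection in one pass.
            h, w = response.shape
            keypoints = []
            for y0 in range(0, h, cell):
                for x0 in range(0, w, cell):
                    patch = response[y0:y0 + cell, x0:x0 + cell]
                    py, px = np.unravel_index(np.argmax(patch), patch.shape)
                    if patch[py, px] > 0:        # skip cells with no detection
                        keypoints.append((x0 + px, y0 + py, patch[py, px]))
            return keypoints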