
    Robust visual odometry using uncertainty models

    In dense urban environments, GPS by itself cannot be relied on to provide accurate positioning information. Signal reception issues (e.g. occlusion, multi-path effects) often prevent the GPS receiver from getting a positional lock, causing holes in the absolute positioning data. To keep assisting the driver, other sensors are required to track the vehicle motion during these periods of GPS disturbance. In this paper, we propose a novel method that uses a single on-board consumer-grade camera to estimate the relative vehicle motion. The method is based on the tracking of ground-plane features, taking into account the uncertainty in their backprojection as well as the uncertainty in the vehicle motion. A Hough-like parameter-space vote is employed to extract motion parameters from the uncertainty models. The method is easy to calibrate and designed to be robust to outliers and poor feature quality. Preliminary testing shows good accuracy and reliability, with a positional estimate within 2 metres over a 400-metre elapsed distance. The effects of inaccurate calibration are examined using artificial datasets, suggesting a self-calibrating system may be possible in future work.
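    To make the voting scheme concrete, here is a minimal sketch of a Hough-style vote for planar vehicle motion (yaw theta plus translation tx, ty) estimated from tracked ground-plane features. The paper additionally spreads each vote according to the backprojection and motion uncertainty models, which this sketch omits; all function names, bin counts, and parameter ranges below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rigid_from_pair(p1, p2, q1, q2):
    """Solve the 2D rigid transform (theta, tx, ty) mapping (p1, p2) to (q1, q2)."""
    vp, vq = p2 - p1, q2 - q1
    theta = np.arctan2(vq[1], vq[0]) - np.arctan2(vp[1], vp[0])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = q1 - R @ p1
    return theta, t[0], t[1]

def hough_motion(prev_pts, curr_pts, theta_bins=64, t_bins=64,
                 theta_range=(-0.2, 0.2), t_range=(-2.0, 2.0),
                 n_samples=2000, seed=None):
    """Vote over a discretized (theta, tx, ty) space and return the peak cell."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((theta_bins, t_bins, t_bins), dtype=np.int32)
    n = len(prev_pts)
    for _ in range(n_samples):
        i, j = rng.choice(n, size=2, replace=False)
        theta, tx, ty = rigid_from_pair(prev_pts[i], prev_pts[j],
                                        curr_pts[i], curr_pts[j])
        # Quantize into the accumulator; votes outside range are dropped as outliers.
        bt = int((theta - theta_range[0]) / (theta_range[1] - theta_range[0]) * theta_bins)
        bx = int((tx - t_range[0]) / (t_range[1] - t_range[0]) * t_bins)
        by = int((ty - t_range[0]) / (t_range[1] - t_range[0]) * t_bins)
        if 0 <= bt < theta_bins and 0 <= bx < t_bins and 0 <= by < t_bins:
            acc[bt, bx, by] += 1
    bt, bx, by = np.unravel_index(acc.argmax(), acc.shape)
    # Map the winning cell centre back to motion parameters.
    theta = theta_range[0] + (bt + 0.5) * (theta_range[1] - theta_range[0]) / theta_bins
    tx = t_range[0] + (bx + 0.5) * (t_range[1] - t_range[0]) / t_bins
    ty = t_range[0] + (by + 0.5) * (t_range[1] - t_range[0]) / t_bins
    return theta, tx, ty
```

    Because each vote comes from only two correspondences and the peak is taken over the whole accumulator, a large fraction of outlier tracks can be tolerated, which is the robustness property the abstract refers to.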

    3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection

    Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field-of-view around the car. In this way, we avoid blind spots, which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
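    As background for why standard pipelines need adapting: fisheye lenses are usually described by an angular model rather than the pinhole model. Below is a minimal sketch of the common equidistant fisheye projection (r = f·θ); the V-Charge pipeline may use a different fisheye model, and the function and parameter names here are illustrative assumptions.

```python
import numpy as np

def project_equidistant(X_cam, f, cx, cy):
    """Project a 3D point (camera coordinates) with an equidistant fisheye model.

    Pinhole: r = f * tan(theta); equidistant fisheye: r = f * theta,
    which keeps the full hemisphere (theta near 90 degrees) in frame.
    """
    x, y, z = X_cam
    theta = np.arctan2(np.hypot(x, y), z)  # angle from the optical axis
    phi = np.arctan2(y, x)                 # azimuth around the optical axis
    r = f * theta                          # radial distance on the image plane
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```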

    Ground Vehicle Navigation Using Magnetic Field Variation

    The Earth's magnetic field has been the bedrock of navigation for centuries. The latest research highlights the uniqueness of magnetic field measurements based on position due to large-scale variations as well as localized perturbations. These observable changes in the Earth's magnetic field as a function of position provide distinct information which can be used for navigation. This dissertation describes ground vehicle navigation exploiting variation in Earth's magnetic field using a self-contained navigation system consisting of only a magnetometer and magnetic field maps. In order to achieve navigation, effective calibration enables repeatable magnetic field measurements from different vehicles and facilitates mapping of the observable magnetic field as a function of position. A new modified ellipsoid calibration technique for strapdown magnetometers in large vehicles is described, as well as analysis of position measurement generation comparing a multitude of measurement compositions using existing and newly developed likelihood techniques. Finally, navigation solutions are presented.
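    For context on ellipsoid calibration: raw magnetometer readings distorted by a hard-iron offset and (diagonal) soft-iron scaling lie on an ellipsoid rather than a sphere, so calibration amounts to fitting that ellipsoid and mapping it back to a sphere. The sketch below fits only an axis-aligned ellipsoid by linear least squares; the dissertation's modified technique for strapdown magnetometers in large vehicles is more general, and all names here are illustrative.

```python
import numpy as np

def fit_axis_aligned_ellipsoid(m):
    """Least-squares fit of a*x^2 + b*y^2 + c*z^2 + d*x + e*y + f*z = 1
    to raw magnetometer samples m (N x 3). Returns the centre (hard-iron
    offset) and per-axis radii (diagonal soft-iron scale)."""
    x, y, z = m[:, 0], m[:, 1], m[:, 2]
    D = np.column_stack([x * x, y * y, z * z, x, y, z])
    p, *_ = np.linalg.lstsq(D, np.ones(len(m)), rcond=None)
    a, b, c, d, e, f = p
    centre = np.array([-d / (2 * a), -e / (2 * b), -f / (2 * c)])
    # Complete the square: a(x-x0)^2 + b(y-y0)^2 + c(z-z0)^2 = g.
    g = 1 + d * d / (4 * a) + e * e / (4 * b) + f * f / (4 * c)
    radii = np.sqrt(g / np.array([a, b, c]))
    return centre, radii

def calibrate(m, centre, radii):
    """Map raw samples onto the unit sphere: remove offset, rescale axes."""
    return (m - centre) / radii
```

    After this correction, repeated traversals of the same location should yield repeatable field measurements, which is what makes the map-matching navigation step possible.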

    Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment, taken with an external camera, are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. Videos can then be recorded on demand with the vehicle cameras during a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, point clouds are created from images of an underground car park and of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range are achieved for all vehicle cameras, and relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
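    The per-frame pose estimation step described above (camera pose from ground control points) is a standard perspective-n-point problem. A minimal sketch using OpenCV's solvePnP is shown below; the paper's pipeline additionally refines all per-frame poses in a joint bundle adjustment, which is not reproduced here, and the function name and inputs are illustrative assumptions.

```python
import numpy as np
import cv2

def camera_pose_from_gcps(gcp_xyz, gcp_uv, K, dist_coeffs=None):
    """Estimate a vehicle-camera pose from ground control points.

    gcp_xyz: (N, 3) 3D ground control points from the SfM point cloud,
             expressed in the vehicle coordinate system.
    gcp_uv:  (N, 2) their observed pixel positions in one video frame.
    K:       (3, 3) intrinsic matrix of the (pre-calibrated) camera.
    Returns the world-to-camera rotation and the camera centre in
    vehicle coordinates.
    """
    ok, rvec, tvec = cv2.solvePnP(
        gcp_xyz.astype(np.float32), gcp_uv.astype(np.float32),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed; check the 2D-3D correspondences")
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> rotation matrix
    centre = (-R.T @ tvec).ravel()    # camera centre in the vehicle frame
    return R, centre
```

    Running this per video frame yields the pose sequence that the bundle adjustment then optimizes jointly; pose quality degrades while the car moves, consistent with the reported results.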

    Functionalized Transparent Surfaces with Enhanced Self-Cleaning against Ink Aerosol Contamination

    During operation of a standard commercial inkjet printer, suspended ink particles form an ink aerosol inside the printing chamber that can cause serious malfunctions, including contamination of the transparent window of the printhead position calibration optical sensors. In this work, transparent conducting film (TCF) and surface functionalization through a self-assembled monolayer (SAM) are proposed and investigated to repel ink aerosol deposition on a transparent surface and to reduce its adverse effects. The results show that the combination of the Joule heating effect induced by applying an electrical current to the TCF and the hydrophobic property of the SAM reduces transmittance loss from an average of 10% to less than 1.5%. Correspondingly, the area of the surface covered by ink decreases from 45.62% ± 6.15% to 1.71% ± 0.25%. The preliminary results are obtained with glass substrates and subsequently extended to the plastic window of a commercial inkjet printer calibration sensor, thus demonstrating the potential of the proposed approach to reduce aerosol contamination in real applications.