
    Multi-Object Tracking with Interacting Vehicles and Road Map Information

    In many applications, tracking multiple objects is crucial for perceiving the current environment. Most present multi-object tracking algorithms assume that objects move independently of other dynamic objects and of the static environment. Since objects in many traffic situations interact with each other, and drivable areas additionally restrict their motion, the assumption of independent object motion does not hold. This paper proposes an approach that adapts a multi-object tracking system to model both the interaction between vehicles and the current road geometry. To this end, the prediction step of a Labeled Multi-Bernoulli filter is extended to model interaction between objects using the Intelligent Driver Model. Furthermore, to incorporate road map information, an approximation of a highly precise road map is used. The results show that in scenarios where the assumptions of a standard motion model are violated, the tracking system adapted with the proposed method achieves higher accuracy and robustness in its track estimates.
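
    As a rough sketch of how the Intelligent Driver Model can supply an interaction-aware prediction, the IDM computes a follower's longitudinal acceleration from its own speed, the leader's speed, and the gap between them. This is a minimal illustration with common textbook parameter values, not the configuration used in the paper:

        import math

        def idm_acceleration(v, v_lead, gap,
                             v0=30.0,   # desired speed [m/s] (assumed value)
                             T=1.5,     # desired time headway [s]
                             a=1.4,     # maximum acceleration [m/s^2]
                             b=2.0,     # comfortable deceleration [m/s^2]
                             s0=2.0,    # minimum standstill gap [m]
                             delta=4):  # acceleration exponent
            """Intelligent Driver Model: follower acceleration given the leader."""
            dv = v - v_lead  # closing speed toward the leader
            s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
            return a * (1 - (v / v0) ** delta - (s_star / gap) ** 2)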

    Calibration and Sensitivity Analysis of a Stereo Vision-Based Driver Assistance System

    Under the "Books" tab at http://intechweb.org/, search for the title "Stereo Vision" and see Chapter 1.

    Model-based estimation of off-highway road geometry using single-axis LADAR and inertial sensing

    This paper applies previously studied extended Kalman filter techniques for planar road geometry estimation to the domain of autonomous navigation of off-highway vehicles. In this work, a clothoid model of the road geometry is constructed and estimated recursively based on road features extracted from single-axis LADAR range measurements. We present a method for extracting road centerline features in the image plane and describe its application to recursive estimation of the road geometry. We analyze the performance of our method against simulated motion over varied road geometries and against closed-loop detection, tracking, and following of desert roads. Our method accommodates full 6-DOF motion of the vehicle as it navigates, constructs consistent estimates of the road geometry with respect to a fixed global reference frame, and requires an estimate of the sensor pose for each range measurement.
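
    For illustration, a planar clothoid lets the road curvature vary linearly with arc length, kappa(s) = kappa0 + kappa1 * s, so the centerline follows by integrating the heading. The sketch below is a generic forward-Euler integration of that model, not the paper's recursive estimator:

        import numpy as np

        def clothoid_centerline(kappa0, kappa1, length, n=100):
            """Sample a planar clothoid with curvature kappa0 + kappa1 * s."""
            s = np.linspace(0.0, length, n)
            theta = kappa0 * s + 0.5 * kappa1 * s**2  # heading = integral of curvature
            ds = np.diff(s)
            # Forward-Euler integration of the unit tangent vector
            x = np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * ds)))
            y = np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * ds)))
            return np.stack([x, y], axis=1)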

    3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection

    Cameras are a crucial exteroceptive sensor for self-driving cars: they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. A surround multi-camera system can cover the full 360-degree field of view around the car, avoiding blind spots that can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, and obstacle detection need to be adapted to take full advantage of multiple cameras rather than treating each camera individually, and processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project, which seeks to enable automated valet parking for self-driving cars. Our pipeline precisely calibrates multi-camera systems, builds sparse 3D maps for visual navigation, visually localizes the car with respect to these maps, generates accurate dense maps, and detects obstacles based on real-time depth map extraction.
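
    Fisheye lenses are often described with an equidistant projection model, in which the image radius grows linearly with the angle from the optical axis (r = f * theta). The sketch below shows that generic model only; the V-Charge pipeline calibrates its own camera models:

        import numpy as np

        def equidistant_project(points_cam, f, cx, cy):
            """Project 3D points (camera frame) with an equidistant fisheye model."""
            X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
            theta = np.arctan2(np.hypot(X, Y), Z)  # angle from the optical axis
            phi = np.arctan2(Y, X)                 # azimuth around the axis
            r = f * theta                          # equidistant mapping
            return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)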

    Ego-motion and Surrounding Vehicle State Estimation Using a Monocular Camera

    Understanding ego-motion and the state of surrounding vehicles is essential for automated driving and advanced driver assistance technologies. Typical approaches fuse multiple sensors such as LiDAR, camera, and radar to recognize surrounding vehicle state, including position, velocity, and orientation. Such sensing modalities are overly complex and costly for production personal-use vehicles. In this paper, we propose a novel machine learning method to estimate ego-motion and surrounding vehicle state using a single monocular camera. Our approach combines three deep neural networks that estimate the 3D vehicle bounding box, depth, and optical flow from a sequence of images. The main contribution of this paper is a new framework and algorithm that integrates these three networks to estimate the ego-motion and surrounding vehicle state. To achieve more accurate 3D position estimation, we perform ground plane correction in real time. The efficacy of the proposed method is demonstrated through experimental evaluations that compare our results to ground truth data from other sensors, including the CAN bus and LiDAR.
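
    To illustrate the geometry involved, a 2D detection can be lifted to 3D by back-projecting a pixel with its estimated depth, after which a ground-plane constraint can refine the result. This is a simplified stand-in for the paper's integration of the three networks; the plane normal and camera height are assumed inputs:

        import numpy as np

        def backproject_pixel(u, v, depth, K):
            """Lift a pixel to a 3D camera-frame point using estimated depth."""
            ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
            return depth * ray / ray[2]  # scale so Z equals the depth estimate

        def ground_plane_correct(p, n, h):
            """Project p onto the plane n.x = h (n: unit normal, h: camera height)."""
            return p - (np.dot(n, p) - h) * np.asarray(n)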