    Real Time Lidar and Radar High-Level Fusion for Obstacle Detection and Tracking with evaluation on a ground truth

    20th International Conference on Automation, Robotics and Applications, Lisbon, Sept. 24-25, 2018. Both Lidars and Radars are sensors used for obstacle detection. While Lidars are very accurate on obstacle positions and less accurate on their velocities, Radars are more precise on obstacle velocities and less precise on their positions. Sensor fusion between Lidar and Radar aims at improving obstacle detection by exploiting the advantages of both sensors. The present paper proposes a real-time Lidar/Radar data fusion algorithm for obstacle detection and tracking based on the global nearest neighbour (GNN) standard filter. The algorithm is implemented and embedded in an automotive vehicle as a component generated by real-time multisensor software. The benefits of data fusion compared with the use of a single sensor are illustrated through several tracking scenarios (on a highway and on a bend), using real-time kinematic sensors mounted on the ego and tracked vehicles as ground truth.
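
    As a rough illustration of the global nearest neighbour (GNN) association step such a tracker relies on, the sketch below assigns sensor detections to predicted track positions by minimizing total squared distance; the gate threshold and 2-D state layout are illustrative assumptions, not the paper's parameters.

```python
# Minimal GNN association sketch (not the paper's implementation).
# Tracks and detections are 2-D positions; the Hungarian solver picks
# the globally cheapest one-to-one assignment, and pairs whose cost
# exceeds a gate are rejected as non-matches.
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_associate(tracks, detections, gate=4.0):
    """Return (track_idx, det_idx) pairs under a squared-distance gate."""
    cost = np.linalg.norm(
        tracks[:, None, :] - detections[None, :, :], axis=2
    ) ** 2                                        # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= gate]

tracks = np.array([[10.0, 0.0], [25.0, 3.5]])      # predicted track positions
detections = np.array([[10.4, 0.2], [60.0, 1.0]])  # new sensor returns
print(gnn_associate(tracks, detections))           # -> [(0, 0)]
```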

    Multiple Sensor Fusion and Classification for Moving Object Detection and Tracking

    The accurate detection and classification of moving objects is a critical aspect of Advanced Driver Assistance Systems (ADAS). We believe that by including object classification from multiple sensor detections as a key component of the object's representation and of the perception process, we can improve the perceived model of the environment. First, we define a composite object representation that includes class information in the core object description. Second, we propose a complete perception fusion architecture based on the Evidential framework to solve the Detection and Tracking of Moving Objects (DATMO) problem, integrating the composite representation and uncertainty management. Finally, we integrate our fusion approach in a real-time application inside a vehicle demonstrator from the interactIVe IP European project, which includes three main sensors: radar, lidar and camera. We test our fusion approach using real data from different driving scenarios, focusing on four objects of interest: pedestrian, bike, car and truck.
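
    To make the Evidential combination concrete, here is a minimal sketch of Dempster's rule applied to class masses from two sensors over the paper's four classes of interest; the mass assignments are invented for illustration and are not the authors' values.

```python
# Minimal Dempster's rule sketch for combining class evidence from two
# sensors (illustrative masses, not the paper's numbers). Focal elements
# are frozensets over the frame of discernment; mass on the full frame
# encodes "unknown", and conflicting mass is renormalized away.
from itertools import product

FRAME = frozenset({"pedestrian", "bike", "car", "truck"})

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict                    # renormalization constant
    return {s: w / k for s, w in combined.items()}

lidar = {frozenset({"car", "truck"}): 0.6, FRAME: 0.4}   # shape evidence
camera = {frozenset({"car"}): 0.7, FRAME: 0.3}           # appearance evidence
print(dempster(lidar, camera))  # mass concentrates on {"car"}
```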

    Fusion at Detection Level for Frontal Object Perception

    Intelligent vehicle perception involves the correct detection and tracking of moving objects. Taking into account all the available information at early levels of the perception task can improve the final model of the environment. In this paper, we present an evidential fusion framework to represent and combine evidence from multiple lists of sensor detections. Our fusion framework considers position, shape and appearance information to represent, associate and combine sensor detections. Although our approach takes place at detection level, we propose a general architecture to include it as part of a whole perception solution. Several experiments were conducted using real data from a vehicle demonstrator equipped with three main sensors: lidar, radar and camera. The obtained results show improvements in the reduction of false detections and misclassifications of moving objects.
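
    A toy sketch of the kind of composite representation described above: each detection carries position, shape and class evidence in one record, and an association score mixes geometric distance with class agreement. The field names and weights are hypothetical, chosen only to illustrate the idea.

```python
# Toy composite-detection sketch: position + shape + class evidence in
# one record, with an association score mixing geometric distance and
# class agreement (field names and weights are illustrative).
import math
from dataclasses import dataclass, field

@dataclass
class Detection:
    x: float
    y: float
    width: float                                    # shape (box width, m)
    class_mass: dict = field(default_factory=dict)  # class -> belief

def association_score(a: Detection, b: Detection, w_pos=1.0, w_cls=2.0):
    """Lower is better: distance penalty minus class-agreement bonus."""
    dist = math.hypot(a.x - b.x, a.y - b.y)
    agreement = sum(
        min(a.class_mass.get(c, 0.0), b.class_mass.get(c, 0.0))
        for c in set(a.class_mass) | set(b.class_mass)
    )
    return w_pos * dist - w_cls * agreement

lidar_det = Detection(12.0, 1.0, 1.8, {"car": 0.5})
camera_det = Detection(12.3, 0.8, 1.7, {"car": 0.8, "truck": 0.1})
print(association_score(lidar_det, camera_det))   # small -> likely same object
```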

    An Empirical Evaluation of Deep Learning on Highway Driving

    Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we present a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at the frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.
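
    The paper evaluates its own networks, but a quick way to reproduce the flavor of CNN-based vehicle detection is an off-the-shelf pretrained detector. The sketch below uses torchvision's Faster R-CNN, which is not the paper's architecture, to extract 'car' boxes from a single frame.

```python
# Off-the-shelf CNN vehicle detection sketch (torchvision Faster R-CNN,
# not the paper's network). Filters COCO predictions to the 'car' class.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)           # stand-in for a highway frame
with torch.no_grad():
    pred = model([frame])[0]              # dict with boxes, labels, scores

CAR = weights.meta["categories"].index("car")
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if label == CAR and score > 0.5:      # confidence threshold
        print(box.tolist(), float(score))
```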

    Radar/Lidar Sensor Fusion for Car-Following on Highways

    We present a real-time algorithm that enables an autonomous car to comfortably follow other cars at various speeds while keeping a safe distance, focusing on highway scenarios. A velocity and distance regulation approach is presented that depends on the position as well as the velocity of the followed car. Radar sensors provide reliable information on straight lanes but fail in curves due to their restricted field of view. Lidar sensors, on the other hand, are able to cover the regions of interest in almost all situations but do not provide precise speed information. We combine the advantages of both sensors with a sensor fusion approach in order to provide permanent and precise spatial and dynamic data. Our results from highway experiments with real traffic are described in detail.
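
    As a sketch of the distance-and-velocity regulation idea, a constant-time-gap law computes the ego acceleration from the fused gap and relative-speed estimates. The controller form, gains and time gap below are assumptions for illustration, not the authors' tuning.

```python
# Constant-time-gap car-following sketch (illustrative gains, not the
# paper's controller). The fused radar/lidar estimate supplies the gap
# to the lead car and the relative speed; the controller regulates the
# gap toward ego_speed * time_gap plus a standstill distance.
def following_accel(gap, rel_speed, ego_speed,
                    time_gap=1.8, standstill=5.0,
                    k_gap=0.23, k_speed=0.74, a_max=2.0):
    desired_gap = standstill + time_gap * ego_speed
    accel = k_gap * (gap - desired_gap) + k_speed * rel_speed
    return max(-a_max, min(a_max, accel))     # comfort saturation

# Ego at 25 m/s, lead car 40 m ahead and closing at 2 m/s:
print(following_accel(gap=40.0, rel_speed=-2.0, ego_speed=25.0))  # -> -2.0
```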

    Vehicle recognition and tracking using a generic multi-sensor and multi-algorithm fusion approach

    This paper tackles the problem of improving the robustness of vehicle detection for Adaptive Cruise Control (ACC) applications. Our approach is based on multisensor, multi-algorithm data fusion for vehicle detection and recognition. The architecture combines two sensors: a frontal camera and a laser scanner. The improvement in robustness stems from two aspects. First, we address vision-based detection with an original approach based on fine gradient analysis, enhanced with a genetic AdaBoost-based algorithm for vehicle recognition. Then, we use the theory of evidence as a fusion framework to combine the confidence levels delivered by the algorithms in order to improve the 'vehicle versus non-vehicle' classification. The final architecture of the system is very modular, generic and flexible in that it could be used for other detection applications, or with other sensors or algorithms providing the same outputs. The system was successfully implemented on a prototype vehicle and evaluated under real conditions, over various multisensor databases and test scenarios, demonstrating very good performance.
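
    To illustrate the boosted recognition stage in isolation, scikit-learn's AdaBoostClassifier gives the same ensemble flavor; the features and data below are synthetic stand-ins, not the paper's fine-gradient descriptors or its genetic AdaBoost variant.

```python
# Boosted vehicle/non-vehicle classifier sketch (synthetic features,
# not the paper's gradient analysis or genetic AdaBoost).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
# Stand-in "gradient" feature vectors: class 1 = vehicle, 0 = non-vehicle.
X = np.vstack([rng.normal(0.0, 1.0, (200, 16)),
               rng.normal(1.5, 1.0, (200, 16))])
y = np.array([0] * 200 + [1] * 200)

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
proba = clf.predict_proba(rng.normal(1.5, 1.0, (1, 16)))[0, 1]
print(f"vehicle confidence: {proba:.2f}")  # could feed an evidential fusion stage
```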