22 research outputs found

    Enhancing Vehicular Perception: A Comprehensive Analysis of Sensor Fusion Performance through Weighted Averages and Fuzzy C-Means for Optimal Data Association

    This work explores the implementation of sensor fusion and data association for autonomous vehicle design. Advancements in Advanced Driver Assistance System (ADAS) technology have driven the development of perception algorithms required for higher levels of autonomy in vehicles. Perception algorithms process data collected from radar, camera, and LiDAR sensors to generate a complete model of the ego vehicle’s surrounding environment. Fusion of data from these sensors is important for accurate measurement of longitudinal and lateral distances to surrounding objects. Sensor fusion associates sensor detections with each other through different data association techniques, which can consist of independent assignment of sensor detections, the Hungarian algorithm, or clustering algorithms such as Fuzzy C-Means (FCM). One baseline sensor fusion technique is a simple weighted average, which can yield satisfactory accuracy. The goal of this work is to evaluate the performance of sensor fusion that uses a weighted average together with an advanced Fuzzy C-Means data association algorithm. The results are applied to a modified Chevrolet Blazer used by WVU for the EcoCAR Mobility Challenge (EMC) Year 4 competition. A secondary goal of this work is to implement FCM for data association and compare its performance to the same initial sensor fusion design. For weighted average testing, real-world sensor data from the Intel Mobileye 630 camera and Bosch Mid-Range Radar are used to evaluate different static weights. The results from the static weights are then used to create dynamic weights for the weighted average, and the performance of static and dynamic weights is compared. For data association testing, simulated sensor data from camera and radar detection models are used to compare the detection association performance of the baseline sensor fusion with that of the FCM-based sensor fusion. Results show dynamic weights reduced the baseline sensor fusion’s longitudinal distance error by 6.80% for approach tests and 5.21% for departure tests. For data association testing, the baseline sensor fusion had an average accuracy of 66.65% and the FCM implementation 51.65% at 100% probability of detection, whereas at 25% probability of detection the baseline dropped to 20.19% and FCM reached 40.81%. Recommendations are made to improve the longitudinal distance accuracy of the weighted average and to extend the FCM research to real-world sensor data.
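
    As a rough illustration of the two techniques named in this abstract, the sketch below shows a weighted average of radar and camera range estimates and the standard Fuzzy C-Means membership rule used for association. This is a minimal sketch, not the thesis implementation: the weight values, the range threshold for switching weights, and the fuzzifier m = 2 are assumptions.

        import numpy as np

        def fuse_weighted_average(radar_range, camera_range, w_radar, w_camera):
            """Weighted average of two longitudinal range estimates (weights normalized)."""
            return (w_radar * radar_range + w_camera * camera_range) / (w_radar + w_camera)

        def dynamic_weights(camera_range, near_threshold=30.0):
            """Assumed switching rule: trust the camera more at short range, the radar at long range."""
            return (0.4, 0.6) if camera_range < near_threshold else (0.8, 0.2)

        def fcm_memberships(detections, cluster_centers, m=2.0):
            """Fuzzy C-Means membership of each detection to each cluster center:
            u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))."""
            d = np.linalg.norm(detections[:, None, :] - cluster_centers[None, :, :], axis=2) + 1e-9
            ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
            return 1.0 / ratio.sum(axis=2)

        # Example: fuse one radar/camera range pair, then associate two detections to two tracks.
        w_r, w_c = dynamic_weights(camera_range=49.8)
        print(fuse_weighted_average(52.3, 49.8, w_r, w_c))
        dets = np.array([[50.0, 1.2], [12.0, -0.5]])    # (x, y) detections in metres
        tracks = np.array([[51.0, 1.0], [11.5, -0.4]])  # existing track positions
        print(fcm_memberships(dets, tracks))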

    Provident vehicle detection at night for advanced driver assistance systems

    In recent years, computer vision algorithms have become more powerful, which has enabled technologies such as autonomous driving to evolve rapidly. However, current algorithms mostly share one limitation: they rely on directly visible objects. This is a significant drawback compared to human behavior, where visual cues caused by objects (e.g., shadows) are already used intuitively to retrieve information or anticipate objects before they appear. While driving at night, this performance deficit becomes even more obvious: humans already process the light artifacts caused by the headlamps of oncoming vehicles to estimate where they will appear, whereas current object detection systems require that the oncoming vehicle be directly visible before it can be detected. Building on previous work on this subject, in this paper we present a complete system that detects the light artifacts caused by the headlights of oncoming vehicles, so that an approaching vehicle is detected providently (denoted as provident vehicle detection). To this end, an entire algorithm architecture is investigated, including detection in the image space, three-dimensional localization, and tracking of light artifacts. To demonstrate the usefulness of such an algorithm, it is deployed in a test vehicle, where the detected light artifacts are used to control the glare-free high beam system proactively (reacting before the oncoming vehicle is directly visible). Using this experimental setting, the time benefit of the provident vehicle detection system compared to an in-production computer vision system is quantified. Additionally, the glare-free high beam use case provides a real-time, real-world visualization interface of the detection results by using the adaptive headlamps as projectors. With this investigation of provident vehicle detection, we want to draw attention to the unconventional sensing task of detecting objects providently (detection based on observable visual cues the objects cause before they are visible) and further close the performance gap between human behavior and computer vision algorithms, bringing autonomous and automated driving a step forward.
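
    The abstract names an image-space detection stage for light artifacts as the first step of the architecture. The sketch below is a deliberately naive stand-in for that stage only: it thresholds a grayscale night image and returns centroids of bright blobs as candidate light artifacts. The threshold, minimum blob size, and the use of scipy.ndimage are assumptions, not the paper's detector.

        import numpy as np
        from scipy import ndimage

        def detect_light_artifacts(gray, threshold=220, min_pixels=20):
            """Return centroids (row, col) of bright connected regions in a grayscale image."""
            mask = gray >= threshold
            labels, num = ndimage.label(mask)
            centroids = []
            for lbl in range(1, num + 1):
                region = labels == lbl
                if region.sum() >= min_pixels:
                    rows, cols = np.nonzero(region)
                    centroids.append((rows.mean(), cols.mean()))
            return centroids

        # Example on a synthetic night frame with one bright patch.
        frame = np.zeros((480, 640), dtype=np.uint8)
        frame[100:110, 300:315] = 255
        print(detect_light_artifacts(frame))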

    RH-Map: Online Map Construction Framework of Dynamic Objects Removal Based on Region-wise Hash Map Structure

    Mobile robots navigating in outdoor environments frequently encounter the issue of undesired traces left by dynamic objects, which manifest as obstacles on the map and impede accurate localization and effective navigation. To tackle this problem, a novel map construction framework based on a 3D region-wise hash map structure (RH-Map) is proposed, consisting of a front-end scan fresher and a back-end removal module, which realizes real-time map construction and online dynamic object removal (DOR). First, a two-layer 3D region-wise hash map structure for map management is proposed for effective online DOR. Then, in the scan fresher, region-wise ground plane estimation (R-GPE) is adopted for estimating and preserving ground information, and Scan-to-Map Removal (S2M-R) is proposed to discriminate and remove dynamic regions. Moreover, a lightweight back-end removal module that maintains keyframes is proposed for further DOR. As experimentally verified on SemanticKITTI, the proposed framework yields promising performance on online DOR during map construction compared with state-of-the-art methods. The framework is also validated in real-world environments.
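
    The core data structure here is a hash map keyed by 3D region indices. The following is a minimal sketch of that idea only, assuming a fixed region edge length and a removal operation driven by externally supplied region keys; it does not reproduce the two-layer structure, R-GPE, or S2M-R of the paper.

        from collections import defaultdict

        REGION_SIZE = 2.0  # metres per region edge (assumed)

        def region_key(point, size=REGION_SIZE):
            """Quantize a 3D point into integer region indices usable as a hash key."""
            x, y, z = point
            return (int(x // size), int(y // size), int(z // size))

        class RegionHashMap:
            def __init__(self):
                self.regions = defaultdict(list)  # region key -> list of points

            def insert_scan(self, points):
                for p in points:
                    self.regions[region_key(p)].append(p)

            def remove_regions(self, keys):
                """Drop regions flagged as dynamic (e.g. by a scan-to-map check)."""
                for k in keys:
                    self.regions.pop(k, None)

        m = RegionHashMap()
        m.insert_scan([(1.0, 0.5, 0.2), (5.1, 2.0, 0.1)])
        m.remove_regions([region_key((5.1, 2.0, 0.1))])
        print(len(m.regions))  # 1 region remains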

    Visual computing techniques for automated LIDAR annotation with application to intelligent transport systems

    The concept of Intelligent Transport Systems (ITS) refers to the application of communication and information technologies to transport with the aim of making it more efficient, sustainable, and safer. Computer vision is increasingly being used for ITS applications, such as infrastructure management or advanced driver-assistance systems. The latest progress in computer vision, thanks to Deep Learning techniques, and the race for autonomous vehicles have created a growing need for annotated data in the automotive industry. The data to be annotated consist of images captured by the vehicles' cameras and LIDAR data in the form of point clouds. LIDAR sensors are used for tasks such as object detection and localization. The capacity of LIDAR sensors to identify objects at long distances and to provide estimates of their distance makes them very appealing for autonomous driving. This thesis presents a method to automate the annotation of lane markings with LIDAR data. The state of the art in lane marking detection based on LIDAR data is reviewed and a novel method is presented. The precision of the method is evaluated against manually annotated data. Its usefulness is also evaluated by measuring the reduction in the time required to annotate new data thanks to the automatically generated pre-annotations. Finally, the conclusions of this thesis and possible future research lines are presented.
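
    As a rough illustration of the kind of pre-annotation step described here, the sketch below keeps near-ground LIDAR points with high reflectivity as lane-marking candidates. The height band and intensity threshold are assumptions for illustration; the thesis method is more elaborate than this filter.

        import numpy as np

        def lane_marking_candidates(points, intensities,
                                    ground_z=(-2.0, -1.5), intensity_min=0.7):
            """points: (N, 3) xyz array; intensities: (N,) reflectivities in [0, 1].
            Returns the points lying in the assumed ground band with high reflectivity."""
            z = points[:, 2]
            mask = (z >= ground_z[0]) & (z <= ground_z[1]) & (intensities >= intensity_min)
            return points[mask]

        pts = np.array([[5.0, 0.1, -1.7], [5.0, 3.0, 0.5], [8.0, -0.2, -1.8]])
        inten = np.array([0.9, 0.95, 0.4])
        print(lane_marking_candidates(pts, inten))  # only the first point passes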

    On Compositional Hierarchical Models for holistic Lane and Road Perception in Intelligent Vehicles

    This work is a contribution to the vision-based perception of multi-lane roads at urban intersections. Given multiple input features, the proposed probabilistic hierarchical model infers the lane structure as well as the location of stop lines and the turn directions of individual lanes. It expresses prior expectations about the road topology using weak probabilistic constraints, which allows for the detection of parallel lanes as well as splitting and merging lanes.
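
    The notion of a "weak" probabilistic constraint can be illustrated as a soft prior that penalizes unusual lane hypotheses instead of rejecting them. The sketch below is purely illustrative and not the paper's model; the prior mean, width, and the evidence scores are assumptions.

        import math

        def soft_width_log_prior(width, mean=3.5, sigma=0.5):
            """Log-prior preferring lane widths near 3.5 m without forbidding other widths."""
            return -0.5 * ((width - mean) / sigma) ** 2

        def hypothesis_score(evidence_log_likelihood, width):
            """Combine image evidence with the soft topology prior."""
            return evidence_log_likelihood + soft_width_log_prior(width)

        # A well-supported but narrow lane can still outscore a poorly supported normal one.
        print(hypothesis_score(-1.0, 2.8), hypothesis_score(-4.0, 3.5))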

    Road terrain detection for Advanced Driver Assistance Systems

    Kühnl T. Road terrain detection for Advanced Driver Assistance Systems. Bielefeld: Bielefeld University; 2013.

    Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems

    The European research project DESERVE (DEvelopment platform for Safe and Efficient dRiVE, 2012-2015) aimed to design and develop a platform tool to cope with the continuously increasing complexity and the simultaneous need to reduce cost of future embedded Advanced Driver Assistance Systems (ADAS). For this purpose, the DESERVE platform benefits from cross-domain software reuse, standardization of automotive software component interfaces, and easy yet safety-compliant integration of heterogeneous modules. This enables the development of a new generation of ADAS applications, which combine different functions, sensors, actuators, hardware platforms, and Human Machine Interfaces (HMI) in challenging ways. This book presents the results of the DESERVE project concerning the ADAS development platform, test case functions, and the validation and evaluation of different approaches. The reader is invited to complement the content of this book with the deliverables published during the DESERVE project. Technical topics discussed in this book include: modern ADAS development platforms; design space exploration; driving modelling; video-based and radar-based ADAS functions; HMI for ADAS; and vehicle-hardware-in-the-loop validation systems.
