3 research outputs found

    Multi-Sensor Data Fusion for Robust Environment Reconstruction in Autonomous Vehicle Applications

    In autonomous vehicle systems, understanding the surrounding environment is essential for making every movement decision on the road. Knowledge of the neighboring environment enables the vehicle to detect moving objects, and especially irregular events such as jaywalking or a sudden lane change by another vehicle, in time to avoid a collision. This local situational awareness depends largely on the advanced sensors (e.g. camera, LIDAR, RADAR) fitted to the vehicle. The main focus of this work is to formulate the problem of reconstructing the vehicle environment from point cloud data captured by the LIDAR and RGB color images captured by the camera. Building on iterated closest point (ICP), a widely used point cloud registration technique, an expectation-maximization (EM)-ICP method is proposed to automatically mosaic multiple point cloud sets into a larger one. Motion trajectories of the moving objects are then analyzed to address the issue of irregularity detection. Another contribution of this work is the fusion of color information (from the RGB color images captured by the camera) with the three-dimensional point cloud data for a richer representation of the environment. Finally, histogram of oriented gradients (HOG) based techniques are exploited to detect pedestrians and vehicles.

    Using both camera and LIDAR, an autonomous vehicle can gather information and reconstruct a map of the surrounding environment up to a certain distance. The ability to communicate and cooperate among vehicles can improve automated driving decisions by providing an extended and more precise view of the surroundings. In this work, a transmission power control algorithm is studied along with an adaptive content control algorithm to build a more accurate map of the vehicle environment.
To exchange local sensor data among the vehicles, an adaptive communication scheme is proposed that controls the lengths and the contents of the messages depending on the load of the communication channel. Exchanging this information extends the tracking region of a vehicle beyond the area sensed by its own sensors. In this experiment, the combined effect of the power control and the message length and content control algorithms is exploited to improve the accuracy of the map of the surroundings in a cooperative automated vehicle system.
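The registration step this work builds on can be illustrated with a minimal point-to-point ICP in Python (NumPy only). This is a generic sketch, not the thesis's EM-ICP: classic ICP commits to hard nearest-neighbour correspondences each iteration, whereas the EM-ICP variant replaces them with soft, probabilistic assignments. The function names and toy data below are purely illustrative.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(src, dst, iters=20):
    """Classic ICP: hard nearest-neighbour matching + Kabsch, repeated.
    (EM-ICP would instead weight all pairings probabilistically.)"""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point of cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_fit_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# toy check: misalign a cloud by a small known rotation + translation, then recover
rng = np.random.default_rng(0)
dst = rng.random((50, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.03, -0.02, 0.01])
aligned = icp(src, dst)
```

Mosaicking multiple scans, as described above, amounts to chaining such pairwise alignments so that each new cloud is registered into the growing map.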

    Real-time vehicle detection using low-cost sensors

    Improving road safety and reducing the number of accidents is one of the top priorities for the automotive industry. As human driving behaviour is one of the leading causes of road accidents, research is working towards removing control from the human driver by automating functions and ultimately introducing a fully Autonomous Vehicle (AV). A Collision Avoidance System (CAS) is one of the key safety systems for an AV, as it ensures that all potential threats ahead of the vehicle are identified and appropriate action is taken. This research focuses on the task of vehicle detection, the foundation of a CAS, and attempts to produce an effective vehicle detector based on data from a low-cost monocular camera. Developing a robust CAS based on low-cost sensors is crucial to bringing the cost of safety systems down and thereby increasing their adoption by end users. In this work, detectors are developed following the two main approaches to vehicle detection with a monocular camera. The first is the traditional image processing approach, in which visual cues are used to generate potential vehicle locations and, at a second stage, to verify the existence of vehicles in an image. The second approach is based on a Convolutional Neural Network (CNN), a computationally expensive method that unifies the detection process in a single pipeline. The goal is to determine which method is more appropriate for real-time applications. Following the first approach, a vehicle detector based on the combination of HOG features and SVM classification is developed, with modifications to the detection pipeline aimed at improving run-time performance. For the CNN-based approach, six different network models, each with a different structure and parameters, are developed and trained end to end on collected data to determine which combination produces the best results.
The evaluation of the different vehicle detectors produced some interesting findings: the first approach did not yield a working detector, while the CNN-based approach produced a high-performing vehicle detector with an 85.87% average precision and a very low miss rate. The detector performed well under different operational environments (motorway, urban and rural roads), and the results were validated on an external dataset. Additional testing indicated that the detector is suitable as a base for safety applications such as CAS, with a run-time performance of 12 FPS and potential for further improvement.
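The first approach described above (HOG features scored by a linear SVM over a sliding window) can be sketched as follows. This is a deliberately simplified illustration, not the thesis's detector: `grad_hist` collapses real HOG's cell/block structure into a single normalised orientation histogram, and the "SVM" is a hand-set weight vector rather than a trained classifier.

```python
import numpy as np

def grad_hist(patch, bins=9):
    """Toy HOG-like descriptor: one magnitude-weighted orientation histogram
    for the whole patch (real HOG uses cells plus block normalisation)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def sliding_window_scores(img, w, win=16, stride=8):
    """Score every window with a linear model w, as a linear SVM would."""
    scores = []
    for y in range(0, img.shape[0] - win + 1, stride):
        for x in range(0, img.shape[1] - win + 1, stride):
            f = grad_hist(img[y:y + win, x:x + win])
            scores.append((float(f @ w), x, y))
    return scores

# toy scene: flat background with one bright square "object"
img = np.zeros((48, 48))
img[16:32, 20:28] = 4.0
w = np.zeros(9)
w[0] = 1.0          # hand-set weights favouring gradients across vertical edges
scores = sliding_window_scores(img, w)
best = max(scores)  # highest-scoring window: (score, x, y)
```

A real detector would add multi-scale window pyramids and non-maximum suppression on top of this loop; those stages dominate the run-time cost that the thesis tries to reduce.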

    Nighttime pedestrian detection using Faster R-CNN and infrared images

    This paper presents a nighttime pedestrian detection system for vehicular safety applications. For this purpose, the performance of the Faster R-CNN algorithm on far-infrared images is analyzed, which reveals that Faster R-CNN has difficulty detecting pedestrians at long range (small scale). A new Faster R-CNN architecture dedicated to multi-scale detection is therefore introduced, with two region-of-interest (ROI) generators dedicated to short- and long-range pedestrians, named RPNCD and RPNLD respectively. This architecture has been compared against the best-performing Faster R-CNN baseline models, VGG-16 and ResNet-101. The experimental results on the CVC-09 and LSIFIR databases show improvements, especially in detecting pedestrians at long range: on the DET curve, a miss rate versus FPPI of 16%, and on the Precision vs. Recall curve, an AP of 89.85% for the pedestrian class and a mAP of 90% over the LSIFIR and CVC-09 test sets.
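The motivation for two dedicated proposal branches (RPNCD for short-range, RPNLD for long-range pedestrians) can be illustrated with anchor boxes: anchors sized for close, large-appearing pedestrians overlap distant, small ones poorly, and vice versa, so each branch covers the scale the other misses. The strides, scales, and example boxes below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def make_anchors(stride, scales, ratio, fm_size):
    """Anchor boxes (x1, y1, x2, y2) centred on each cell of a feature map."""
    boxes = []
    for i in range(fm_size):
        for j in range(fm_size):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
            for s in scales:
                w, h = s * np.sqrt(ratio), s / np.sqrt(ratio)
                boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(boxes)

def iou(box, boxes):
    """Intersection-over-union between one box and an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

# hypothetical branches on a 128x128 image, pedestrian aspect ratio w/h = 0.4
far_branch = make_anchors(stride=8, scales=[16, 24], ratio=0.4, fm_size=16)   # small anchors
near_branch = make_anchors(stride=16, scales=[64, 96], ratio=0.4, fm_size=8)  # large anchors

far_ped = np.array([40.0, 40.0, 50.0, 65.0])     # distant pedestrian, ~10x25 px
near_ped = np.array([52.0, 14.0, 92.0, 114.0])   # close pedestrian, ~40x100 px
```

With the usual IoU > 0.5 positive-matching rule, each toy pedestrian is matched only by the branch whose anchors are sized for it, which is the gap the paper's dual-RPN design addresses.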