    Robust Vehicle Detection and Distance Estimation Under Challenging Lighting Conditions

    Avoiding the high computational costs and calibration issues of stereo-vision-based algorithms, this paper proposes real-time monocular-vision-based techniques for simultaneous vehicle detection and inter-vehicle distance estimation, whose performance and robustness remain competitive even on highly challenging benchmark datasets. The paper develops a collision warning system that detects vehicles ahead and identifies safety distances to assist a distracted driver before an imminent crash. We introduce adaptive global Haar-like features for vehicle detection, tail-light segmentation, virtual symmetry detection, and inter-vehicle distance estimation, as well as an efficient single-sensor multi-feature fusion technique to enhance the accuracy and robustness of the algorithm. The proposed algorithm detects vehicles ahead both day and night, and at both short and long range. Experimental results under various weather and lighting conditions (including sunny, rainy, foggy, and snowy) show that the proposed algorithm outperforms state-of-the-art algorithms.
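
    As a back-of-the-envelope illustration of the distance estimation step, the standard pinhole-camera relation ties a detection's pixel width to its range. In this sketch the focal length and the assumed real-world vehicle width are hypothetical placeholders, not the paper's calibration or method:

```python
# Minimal pinhole-model range estimate from a monocular detection.
# Both constants below are illustrative assumptions.
FOCAL_LENGTH_PX = 1000.0   # camera focal length in pixels (assumed)
VEHICLE_WIDTH_M = 1.8      # assumed average real-world vehicle width

def estimate_distance(bbox_width_px: float) -> float:
    """Range to a detected vehicle from its bounding-box width,
    using the similar-triangles relation Z = f * W / w."""
    if bbox_width_px <= 0:
        raise ValueError("bounding-box width must be positive")
    return FOCAL_LENGTH_PX * VEHICLE_WIDTH_M / bbox_width_px

# A 90-pixel-wide detection maps to a range of about 20 m.
print(f"{estimate_distance(90):.1f} m")
```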

    Context Exploitation in Data Fusion

    Complex and dynamic environments constitute a challenge for existing tracking algorithms. For this reason, modern solutions try to utilize any available information that could help to constrain, improve, or explain the measurements. So-called Context Information (CI) is understood as information that surrounds an element of interest, knowledge of which may help in understanding the (estimated) situation and in reacting to it. However, context discovery and exploitation are still largely unexplored research topics. Until now, context has been exploited mainly as a parameter in system and measurement models, which has led to numerous approaches for linear and non-linear constrained estimation and target tracking. More specifically, spatial or static context is the most common source of ambient information, i.e., features used for recursive enhancement of the state variables in either the prediction or the measurement update of the filters. In the case of multiple-model estimators, context can be related not only to the state but also to a particular mode of the filter. Common practice in multiple-model scenarios is to represent states and context as a joint distribution of Gaussian mixtures; these approaches are commonly referred to as joint tracking and classification. Alternatively, the usefulness of context has also been demonstrated in aiding measurement data association. The process of formulating a hypothesis that assigns a particular measurement to a track is traditionally governed by empirical knowledge of the noise characteristics of the sensors and the operating environment, i.e., probability of detection, false alarms, and clutter noise, and can be further refined by conditioning on context. We believe that interactions between the environment and the object can be classified into actions, activities, and intents, and formed into structured graphs with contextual links translated into arcs. By learning the environment model, we can predict the target's future actions from its past observations, and the probability of a future action can be used in the fusion process to adjust the tracker's confidence in the measurements. By incorporating contextual knowledge of the environment, in the form of a likelihood function, into the filter measurement update step, we have been able to reduce the uncertainty of the tracking solution and improve the consistency of the track. The promising results demonstrate that fusing CI brings a significant performance improvement over regular tracking approaches.
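
    As a sketch of how context can enter the measurement update as a likelihood term, one might weight the usual Gaussian innovation likelihood by a context-derived prior, here a hypothetical road-network test; the functional forms are illustrative assumptions, not the authors' model:

```python
import numpy as np

def measurement_likelihood(z, z_pred, S):
    """Gaussian likelihood of measurement z given the predicted
    measurement z_pred and innovation covariance S."""
    d = z - z_pred
    norm = np.sqrt((2 * np.pi) ** len(z) * np.linalg.det(S))
    return float(np.exp(-0.5 * d @ np.linalg.solve(S, d)) / norm)

def context_prior(z, on_road):
    """Hypothetical context term: off-road positions are heavily
    down-weighted for a ground vehicle."""
    return 1.0 if on_road(z) else 0.05

def association_score(z, z_pred, S, on_road):
    # Context enters as a multiplicative term on the sensor likelihood.
    return measurement_likelihood(z, z_pred, S) * context_prior(z, on_road)

# Toy check with a "road" covering the half-plane x >= 0.
on_road = lambda z: z[0] >= 0.0
print(association_score(np.array([0.5, 0.2]), np.zeros(2), np.eye(2), on_road))
```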

    Vehicle recognition and tracking using a generic multi-sensor and multi-algorithm fusion approach

    This paper tackles the problem of improving the robustness of vehicle detection for Adaptive Cruise Control (ACC) applications. Our approach is based on multi-sensor, multi-algorithm data fusion for vehicle detection and recognition. Our architecture combines two sensors: a frontal camera and a laser scanner. The improvement in robustness stems from two aspects. First, we address vision-based detection with an original approach based on fine gradient analysis, enhanced with a genetic AdaBoost-based algorithm for vehicle recognition. Then, we use the theory of evidence as a fusion framework to combine the confidence levels delivered by the algorithms and thereby improve the 'vehicle versus non-vehicle' classification. The final architecture of the system is very modular, generic, and flexible, in that it could be used for other detection applications or with other sensors or algorithms providing the same outputs. The system was successfully implemented on a prototype vehicle and evaluated under real conditions on various multi-sensor databases and test scenarios, demonstrating very good performance.
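
    For a flavour of the evidential fusion step, Dempster's rule of combination over the simple frame {vehicle, non-vehicle} can merge the confidence levels delivered by the two detection algorithms; the mass values below are invented for illustration:

```python
# Frame of discernment for the 'vehicle versus non-vehicle' decision.
V, N = frozenset("V"), frozenset("N")
VN = V | N   # ignorance: 'vehicle or non-vehicle'

def dempster_combine(m1, m2):
    """Combine two basic belief assignments with Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb       # contradictory evidence
    k = 1.0 - conflict                    # normalisation constant
    return {h: v / k for h, v in combined.items()}

vision = {V: 0.6, N: 0.1, VN: 0.3}   # camera-based confidence (illustrative)
laser  = {V: 0.5, N: 0.2, VN: 0.3}   # laser-scanner confidence (illustrative)
print(dempster_combine(vision, laser))   # belief in 'vehicle' rises to ~0.76
```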

    Fusion at Detection Level for Frontal Object Perception

    Intelligent vehicle perception involves the correct detection and tracking of moving objects. Taking all available information into account at early stages of the perception task can improve the final model of the environment. In this paper, we present an evidential fusion framework to represent and combine evidence from multiple lists of sensor detections. Our fusion framework considers position, shape, and appearance information to represent, associate, and combine sensor detections. Although our approach operates at the detection level, we propose a general architecture that includes it as part of a complete perception solution. Several experiments were conducted using real data from a vehicle demonstrator equipped with three main sensors: lidar, radar, and camera. The results show improvements in reducing false detections and misclassifications of moving objects.
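
    Associating detections across sensor lists is commonly cast as an assignment problem; below is a minimal position-only sketch using the Hungarian algorithm (the gate threshold is an arbitrary choice, and the paper's framework additionally uses shape and appearance evidence):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(lidar_xy, radar_xy, gate=2.0):
    """Match detections from two sensor lists by 2D position.

    Returns index pairs (i, j) whose Euclidean distance is within the
    gate; unmatched detections remain single-sensor evidence."""
    # Pairwise distance matrix between the two detection lists.
    cost = np.linalg.norm(lidar_xy[:, None, :] - radar_xy[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]

lidar = np.array([[10.0, 1.0], [25.0, -3.0]])
radar = np.array([[10.4, 1.2], [60.0, 0.0]])
print(associate(lidar, radar))   # [(0, 0)]: the distant radar return stays unmatched
```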

    Sensor Fusion for Object Detection and Tracking in Autonomous Vehicles

    Autonomous driving vehicles depend on their perception system to understand the environment and identify all static and dynamic obstacles surrounding the vehicle. The perception system in an autonomous vehicle uses the sensory data obtained from different sensor modalities to perform a variety of tasks such as object detection and object tracking. Combining the outputs of different sensors to obtain a more reliable and robust outcome is called sensor fusion. This dissertation studies the problem of sensor fusion for object detection and object tracking in autonomous driving vehicles and explores different approaches for utilizing deep neural networks to accurately and efficiently fuse sensory data from different sensing modalities. In particular, it focuses on fusing radar and camera data for 2D and 3D object detection and object tracking tasks. First, the effectiveness of radar and camera fusion for 2D object detection is investigated by introducing a radar region proposal algorithm for generating object proposals in a two-stage object detection network. The evaluation results show significant improvements in speed and accuracy compared to a vision-based proposal generation method. Next, radar and camera fusion is used for joint object detection and depth estimation, where the radar data is not only used in conjunction with image features to generate object proposals but also provides accurate depth estimates for the detected objects in the scene. A fusion algorithm is also proposed for 3D object detection, where the depth and velocity data obtained from the radar are fused with the camera images to detect objects in 3D and accurately estimate their velocities without requiring any temporal information. Finally, radar and camera sensor fusion is used for 3D multi-object tracking by introducing an end-to-end trainable, online network capable of tracking objects in real time.
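
    The radar region proposal idea can be sketched as projecting each radar detection into the image plane and centring a depth-scaled anchor box on it; the camera intrinsics and the anchor sizing below are hypothetical placeholders, not the dissertation's implementation:

```python
import numpy as np

# Hypothetical pinhole intrinsics; not an actual sensor calibration.
K = np.array([[1266.0,    0.0, 800.0],
              [   0.0, 1266.0, 450.0],
              [   0.0,    0.0,   1.0]])

def radar_proposals(radar_points_cam, base_size_m=2.0):
    """Generate image-plane proposal boxes from radar points.

    radar_points_cam: (N, 3) points in the camera frame (x right,
    y down, z forward). Farther objects get proportionally smaller boxes."""
    boxes = []
    for x, y, z in radar_points_cam:
        if z <= 0:                        # behind the camera
            continue
        u = K[0, 0] * x / z + K[0, 2]     # project to pixel coordinates
        v = K[1, 1] * y / z + K[1, 2]
        half = 0.5 * K[0, 0] * base_size_m / z   # metres -> pixels at depth z
        boxes.append((u - half, v - half, u + half, v + half, z))
    return boxes   # (x1, y1, x2, y2, depth) per proposal

print(radar_proposals(np.array([[2.0, 0.5, 20.0]])))
```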

    Sensor fusion in smart camera networks for ambient intelligence

    This short report introduces the topics of PhD research conducted from 2008 to 2013 and defended in July 2013. The PhD thesis covers sensor fusion theory, gathers it into a framework with design rules for the fusion-friendly design of vision networks, and elaborates on the rules through fusion experiments performed with four distinct applications of Ambient Intelligence.

    Large-Scale Traffic Flow Prediction Using Deep Learning in the Context of Smart Mobility

    Designing and developing a new generation of cities around the world (termed smart cities) is fast becoming one of the ultimate solutions to problems such as population growth, pollution, energy crises, and pressure on existing transportation infrastructure. One of the major aspects of a smart city is smart mobility. Smart mobility aims at improving transportation systems in several respects: city logistics, info-mobility, and people-mobility. The emergence of the Internet of Car (IoC) phenomenon, alongside the development of Intelligent Transportation Systems (ITSs), opens opportunities for improving traffic management systems and assisting travelers and authorities in their decision-making processes. However, this has given rise to huge amounts of data originating from human-device and device-device interaction. This is both an opportunity and a challenge, and smart mobility will not meet its full potential unless valuable insights are extracted from these big data. Although the smart city environment and the IoC allow for the generation and exchange of large amounts of data, there are not yet well-defined and mature approaches for mining this wealth of information to benefit drivers and traffic authorities. The main reason is most likely related to fundamental challenges in dealing with big data of various types and uncertain frequency coming from diverse sources. In particular, the handling of heterogeneous data types and the analysis of uncertainty in the predictions stand out as challenging areas of study that have not yet been tackled. Important issues such as the nature of the data, i.e., stationary or non-stationary, and the prediction task, i.e., short-term or long-term, should also be taken into consideration. Based on these observations, this thesis proposes a data-driven traffic flow prediction framework for the big data environment. The main goal of this framework is to enhance the quality of traffic flow predictions, which can be used to assist travelers and traffic authorities in their decision-making (whether for travel or management purposes). The proposed framework is built around four main aspects that tackle major data-driven traffic flow prediction problems: the fusion of hard data for traffic flow prediction; the fusion of soft data for traffic flow prediction; the prediction of non-stationary traffic flow; and the prediction of multi-step traffic flow. All these aspects are investigated and formulated as computational tools/algorithms/approaches tailored to the nature of the data at hand. The first tool tackles the inherent big data problems and deals with uncertainty in the prediction. It relies on the ability of deep learning approaches to handle huge amounts of data generated by a large-scale and complex transportation system with limited prior knowledge. Furthermore, motivated by the close correlation between road traffic and weather conditions, a novel deep-learning-based approach that predicts traffic flow by fusing traffic history and weather data is proposed. The second tool fuses streams of data (hard data) and event-based data (soft data) using Dempster-Shafer Evidence Theory (DSET). One of the main features of DSET is its ability to capture uncertainties in probabilities. Subsequently, an extension of DSET, namely Dempster's conditional rules for updating belief, is used to fuse traffic prediction beliefs coming from stream-based and event-based data sources.
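
    As an illustration of the first tool's traffic-and-weather fusion, a minimal deep model might encode the recent flow history with a recurrent layer and concatenate it with weather features before a prediction head. The architecture and layer sizes here are assumptions for the sketch, not the thesis design:

```python
import torch
import torch.nn as nn

class TrafficWeatherNet(nn.Module):
    """Toy fusion model: LSTM over the flow history + dense weather branch."""
    def __init__(self, n_weather=4, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + n_weather, 32), nn.ReLU(),
            nn.Linear(32, 1),                  # next-step flow estimate
        )

    def forward(self, flow_hist, weather):
        # flow_hist: (batch, steps, 1); weather: (batch, n_weather)
        _, (h, _) = self.encoder(flow_hist)
        fused = torch.cat([h[-1], weather], dim=1)   # late fusion by concatenation
        return self.head(fused)

model = TrafficWeatherNet()
pred = model(torch.randn(8, 12, 1), torch.randn(8, 4))
print(pred.shape)   # torch.Size([8, 1])
```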

    The third tool consists of a method for detecting non-stationarities in the traffic flow and an algorithm for performing online adaptations of the traffic prediction model. The proposed detection approach works by monitoring the evolution of the spectral content of the traffic flow. Furthermore, the approach is specifically developed to work in conjunction with state-of-the-art machine learning methods such as Deep Neural Networks (DNNs). By combining the power of frequency-domain features with the known generalization capability and scalability of DNNs in handling real-world data, high prediction performance can be expected. The last tool is developed to improve multi-step traffic flow prediction in the recursive and multi-output settings. In the recursive setting, an algorithm that augments the information about the current time step is proposed. This algorithm, called Conditional Data as Demonstrator (C-DaD), is an extension of an algorithm called Data as Demonstrator (DaD). In the multi-output setting, a novel approach is developed that uses a Conditional Generative Adversarial Network (C-GAN) to generate new history-future pairs of data, which are aggregated with the original training data. To demonstrate the capabilities of the proposed approaches, a series of experiments using artificial and real-world data is conducted. Each of the proposed approaches is compared with state-of-the-art or currently existing approaches.
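
    The spectral-monitoring idea behind the non-stationarity detector can be approximated by comparing the normalised magnitude spectra of successive sliding windows; the window length and the distance threshold below are arbitrary illustrative choices:

```python
import numpy as np

def spectral_change(signal, win=64, threshold=0.5):
    """Flag window boundaries where the normalised magnitude spectrum
    drifts away from the previous window (a crude non-stationarity cue)."""
    flags, prev = [], None
    for start in range(0, len(signal) - win + 1, win):
        spec = np.abs(np.fft.rfft(signal[start:start + win]))
        spec /= np.linalg.norm(spec) + 1e-12
        if prev is not None:
            flags.append(bool(np.linalg.norm(spec - prev) > threshold))
        prev = spec
    return flags

t = np.arange(512)
flow = np.sin(2 * np.pi * t / 32)              # stationary periodic regime
flow[256:] = np.sin(2 * np.pi * t[256:] / 8)   # abrupt regime change mid-stream
print(spectral_change(flow))   # True only at the boundary where the regime shifts
```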