Sensor fusion methodology for vehicle detection
A novel sensor fusion methodology is presented that provides intelligent vehicles with augmented environment information and knowledge, enabled by a vision-based system, a laser sensor and a global positioning system. The presented approach contributes to safer roads through data fusion techniques, especially on single-lane carriageways, where casualties are higher than on other road classes, and focuses on the interplay between drivers and intelligent vehicles. The system builds on the reliability of the laser scanner for obstacle detection, camera-based identification techniques, and advanced tracking and data association algorithms, namely the Unscented Kalman Filter and Joint Probabilistic Data Association. The achieved results foster the implementation of the sensor fusion methodology in forthcoming Intelligent Transportation Systems.
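The tracking algorithm named in the abstract can be sketched at its core as an Unscented Kalman Filter step. The following is a minimal, generic sketch, not the authors' implementation: a 1-D constant-velocity target tracked from laser range measurements, where the motion model, noise values and `kappa` weighting are all illustrative assumptions.

```python
import numpy as np

def sigma_points(x, P, kappa=0.0):
    # Standard (Julier) sigma-point set: 2n+1 points with weights.
    n = x.size
    S = np.linalg.cholesky((n + kappa) * P)
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(x, P, z, f, h, Q, R):
    # Predict: propagate sigma points through the motion model f.
    X, w = sigma_points(x, P)
    Xp = np.array([f(p) for p in X])
    xp = w @ Xp
    Pp = Q + sum(wi * np.outer(d, d) for wi, d in zip(w, Xp - xp))
    # Update: propagate through the measurement model h, then the usual gain.
    X2, w2 = sigma_points(xp, Pp)
    Z = np.array([h(p) for p in X2])
    zp = w2 @ Z
    Pzz = R + sum(wi * np.outer(d, d) for wi, d in zip(w2, Z - zp))
    Pxz = sum(wi * np.outer(dx, dz) for wi, dx, dz in zip(w2, X2 - xp, Z - zp))
    K = Pxz @ np.linalg.inv(Pzz)
    return xp + K @ (z - zp), Pp - K @ Pzz @ K.T

# Hypothetical setup: state [position, velocity], laser measures position only.
dt = 0.1
f = lambda s: np.array([s[0] + dt * s[1], s[1]])   # constant-velocity motion
h = lambda s: np.array([s[0]])                      # laser range measurement
x, P = np.array([0.0, 10.0]), np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.25]])
x, P = ukf_step(x, P, np.array([1.05]), f, h, Q, R)
```

For nonlinear `f` or `h` (e.g. range-bearing laser returns) the same code applies unchanged, which is the practical appeal of the sigma-point formulation over an extended Kalman filter.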
Multisensor Data Fusion Strategies for Advanced Driver Assistance Systems
Multisensor data fusion and integration is a rapidly evolving research area that requires interdisciplinary knowledge in control theory, signal processing, artificial intelligence, probability and statistics, etc. Multisensor data fusion refers to the synergistic combination of sensory data from multiple sensors and related information to provide more reliable and accurate information than could be achieved using a single, independent sensor (Luo et al., 2007). Multisensor data fusion is a multilevel, multifaceted process dealing with the automatic detection, association, correlation, estimation, and combination of data from single and multiple information sources. The results of the data fusion process help users make decisions in complicated scenarios. Integration of multiple sensor data was originally needed for military applications in ocean surveillance, air-to-air and surface-to-air defence, and battlefield intelligence. More recently, multisensor data fusion has also been applied in nonmilitary fields such as remote environmental sensing, medical diagnosis, automated monitoring of equipment, robotics, and automotive systems (Macci et al., 2008). The potential advantages of multisensor fusion and integration are redundancy, complementarity, timeliness, and cost of the information. The integration or fusion of redundant information can reduce overall uncertainty and thus serve to increase the accuracy with which features are perceived by the system. Multiple sensors providing redundant information can also serve to increase reliability in the case of sensor error or failure. Complementary information from multiple sensors allows features in the environment to be perceived that are impossible to perceive using just the information from each individual sensor operating separately (Luo et al., 2007). Besides, driving, as one of our daily activities, is a complex task involving a great amount of interaction between driver and vehicle.
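The claim that fusing redundant information reduces overall uncertainty can be made concrete with inverse-variance weighting, the standard maximum-likelihood rule for combining two independent readings of the same quantity. The sensor names and noise figures below are invented for illustration.

```python
def fuse(z1, var1, z2, var2):
    # Inverse-variance (maximum-likelihood) fusion of two redundant readings:
    # the fused variance is always smaller than either input variance.
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return z, var

# Hypothetical radar (low noise) and camera (higher noise) range estimates:
z, var = fuse(20.0, 0.04, 20.6, 0.36)
```

The fused estimate lands between the two readings, weighted toward the more precise sensor, which is the quantitative content of the "redundancy reduces uncertainty" argument above.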
Drivers regularly share their attention among operating the vehicle, monitoring traffic and nearby obstacles, and performing secondary tasks such as conversing or adjusting comfort settings (e.g. temperature, radio). The complexity of the task and the uncertainty of the driving environment make driving a very dangerous task: according to a study in the European member states, there are more than 1,200,000 traffic accidents a year, with over 40,000 fatalities. This fact points up the growing demand for automotive safety systems, which aim to make a significant contribution to overall road safety (Tatschke et al., 2006). Therefore, there has recently been an increased number of research activities focusing on Driver Assistance System (DAS) development.
Intelligent automatic overtaking system using vision for vehicle detection
There is clear evidence that investment in intelligent transportation system technologies brings major social and economic benefits. Technological advances in the area of automatic systems in particular are becoming vital for the reduction of road deaths. We here describe our approach to the automation of one of the riskiest autonomous manœuvres involving vehicles – overtaking. The approach is based on a stereo vision system responsible for detecting any preceding vehicle and triggering the autonomous overtaking manœuvre. To this end, a fuzzy-logic based controller was developed to emulate how humans overtake. Its input is information from the vision system and from a positioning-based system consisting of a differential global positioning system (DGPS) and an inertial measurement unit (IMU). Its output is the generation of action on the vehicle’s actuators, i.e., the steering wheel and throttle and brake pedals. The system has been incorporated into a commercial Citroën car and tested on the private driving circuit at the facilities of our research center, CAR, with different preceding vehicles – a motorbike, car, and truck – with encouraging results.
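As a toy illustration of the fuzzy-logic control idea, and not the controller described in the paper, the sketch below maps a lateral error to a steering command with triangular membership functions and weighted-average defuzzification; all membership breakpoints and output values are made-up assumptions.

```python
def tri(x, a, b, c):
    # Triangular membership function rising from a, peaking at b, falling to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steering(lateral_error):
    # Rules: error LEFT -> steer RIGHT, CENTER -> hold, error RIGHT -> steer LEFT.
    mu = {"left":   tri(lateral_error, -2.0, -1.0, 0.0),
          "center": tri(lateral_error, -1.0,  0.0, 1.0),
          "right":  tri(lateral_error,  0.0,  1.0, 2.0)}
    out = {"left": -0.5, "zero": 0.0, "right": 0.5}  # crisp consequents (rad)
    num = (mu["left"] * out["right"] + mu["center"] * out["zero"]
           + mu["right"] * out["left"])
    den = mu["left"] + mu["center"] + mu["right"]
    return num / den if den else 0.0
```

Overlapping memberships make the output vary smoothly between rules, which is what lets a rule base like this "emulate how humans overtake" rather than switching abruptly between fixed commands.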
Data Fusion for Overtaking Vehicle Detection Based on Radar and Optical Flow
Trustworthiness is a key point when dealing with vehicle safety applications. In this paper an approach to a real application is presented, able to fulfil the requirements of such demanding applications. Most commercial sensors available nowadays are designed to detect vehicles ahead but lack the ability to detect overtaking vehicles. The work presented here combines the information provided by two sensors, a Stop&Go radar and a camera. Fusion is done by using the unprocessed information from the radar and computer vision based on optical flow. The basic capabilities of the commercial systems are upgraded, giving the possibility to improve the front vehicle detection system by detecting overtaking vehicles with a high positive rate.

This work was supported by the Spanish Government through the Cicyt projects FEDORA (Grant TRA2010-20225-C03-01) and D3System (TRA2011-29454-C03-02). The BRAiVE prototype has been developed in the framework of the Open intelligent systems for Future Autonomous Vehicles (OFAV) Projects funded by the European Research Council (ERC) within an Advanced Investigation Grant.
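A rough sketch of the radar-plus-optical-flow idea, not the paper's system: estimate image motion in a side-view region with a single-window Lucas-Kanade least-squares solve, then raise an overtaking alert only when the radar also reports a close target. The gate distance and flow threshold are arbitrary placeholder values.

```python
import numpy as np

def lucas_kanade_window(prev, curr):
    # Single-window Lucas-Kanade: least-squares solve of Ix*u + Iy*v = -It
    # over every pixel of the window (prev and curr are float grayscale ROIs).
    Ix = np.gradient(prev, axis=1).ravel()
    Iy = np.gradient(prev, axis=0).ravel()
    It = (curr - prev).ravel()
    A = np.stack([Ix, Iy], axis=1)
    u, v = np.linalg.lstsq(A, -It, rcond=None)[0]
    return u, v

def overtaking_alert(prev_roi, curr_roi, radar_range_m,
                     gate_m=15.0, flow_thresh=0.5):
    # Fuse: alert only when the radar sees a close target AND the image
    # region shows strong apparent motion (assumed thresholds).
    u, v = lucas_kanade_window(prev_roi, curr_roi)
    return bool(radar_range_m < gate_m and abs(u) > flow_thresh)

# Synthetic check: a horizontal ramp image shifted by one pixel gives u ~ 1.
prev = np.tile(np.arange(20.0), (20, 1))
curr = prev - 1.0
u, v = lucas_kanade_window(prev, curr)
alert = overtaking_alert(prev, curr, radar_range_m=10.0)
```

Requiring both cues to agree is what raises the positive rate without flooding the system with false alarms from camera motion alone.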
An evaluation framework for stereo-based driver assistance
This is the post-print version of the article. Copyright @ 2012 Springer Verlag.

The accuracy of stereo algorithms or optical flow methods is commonly assessed by comparing the results against the Middlebury database. However, equivalent data for automotive or robotics applications rarely exist, as they are difficult to obtain. As our main contribution, we introduce an evaluation framework tailored for stereo-based driver assistance, able to deliver excellent performance measures while circumventing manual labeling effort. Within this framework one can combine several ways of ground-truthing, different comparison metrics, and large image databases. Using our framework we show examples of several types of ground-truthing techniques: implicit ground truthing (e.g. sequences recorded without a crash occurring), robotic vehicles with high-precision sensors, and, to a small extent, manual labeling. To show the effectiveness of our evaluation framework we compare three different stereo algorithms at pixel and object level. In more detail, we evaluate an intermediate representation called the Stixel World. Besides evaluating the accuracy of the Stixels, we investigate the completeness (equivalent to the detection rate) of the Stixel World versus the number of phantom Stixels. Among many findings, using this framework enables us to reduce the number of phantom Stixels by a factor of three compared to the base parametrization. This base parametrization had already been optimized by test-driving vehicles for distances exceeding 10,000 km.
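The completeness-versus-phantom trade-off described above can be expressed as a simple scoring function. This is a schematic sketch with a hypothetical 1-D column representation of Stixels and an arbitrary matching tolerance, not the paper's evaluation code.

```python
def stixel_scores(gt_columns, det_columns, tol_px=2):
    # completeness: fraction of ground-truth stixels matched by a detection
    # (the detection rate); phantoms: detections matching no ground truth.
    matched = {g for g in gt_columns
               if any(abs(g - d) <= tol_px for d in det_columns)}
    phantoms = [d for d in det_columns
                if all(abs(d - g) > tol_px for g in gt_columns)]
    completeness = len(matched) / len(gt_columns) if gt_columns else 1.0
    return completeness, len(phantoms)

# Toy example: three ground-truth stixels, one detection is a phantom.
comp, n_phantom = stixel_scores([10, 20, 30], [11, 29, 50])
```

Tuning a detector then means pushing `n_phantom` down (the factor-of-three reduction reported above) without letting `completeness` drop.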
A connected autonomous vehicle testbed: Capabilities, experimental processes and lessons learned
VENTURER was one of the first three UK government funded research and innovation projects on Connected Autonomous Vehicles (CAVs) and was conducted predominantly in the South West region of the country. A series of increasingly complex scenarios conducted in an urban setting were used to: (i) evaluate the technology created as a part of the project; (ii) systematically assess participant responses to CAVs; and (iii) inform the development of potential insurance models and legal frameworks. Developing this understanding contributed key steps towards facilitating the deployment of CAVs on UK roads. This paper aims to describe the VENTURER Project trials, their objectives, and some of the key technologies used. Importantly, we aim to introduce some informative challenges that were overcome, and the subsequent project and technological lessons learned, in the hope of helping others plan and execute future CAV research. The project successfully integrated several technologies crucial to CAV development. These included: a Decision Making System using behaviour trees to make high-level decisions; a pilot-control system to smoothly and comfortably turn plans into throttle and steering actuation; sensing and perception systems to make sense of raw sensor data; and inter-CAV wireless communication capable of demonstrating vehicle-to-vehicle communication of potential hazards. The closely coupled technology integration, testing, and participant-focused trial schedule led to a greatly improved understanding of the engineering and societal barriers that CAV development faces. From a behavioural standpoint, the importance of reliability and repeatability far outweighs the need for novel trajectories; and while the sensor-to-perception capabilities are critical, the process of verification and validation is extremely time-consuming. Additionally, the capabilities that can be leveraged from inter-CAV communications show the potential for improved road safety that could result.
Importantly, to effectively conduct human factors experiments in the CAV sector under consistent and repeatable conditions, one needs to define a scripted and stable set of scenarios that uses reliable equipment and a controllable environmental setting. This requirement can often be at odds with making significant technology developments, and if both are part of a project’s goals then they may need to be separated from each other
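The Decision Making System's use of behaviour trees for high-level decisions can be illustrated with a minimal selector/sequence implementation. The node classes and the two-rule driving policy below are a generic sketch, not VENTURER's code.

```python
class Node:
    def tick(self, bb):  # returns "SUCCESS", "FAILURE", or "RUNNING"
        raise NotImplementedError

class Condition(Node):
    def __init__(self, fn):
        self.fn = fn
    def tick(self, bb):
        return "SUCCESS" if self.fn(bb) else "FAILURE"

class Action(Node):
    def __init__(self, name):
        self.name = name
    def tick(self, bb):
        bb["decision"] = self.name  # record the chosen high-level decision
        return "SUCCESS"

class Sequence(Node):
    # Runs children in order; fails as soon as one child fails.
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for c in self.children:
            s = c.tick(bb)
            if s != "SUCCESS":
                return s
        return "SUCCESS"

class Selector(Node):
    # Tries children in priority order; succeeds on the first that does.
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for c in self.children:
            s = c.tick(bb)
            if s != "FAILURE":
                return s
        return "FAILURE"

# Priority-ordered driving policy: stop for hazards, otherwise follow route.
tree = Selector(
    Sequence(Condition(lambda bb: bb["hazard_ahead"]), Action("emergency_stop")),
    Action("follow_route"),
)
```

Ticking the tree each control cycle re-evaluates the conditions from scratch, which is what makes behaviour-tree policies reactive and easy to extend with new priority rules.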
er.autopilot 1.0: The Full Autonomous Stack for Oval Racing at High Speeds
The Indy Autonomous Challenge (IAC) brought together for the first time in history nine autonomous racing teams competing at unprecedented speeds and in head-to-head scenarios, using independently developed software on open-wheel racecars. This paper presents the complete software architecture used by team TII EuroRacing (TII-ER), covering all the modules needed to avoid static obstacles, perform active overtakes and reach speeds above 75 m/s (270 km/h). In addition to the most common modules related to perception, planning, and control, we discuss the approaches used for vehicle dynamics modelling, simulation, telemetry, and safety. Overall results and the performance of each module are described, as well as the lessons learned during the first two events of the competition on oval tracks, where the team placed second and third respectively.

Comment: Preprint accepted to Field Robotics, "Opportunities and Challenges with Autonomous Racing" Special Issue.