361 research outputs found

    A Three Resolution Framework for Reliable Road Obstacle Detection using Stereovision

    Many approaches have been proposed for in-vehicle obstacle detection using stereovision. Unfortunately, computational cost is generally a limiting factor for these methods, especially for systems using large baselines, which must explore a wide range of disparities. With this in mind, we propose a reliable three-resolution framework, designed for real-time operation even with high-resolution images and a large baseline.
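    The abstract gives no implementation details, but the core idea of a multi-resolution disparity search can be sketched as follows (an illustrative toy example, not the authors' method): scan the full disparity range only at reduced resolution, then refine a narrow band around the upsampled estimate at full resolution, which is where the computational savings come from.

```python
import numpy as np

def sad_disparity(left, right, d_lo, d_hi, block=3):
    """Brute-force SAD block matching over disparities [d_lo, d_hi]."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + d_hi, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(d_lo, d_hi + 1)]
            disp[y, x] = d_lo + int(np.argmin(costs))
    return disp

def coarse_to_fine(left, right, max_disp):
    """Scan the full range only at half resolution, then refine
    +/-1 pixel around the upsampled estimate at full resolution."""
    coarse = sad_disparity(left[::2, ::2], right[::2, ::2], 0, max_disp // 2)
    up = 2 * np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    up = up[:left.shape[0], :left.shape[1]]
    refined = up.copy()
    h, w = left.shape
    for y in range(1, h - 1):
        for x in range(1 + max_disp, w - 1):
            d0 = int(up[y, x])
            lo, hi = max(0, d0 - 1), min(max_disp, d0 + 1)
            patch = left[y - 1:y + 2, x - 1:x + 2]
            costs = [np.abs(patch - right[y - 1:y + 2, x - d - 1:x - d + 2]).sum()
                     for d in range(lo, hi + 1)]
            refined[y, x] = lo + int(np.argmin(costs))
    return refined
```

    At half resolution both the image area and the disparity range shrink, so the expensive exhaustive search is roughly eight times cheaper; the full-resolution pass then only evaluates three candidate disparities per pixel.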

    Fusion Based Safety Application for Pedestrian Detection with Danger Estimation

    Proceedings of: 14th International Conference on Information Fusion (FUSION 2011), Chicago, Illinois, USA, 5-8 July 2011. Road safety applications require the most reliable data. In recent years data fusion has become one of the main technologies for Advanced Driver Assistance Systems (ADAS), overcoming the limitations of using the available sensors in isolation and fulfilling demanding safety requirements. In this paper a real application of data fusion for road safety, pedestrian detection, is presented. Two sets of vehicle-mounted sensors, a laser scanner and a stereovision system, are used to detect pedestrians in urban environments. Both systems are mounted on the automobile research platform IVVI 2.0 to test the algorithms in real situations. The safety issues involved in developing this fusion application are described. Context information such as velocity and GPS data is also used to provide a danger estimate for each detected pedestrian. This work was supported by the Spanish Government through the Cicyt projects FEDORA (GRANT TRA2010-20225-C03-01) and VIDAS-Driver (GRANT TRA2010-21371-C03-02).
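    The paper does not spell out its danger-estimation rule, but a common way to combine a detected pedestrian's range with the ego vehicle's velocity is time-to-collision (TTC). The sketch below is purely illustrative; the function names and thresholds are assumptions, not taken from the paper.

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until impact at the current closing speed;
    infinite when the pedestrian is not being approached."""
    if closing_speed_mps <= 0:
        return float('inf')
    return distance_m / closing_speed_mps

def danger_level(ttc_s):
    """Map TTC to a coarse danger label (illustrative thresholds)."""
    if ttc_s < 1.5:
        return 'high'
    if ttc_s < 4.0:
        return 'medium'
    return 'low'
```

    For example, a pedestrian 20 m ahead with a 10 m/s closing speed gives a TTC of 2 s, a 'medium' label under these made-up thresholds.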

    Intelligent imaging systems for automotive applications

    In common with many other application areas, visual signals are becoming an increasingly important information source for automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper describes work in this field performed in C2VIP over the last decade, starting with night vision systems and moving on to various other Advanced Driver Assistance Systems. From this experience, we make the following observations, which are crucial for "intelligent" imaging systems: 1. Careful arrangement of the sensor array. 2. Dynamic self-calibration. 3. Networking and processing. 4. Fusion with other imaging sensors, at both the image level and the feature level, which provides much more flexibility and reliability in complex situations. We discuss how these problems can be addressed and what the outstanding issues are.

    Sensor fusion methodology for vehicle detection

    A novel sensor fusion methodology is presented, which provides intelligent vehicles with augmented environment information and knowledge, enabled by a vision-based system, a laser sensor and a global positioning system. The presented approach achieves safer roads through data fusion techniques, especially on single-lane carriageways, where casualties are higher than on other road classes, and focuses on the interplay between vehicle drivers and intelligent vehicles. The system is based on the reliability of the laser scanner for obstacle detection, camera-based identification techniques, and advanced tracking and data association algorithms, i.e., the Unscented Kalman Filter and Joint Probabilistic Data Association. The achieved results foster the implementation of the sensor fusion methodology in forthcoming Intelligent Transportation Systems.
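    The abstract names the Unscented Kalman Filter and Joint Probabilistic Data Association; as a minimal stand-in, a linear constant-velocity Kalman filter illustrates the predict/update cycle that such trackers share. This is a simplified sketch, not the paper's UKF: the state is (x, y, vx, vy), the sensors report position only, and all matrices and noise values are illustrative assumptions.

```python
import numpy as np

dt = 0.1                                   # assumed sensor period, seconds
F = np.array([[1, 0, dt, 0],               # constant-velocity motion model
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # sensors observe position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                       # process noise (assumed)
R = 0.25 * np.eye(2)                       # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle given position measurement z."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

    Feeding in a sequence of position fixes of a tracked obstacle yields a smoothed position plus an estimated velocity, which is exactly what the data-association stage (JPDA in the paper) needs to score which measurement belongs to which track.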

    Integrated Stereovision for an Autonomous Ground Vehicle Competing in the Darpa Grand Challenge

    The DARPA Grand Challenge (DGC) 2005 was a competition, in the form of a desert race for autonomous ground vehicles, arranged by the U.S. Defense Advanced Research Projects Agency (DARPA). The purpose was to encourage research and development of related technology. The objective of the race was to cover a distance of 131.6 miles in less than 10 hours without any human interaction. Only public GPS signals and terrain sensors were allowed for navigation and obstacle detection. One of the teams competing in the DGC was Team Caltech from the California Institute of Technology, consisting primarily of undergraduate students. The vehicle representing Team Caltech was a 2005 Ford E-350 van, named Alice. Alice had been modified for off-road driving and equipped with multiple sensors, computers and actuators. One type of terrain sensor used on Alice was stereovision: two camera pairs were used for short- and long-range obstacle detection. This master thesis concerns the development, testing and integration of the stereovision sensors during the final four months leading up to the race. To begin with, the stereovision system on Alice was not ready to use and had not undergone any testing. The work described in this thesis enabled operation of stereovision and further improved its capability, increasing the overall performance of Alice. Reliability was demonstrated through multiple desert field tests, and obstacle avoidance and navigation using only stereovision were successfully demonstrated. The completed work includes the design and implementation of algorithms to improve camera focus and exposure control, increase processing speed and remove noise. Hardware and software parameters were also configured to achieve the best possible operation. Alice managed to qualify for the race as one of the top ten vehicles. However, she was only able to complete about 8 miles before running over a concrete barrier and out of the course, as a result of hardware failures and state estimation errors.

    Calibration and Sensitivity Analysis of a Stereo Vision-Based Driver Assistance System

    On http://intechweb.org/, under the "Books" tab, search for the title "Stereo Vision" and see Chapter 1.

    Vehicle recognition and tracking using a generic multi-sensor and multi-algorithm fusion approach

    This paper tackles the problem of improving the robustness of vehicle detection for Adaptive Cruise Control (ACC) applications. Our approach is based on multi-sensor, multi-algorithm data fusion for vehicle detection and recognition. Our architecture combines two sensors: a frontal camera and a laser scanner. The improvement in robustness stems from two aspects. First, we address vision-based detection with an original approach based on fine gradient analysis, enhanced with a genetic AdaBoost-based algorithm for vehicle recognition. Then, we use the theory of evidence as a fusion framework to combine the confidence levels delivered by the algorithms, improving the 'vehicle versus non-vehicle' classification. The final architecture of the system is modular, generic and flexible, in that it could be used for other detection applications or with other sensors or algorithms providing the same outputs. The system was successfully implemented on a prototype vehicle and was evaluated under real conditions, over various multi-sensor databases and various test scenarios, showing very good performance.
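    The theory-of-evidence fusion mentioned here can be sketched concretely for the two-class frame {vehicle (V), non-vehicle (N)}, where each sensor assigns belief mass to 'V', 'N', or the ignorance set 'VN'. The combination rule below is standard Dempster's rule; the mass values in the example are made up for illustration and are not the paper's numbers.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination on the frame {V, N}.
    Masses are dicts keyed by 'V', 'N', 'VN' (ignorance)."""
    out = {'V': 0.0, 'N': 0.0, 'VN': 0.0}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            s = set(a) & set(b)          # intersection of focal sets
            p = ma * mb
            if not s:
                conflict += p            # mass assigned to the empty set
            elif s == {'V'}:
                out['V'] += p
            elif s == {'N'}:
                out['N'] += p
            else:
                out['VN'] += p
    # renormalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in out.items()}
```

    For example, if the camera reports {'V': 0.6, 'N': 0.1, 'VN': 0.3} and the laser {'V': 0.5, 'N': 0.2, 'VN': 0.3}, the fused belief in 'V' rises to about 0.76, higher than either sensor alone, which is the reinforcement effect the paper exploits.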

    Augmented Perception for Agricultural Robots Navigation

    Producing food in a sustainable way is becoming very challenging today due to the lack of skilled labor, the unaffordable cost of labor when available, and the limited returns for growers as a result of low produce prices demanded by big supermarket chains, in contrast to ever-increasing costs of inputs such as fuel, chemicals, seeds, or water. Robotics emerges as a technological advance that can counterbalance some of these challenges, mainly in industrialized countries. However, the deployment of autonomous machines in open environments exposed to uncertainty and harsh ambient conditions poses an important challenge to reliability and safety. Consequently, a deep parametrization of the working environment in real time is necessary to achieve autonomous navigation. This article proposes a navigation strategy for guiding a robot along vineyard rows for field monitoring. Given that global positioning cannot be guaranteed permanently in any vineyard, the strategy is based on local perception and results from fusing three complementary technologies: 3D vision, lidar, and ultrasonics. Several perception-based navigation algorithms were developed between 2015 and 2019. After their comparison in real environments and conditions, results showed that the augmented perception derived from combining these three technologies provides a consistent basis for outlining the intelligent behavior of agricultural robots operating within orchards. This work was supported by the European Union Research and Innovation Programs under Grant N. 737669 and Grant N. 610953. The associate editor coordinating the review of this article and approving it for publication was Dr. Oleg Sergiyenko. Rovira Más, F.; Sáiz Rubio, V.; Cuenca-Cuenca, A. (2021). Augmented Perception for Agricultural Robots Navigation. IEEE Sensors Journal, 21(10), 11712-11727. https://doi.org/10.1109/JSEN.2020.3016081
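    One simple way to see why fusing 3D vision, lidar and ultrasonics helps row guidance is inverse-variance weighting of their lateral-offset estimates: the fused estimate is always at least as precise as the best single sensor. The sketch below is illustrative; the per-sensor variances are invented for the example and not taken from the article.

```python
def fuse_offsets(measurements):
    """Inverse-variance weighted fusion of independent lateral-offset
    estimates, e.g. from 3D vision, lidar and ultrasonics.
    `measurements` is a list of (offset_m, variance) pairs."""
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * z for w, (z, _) in zip(weights, measurements)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var
```

    The fused offset lands between the sensor readings, pulled toward the most precise one, and its variance is smaller than any individual sensor's, which is the practical payoff of the augmented perception the article describes.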