120 research outputs found

    Road edge and lane boundary detection using laser and vision

    This paper presents a methodology for extracting road-edge and lane information for intelligent vehicle navigation. The range information provided by a fast laser range-measuring device is processed by an extended Kalman filter to extract the road edge or curb. The resulting road-edge information is then used to aid extraction of the lane boundary from a CCD camera image. A Hough Transform (HT) extracts the candidate lane boundary edges, and the most probable lane boundary is determined using an Active Line Model that minimizes an appropriate energy function. Experimental results demonstrate the effectiveness of the combined laser and vision strategy for road-edge and lane boundary detection.
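
    The pipeline above combines a Hough Transform for candidate lane edges with an energy-minimizing Active Line Model. The sketch below is a minimal, hypothetical Python/NumPy illustration of those two steps (the accumulator resolution, the energy weights `w_img` and `w_curb`, and the curb-prior term are assumptions, not the authors' implementation):

```python
import numpy as np

def hough_line_candidates(edge_map, n_theta=180, top_k=10):
    """Vote edge pixels into a (rho, theta) accumulator and return the top_k
    candidate lines, mirroring the HT step described in the abstract."""
    h, w = edge_map.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))              # 0..179 degrees
    accumulator = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    for x, y in zip(xs, ys):
        rhos = (x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        accumulator[rhos, np.arange(n_theta)] += 1
    # Pick the strongest accumulator cells as candidate lane boundaries.
    idx = np.argpartition(accumulator.ravel(), -top_k)[-top_k:]
    rho_idx, theta_idx = np.unravel_index(idx, accumulator.shape)
    return [(r - diag, thetas[t]) for r, t in zip(rho_idx, theta_idx)]

def line_energy(rho, theta, edge_map, curb_rho, curb_theta, w_img=1.0, w_curb=0.5):
    """Toy energy: reward image-edge support along the line and penalise
    deviation from the laser-derived curb line (weights are assumptions)."""
    h, w = edge_map.shape
    xs = np.arange(w)
    sin_t = np.sin(theta) if abs(np.sin(theta)) > 1e-6 else 1e-6
    ys = ((rho - xs * np.cos(theta)) / sin_t).astype(int)
    valid = (ys >= 0) & (ys < h)
    support = edge_map[ys[valid], xs[valid]].sum()
    prior = abs(rho - curb_rho) + 50.0 * abs(theta - curb_theta)
    return -w_img * support + w_curb * prior             # lower energy = better

# Usage: best = min(candidates, key=lambda c: line_energy(*c, edges, curb_rho, curb_theta))
```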

    Road curb and intersection detection using a 2D LMS

    In most urban roads, and similar environments such as theme parks, campus sites, industrial estates, science parks and the like, the painted lane markings that exist may not be easily discernible by CCD cameras due to poor lighting, bad weather conditions, and inadequate maintenance. An important feature of roads in such environments is the existence of pavements or curbs on either side defining the road boundaries. These curbs, which are mostly parallel to the road, can be harnessed to extract useful features of the road for implementing autonomous navigation or driver assistance systems. However, extracting the curb or road-edge feature from vision image data is a formidable task, as the curb is not conspicuous in the vision image; doing so requires extensive image processing, heuristics and very favorable ambient lighting. In our approach, road curbs are extracted rapidly using range data provided by a 2D Laser Measurement System (LMS). Experimental results are presented to demonstrate the viability and effectiveness of the proposed methodology and its robustness to different road configurations, including road intersections.
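
    As an illustration of the kind of range-based test such a 2D LMS approach can rely on, the sketch below converts a scan to Cartesian points and flags sharp range discontinuities as candidate curb breaks; the angular parameters, threshold and synthetic scan are assumptions rather than values from the paper:

```python
import numpy as np

def scan_to_cartesian(ranges, angle_min=-np.pi / 2, angle_inc=np.pi / 360):
    """Convert a 2D LMS scan (metres per beam) to x/y points in the sensor frame."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    return ranges * np.cos(angles), ranges * np.sin(angles)

def curb_break_points(ranges, jump_thresh=0.15, window=3):
    """Flag beams where the smoothed range profile jumps sharply; on a scanner
    aimed across the road such discontinuities typically coincide with curbs.
    jump_thresh (metres) and window are assumed tuning values."""
    smooth = np.convolve(ranges, np.ones(window) / window, mode="same")
    diffs = np.abs(np.diff(smooth))
    return np.nonzero(diffs > jump_thresh)[0]

# Usage with a synthetic flat-road scan containing one curb step:
ranges = np.concatenate([np.full(180, 6.0), np.full(180, 5.4)])
print(curb_break_points(ranges))   # indices near the 180-beam boundary
```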

    Laser-camera composite sensing for road detection and tracing

    An important feature in most urban roads and similar environments, such as theme parks, campus sites, industrial estates, science parks, and the like, is the existence of pavements or curbs on either side defining the road boundaries. These curbs, which are mostly parallel to the road, can be harnessed to extract useful features of the road for implementing autonomous navigation or driver assistance systems. However, extracting such curbs or road-edge features with accurate depth information using vision alone is a formidable task, as the curb is not conspicuous in the image and stereo images are required. Further, bad lighting, adverse weather conditions, nonlinear lens aberrations, or lens glare due to the sun and other bright light sources can severely impair road image quality and thus the operation of vision-alone methods. In this paper an alternative and novel approach involving the fusion of 2D laser range and monochrome vision image data is proposed to improve robustness and reliability. Experimental results are presented to demonstrate the viability and effectiveness of the proposed methodology and its robustness to different road configurations and shadows.
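
    Fusing 2D laser range data with a monochrome image generally involves projecting laser returns into the image through calibrated extrinsics and intrinsics. The following is a minimal pinhole-projection sketch under assumed calibration values; it illustrates the geometric step only and is not the paper's fusion algorithm:

```python
import numpy as np

def project_laser_to_image(points_laser, R, t, K):
    """Project Nx3 laser points (metres, laser frame) into pixel coordinates
    with a pinhole model: x_cam = R @ x_laser + t, then u = K @ x_cam.
    R, t (extrinsics) and K (intrinsics) would come from calibration."""
    pts_cam = points_laser @ R.T + t             # N x 3 in the camera frame
    in_front = pts_cam[:, 2] > 0                 # keep points ahead of the camera
    uvw = pts_cam[in_front] @ K.T                # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]                # perspective division
    return uv, in_front

# Toy calibration (assumed values): laser x-forward/y-left/z-up mapped to
# camera x-right/y-down/z-forward, laser mounted 0.5 m below the camera.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.array([[0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0],
              [1.0, 0.0, 0.0]])
t = np.array([0.0, 0.5, 0.0])
laser_pts = np.array([[8.0, 1.0, 0.0], [8.0, -1.0, 0.0]])   # two curb hits 8 m ahead
uv, _ = project_laser_to_image(laser_pts, R, t, K)
print(uv)                                        # pixel locations of the two returns
```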

    A Bayesian fusion model for space-time reconstruction of finely resolved velocities in turbulent flows from low resolution measurements

    The study of turbulent flows calls for measurements with high resolution in both space and time. We propose a new approach to reconstruct high-temporal, high-spatial resolution velocity fields by combining two sources of information that are well resolved either in space or in time: Low-Temporal-High-Spatial (LTHS) and High-Temporal-Low-Spatial (HTLS) resolution measurements. In the framework of co-conception between sensing and data post-processing, this work extensively investigates a Bayesian reconstruction approach using a simulated database. A Bayesian fusion model is developed to solve the inverse problem of data reconstruction. The model uses a Maximum A Posteriori (MAP) estimate, which yields the most probable field given the measurements. A direct numerical simulation (DNS) of a wall-bounded turbulent flow at moderate Reynolds number is used to validate and assess the performance of the present approach. Low-resolution measurements are subsampled in time and space from the fully resolved data, and reconstructed velocities are compared to the reference DNS to estimate the reconstruction errors. The model is compared to conventional methods such as Linear Stochastic Estimation and cubic spline interpolation; results show the superior accuracy of the proposed method in all configurations. Further investigations of model performance across various ranges of scales demonstrate its robustness. Numerical experiments also make it possible to estimate the expected maximum information level corresponding to the limitations of experimental instruments.
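
    For a linear-Gaussian setting, the MAP estimate combining two measurement sources and a prior has a closed form; the sketch below shows that generic textbook form in Python/NumPy (the subsampling operators, noise covariances and toy dimensions are assumptions, not the paper's fusion model):

```python
import numpy as np

def gaussian_map_fusion(mu0, P0, Hs, Rs, ys, Ht, Rt, yt):
    """Linear-Gaussian MAP fusion of two measurement sources (e.g. a spatially
    well-resolved ys and a temporally well-resolved yt) under a Gaussian prior."""
    P0_inv = np.linalg.inv(P0)
    info = P0_inv + Hs.T @ np.linalg.inv(Rs) @ Hs + Ht.T @ np.linalg.inv(Rt) @ Ht
    rhs = P0_inv @ mu0 + Hs.T @ np.linalg.inv(Rs) @ ys + Ht.T @ np.linalg.inv(Rt) @ yt
    x_map = np.linalg.solve(info, rhs)           # most probable field given both sources
    return x_map, np.linalg.inv(info)            # MAP estimate and posterior covariance

# Toy usage: a 4-sample "field", each source observes half of it.
mu0, P0 = np.zeros(4), np.eye(4)
Hs, Ht = np.eye(4)[:2], np.eye(4)[2:]            # assumed subsampling operators
ys, yt = np.array([1.0, 2.0]), np.array([3.0, 4.0])
x_map, P_post = gaussian_map_fusion(mu0, P0, Hs, np.eye(2) * 0.1, ys, Ht, np.eye(2) * 0.1, yt)
print(x_map)
```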

    Road curb tracking in an urban environment

    Road detection and tracking is very useful in the synthesis of driver assistance and intelligent transportation systems. In this paper a methodology is proposed, based on the extended Kalman filter, for robust road curb detection and tracking using a combination of onboard active and passive sensors. The problem is formulated as detecting and tracking a maneuvering target in clutter using onboard sensors on a moving platform. The primary sensors utilized are a 2D SICK laser scanner, five encoders and a gyroscope, together with an image sensor (CCD camera). Compared to the active 2D laser scanner, the CCD camera is capable of providing observations over an extended horizon, thus making available much useful information about the curb trend, which is exploited mainly in the laser-based tracking algorithm. The advantage of the proposed image-enhanced laser detection/tracking method over laser-alone detection/tracking is illustrated using simulations, and its robustness to varied road curvatures, branching, turns and scenarios is demonstrated through experimental results. © 2003 ISIF
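
    A generic extended-Kalman-filter predict/update cycle of the kind such a tracker builds on is sketched below; the constant-velocity toy model, noise covariances and measurement function are placeholders, not the paper's curb-state formulation:

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One generic extended-Kalman-filter cycle (predict + update).
    f/h are the process and measurement models with Jacobians F_jac/H_jac."""
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    H = H_jac(x_pred)
    y = z - h(x_pred)                          # innovation
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: a constant-velocity curb offset tracked from noisy offset measurements.
dt = 0.1
f = lambda x: np.array([x[0] + dt * x[1], x[1]])
F_jac = lambda x: np.array([[1.0, dt], [0.0, 1.0]])
h = lambda x: np.array([x[0]])
H_jac = lambda x: np.array([[1.0, 0.0]])
x, P = np.array([0.0, 0.0]), np.eye(2)
x, P = ekf_step(x, P, np.array([0.25]), f, F_jac, h, H_jac, np.eye(2) * 0.01, np.eye(1) * 0.05)
print(x)
```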

    Bayesian Road Estimation Using Onboard Sensors

    This paper describes an algorithm for estimating the road ahead of a host vehicle based on the measurements from several onboard sensors: a camera, a radar, wheel speed sensors, and an inertial measurement unit. We propose a novel road model that is able to describe the road ahead with higher accuracy than the usual polynomial model. We also develop a Bayesian fusion system that uses the following information from the surroundings: lane marking measurements obtained by the camera, and leading vehicle and stationary object measurements obtained by a radar-camera fusion system. The performance of our fusion algorithm is evaluated in several drive tests. As expected, the more information we use, the better the performance is. Index Terms: camera, information fusion, radar, road geometry, unscented Kalman filter (UKF)
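
    Since the index terms point to an unscented Kalman filter, the sketch below shows the unscented transform at its core, propagating a Gaussian state through a nonlinear function via sigma points; the scaling parameters are common defaults and the toy road state is an assumption, not a value from the paper:

```python
import numpy as np

def unscented_transform(x, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean x and covariance P through a nonlinear function f using
    the standard unscented transform (the building block of a UKF)."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    sqrt_P = np.linalg.cholesky((n + lam) * P)
    sigmas = np.vstack([x, x + sqrt_P.T, x - sqrt_P.T])      # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Toy usage: push an assumed (range, heading) road state through a nonlinear map.
x, P = np.array([10.0, 0.1]), np.diag([0.5, 0.01])
f = lambda s: np.array([s[0] * np.cos(s[1]), s[0] * np.sin(s[1])])
print(unscented_transform(x, P, f))
```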

    A Review of Sensor Technologies for Perception in Automated Driving

    After more than 20 years of research, Advanced Driver Assistance Systems (ADAS) are common in modern vehicles available on the market. Automated Driving systems, still in the research phase and limited in their capabilities, are starting early commercial tests on public roads. These systems rely on the information provided by onboard sensors, which make it possible to describe the state of the vehicle, its environment and other actors. The selection and arrangement of sensors is a key factor in the design of the system. This survey reviews existing, novel and upcoming sensor technologies applied to common perception tasks for ADAS and Automated Driving. They are put in context through a historical review of the most relevant demonstrations of Automated Driving, with a focus on their sensing setups. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturer alliances that indicate future market trends in sensor technologies for Automated Vehicles. This work has been partly supported by ECSEL Project ENABLE-S3 (grant agreement number 692455-2) and by the Spanish Government through CICYT projects TRA2015-63708-R and TRA2016-78886-C3-1-R.

    Multisource Data Integration in Remote Sensing

    Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled, and their full text is included. New instruments and sensors are discussed that can provide a large variety of new views of the real world. This huge amount of data has to be combined and integrated into a (computer) model of this world. Multiple sources may give complementary views of the world: consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretations, and can be identified as such. As a consequence, integration results are very reliable and represent a valid source of information for any geographical information system.