27,080 research outputs found

    Robust Multiple Lane Road Modeling Based on Perspective Analysis

    Road modeling is the first step towards environment perception in video-based driver assistance systems. Typically, lane modeling enables applications such as lane departure warning or detection of lane invasion by other vehicles. In this paper, a new monocular image processing strategy that produces a robust multiple-lane model is proposed. Multiple lanes are identified by first detecting the ego lane and estimating its geometry under perspective distortion. Perspective analysis and curve fitting then make it possible to hypothesize adjacent lanes, assuming some a priori knowledge about the road. These hypotheses are verified by a confidence-level analysis. Several types of sequences have been tested, with different illumination conditions, presence of shadows and significant curvature, all processed in real time. Results show the robustness of the system, which delivers accurate multiple-lane road models in most situations.
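
    The adjacent-lane hypothesis step described in this abstract can be illustrated with a short sketch. This is not the authors' code: it assumes the ego-lane boundaries are already available as ground-plane points after inverse perspective mapping, and the helper `to_pixel`, the nominal lane width and the confidence threshold are illustrative assumptions.

```python
import numpy as np

LANE_WIDTH_M = 3.5        # assumed nominal lane width (a priori road knowledge)
CONF_THRESHOLD = 0.6      # assumed minimum confidence to accept a hypothesis

def fit_boundary(points_xy):
    """Fit a 2nd-order polynomial x = f(y) to ego-lane boundary points (ground plane, metres)."""
    y, x = points_xy[:, 1], points_xy[:, 0]
    return np.polyfit(y, x, deg=2)

def shift_boundary(coeffs, offset_m):
    """Lateral shift of a boundary polynomial (reasonable for moderate curvature)."""
    shifted = coeffs.copy()
    shifted[-1] += offset_m          # shift only the constant term
    return shifted

def confidence(coeffs, marking_mask, y_range, to_pixel, tol_px=3):
    """Fraction of sampled boundary points supported by detected lane-marking pixels."""
    ys = np.linspace(y_range[0], y_range[1], num=50)
    xs = np.polyval(coeffs, ys)
    hits = 0
    for x, y in zip(xs, ys):
        u, v = to_pixel(x, y)        # assumed helper: ground plane -> integer pixel coords
        patch = marking_mask[max(v - tol_px, 0): v + tol_px,
                             max(u - tol_px, 0): u + tol_px]
        hits += bool(patch.any())
    return hits / len(ys)

def hypothesize_adjacent_lanes(left_pts, right_pts, marking_mask, y_range, to_pixel):
    """Shift the fitted ego-lane boundaries sideways and keep well-supported hypotheses."""
    left, right = fit_boundary(left_pts), fit_boundary(right_pts)
    hypotheses = {
        "left_lane":  shift_boundary(left,  -LANE_WIDTH_M),
        "right_lane": shift_boundary(right, +LANE_WIDTH_M),
    }
    return {name: c for name, c in hypotheses.items()
            if confidence(c, marking_mask, y_range, to_pixel) >= CONF_THRESHOLD}
```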

    On-Board Video Based System for Robust Road Modeling

    In this paper, a novel road modeling strategy is proposed, defining an accurate and robust system that operates in real time. The strategy aims at a trade-off between the computational requirements of real systems and the accuracy and robustness of the results. Its basis is an adaptive road segmentation technique that ensures robust detection of lane markings and vehicles. A multiple-lane model of the road is obtained by asserting hypotheses of lane geometry based on perspective analysis and stochastic filtering. This multiple-lane approach significantly improves vehicle localization compared with other video-based works, as detected vehicles are accurately located within lanes. Tests show the adaptability, robustness and accuracy of the system in daylight scenes with severe illumination changes, non-homogeneous pavement color, lane-marking occlusions, shadows and variable traffic conditions, running in real time in all cases.
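
    The abstract mentions stochastic filtering of lane-geometry hypotheses without specifying the filter. As a hedged illustration only, the sketch below tracks a simple lane state [lateral offset, heading, curvature] with a Kalman filter; the state choice, matrices and noise levels are assumptions, not the paper's design.

```python
import numpy as np

class LaneKalmanFilter:
    """Illustrative Kalman filter over a minimal lane-geometry state (not the paper's filter)."""

    def __init__(self):
        self.x = np.zeros(3)                     # [offset_m, heading_rad, curvature_1pm]
        self.P = np.eye(3)                       # state covariance
        self.Q = np.diag([0.05, 0.01, 0.001])    # process noise (assumed)
        self.R = np.diag([0.20, 0.05, 0.005])    # measurement noise (assumed)

    def predict(self, speed_mps, dt):
        # Constant-curvature motion: offset drifts with heading, heading with curvature.
        ds = speed_mps * dt
        F = np.array([[1.0, ds, 0.0],
                      [0.0, 1.0, ds],
                      [0.0, 0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        # z: lane geometry measured from the segmented lane markings in the current frame.
        H = np.eye(3)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
```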

    Multi-Lane Perception Using Feature Fusion Based on GraphSLAM

    Extensive, precise and robust recognition and modeling of the environment is a key factor for the next generation of Advanced Driver Assistance Systems and for the development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM, and the result forms the basis of a multi-lane clothoid model. To allow the incorporation of additional information sources, input data is processed in a generic format. The method is evaluated by comparing real data, collected with an experimental vehicle on highways, against a ground-truth map. The results show that the ego lane and adjacent lanes are robustly detected with high quality up to a distance of 120 m. Compared with serial lane detection, an increase in the detection range of the ego lane and continuous perception of neighboring lanes are achieved. The method can potentially be used for the longitudinal and lateral control of self-driving vehicles.
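
    The clothoid lane model mentioned in this abstract can be made concrete with a small sketch. This is not the paper's GraphSLAM pipeline: it only shows the final step suggested by the abstract, fitting clothoid parameters to already-fused lane-marking points using the standard small-angle expansion y(x) ≈ y0 + θ0·x + c0·x²/2 + c1·x³/6. The point format and robust-loss settings are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def clothoid_y(params, x):
    """Third-order clothoid approximation: lateral offset as a function of distance ahead."""
    y0, th0, c0, c1 = params
    return y0 + th0 * x + 0.5 * c0 * x**2 + (c1 / 6.0) * x**3

def fit_clothoid(points_xy):
    """points_xy: Nx2 array of fused lane-marking points (x ahead, y lateral), in metres."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    residual = lambda p: clothoid_y(p, x) - y
    # Robust loss to tolerate marking outliers; parameters are [y0, th0, c0, c1].
    result = least_squares(residual, x0=np.zeros(4), loss="huber", f_scale=0.3)
    return result.x
```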

    Video analysis based vehicle detection and tracking using an MCMC sampling framework

    This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well as the dimensionality of the feature space grows, which severely limits their use for tracking multiple objects. Instead, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle-filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, this study introduces a likelihood model that combines appearance analysis with information from motion parallax. Regarding the motion model, a new interaction treatment based on Markov random fields (MRF) is defined that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space than traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is shown to outperform existing methods and to successfully handle challenging situations in the test sequences.
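
    The core of the tracker is an MCMC sampling step; a minimal Metropolis-Hastings sketch is given below. It is not the article's implementation: `likelihood` (appearance plus motion parallax) and `motion_prior` (including the MRF interaction term) are assumed callables, and the single-site Gaussian proposal, state layout and sample count are illustrative choices.

```python
import numpy as np

def mcmc_tracking_step(state, likelihood, motion_prior, n_samples=500,
                       proposal_sigma=2.0, rng=None):
    """Sample the joint vehicle state (assumed N vehicles x [x, y, w, h]) for one frame."""
    rng = np.random.default_rng() if rng is None else rng
    current = state.copy()
    current_score = likelihood(current) * motion_prior(current)
    samples = []
    for _ in range(n_samples):
        # Perturb one randomly chosen vehicle (a single-site Gaussian proposal).
        proposal = current.copy()
        i = rng.integers(len(proposal))
        proposal[i] += rng.normal(0.0, proposal_sigma, size=proposal[i].shape)
        proposal_score = likelihood(proposal) * motion_prior(proposal)
        # Accept with the usual Metropolis ratio.
        if rng.random() < min(1.0, proposal_score / max(current_score, 1e-12)):
            current, current_score = proposal, proposal_score
        samples.append(current.copy())
    return np.mean(samples, axis=0)   # posterior-mean state estimate
```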