
    Road environment modeling using robust perspective analysis and recursive Bayesian segmentation

    Recently, vision-based advanced driver-assistance systems (ADAS) have received renewed interest as a means to enhance driving safety. In particular, owing to their high performance-to-cost ratio, mono-camera systems are emerging as the main focus of this field. In this paper we present a novel on-board road modeling and vehicle detection system, developed as part of the European I-WAY project. The system relies on a robust estimation of the perspective of the scene, which adapts to the dynamics of the vehicle and generates a stabilized rectified image of the road plane. This rectified plane is used by a recursive Bayesian classifier, which assigns pixels to classes corresponding to the elements of interest in the scenario. This stage works as an intermediate layer that isolates subsequent modules, since it absorbs the inherent variability of the scene. The system has been tested on-road in different scenarios, including varied illumination and adverse weather conditions, and the results prove remarkable even for such complex scenarios.
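A recursive Bayesian pixel classifier like the one described can be sketched as a per-pixel posterior update with temporal smoothing. The class set, Gaussian intensity models, and forgetting factor below are illustrative assumptions, not values from the paper:

```python
import math

# Hypothetical classes with assumed (mean, std) intensity models.
CLASSES = {"road": (90.0, 25.0), "marking": (220.0, 20.0), "vegetation": (50.0, 30.0)}
FORGET = 0.9  # how strongly the previous posterior carries over between frames (assumed)

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def recursive_update(prev_posterior, intensity):
    """One recursive Bayesian step for a single rectified-plane pixel."""
    n = len(CLASSES)
    # Predicted prior: relax the previous posterior toward uniform,
    # which absorbs gradual scene change between frames.
    prior = {c: FORGET * prev_posterior[c] + (1 - FORGET) / n for c in CLASSES}
    # Bayes update with the per-class likelihood of the observed intensity.
    unnorm = {c: gaussian(intensity, *CLASSES[c]) * prior[c] for c in CLASSES}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

posterior = {c: 1.0 / len(CLASSES) for c in CLASSES}  # uniform start
for frame_intensity in (95, 92, 88):                  # pixel looks road-like over time
    posterior = recursive_update(posterior, frame_intensity)
print(max(posterior, key=posterior.get))  # prints "road"
```

The forgetting step is what lets the classifier act as the "intermediate layer" described: the posterior tracks slow appearance changes instead of committing permanently to one label.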

    ObjectFlow: A Descriptor for Classifying Traffic Motion

    Abstract—We present and evaluate a novel scene descriptor for classifying urban traffic by object motion. Atomic 3D flow vectors are extracted and compensated for the vehicle's ego-motion using stereo video sequences. Votes cast by each flow vector are accumulated in a bird's-eye-view histogram grid. Since we directly use low-level object flow, no prior object detection or tracking is needed. We demonstrate the effectiveness of the proposed descriptor by comparing it to two simpler baselines on the task of classifying more than 100 challenging video sequences into intersection and non-intersection scenarios. Our experiments reveal good classification performance in busy traffic situations, making our method a valuable complement to traditional approaches based on lane markings.
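The vote-accumulation step can be sketched as follows. The grid extent, cell size, and the static-point threshold are assumptions for illustration; the paper does not specify them here:

```python
# Hypothetical grid parameters (metres per cell, lateral x forward extent).
CELL = 1.0
GRID_X, GRID_Z = 20, 30

def accumulate_flow(flow_vectors, ego_velocity):
    """Cast one vote per ego-motion-compensated 3D flow vector into a bird's-eye grid."""
    grid = [[0] * GRID_X for _ in range(GRID_Z)]
    ex, ey, ez = ego_velocity
    for (x, y, z), (vx, vy, vz) in flow_vectors:
        # Remove the vehicle's own motion so only independent object motion votes.
        rx, rz = vx - ex, vz - ez
        if (rx * rx + rz * rz) ** 0.5 < 0.5:  # ignore near-static points (threshold assumed)
            continue
        col = int(x / CELL) + GRID_X // 2     # centre the lateral axis on the vehicle
        row = int(z / CELL)
        if 0 <= row < GRID_Z and 0 <= col < GRID_X:
            grid[row][col] += 1
    return grid

flows = [((0.0, 0.0, 10.0), (2.0, 0.0, 0.0)),   # moving object 10 m ahead
         ((3.0, 0.0, 5.0), (0.0, 0.0, 0.0))]    # static point, filtered out
grid = accumulate_flow(flows, ego_velocity=(0.0, 0.0, 0.0))
print(grid[10][10])  # prints 1: the single vote from the moving object
```

The resulting histogram grid is the descriptor fed to the intersection / non-intersection classifier; no per-object tracking is involved at any point.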

    A Vision Based Lane Marking Detection, Tracking and Vehicle Detection on Highways

    Changing road conditions are an important issue in automated vehicle navigation, mainly because the appearance of lane markings varies greatly with factors such as heavy traffic and the changing daylight conditions over the course of the day. A lane detection system is an essential component of many automated vehicle systems. In this paper, we address these issues through a lane detection and vehicle detection algorithm designed to handle challenging situations such as lane endings and merges, old lane markings, and lane changes. Left and right lane boundaries are detected separately, so that merging and splitting lanes can be handled robustly. Vehicle detection is a further challenge in automated vehicle navigation: various vehicle detection approaches have been implemented, but it is difficult to find a fast and reliable algorithm for applications such as collision warning or lane-change assistance. Vision-based vehicle detection can also improve collision-warning performance when combined with a lane marking detection algorithm, since in collision-warning applications it is important to know whether an obstacle is in the same lane as the ego vehicle.
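Detecting the left and right boundaries separately, as described, can be sketched with a simple per-side least-squares line fit. The midline split and the line parameterisation are illustrative choices, not the paper's actual method:

```python
def fit_lane_boundaries(points, image_width):
    """Fit left and right lane boundaries separately from candidate marking points.

    Each boundary is fitted as x = a*y + b (parameterised by image row),
    which stays well-conditioned for near-vertical lane lines.
    """
    mid = image_width / 2
    left = [(x, y) for x, y in points if x < mid]
    right = [(x, y) for x, y in points if x >= mid]

    def fit(pts):
        n = len(pts)
        sy = sum(y for _, y in pts)
        sx = sum(x for x, _ in pts)
        syy = sum(y * y for _, y in pts)
        sxy = sum(x * y for x, y in pts)
        a = (n * sxy - sx * sy) / (n * syy - sy * sy)
        b = (sx - a * sy) / n
        return a, b

    return fit(left), fit(right)

# Candidate marking points (x, y) on two straight boundaries.
left_pts = [(10, 0), (12, 1), (14, 2)]
right_pts = [(90, 0), (88, 1), (86, 2)]
(la, lb), (ra, rb) = fit_lane_boundaries(left_pts + right_pts, image_width=100)
print(la, lb, ra, rb)  # prints 2.0 10.0 -2.0 90.0
```

Fitting each side independently is what lets the system cope with a lane that ends, merges, or splits: one boundary can change or disappear without corrupting the estimate of the other.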

    Sensor fusion methodology for vehicle detection

    A novel sensor fusion methodology is presented, which provides intelligent vehicles with augmented environment information and knowledge, enabled by a vision-based system, a laser sensor and a global positioning system. The presented approach contributes to safer roads through data fusion techniques, especially on single-lane carriageways, where casualties are higher than in other road classes, and focuses on the interplay between vehicle drivers and intelligent vehicles. The system is based on the reliability of the laser scanner for obstacle detection, the use of camera-based identification techniques, and advanced tracking and data association algorithms, i.e. the Unscented Kalman Filter and Joint Probabilistic Data Association. The achieved results foster the implementation of the sensor fusion methodology in forthcoming Intelligent Transportation Systems.
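A basic building block of the data association stage described here is validation gating: a measurement is only considered for a track if its Mahalanobis distance to the track's predicted position falls inside a chi-square gate. This sketch shows a single 2D gate, a simplification of the full JPDA machinery; the covariance values are assumed:

```python
def mahalanobis_gate(track_pred, meas, S, gate=9.21):
    """Gate a 2D measurement against a predicted track position.

    S is the 2x2 innovation covariance; 9.21 is the chi-square
    99% threshold for 2 degrees of freedom.
    """
    dx = meas[0] - track_pred[0]
    dy = meas[1] - track_pred[1]
    # Squared Mahalanobis distance via the explicit 2x2 inverse of S.
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    d2 = (S[1][1] * dx * dx - (S[0][1] + S[1][0]) * dx * dy + S[0][0] * dy * dy) / det
    return d2 <= gate

S = [[4.0, 0.0], [0.0, 4.0]]  # assumed innovation covariance
print(mahalanobis_gate((0.0, 0.0), (2.0, 2.0), S))   # prints True  (d^2 = 2.0)
print(mahalanobis_gate((0.0, 0.0), (10.0, 0.0), S))  # prints False (d^2 = 25.0)
```

In the full methodology, measurements passing the gate for several tracks would be weighted by JPDA association probabilities rather than assigned greedily.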

    Development of a Sensor Fusion Algorithm for Multi-Pedestrian Recognition

    Master's thesis, Seoul National University Graduate School, Department of Mechanical and Aerospace Engineering, August 2019. Advisor: Kyongsu Yi. A pedestrian detection and tracking algorithm using environmental sensors is one of the most fundamental technologies for safe urban autonomous driving. This paper presents a novel sensor fusion algorithm for multi-pedestrian tracking using a commercial vision sensor, a LiDAR sensor, and a digital HD map. The commercial vision sensor effectively detects pedestrians, whereas the LiDAR sensor accurately measures distance. Our system uses the commercial vision sensor as the detector and utilizes the LiDAR sensor to enhance state estimation. In addition, the digital HD map is used to properly define the region of interest (ROI) within the LiDAR point cloud. The detection performance is validated on about 4,600 frames of SNU campus driving data, and the estimation accuracy is evaluated through driving experiments, confirming that the algorithm remains useful even in complex urban driving situations.
The proposed algorithm can be utilized for autonomous driving vehicles in various urban driving situations.
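The state-fusion step of a vision/LiDAR track pair can be sketched as an information-weighted combination of the two estimates. This is a per-axis simplification with illustrative variances; the thesis's actual fusion method may differ:

```python
def fuse_states(x_cam, var_cam, x_lidar, var_lidar):
    """Information-weighted fusion of two independent estimates of one quantity.

    Each estimate is weighted by its inverse variance, so the more
    certain sensor (e.g. LiDAR for range) dominates the fused state.
    """
    w_cam = 1.0 / var_cam
    w_lid = 1.0 / var_lidar
    x_fused = (w_cam * x_cam + w_lid * x_lidar) / (w_cam + w_lid)
    var_fused = 1.0 / (w_cam + w_lid)  # fused variance is smaller than either input
    return x_fused, var_fused

# Camera says the pedestrian is at 10 m (noisy), LiDAR says 12 m (equally noisy here).
x, var = fuse_states(10.0, 4.0, 12.0, 4.0)
print(x, var)  # prints 11.0 2.0
```

With unequal variances the fused position pulls toward the more accurate sensor, which matches the thesis's division of labour: vision detects, LiDAR refines the range estimate.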