
    Review of Environment Perception for Intelligent Vehicles

    An overview of environment perception for intelligent vehicles is given, surveying state-of-the-art algorithms and modeling methods together with a summary of their pros and cons. Special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. An integrated lane and vehicle tracking framework for driver assistance is described that improves on the performance of both the lane tracking and vehicle tracking modules; without specific hardware and software optimizations, the fully implemented system runs at near-real-time speeds of 11 frames per second. On-road vision-based vehicle detection, tracking, and behavior understanding are reviewed in the context of sensor-based on-road surround analysis, detailing advances in monocular, stereo vision, and active sensor–vision fusion for on-road vehicle detection. Finally, detection systems for traffic sign recognition (TSR) in driver assistance are surveyed, covering the stages inherent in traffic sign detection: segmentation, feature extraction, and final sign detection.

    Stereoscopic vision in vehicle navigation.

    Traffic sign (TS) detection and tracking is one of the main tasks of an autonomous vehicle and is addressed in the field of computer vision. An autonomous vehicle must have vision-based recognition of the road in order to follow the rules like every other vehicle on the road. Besides, TS detection and tracking can be used to give feedback to the driver, which can significantly increase safety in making driving decisions. For successful TS detection and tracking, changes in weather and lighting conditions should be considered. Also, the camera is in motion, which results in image distortion and motion blur. In this work, a fast and robust method is proposed for tracking stop signs in videos taken with stereoscopic cameras mounted on the car. Using the camera parameters and the detected sign, the distance between the stop sign and the vehicle is calculated. This calculated distance can be widely used in building visual driver-assistance systems.
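The distance computation described in the abstract follows directly from stereo geometry: depth is focal length times baseline divided by disparity. A minimal sketch, assuming a calibrated rectified stereo pair; the parameter names and values are illustrative, not taken from the paper:

```python
def stereo_distance(focal_px, baseline_m, x_left, x_right):
    """Estimate distance to a detected sign from stereo disparity.

    focal_px   : camera focal length in pixels (hypothetical value)
    baseline_m : separation of the two camera centres in metres
    x_left, x_right : horizontal pixel coordinate of the sign centre
                      in the left and right rectified images
    """
    disparity = x_left - x_right  # in pixels; larger disparity = closer sign
    if disparity <= 0:
        raise ValueError("a real target must have positive disparity")
    # classic pinhole stereo relation: Z = f * B / d
    return focal_px * baseline_m / disparity

# e.g. an 800 px focal length, 0.5 m baseline, 20 px disparity
# places the stop sign 20 m ahead of the vehicle
```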

    Joint interpretation of on-board vision and static GPS cartography for determination of correct speed limit

    We present here a first prototype of a "Speed Limit Support" Advanced Driver Assistance System (ADAS) producing permanently reliable information on the current speed limit applicable to the vehicle. Such a module can be used either to inform the driver, or could even serve for automatic setting of the maximum speed of a smart Adaptive Cruise Control (ACC). Our system is based on a joint interpretation of cartographic information (for static reference information) with on-board vision, used for traffic sign detection and recognition (including supplementary sub-signs) and visual road-line localization (for detection of lane changes). The visual traffic sign detection part is quite robust (90% global correct detection and recognition for main speed signs, and 80% for exit-lane sub-sign detection). Our approach to joint interpretation with cartography is original, and logic-based rather than probability-based, which allows correct behaviour even in cases, which do happen, where both vision and cartography provide the same erroneous information.
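The logic-based (rather than probability-based) fusion the abstract mentions can be pictured as a small rule table arbitrating between the two sources. A hedged sketch under assumed rules and a hypothetical confidence threshold; the actual prototype's rules are not given in the abstract:

```python
def fuse_speed_limit(vision_limit, map_limit, vision_conf):
    """Rule-based arbitration between a vision-detected speed limit and a
    static cartographic one. All rules and the 0.8 threshold are
    illustrative assumptions, not the paper's actual logic."""
    if vision_limit is None:          # no sign seen: fall back to the map
        return map_limit
    if map_limit is None:             # no map coverage: trust vision
        return vision_limit
    if vision_limit == map_limit:     # sources agree
        return vision_limit
    # Disagreement: a confident visual detection (e.g. a freshly posted
    # temporary limit) overrides possibly stale cartography.
    return vision_limit if vision_conf >= 0.8 else map_limit
```

A probability-based fuser would instead weight both hypotheses; the appeal of explicit rules is that each decision remains auditable.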

    Road traffic sign detection and classification

    A vision-based vehicle guidance system for road vehicles can have three main roles: (1) road detection; (2) obstacle detection; and (3) sign recognition. The first two have been studied for many years and with many good results, but traffic sign recognition is a less-studied field. Traffic signs provide drivers with very valuable information about the road, in order to make driving safer and easier. The authors think that traffic signs must play the same role for autonomous vehicles. They are designed to be easily recognized by human drivers mainly because their colors and shapes are very different from natural environments. The algorithm described in this paper takes advantage of these features. It has two main parts. The first one, for the detection, uses color thresholding to segment the image and shape analysis to detect the signs. The second one, for the classification, uses a neural network. Some results from natural scenes are shown.
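The color-thresholding stage exploits the fact that sign colors (e.g. the red of prohibition signs) are rare in natural scenes. A minimal per-pixel sketch using only the standard library; the HSV thresholds are illustrative assumptions, not values from the paper:

```python
import colorsys

def is_sign_red(r, g, b, sat_min=0.5, val_min=0.3):
    """Crude red-pixel test for traffic-sign segmentation.
    Thresholds are hypothetical, chosen only to illustrate the idea."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # red hues wrap around 0 in HSV: accept h near 0 or near 1
    return (h < 0.05 or h > 0.95) and s >= sat_min and v >= val_min

def segment_red(pixels):
    """Binary mask of candidate sign pixels.
    `pixels` is a list of rows of (r, g, b) tuples."""
    return [[is_sign_red(*px) for px in row] for row in pixels]
```

The resulting mask would then be passed to the shape-analysis step (e.g. checking connected components for circular or triangular outlines) before classification.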

    Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    This paper presents a complete traffic sign recognition system, based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method on the information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance.
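Combining the vehicle's GPS fix with the stereo-derived range and bearing of a detected sign yields the sign's global position used in the I2V matching. A flat-earth sketch under assumed inputs; the function and parameter names are hypothetical, not the paper's API:

```python
import math

def sign_global_position(lat, lon, heading_deg, dist_m, bearing_offset_deg):
    """Project a sign detected `dist_m` metres away, at `bearing_offset_deg`
    clockwise from the vehicle heading, onto global coordinates.
    Flat-earth small-distance approximation; fine at sign-detection ranges."""
    R = 6_371_000.0  # mean Earth radius in metres
    brg = math.radians(heading_deg + bearing_offset_deg)
    dlat = dist_m * math.cos(brg) / R                            # northward
    dlon = dist_m * math.sin(brg) / (R * math.cos(math.radians(lat)))  # eastward
    return lat + math.degrees(dlat), lon + math.degrees(dlon)
```

In the described system, matching this estimated position against the positions broadcast over I2V is what lets the system discard signs that belong to an exit lane or a parallel road.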

    Fast traffic sign recognition using color segmentation and deep convolutional networks

    The use of Computer Vision techniques for the automatic recognition of road signs is fundamental for the development of intelligent vehicles and advanced driver assistance systems. In this paper, we describe a procedure based on color segmentation, Histogram of Oriented Gradients (HOG), and Convolutional Neural Networks (CNN) for detecting and classifying road signs. Detection is speeded up by a pre-processing step to reduce the search space, while classification is carried out by using a Deep Learning technique. A quantitative evaluation of the proposed approach has been conducted on the well-known German Traffic Sign data set and on the novel Data set of Italian Traffic Signs (DITS), which is publicly available and contains challenging sequences captured in adverse weather conditions and in an urban scenario at night-time. Experimental results demonstrate the effectiveness of the proposed approach in terms of both classification accuracy and computational speed.
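The HOG descriptor at the heart of the detection stage accumulates gradient magnitudes into orientation bins per cell. A minimal pure-Python sketch of one cell's histogram, assuming a standard 9-bin unsigned-orientation layout; the real pipeline would use a vectorized library implementation:

```python
import math

def hog_cell(cell):
    """9-bin Histogram of Oriented Gradients for one grayscale cell.
    `cell` is a 2-D list of pixel intensities; border pixels are skipped
    so central differences stay in bounds. Bin width is 180/9 = 20 deg."""
    bins = [0.0] * 9
    h, w = len(cell), len(cell[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # horizontal gradient
            gy = cell[y + 1][x] - cell[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            bins[int(ang // 20) % 9] += mag        # magnitude-weighted vote
    return bins
```

Block normalization and concatenation over a grid of such cells produce the full descriptor that, together with the color-segmentation pre-filter, narrows the regions the CNN has to classify.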

    Erratum to: Mobile system for road sign detection and recognition with template matching

    This paper explores an effective approach to road sign detection and recognition based on mobile devices. Detecting and recognising road signs is a challenging matter because of different shapes, complex backgrounds and irregular sign illumination. The main goal of the system is to assist drivers by warning them about the existence of road signs to increase safety during driving. In this paper, the system for detection and recognition of road signs was implemented and tested with the use of the Open Source Computer Vision Library (OpenCV). The system consists of two parts. The first part is the detection stage, which is used to detect the signs from the whole image frame and includes the modules: data-image acquisition, image pre-processing and sign detection. During this stage, the impact of Canny edge detector and Hough transform parameters on the quality of sign detection was tested. The second part is the recognition stage, whose role is to match the detected object with a priori models of signs in the dataset. In the research, the authors also compared the influence of various image processing algorithm parameters on the time of road sign recognition. The discussion part also answers the question whether a mobile system (smartphone) is robust enough to detect and recognise road signs in real time.
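The matching step in the recognition stage is commonly scored with normalized cross-correlation between the detected region and each template. A pure-Python sketch of that score on flattened patches; it illustrates the principle, not the paper's exact OpenCV call:

```python
def match_score(patch, template):
    """Normalized cross-correlation between a detected region and a sign
    template, both given as flat lists of equal length. Returns a value
    in [-1, 1]; 1 means a perfect match up to brightness and contrast."""
    n = len(template)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = (sum((p - mp) ** 2 for p in patch) *
           sum((t - mt) ** 2 for t in template)) ** 0.5
    return num / den if den else 0.0  # constant patches have no correlation
```

The recognizer would evaluate this score against every template in the sign dataset and report the best match above some acceptance threshold.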

    Defining Traffic Scenarios for the Visually Impaired

    For the development of a transfer concept of camera-based object detections from Advanced Driver Assistance Systems to the assistance of the visually impaired, we define relevant traffic scenarios and vision use cases by means of problem-centered interviews with four experts and ten members of the target group. We identify six traffic scenarios: general orientation, navigating to an address, crossing a road, obstacle avoidance, boarding a bus, and at the train station, clustered into three categories: Orientation, Pedestrian, and Public Transport. Based on the data, we describe each traffic scenario and derive a summarizing table, adapted from software engineering, resulting in a collection of vision use cases. The ones that are also of interest in Advanced Driver Assistance Systems – Bicycle, Crosswalk, Traffic Sign, Traffic Light (State), Driving Vehicle, Obstacle, and Lane Detection – build the foundation of our future work. Furthermore, we present social insights that we gained from the interviews and discuss the indications we gather by considering the importance of the identified use cases for each interviewed member of the target group.