
    Vision-Based Road Detection in Automotive Systems: A Real-Time Expectation-Driven Approach

    The main aim of this work is the development of a vision-based road detection system fast enough to cope with the difficult real-time constraints imposed by moving-vehicle applications. The hardware platform, a special-purpose massively parallel system, was chosen to minimize system production and operational costs. This paper presents a novel approach to expectation-driven low-level image segmentation, which maps naturally onto mesh-connected massively parallel SIMD architectures capable of handling hierarchical data structures. The input image is assumed to contain a distorted version of a given template; a multiresolution stretching process reshapes the original template in accordance with the acquired image content by minimizing a potential function. The distorted template is the process output.
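    The stretching idea above can be sketched in one dimension: a per-sample displacement field warps the template toward the observed signal by greedily minimizing a potential that trades data mismatch against smoothness, with coarse moves first and finer ones later. Everything here (the 1-D setting, the quadratic potential, the greedy sweep) is an illustrative assumption, not the paper's actual formulation, which operates on 2-D images on SIMD hardware.

```python
import numpy as np

def stretch_template(template, signal, levels=3, sweeps=5, lam=0.01):
    """Coarse-to-fine 1-D sketch of expectation-driven template stretching.

    An integer displacement field is adjusted by greedy local moves that
    minimize: squared data mismatch + lam * displacement smoothness.
    The move size is halved at each resolution level (multiresolution).
    """
    n = len(signal)
    disp = np.zeros(n, dtype=int)
    for level in range(levels - 1, -1, -1):
        step = 2 ** level                 # coarse displacements first
        for _ in range(sweeps):
            for i in range(n):
                best, best_e = disp[i], np.inf
                for d in (disp[i] - step, disp[i], disp[i] + step):
                    j = min(max(i + d, 0), n - 1)
                    e = (signal[i] - template[j]) ** 2   # data term
                    if i > 0:                            # smoothness terms
                        e += lam * (d - disp[i - 1]) ** 2
                    if i < n - 1:
                        e += lam * (disp[i + 1] - d) ** 2
                    if e < best_e:
                        best, best_e = d, e
                disp[i] = best
    warped = template[np.clip(np.arange(n) + disp, 0, n - 1)]
    return warped, disp
```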

    A monocular color vision system for road intersection detection

    Urban driving has become a focus of autonomous robotics in recent years. Many groups seek to benefit from research in this field, including the military, which hopes to deploy autonomous rescue forces to battle-torn cities, and consumers, who will benefit from the safety and convenience of new technologies finding purpose in consumer automobiles. One key aspect of autonomous urban driving is localization, the ability of the robot to determine its position on a road network. Any information that can be obtained about the surrounding area, including stop signs, road lines, and intersecting roads, can aid this localization. The work here combines previously established computer vision methods to identify roads and develops a new method that can identify both the road and any intersecting roads present in front of a vehicle using a single color camera. Computer vision systems rely on a few basic methods to understand and identify what they are looking at; two valuable ones are the detection of edges present in the image and analysis of the colors that compose it. The method described here uses edge information to find road lines and color information to find the road area and any similarly colored intersecting roads. This work demonstrates that combining edge detection and color analysis exploits their strengths, compensates for their weaknesses, and yields a method that can detect road lanes and intersecting roads at speeds fast enough for use in autonomous urban driving.
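    A minimal sketch of combining the two cues, assuming a patch directly in front of the vehicle is road; the sampling location, the z-score threshold, and the use of gradient magnitude as a stand-in edge detector are illustrative guesses, not the thesis's actual values.

```python
import numpy as np

def detect_road(frame_rgb):
    """Combine an edge cue and a color cue, as in the approach above.

    frame_rgb: HxWx3 uint8 image. Returns (edge_mask, road_mask) as
    boolean arrays. All thresholds are illustrative assumptions.
    """
    img = frame_rgb.astype(float)
    gray = img.mean(axis=2)
    # Edge cue: gradient magnitude (a stand-in for a Canny-style
    # detector) highlights painted lane lines and road boundaries.
    gy, gx = np.gradient(gray)
    edge_mask = np.hypot(gx, gy) > 30.0
    # Color cue: sample a patch directly in front of the vehicle that we
    # assume is road, then keep similarly colored pixels.
    h, w = gray.shape
    patch = img[int(0.8 * h):, int(0.4 * w):int(0.6 * w)].reshape(-1, 3)
    mean, std = patch.mean(0), patch.std(0) + 1e-3
    dist = np.abs(img - mean) / std          # per-channel z-score
    road_mask = (dist < 2.5).all(axis=2)     # similar in every channel
    return edge_mask, road_mask
```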

    Automated Visual Database Creation For A Ground Vehicle Simulator

    This research focuses on extracting road models from stereo video sequences taken from a moving vehicle. The proposed method combines color-histogram-based segmentation, active contours (snakes), and morphological processing to extract road boundary coordinates for conversion into Matlab or Multigen OpenFlight compatible polygonal representations. Color segmentation uses an initial truth frame to develop a color probability density function (PDF) of the road versus the terrain. Subsequent frames are segmented using a maximum a posteriori (MAP) criterion, and the resulting templates are used to update the PDFs. Color segmentation worked well where there was minimal shadowing and occlusion by other cars. A snake algorithm was used to find the road edges, which were converted to 3D coordinates using stereo disparity and vehicle position information. The resulting 3D road models were accurate to within 1 meter.
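    The color-PDF/MAP loop described above can be sketched as follows. The bin count, the blending factor `alpha`, and the uniform class prior are illustrative assumptions, not values from the paper.

```python
import numpy as np

BINS = 16  # coarse color quantization; an illustrative choice

def fit_pdf(pixels):
    """Normalized 3-D color histogram: an empirical color PDF."""
    hist, _ = np.histogramdd(pixels, bins=BINS, range=[(0, 256)] * 3)
    return hist / hist.sum()

def map_segment(frame, road_pdf, terrain_pdf, prior_road=0.5):
    """Label each pixel road (True) or terrain (False) by the MAP rule."""
    idx = (frame.reshape(-1, 3) // (256 // BINS)).astype(int)
    p_road = road_pdf[idx[:, 0], idx[:, 1], idx[:, 2]] * prior_road
    p_terr = terrain_pdf[idx[:, 0], idx[:, 1], idx[:, 2]] * (1.0 - prior_road)
    return (p_road > p_terr).reshape(frame.shape[:2])

def update_pdf(pdf, new_pixels, alpha=0.1):
    """Blend the running PDF with one fitted to newly segmented pixels."""
    return (1.0 - alpha) * pdf + alpha * fit_pdf(new_pixels)
```

    The truth frame supplies the first `fit_pdf` call for each class; each segmented frame then feeds `update_pdf` so the PDFs track gradual appearance changes.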

    Perception advances in outdoor vehicle detection for automatic cruise control

    This paper describes a vehicle detection system based on a support vector machine (SVM) and monocular vision. The final goal is to provide the vehicle-to-vehicle time gap for automatic cruise control (ACC) applications in the framework of intelligent transportation systems (ITS). The challenge is to use a single camera as input in order to achieve a low-cost final system that meets the requirements for serial production in the automotive industry. Candidate objects are first located in the image using basic visual features and then passed to an SVM-based classifier. An intelligent learning approach is proposed to better deal with object variability, illumination conditions, partial occlusions, and rotations. A large database containing thousands of object examples extracted from real road scenes has been created for learning purposes. The classifier is trained using an SVM so that it can classify vehicles, including trucks. In addition, the vehicle detection system described in this paper provides early detection of passing cars and assigns a lane to each target vehicle. In the paper, we present and discuss the results achieved to date in real traffic conditions.

    Uses and Challenges of Collecting LiDAR Data from a Growing Autonomous Vehicle Fleet: Implications for Infrastructure Planning and Inspection Practices

    Autonomous vehicles (AVs) that utilize LiDAR (Light Detection and Ranging) and other sensing technologies are becoming an inevitable part of the transportation industry. Concurrently, transportation agencies are increasingly challenged with managing and tracking large-scale highway asset inventories. LiDAR has become popular among transportation agencies for highway asset management given its advantages over traditional surveying methods, and the technology is becoming more affordable every day. Given this, the big data resulting from the growth of AVs equipped with LiDAR will bring substantial challenges and opportunities. A proper understanding of the data volumes this technology generates will help agencies make decisions regarding storage, management, and transmission of the data. The raw data generated by the sensor shrinks considerably after being filtered and processed following the Cache County Road Manual and stored in the ASPRS-recommended (.las) file format. This pilot study finds that, when the road centerline is used as the vehicle trajectory, a larger portion of the data falls into the right-of-way section than with the actual vehicle trajectory in Cache County, UT, and that data size is positively related to vehicle speed in the travel-lane sections, given the nature of the selected highway environment.

    Visual computing techniques for automated LIDAR annotation with application to intelligent transport systems

    106 p. The concept of Intelligent Transport Systems (ITS) refers to the application of communication and information technologies to transport with the aim of making it more efficient, sustainable, and safer. Computer vision is increasingly being used for ITS applications, such as infrastructure management or advanced driver-assistance systems. The latest progress in computer vision, driven by Deep Learning techniques and the race toward autonomous vehicles, has created a growing requirement for annotated data in the automotive industry. The data to be annotated consist of images captured by vehicle cameras and LIDAR data in the form of point clouds. LIDAR sensors are used for tasks such as object detection and localization; their capacity to identify objects at long distances and to estimate their distance makes them very appealing for autonomous driving. This thesis presents a method to automate the annotation of lane markings with LIDAR data. The state of the art of lane-marking detection based on LIDAR data is reviewed and a novel method is presented. The precision of the method is evaluated against manually annotated data, and its usefulness is evaluated by measuring the reduction in the time required to annotate new data thanks to the automatically generated pre-annotations. Finally, the conclusions of this thesis and possible future research lines are presented.

    On the Use of Low-Cost RGB-D Sensors for Autonomous Pothole Detection with Spatial Fuzzy c-Means Segmentation

    The automated detection of pavement distress from remote sensing imagery is a promising but challenging task due to the complex structure of pavement surfaces, intensity non-uniformity, and the presence of artifacts and noise. Even though imaging and sensing systems such as high-resolution RGB cameras, stereovision imaging, LiDAR, and terrestrial laser scanning can now be combined to collect pavement condition data, the data obtained by these sensors are expensive and require specially equipped vehicles and processing, which hinders the potential efficiency and effectiveness of such sensor systems. This chapter presents the potential of the Kinect v2.0 RGB-D sensor as a low-cost approach for efficient and accurate pothole detection on asphalt pavements. Using spatial fuzzy c-means (SFCM) clustering, which incorporates neighborhood spatial information into the membership function, the RGB data are segmented into pothole and non-pothole objects. The results demonstrate the advantage of complementary processing of low-cost multisensor data, channeling data streams and linking data processing according to the merits of the individual sensors, for autonomous, cost-effective assessment of road-surface conditions using remote sensing technology.
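    A sketch of SFCM on a single-channel image, following the general idea of smoothing the FCM memberships with a neighborhood spatial function before updating the cluster centers. The window size, fuzzifier, exponents, and iteration count are illustrative choices, not the chapter's parameters.

```python
import numpy as np

def spatial_fcm(img, c=2, m=2.0, win=3, iters=30, seed=0):
    """Spatial fuzzy c-means on a 2-D single-channel image.

    Returns (labels, centers): a hard label map from the final fuzzy
    memberships, and the c cluster centers.
    """
    rng = np.random.default_rng(seed)
    x = img.astype(float).ravel()
    centers = rng.uniform(x.min(), x.max(), c)
    h, w = img.shape
    pad = win // 2
    for _ in range(iters):
        # Standard FCM membership from inverse distance to each center.
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9        # (c, n)
        u = (1.0 / d) ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)
        # Spatial function: sum of memberships over a win x win window.
        u2 = u.reshape(c, h, w)
        up = np.pad(u2, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
        hsp = np.zeros_like(u2)
        for i in range(win):
            for j in range(win):
                hsp += up[:, i:i + h, j:j + w]
        # Combine membership and spatial information (exponents p = q = 1).
        u = (u2 * hsp).reshape(c, -1)
        u /= u.sum(axis=0)
        # Update centers with the fuzzified memberships.
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
    labels = u.argmax(axis=0).reshape(img.shape)
    return labels, centers
```

    The spatial term is what distinguishes SFCM from plain FCM: an isolated noisy pixel inside a pothole region inherits its neighbors' memberships, so the segmentation is less speckled.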

    Lane Detection System for Intelligent Vehicles using Lateral Fisheye Cameras

    The need for safety on roads has made the development of autonomous driving one of the most important topics in Computer Vision research. This thesis focuses on the lane detection problem using images obtained with lateral fisheye cameras, first by studying the state of the art and the spherical camera model, then by developing two methods to solve this task. While the first is based on traditional Computer Vision, the second makes use of a Convolutional Neural Network. The results of the two methods are then compared.