
    Real-time vehicle detection using low-cost sensors

    Improving road safety and reducing the number of accidents is one of the top priorities for the automotive industry. As human driving behaviour is one of the top causation factors of road accidents, research is working towards removing control from the human driver by automating functions and ultimately introducing a fully Autonomous Vehicle (AV). A Collision Avoidance System (CAS) is one of the key safety systems for an AV, as it ensures all potential threats ahead of the vehicle are identified and appropriate action is taken. This research focuses on the task of vehicle detection, which is the basis of a CAS, and attempts to produce an effective vehicle detector based on the data coming from a low-cost monocular camera. Developing a robust CAS based on low-cost sensors is crucial to bringing the cost of safety systems down and, in this way, increasing their adoption rate among end users. In this work, detectors are developed based on the two main approaches to vehicle detection using a monocular camera. The first is the traditional image processing approach, where visual cues are used to generate potential vehicle locations and, at a second stage, verify the existence of vehicles in an image. The second approach is based on a Convolutional Neural Network (CNN), a computationally expensive method that unifies the detection process in a single pipeline. The goal is to determine which method is more appropriate for real-time applications. Following the first approach, a vehicle detector based on the combination of HOG features and SVM classification is developed. The detector attempts to optimise detection performance by modifying the detection pipeline and to improve run-time performance. For the CNN-based approach, six different network models are developed and trained end to end on collected data, each with a different network structure and parameters, in an attempt to determine which combination produces the best results. The evaluation of the different vehicle detectors produced some interesting findings: the first approach did not manage to produce a working detector, while the CNN-based approach produced a high-performing vehicle detector with an 85.87% average precision and a very low miss rate. The detector performed well under different operational environments (motorway, urban and rural roads) and the results were validated using an external dataset. Additional testing of the vehicle detector indicated it is suitable as a base for safety applications such as a CAS, with a run-time performance of 12 FPS and potential for further improvements.
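    As an illustration of the first (image-processing) approach described above, the following is a minimal Python sketch of a HOG-plus-linear-SVM sliding-window detector. The 64x64 patch size, the HOG parameters and the helper names are assumptions for the example, not details taken from the thesis.

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import LinearSVC

        def hog_features(patch):
            # patch: 64x64 grayscale crop, values in [0, 1]
            return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), block_norm='L2-Hys')

        def train_detector(vehicle_patches, background_patches):
            # Fit a linear SVM on HOG descriptors of positive/negative crops.
            X = np.array([hog_features(p) for p in vehicle_patches + background_patches])
            y = np.array([1] * len(vehicle_patches) + [0] * len(background_patches))
            clf = LinearSVC(C=0.01)
            clf.fit(X, y)
            return clf

        def detect(image, clf, window=64, stride=16, threshold=1.0):
            # Single-scale sliding window; a practical detector would also scan an
            # image pyramid and apply non-maximum suppression to the raw boxes.
            boxes = []
            h, w = image.shape
            for y0 in range(0, h - window + 1, stride):
                for x0 in range(0, w - window + 1, stride):
                    feat = hog_features(image[y0:y0 + window, x0:x0 + window])
                    score = clf.decision_function([feat])[0]
                    if score > threshold:
                        boxes.append((x0, y0, window, window, float(score)))
            return boxes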

    LIDAR-Camera Fusion for Road Detection Using Fully Convolutional Neural Networks

    In this work, a deep learning approach has been developed to carry out road detection by fusing LIDAR point clouds and camera images. An unstructured and sparse point cloud is first projected onto the camera image plane and then upsampled to obtain a set of dense 2D images encoding spatial information. Several fully convolutional neural networks (FCNs) are then trained to carry out road detection, either by using data from a single sensor, or by using three fusion strategies: early, late, and the newly proposed cross fusion. Whereas in the former two fusion approaches, the integration of multimodal information is carried out at a predefined depth level, the cross fusion FCN is designed to directly learn from data where to integrate information; this is accomplished by using trainable cross connections between the LIDAR and the camera processing branches. To further highlight the benefits of using a multimodal system for road detection, a data set consisting of visually challenging scenes was extracted from driving sequences of the KITTI raw data set. It was then demonstrated that, as expected, a purely camera-based FCN severely underperforms on this data set. A multimodal system, on the other hand, is still able to provide high accuracy. Finally, the proposed cross fusion FCN was evaluated on the KITTI road benchmark where it achieved excellent performance, with a MaxF score of 96.03%, ranking it among the top-performing approaches.
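    A minimal PyTorch sketch of the cross-fusion idea is given below: two parallel branches process the camera image and the dense LIDAR images, and learnable scalar cross connections decide how much each branch feeds the other at every depth. The layer sizes and the two-class head are illustrative assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        class CrossFusionBlock(nn.Module):
            def __init__(self, channels):
                super().__init__()
                self.cam_conv = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
                self.lid_conv = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
                # Trainable cross connections: how strongly each branch feeds the other.
                self.cam_to_lid = nn.Parameter(torch.zeros(1))
                self.lid_to_cam = nn.Parameter(torch.zeros(1))

            def forward(self, cam, lid):
                cam_out = self.cam_conv(cam + self.lid_to_cam * lid)
                lid_out = self.lid_conv(lid + self.cam_to_lid * cam)
                return cam_out, lid_out

        class CrossFusionFCN(nn.Module):
            def __init__(self, lidar_channels=3, num_classes=2):
                super().__init__()
                self.cam_stem = nn.Conv2d(3, 32, 3, padding=1)
                self.lid_stem = nn.Conv2d(lidar_channels, 32, 3, padding=1)
                self.blocks = nn.ModuleList([CrossFusionBlock(32) for _ in range(3)])
                self.head = nn.Conv2d(64, num_classes, 1)  # concatenate both branches at the end

            def forward(self, camera_img, lidar_img):
                cam, lid = self.cam_stem(camera_img), self.lid_stem(lidar_img)
                for block in self.blocks:
                    cam, lid = block(cam, lid)
                return self.head(torch.cat([cam, lid], dim=1))  # per-pixel road scores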

    Convolutional Patch Networks with Spatial Prior for Road Detection and Urban Scene Understanding

    Classifying single image patches is important in many different applications, such as road detection or scene understanding. In this paper, we present convolutional patch networks, which are convolutional networks learned to distinguish different image patches and which can be used for pixel-wise labeling. We also show how to incorporate spatial information of the patch as an input to the network, which allows for learning spatial priors for certain categories jointly with an appearance model. In particular, we focus on road detection and urban scene understanding, two application areas where we are able to achieve state-of-the-art results on the KITTI as well as on the LabelMeFacade dataset. Furthermore, our paper offers a guideline for people working in the area and desperately wandering through all the painstaking details that render training CNs on image patches extremely difficult. Comment: VISAPP 2015 paper
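    The following PyTorch sketch illustrates the spatial-prior idea in its simplest form: the normalized (x, y) position of each patch is concatenated to the convolutional features before classification, so the fully connected layers can learn where in the image each class tends to occur. The specific layer sizes are assumptions for the example, not the paper's architecture.

        import torch
        import torch.nn as nn

        class PatchNetWithSpatialPrior(nn.Module):
            def __init__(self, num_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d((4, 4)),
                )
                # +2 inputs for the patch centre (x, y), normalized to [0, 1]
                self.classifier = nn.Sequential(
                    nn.Linear(32 * 4 * 4 + 2, 128), nn.ReLU(),
                    nn.Linear(128, num_classes),
                )

            def forward(self, patches, positions):
                # patches: (N, 3, H, W) image patches; positions: (N, 2) normalized coordinates
                feats = self.features(patches).flatten(1)
                return self.classifier(torch.cat([feats, positions], dim=1))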

    Multi-Lane Perception Using Feature Fusion Based on GraphSLAM

    An extensive, precise and robust recognition and modeling of the environment is a key factor for the next generations of Advanced Driver Assistance Systems and the development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM and the result constitutes the basis for a multilane clothoid model. To allow incorporation of additional information sources, input data is processed in a generic format. Evaluation of the method is performed by comparing real data, collected with an experimental vehicle on highways, to a ground truth map. The results show that ego and adjacent lanes are robustly detected with high quality up to a distance of 120 m. In comparison to serial lane detection, an increase in the detection range of the ego lane and a continuous perception of neighboring lanes is achieved. The method can potentially be utilized for the longitudinal and lateral control of self-driving vehicles.
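    As a small illustration of the lane representation mentioned above, the NumPy sketch below evaluates a clothoid lane boundary from a lateral offset, heading angle, curvature and curvature rate, using the common third-order polynomial approximation. The parameter values in the example are arbitrary and not taken from the paper.

        import numpy as np

        def clothoid_lane(y0, phi, c0, c1, x_max=120.0, step=1.0):
            # Sample a clothoid lane boundary up to x_max metres ahead of the vehicle.
            # y0: lateral offset [m], phi: heading angle [rad],
            # c0: curvature [1/m], c1: curvature rate [1/m^2].
            x = np.arange(0.0, x_max + step, step)
            y = y0 + phi * x + 0.5 * c0 * x**2 + (1.0 / 6.0) * c1 * x**3
            return x, y

        # Example: ego-lane boundaries 1.8 m to either side, in a gentle left curve.
        x, y_left = clothoid_lane(y0=1.8, phi=0.0, c0=1e-3, c1=-1e-5)
        _, y_right = clothoid_lane(y0=-1.8, phi=0.0, c0=1e-3, c1=-1e-5)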

    ITS implementation plan for the Gold Coast area

    ITS needs to be used to reinforce the planned major changes to the road functional hierarchy in the District, namely:
    • the use of Southport-Burleigh Rd. (SBR) as the major regional corridor;
    • the removal of through traffic from the GCH;
    • the use of Oxley Dr./Olsen Av./Ross St./NBR as another major north-south by-pass;
    • the use of Smith St.; NSR/Queen St.; NBR and Reedy Creek Rd. – West Burleigh Road as the major east-west access corridors.

    There is a need to integrate the proposed ITS measures into the current related plans for the Pacific Motorway and into the overall traffic control strategies for the area as a whole. In addition, the staging of the proposed plan needs to take into account the planned DMR Capital Works Program. An index representing the degree of priority to be attached to each network link was developed to assist in the phased implementation of ITS technologies over the next 5 years. The 'ITS Index' is made up of five variables, namely:
    • Accident rate factor
    • AADT
    • Volume/Capacity ratio
    • Delay
    • % Commercial Vehicles

    The main components of the ITS plan are shown diagrammatically in Figure 1. The latter assumes that the high level of ITS implementation on the Pacific Motorway will be extended in time to the remainder of that Highway. To assist in the implementation of the road hierarchy system, a new static signage plan should be implemented. This plan needs to reinforce the changes by clearly assigning single road names to corridors and by placing new signs at appropriate locations.

    Capturing Traffic Data
    The following corridors should be equipped with automatic traffic monitoring capability, in priority order:
    High Priority
    • SBR corridor from Smith St. connection to Reedy Creek Rd.
    • Smith St. from Pacific Highway to High St.
    • GCH from Pacific Highway to North St.
    Medium Priority
    • Nerang-Broadbeach Rd./Ross St. to Nerang-Southport Rd.
    • Nerang-Southport Rd. from Pacific Highway to SBR
    • Nerang-Broadbeach Rd. from Pacific Highway to SBR

    The Smith St. link from the Pacific Motorway to Olsen Ave. should be considered as a freeway for monitoring purposes. The GCH along the coastal strip needs to be treated as a local distributor rather than as the major corridor. As a result, the future traffic surveillance priority should be low. At least one permanent environmental (vehicle emissions) monitoring station should be set up as part of the ITS plan. The most appropriate site for such a station would seem to be on the SBR corridor in the vicinity of the Hooker Blvd. intersection.

    Pacific Highway
    The Pacific Motorway project will set the benchmark for freeway incident detection and traffic management in the State. The high level of ITS implementation on the Motorway section will create a significant gap in performance and expectation relative to the remainder of the Highway. It is recommended that the southern sections of the Pacific Highway be equipped to the equivalent level of traffic data collection and surveillance as the newly upgraded Motorway section, under a staged program.

    Travel Time Savings
    The travel time benefits of the full implementation of ITS over the network are likely to be of the order of at least 5 percent of vehicle-hours travelled on the affected links. At a discount rate of 6 percent, the total present value of the gross travel time benefit over 10 years is of the order of $200 million.
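    The summary above combines five link variables into a single 'ITS Index' and discounts travel-time benefits over 10 years at 6 percent. The Python sketch below shows one plausible way to compute both quantities; the normalization and the equal weights are assumptions for illustration, not values taken from the plan.

        def its_index(accident_rate, aadt, v_over_c, delay, pct_commercial,
                      maxima, weights=(0.2, 0.2, 0.2, 0.2, 0.2)):
            # Scale each variable by its network-wide maximum and take a weighted sum.
            values = (accident_rate, aadt, v_over_c, delay, pct_commercial)
            normalized = (v / m for v, m in zip(values, maxima))
            return sum(w * n for w, n in zip(weights, normalized))

        def present_value(annual_benefit, rate=0.06, years=10):
            # Present value of a constant annual travel-time benefit over the horizon.
            return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))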