
    Arguing Machines: Human Supervision of Black Box AI Systems That Make Life-Critical Decisions

    We consider the paradigm of a black box AI system that makes life-critical decisions. We propose an "arguing machines" framework that pairs the primary AI system with a secondary one that is independently trained to perform the same task. We show that disagreement between the two systems, without any knowledge of underlying system design or operation, is sufficient to arbitrarily improve the accuracy of the overall decision pipeline given human supervision over disagreements. We demonstrate this system in two applications: (1) an illustrative example of image classification and (2) large-scale real-world semi-autonomous driving data. For the first application, we apply this framework to image classification, achieving a reduction in top-5 error on ImageNet from 8.0% to 2.8%. For the second application, we apply this framework to Tesla Autopilot and demonstrate the ability to predict 90.4% of system disengagements that were labeled by human annotators as challenging and needing human supervision.
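    As a rough illustration of the disagreement-based arbitration the abstract describes, the sketch below pairs two classifiers and defers to a human whenever their output distributions diverge. The disagreement metric, threshold, and model handling are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of an "arguing machines"-style disagreement gate (illustrative only;
# the disagreement metric and threshold are assumptions, not the authors' exact design).
import numpy as np

def disagreement(p_primary: np.ndarray, p_secondary: np.ndarray) -> float:
    """Symmetric KL-style disagreement between two class-probability vectors."""
    eps = 1e-12
    p, q = p_primary + eps, p_secondary + eps
    return float(0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))))

def decide(p_primary: np.ndarray, p_secondary: np.ndarray, threshold: float = 0.5):
    """Return the primary system's prediction, or defer to a human when the systems disagree."""
    if disagreement(p_primary, p_secondary) > threshold:
        return None  # escalate: human supervision resolves the disagreement
    return int(np.argmax(p_primary))

# Toy usage: two softmax outputs over 5 classes that clearly disagree.
p1 = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
p2 = np.array([0.1, 0.6, 0.1, 0.1, 0.1])
print(decide(p1, p2))  # None -> flagged for human review
```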

    A Human Visual System Inspired Feature Recognition Method Using Convolutional Neural Networks

    While significant strides in neural network and machine vision applications have been made in recent years, humans remain the most proficient at feature extraction and pattern recognition tasks. Some researchers have attempted to utilize select aspects of the human visual system to perform application-specific visual tasks. However, none have been able to develop a computational model of the biological human visual system that can perform the many complex pattern recognition tasks that humans do. This thesis focuses on significant improvements to an existing human visual system model created by N. Radhi, and the novel implementation of a deep learning system for road detection utilizing non-uniformly sampled images in log-polar coordinate space. A convolutional neural network is used to compare the non-uniformly sampled image model with the conventional uniform structure, with the non-uniform model demonstrating significant increases in processing speed while retaining high validation accuracy. Comparisons between the uniform and non-uniform models under a variety of preprocessing methods are also presented.
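    A minimal sketch of the kind of non-uniform, log-polar resampling the thesis compares against uniform sampling is shown below; the grid sizes and nearest-neighbour lookup are illustrative assumptions rather than the thesis' exact model.

```python
# Sketch of log-polar (non-uniform) image resampling: dense sampling near the centre
# ("fovea"), sparse at the periphery. Grid sizes are illustrative assumptions.
import numpy as np

def log_polar_sample(img: np.ndarray, n_rings: int = 64, n_wedges: int = 128) -> np.ndarray:
    """Resample a 2-D image onto a log-polar grid centred on the image midpoint."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = np.hypot(cy, cx)
    # Radii spaced logarithmically from 1 pixel out to the image corner.
    radii = np.exp(np.linspace(0.0, np.log(max_r), n_rings))
    angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return img[ys, xs]  # nearest-neighbour lookup on the non-uniform grid

# Toy usage on a random grayscale "image".
img = np.random.rand(240, 320)
lp = log_polar_sample(img)
print(lp.shape)  # (64, 128): far fewer samples than the original 240x320 grid
```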

    Image-Based Roadway Assessment Using Convolutional Neural Networks

    Road crashes are one of the main causes of death in the United States. To reduce the number of accidents, roadway assessment programs take a proactive approach, collecting data and identifying high-risk roads before crashes occur. However, the cost of data acquisition and manual annotation has limited the impact of these programs. In this thesis, we propose methods to automate the task of roadway safety assessment using deep learning. Specifically, we trained convolutional neural networks on publicly available roadway images to predict safety-related metrics: the star rating score and free-flow speed. Inference speeds for our methods are mere milliseconds, enabling large-scale roadway study at a fraction of the cost of manual approaches.
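    The sketch below shows one plausible shape for such a network: a shared convolutional backbone with a classification head for the star rating and a regression head for free-flow speed. The layer sizes and the five-class rating are assumptions, not the thesis' architecture.

```python
# A minimal multi-head CNN sketch for image-based roadway assessment
# (illustrative assumptions only; not the thesis' exact network).
import torch
import torch.nn as nn

class RoadwayAssessmentNet(nn.Module):
    def __init__(self, n_star_classes: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.star_head = nn.Linear(32, n_star_classes)  # star-rating logits (classification)
        self.speed_head = nn.Linear(32, 1)              # free-flow speed (regression)

    def forward(self, x):
        feats = self.backbone(x)
        return self.star_head(feats), self.speed_head(feats)

# Toy usage: one forward pass on a batch of two 224x224 RGB "roadway images".
model = RoadwayAssessmentNet()
star_logits, speed = model(torch.randn(2, 3, 224, 224))
print(star_logits.shape, speed.shape)  # torch.Size([2, 5]) torch.Size([2, 1])
```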

    Road Pavement Crack Detection Using Deep Learning with Synthetic Data

    Robust automatic pavement crack detection is critical to automated road condition evaluation. Manual crack detection is extremely time-consuming, so an automatic road crack detection method is needed to speed up this process. This study reviews the literature on road pavement distress detection and surveys the existing datasets for detecting and segmenting distress in road and asphalt pavement. The work presented in this article focuses on a deep learning approach based on synthetic training data generation for segmenting cracks in driver-view images. A synthetic dataset generation method is presented, and the effectiveness of its application to this problem is evaluated. The relevance of the study is underscored by the fact that pixel-level automatic damage detection remains a challenging problem due to heterogeneous pixel intensity, complex crack topology, poor illumination conditions, and noisy texture backgrounds.
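    As an illustration of synthetic training data generation for crack segmentation, the sketch below draws a random-walk crack onto a noisy asphalt-like texture and returns the matching pixel mask. The crack model and its parameters are assumptions for illustration, not the paper's generator.

```python
# Sketch of a synthetic (image, mask) pair generator for crack segmentation training
# (random-walk crack on a noisy grey texture; all parameters are illustrative assumptions).
import numpy as np

def synth_crack_sample(size: int = 256, n_steps: int = 400, rng=None):
    """Return an (image, mask) pair: a road-like texture with a dark random-walk crack."""
    rng = np.random.default_rng() if rng is None else rng
    # Background: noisy grey asphalt-like texture.
    image = 0.5 + 0.08 * rng.standard_normal((size, size))
    mask = np.zeros((size, size), dtype=np.uint8)
    # Crack: a slowly turning random walk starting at the left border.
    y, x = int(rng.integers(0, size)), 0
    angle = rng.uniform(-0.5, 0.5)
    for _ in range(n_steps):
        angle += rng.uniform(-0.3, 0.3)        # jitter the heading
        y = int(np.clip(y + np.sin(angle), 0, size - 1))
        x = int(np.clip(x + np.cos(angle), 0, size - 1))
        mask[y, x] = 1
        image[y, x] *= 0.4                     # crack pixels are darker than the pavement
    return np.clip(image, 0.0, 1.0), mask

img, msk = synth_crack_sample()
print(img.shape, int(msk.sum()))  # (256, 256) and a few hundred labelled crack pixels
```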

    Advanced traffic video analytics for robust traffic accident detection

    Automatic traffic accident detection is an important task in traffic video analysis due to its key applications in developing intelligent transportation systems. Reducing the time delay between the occurrence of an accident and the dispatch of the first responders to the scene may help lower the mortality rate and save lives. Since 1980, many approaches have been presented for the automatic detection of incidents in traffic videos. In this dissertation, some challenging problems for accident detection in traffic videos are discussed and a new framework is presented to automatically detect single-vehicle and intersection traffic accidents in real time. First, a new foreground detection method is applied to detect the moving vehicles and subtract the ever-changing background in traffic video frames captured by static or non-stationary cameras. For traffic videos captured during daytime, cast shadows degrade the performance of foreground detection and road segmentation. A novel cast shadow detection method is therefore presented to detect and remove the shadows cast by moving vehicles as well as the shadows cast by static objects on the road. Second, a new method is presented to detect the region of interest (ROI), which uses the locations of the moving vehicles and initial road samples, and extracts discriminating features to segment the road region. After detecting the ROI, the moving direction of the traffic is estimated based on the rationale that crashed vehicles often make a rapid change of direction. Lastly, single-vehicle traffic accidents and trajectory conflicts are detected using a first-order logic decision-making system. The experimental results using publicly available videos and a dataset provided by the New Jersey Department of Transportation (NJDOT) demonstrate the feasibility of the proposed methods. Additionally, the main challenges and future directions are discussed regarding (i) improving the performance of the foreground segmentation, (ii) reducing the computational complexity, and (iii) detecting other types of traffic accidents.
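    The sketch below illustrates the kind of rule the last step might apply: flag a tracked vehicle when its heading changes rapidly while its speed drops sharply. The thresholds and the centroid-track input are assumptions for illustration rather than the dissertation's exact first-order logic rules.

```python
# Sketch of a rule-based single-vehicle accident check on a tracked trajectory
# (thresholds and track representation are illustrative assumptions).
import numpy as np

def rapid_direction_change(track: np.ndarray, angle_thresh_deg: float = 60.0,
                           speed_drop_ratio: float = 0.5) -> bool:
    """track: (N, 2) array of per-frame vehicle centroids (x, y)."""
    v = np.diff(track, axis=0)                      # per-frame displacement vectors
    speed = np.linalg.norm(v, axis=1) + 1e-9
    headings = np.degrees(np.arctan2(v[:, 1], v[:, 0]))
    turn = np.abs(np.diff(headings))
    turn = np.minimum(turn, 360.0 - turn)           # wrap angle differences
    slowed = speed[1:] < speed_drop_ratio * speed[:-1]
    return bool(np.any((turn > angle_thresh_deg) & slowed))

# Toy usage: a vehicle moving right, then abruptly veering and decelerating.
track = np.array([[0, 0], [10, 0], [20, 0], [30, 0], [31, 3], [31.5, 4.5]], float)
print(rapid_direction_change(track))  # True -> candidate single-vehicle accident
```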