
    Image-Based Roadway Assessment Using Convolutional Neural Networks

    Road crashes are one of the leading causes of death in the United States. To reduce the number of crashes, roadway assessment programs take a proactive approach, collecting data and identifying high-risk roads before crashes occur. However, the cost of data acquisition and manual annotation has limited the reach of these programs. In this thesis, we propose methods to automate roadway safety assessment using deep learning. Specifically, we trained convolutional neural networks on publicly available roadway images to predict two safety-related metrics: the star rating score and the free-flow speed. Inference takes mere milliseconds per image, enabling large-scale roadway studies at a fraction of the cost of manual approaches.
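    The pipeline this abstract describes (roadway image in, safety metrics out) can be sketched as below. The architecture, weight shapes, and heads here are illustrative placeholders with random weights, not the thesis's actual network: a small convolutional feature extractor feeding a 5-way star-rating classification head and a scalar free-flow-speed regression head.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def conv_features(image, kernels):
        """Valid 2-D cross-correlation of a grayscale image with a bank of
        kernels, followed by ReLU and global average pooling."""
        n, k, _ = kernels.shape
        H, W = image.shape
        maps = np.empty((n, H - k + 1, W - k + 1))
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                patch = image[i:i + k, j:j + k]
                maps[:, i, j] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
        return np.maximum(maps, 0.0).mean(axis=(1, 2))  # one value per kernel

    def assess_roadway(image, kernels, w_star, w_speed):
        """Predict a 1-5 star rating (5-way classification head) and a
        free-flow speed (scalar regression head) from one image."""
        feat = conv_features(image, kernels)
        star = int(np.argmax(w_star @ feat)) + 1   # classes 1..5
        speed = float(w_speed @ feat)              # unbounded regression output
        return star, speed

    # Toy inputs: a random 32x32 "roadway image" and random head weights.
    image = rng.random((32, 32))
    kernels = rng.standard_normal((4, 3, 3))
    w_star = rng.standard_normal((5, 4))
    w_speed = rng.standard_normal(4)
    star, speed = assess_roadway(image, kernels, w_star, w_speed)
    ```

    Because the whole forward pass is a handful of matrix operations, per-image inference in this style is cheap, which is what makes the millisecond-scale, state-wide application the abstract mentions plausible.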

    Estimating Free-Flow Speed with LiDAR and Overhead Imagery

    Understanding free-flow speed is fundamental to transportation engineering: it informs traffic flow, control, and planning. The free-flow speed of a road segment is the average speed of automobiles unaffected by traffic congestion or delay. Collecting speed data across a state is both expensive and time-consuming. Prior approaches estimate speed from geometric road features, but only for certain road types in limited environments; estimating speed at state scale, across varying landscapes, environments, and road qualities, has been relegated to manual engineering and expensive sensor networks. This thesis proposes an automated approach for estimating free-flow speed from LiDAR (Light Detection and Ranging) point clouds and satellite imagery. Employing deep learning for high-level pattern recognition and feature extraction, we present methods for predicting free-flow speed across the state of Kentucky.
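    One common way to fuse the two modalities the abstract names can be sketched as follows. Everything here (the grid size, the pooled image statistics, the linear head, the weights) is an illustrative stand-in, not the thesis's model: rasterize the point cloud into a coarse height grid, pool simple statistics from the overhead image, and regress speed from the concatenated features.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def height_raster(points, grid=8, extent=50.0):
        """Bin LiDAR returns (x, y, z) into a grid x grid raster of
        maximum heights over an extent x extent metre tile."""
        raster = np.zeros((grid, grid))
        cells = np.clip((points[:, :2] / extent * grid).astype(int), 0, grid - 1)
        for (i, j), z in zip(cells, points[:, 2]):
            raster[i, j] = max(raster[i, j], z)
        return raster

    def estimate_speed(points, image, w):
        """Concatenate LiDAR height features with coarse image statistics
        and apply a linear regression head (weights w are placeholders)."""
        lidar_feat = height_raster(points).ravel()            # 64 values
        image_feat = np.array([image.mean(), image.std()])    # 2 values
        feat = np.concatenate([lidar_feat, image_feat])
        return float(w @ feat)

    # Toy tile: 500 LiDAR points over a 50 m square and a 16x16 image patch.
    points = np.column_stack([rng.uniform(0, 50, (500, 2)),
                              rng.uniform(0, 12, 500)])
    image = rng.random((16, 16))
    w = rng.standard_normal(66)  # 64 raster cells + 2 image statistics
    speed = estimate_speed(points, image, w)
    ```

    The height raster is the simplest possible LiDAR summary; it preserves roadside structure (trees, buildings, clear zones) that plausibly correlates with free-flow speed, while keeping the fused feature vector small.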

    Learning to Map the Visual and Auditory World

    The appearance of the world varies dramatically not only from place to place but also from hour to hour and month to month. Billions of images that capture this complex relationship are uploaded to social-media websites every day, often with precise time and location metadata. This rich source of data can improve our understanding of the globe. In this work, we propose a general framework that uses these publicly available images to construct dense maps of ground-level attributes from overhead imagery. In particular, we use well-defined probabilistic models and a weakly supervised, multi-task training strategy to estimate the expected visual and auditory ground-level attributes at a location: the types of scenes, objects, and sounds a person could experience there. Through a large-scale evaluation on real data, we show that our learned models support applications including mapping, image localization, image retrieval, and metadata verification.
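    A multi-task setup of this kind can be sketched as a shared feature vector (stood in for here by random numbers in place of CNN features) feeding one independent head per attribute family. The label sets and weights below are hypothetical, not the work's own: a softmax head for mutually exclusive scene categories and sigmoid heads for objects and sounds, which can co-occur at a location.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    SCENES = ["beach", "forest", "city street"]      # hypothetical label sets
    OBJECTS = ["car", "tree", "person", "boat"]
    SOUNDS = ["traffic", "birdsong", "waves"]

    def map_location(shared_feat, heads):
        """Apply one linear head per attribute family to a shared feature
        extracted from overhead imagery: softmax over scene categories,
        sigmoid for (possibly co-occurring) objects and sounds."""
        scene_p = softmax(heads["scene"] @ shared_feat)
        object_p = 1.0 / (1.0 + np.exp(-(heads["object"] @ shared_feat)))
        sound_p = 1.0 / (1.0 + np.exp(-(heads["sound"] @ shared_feat)))
        return scene_p, object_p, sound_p

    feat = rng.standard_normal(8)   # stand-in for learned image features
    heads = {
        "scene": rng.standard_normal((len(SCENES), 8)),
        "object": rng.standard_normal((len(OBJECTS), 8)),
        "sound": rng.standard_normal((len(SOUNDS), 8)),
    }
    scene_p, object_p, sound_p = map_location(feat, heads)
    ```

    Sharing one feature across all heads is what makes the multi-task training strategy pay off: each weak supervision signal (scene tags, object tags, sound labels) shapes the same representation, so a dense map of any attribute can be rendered by sweeping the model over overhead imagery.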