
    Connected Vehicle Data-Based Tools for Work Zone Active Traffic Management

    Work zones present challenges to safety and mobility that require agencies to balance limited resources with vital traffic management activities. Obtaining operational feedback is essential for successful active traffic management in work zones. Extensive literature exists regarding the impact of congestion and recommendations for work zone design to provide safe and efficient traffic operations. However, it is often infeasible or unsafe to inspect every work zone within an agency's jurisdiction. This dissertation outlines the use of connected vehicle data, crash data, and geometric data from mobile light detection and ranging (LiDAR) technology for active traffic management in work zones. Back-of-queue crashes on high-speed roads are often severe and present an early opportunity for leveraging connected vehicle data to mitigate queueing. The connected vehicle data presented in this dissertation provide compelling evidence that there are significant opportunities to reduce back-of-queue crashes by warning drivers of unexpected congestion ahead. In 2014 and 2015, approximately 1% of the total mile-hours of Indiana interstates were operating below 45 MPH and were considered congested. Congested conditions were observable in the connected vehicle data prior to 18.5% of all interstate crashes, and the congested crash rate was found to be 20.6 to 24.0 times greater than the uncongested crash rate. A real-time queue alert system was developed to detect queues and notify Indiana Department of Transportation (INDOT) personnel via email: when average speeds drop below 45 MPH, queue monitoring algorithms are triggered and an alert is sent to selected individuals. Still camera images, work schedules, and crash reports were used to ground-truth the alert system, and the notification model could easily be extended to in-car notification. A weekly work zone report was developed for use by INDOT to assess and improve both mobility and safety in work zones. The report includes a number of graphs, figures, and statistics that present a comprehensive picture of performance, providing a mechanism for INDOT staff to maintain situational awareness of which work zones were most challenging for queues and during what periods those queues were likely to occur. These weekly reports provided the foundation for objective dialog with contractors and project managers to identify mechanisms to minimize queueing and allocate public safety resources. Lastly, this dissertation discusses the integration of LiDAR-generated geometric data with connected vehicle speed data to evaluate the impact of work zone geometry on traffic operations. A LiDAR-mounted vehicle was deployed to collect geometric data at a variety of work zones where recurring bottlenecks had been identified. The advantages and disadvantages of the technology are discussed, and a number of case studies demonstrate the versatility of the technology in transportation applications.
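
    A minimal sketch of the threshold-based queue alert idea described in this abstract, assuming per-segment average speeds have already been aggregated from connected vehicle data; the segment names, recipient addresses, and SMTP host are illustrative assumptions, not the dissertation's actual system.

        # Sketch: flag segments with average speed below 45 MPH and email an alert.
        # Assumes an SMTP server is reachable at smtp_host; addresses are hypothetical.
        from dataclasses import dataclass
        from email.message import EmailMessage
        import smtplib

        QUEUE_SPEED_MPH = 45  # speeds below this are treated as congested

        @dataclass
        class SegmentSample:
            segment_id: str       # e.g. an interstate mile-marker range
            avg_speed_mph: float  # mean connected-vehicle speed over the interval

        def detect_queues(samples):
            """Return segments whose average speed falls below the queue threshold."""
            return [s for s in samples if s.avg_speed_mph < QUEUE_SPEED_MPH]

        def notify(queued, recipients, smtp_host="localhost"):
            """Email a plain-text alert listing queued segments (hypothetical recipients)."""
            if not queued:
                return
            body = "\n".join(f"{s.segment_id}: {s.avg_speed_mph:.0f} MPH" for s in queued)
            msg = EmailMessage()
            msg["Subject"] = "Work zone queue alert"
            msg["From"] = "queue-monitor@example.org"
            msg["To"] = ", ".join(recipients)
            msg.set_content("Speeds below 45 MPH detected on:\n" + body)
            with smtplib.SMTP(smtp_host) as server:
                server.send_message(msg)

        # Example: one congested segment triggers a single alert email.
        samples = [SegmentSample("I-65 NB MM 120-121", 28.0),
                   SegmentSample("I-70 EB MM 85-86", 61.0)]
        notify(detect_queues(samples), ["tmc-operator@example.org"])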

    A Routine and Post-disaster Road Corridor Monitoring Framework for the Increased Resilience of Road Infrastructures


    Mobile Phone-Based Artificial Intelligence Development for Maintenance Asset Management

    Transportation asset management needs timely information collection to inform relevant maintenance practices (e.g., resource planning). Traditional data collection methods in transportation asset management require either manual operation or the support of specialized equipment (e.g., Light Detection and Ranging (LiDAR)), which can be labor-intensive or costly to implement. With the advancement of computing techniques, artificial intelligence (AI) has become capable of automatically detecting objects in images and videos. In this project, we developed accurate and efficient AI algorithms to automatically collect and analyze transportation asset status, including identification of pavement marking issues, traffic signs, litter & trash, and steel guardrails & concrete barriers. The AI algorithms were developed with the You Only Look Once (YOLO) framework, built on a Convolutional Neural Network as the deep learning backbone. Specifically, a smartphone was mounted on the vehicle's front windshield to collect videos of transportation assets on both highways and local roads. These videos were then converted and processed into labeled images to serve as training and test datasets for AI algorithm training. AI models were then developed for automatic object detection of the transportation assets listed above. The results demonstrate that the developed AI models achieve good performance in identifying the targeted objects, with over 85% accuracy. The developed AI package is expected to enable timely and efficient information collection on transportation assets and hence improve road safety.
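
    A minimal sketch of the frame-extraction and detection step, assuming the ultralytics YOLO Python package; the specific YOLO variant, the weights file name ("asset_yolo.pt"), and the class names are assumptions for illustration, not the project's actual code.

        # Sketch: sample frames from a windshield video with OpenCV and run a
        # YOLO detector on each sampled frame. Weights file is hypothetical.
        import cv2
        from ultralytics import YOLO

        model = YOLO("asset_yolo.pt")  # hypothetical weights trained on asset classes

        def detect_assets(video_path, frame_step=30):
            """Run detection on every Nth frame of the recorded video."""
            cap = cv2.VideoCapture(video_path)
            detections = []
            idx = 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if idx % frame_step == 0:
                    result = model(frame)[0]          # one result object per image
                    for box in result.boxes:
                        cls_name = result.names[int(box.cls)]
                        detections.append((idx, cls_name, float(box.conf)))
                idx += 1
            cap.release()
            return detections  # e.g. [(0, "traffic_sign", 0.91), ...]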

    End-to-End Learning of Semantic Grid Estimation Deep Neural Network with Occupancy Grids

    We propose the semantic grid, a spatial 2D map of the environment around an autonomous vehicle consisting of cells that represent the semantic class of the corresponding region, such as car, road, vegetation, or bike. It integrates an occupancy grid, which computes the grid states with a Bayesian filter approach, with semantic segmentation information obtained from monocular RGB images with a deep neural network. The network fuses the information and can be trained in an end-to-end manner. The output of the neural network is refined with a conditional random field. The proposed method is tested on several datasets (KITTI, Inria-Chroma, and SYNTHIA), and different deep neural network architectures are compared.
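
    A minimal sketch of the Bayesian occupancy-grid update that underlies the semantic grid, written in log-odds form; the grid size, inverse sensor model probabilities, and cell indices are illustrative assumptions, and the fusion with segmentation and the CRF refinement are not reproduced here.

        # Sketch: accumulate occupancy evidence per cell in log-odds form.
        import numpy as np

        P_HIT, P_MISS = 0.7, 0.4           # assumed inverse sensor model probabilities
        L_HIT = np.log(P_HIT / (1 - P_HIT))
        L_MISS = np.log(P_MISS / (1 - P_MISS))

        def update_grid(log_odds, hit_cells, miss_cells):
            """Bayesian filter step: add log-odds evidence to observed cells."""
            for (i, j) in hit_cells:
                log_odds[i, j] += L_HIT    # cell observed as occupied
            for (i, j) in miss_cells:
                log_odds[i, j] += L_MISS   # cell observed as free space
            return log_odds

        def occupancy_prob(log_odds):
            """Convert accumulated log-odds back to occupancy probabilities."""
            return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

        grid = np.zeros((200, 200))        # 2D grid around the vehicle, prior p = 0.5
        grid = update_grid(grid, hit_cells=[(100, 120)], miss_cells=[(100, 110)])
        print(occupancy_prob(grid)[100, 120])  # > 0.5 after an "occupied" observation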

    Development and evaluation of low cost 2-d lidar based traffic data collection methods

    Traffic data collection is one of the essential components of a transportation planning exercise. Granular traffic data, such as volume counts, vehicle classification, speed measurements, and occupancy, allow transportation systems to be managed more effectively. For effective traffic operation and management, authorities need to deploy many sensors across the network. Moreover, growing efforts toward smart transportation put additional pressure on planning authorities to deploy more sensors to cover extensive networks. This research focuses on the development and evaluation of an inexpensive data collection methodology using two-dimensional (2-D) Light Detection and Ranging (LiDAR) technology. LiDAR is adopted because it is an economical and easily accessible technology; moreover, its 360-degree visibility and accurate distance information make it more reliable. To collect traffic count data, the proposed method integrates a Continuous Wavelet Transform (CWT) and a Support Vector Machine (SVM) into a single framework. A proof-of-concept (POC) test is conducted at three different locations in Newark, New Jersey, to examine the performance of the proposed method. The POC test results demonstrate that the proposed method achieves acceptable performance, with 83% to 94% accuracy. It is found that the proposed method's accuracy is affected by the color of a vehicle's exterior surface, since some colored surfaces do not produce enough reflective returns: blue and black surfaces are less reflective, while white surfaces produce highly reflective returns. A methodology comprising K-means clustering, an inverse sensor model, and a Kalman filter is proposed to obtain vehicle trajectories at intersections; the primary purpose of vehicle detection and tracking is to obtain turning movement counts at an intersection. K-means clustering is an unsupervised machine learning technique that partitions data into groups by assigning each point to the cluster with the nearest centroid, and it is applied here to distinguish pedestrians from vehicles. The inverse sensor model is the state model of occupancy grid mapping that localizes the detected vehicles on the grid map. A constant-velocity Kalman filter is defined to track the trajectories of the vehicles. Data are collected from two intersections located in Newark, New Jersey, to study the accuracy of the proposed method. The results show that the proposed method has an average accuracy of 83.75%, and the R-squared value obtained for localization of the vehicles on the grid map ranges from 0.87 to 0.89. Finally, a preliminary cost comparison is made to study the cost efficiency of the developed methodology; it shows that the proposed 2-D LiDAR-based methodology can achieve acceptable accuracy at low cost and can support large-scale data collection for smart city applications.
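
    A minimal sketch of the constant-velocity Kalman filter used for trajectory tracking, with state (x, y, vx, vy); the time step, noise covariances, and sample measurements are illustrative assumptions, not the thesis's tuned parameters.

        # Sketch: predict/update cycle of a constant-velocity Kalman filter.
        import numpy as np

        dt = 0.1                                   # assumed LiDAR scan interval (s)
        F = np.array([[1, 0, dt, 0],               # state transition: constant velocity
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)
        H = np.array([[1, 0, 0, 0],                # only position (x, y) is measured
                      [0, 1, 0, 0]], dtype=float)
        Q = np.eye(4) * 0.01                       # process noise (assumed)
        R = np.eye(2) * 0.1                        # measurement noise (assumed)

        def kf_step(x, P, z):
            """One predict/update cycle given state x, covariance P, measurement z."""
            x = F @ x                              # predict state
            P = F @ P @ F.T + Q                    # predict covariance
            y = z - H @ x                          # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ y                          # update state
            P = (np.eye(4) - K @ H) @ P            # update covariance
            return x, P

        x, P = np.zeros(4), np.eye(4)
        for z in [np.array([1.0, 0.5]), np.array([1.4, 0.7]), np.array([1.8, 0.9])]:
            x, P = kf_step(x, P, z)
        print(x)  # estimated position and velocity after three detections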

    A NOVEL APPROACH TO DAMAGE DETECTION USING STRUCTURAL HEALTH MONITORING DATA AND CONVOLUTIONAL NEURAL NETWORKS

    This study investigates the ability of Convolutional Neural Networks (CNNs) to detect structural damage from measured structural response. Strain gages were used to measure the structural response to live loads. The data used in the experiments were gathered by testing a full-scale concrete bridge mock-up. The test captured the response of the mock-up bridge using 24 transversely oriented resistance-based strain gages under similar loading conditions but with different levels of damage induced. The bridge was subjected to three levels of damage: crash-induced damage to the barrier, a transverse cut across the entire barrier, and a transverse cut along the deck. Live load testing was performed for undamaged and damaged conditions by running vehicles at various speeds over the deck and recording the response over all strain gages. Previous studies have compared the effectiveness of Singular Value Decomposition (SVD) and Independent Component Analysis (ICA) in applying a novelty index to the same data to detect damage. This study expands on that investigation by converting the data, as a full snapshot matrix, into 2D greyscale images and classifying frames as damaged or undamaged with a supervised approach. The supervised CNN uses two convolutional layers with dropout and a fully connected output layer, and it reached an accuracy of 95% with only 30% of the total dataset used as training data. This study shows that CNNs provide a robust way of detecting damage from full data snapshots represented as greyscale images.
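
    A minimal sketch of a CNN matching the architecture described above (two convolutional layers with dropout and a fully connected output layer) for classifying greyscale snapshot images as damaged or undamaged; the image size, filter counts, and training settings are assumptions, not the study's implementation.

        # Sketch: small binary-classification CNN for greyscale snapshot images.
        import tensorflow as tf
        from tensorflow.keras import layers, models

        IMG_H, IMG_W = 64, 64                               # assumed snapshot dimensions

        model = models.Sequential([
            layers.Input(shape=(IMG_H, IMG_W, 1)),          # single greyscale channel
            layers.Conv2D(16, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Dropout(0.25),
            layers.Conv2D(32, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Dropout(0.25),
            layers.Flatten(),
            layers.Dense(1, activation="sigmoid"),          # damaged vs. undamaged
        ])

        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.summary()
        # Training would use e.g. model.fit(x_train, y_train, epochs=20), with only
        # about 30% of the labeled snapshots used as training data, as described.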

    Probabilistic Models for 3D Urban Scene Understanding from Movable Platforms

    This work is a contribution to understanding multi-object traffic scenes from video sequences. All data are provided by a camera system mounted on top of the autonomous driving platform AnnieWAY. The proposed probabilistic generative model reasons jointly about the 3D scene layout as well as the 3D location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences.

    Embarking on the Autonomous Journey: A Strikingly Engineered Car Control System Design

    This thesis develops an autonomous car control system with a Raspberry Pi. Two predictive models are implemented: a convolutional neural network (CNN) trained with machine learning and an input-based decision tree model using sensor data. The Raspberry Pi module controls the car hardware and acquires real-time camera data with OpenCV. A dedicated web server and event stream processor process the data in real time using the trained neural network model, facilitating real-time decision-making. Unity and a Meta Quest 2 VR headset provide the VR interface, while a generic DIY kit from Amazon and the Raspberry Pi provide the car hardware. This research demonstrates the potential of VR in automotive communication, enhancing autonomous car testing and the user experience.
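
    A minimal sketch of an input-based decision tree of the kind mentioned above, assuming three distance-sensor readings mapped to driving commands; the feature layout, labels, and training samples are illustrative, not the thesis's dataset or control logic.

        # Sketch: decision tree mapping sensor distances to driving commands.
        from sklearn.tree import DecisionTreeClassifier

        # Each row: [left_dist_cm, center_dist_cm, right_dist_cm]; label: command.
        X = [[80, 100, 80], [30, 100, 80], [80, 100, 30], [60, 15, 60]]
        y = ["forward", "right", "left", "stop"]

        tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

        def decide(sensor_reading):
            """Return a driving command for one set of sensor distances."""
            return tree.predict([sensor_reading])[0]

        print(decide([25, 90, 85]))  # obstacle on the left -> likely "right"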