
    Classification of road users detected and tracked with LiDAR at intersections

    Data collection is a necessary component of transportation engineering. Manual data collection methods have proven inefficient and limited with respect to the data required for comprehensive traffic and safety studies. Automatic methods are being introduced to characterize the transportation system more accurately and to provide more information for understanding the dynamics between road users. Video data collection is an inexpensive and widely used automated method, but the accuracy of video-based algorithms is known to be affected by obstacles and shadows, and the third dimension is lost with video camera data collection. The impressive progress in sensing technologies has encouraged the development of new methods for measuring the movements of road users. The Center for Road Safety at Purdue University proposed a LiDAR-based algorithm for tracking vehicles at intersections from a roadside location; LiDAR provides a three-dimensional characterization of the sensed environment for better detection and tracking results. The feasibility of this system was analyzed in this thesis using an evaluation methodology to determine the accuracy of the algorithm when tracking vehicles at intersections. According to the implemented method, the LiDAR-based system provides successful detection and tracking of vehicles, and its accuracy is comparable to frame-by-frame extraction of trajectory data from video images by human observers. After establishing the suitability of the system for tracking, the second component of this thesis focused on a classification methodology to discriminate between vehicles, pedestrians, and two-wheelers. Four methodologies were compared to identify the best method for implementation. The KNN algorithm, which creates adaptive decision boundaries based on the characteristics of similar observations, provided better performance when evaluating new locations. The multinomial logit model did not allow collinear variables to be included. The classification tree and boosting methodologies overfit the training data and produced lower performance when applied to the test data. Although the ANOVA analysis did not indicate that any competing method performed significantly better, the KNN algorithm achieved the objective of classifying movements at intersections under diverse conditions and was chosen as the method to implement alongside the existing tracking algorithm.
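
    As an illustration of the chosen approach, the following is a minimal sketch of a KNN road-user classifier; the features, the random placeholder data, and the class encoding are assumptions for illustration, not the thesis' actual feature set or results.

```python
# Minimal sketch of a KNN road-user classifier in the spirit of the thesis.
# The placeholder features and labels are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))        # placeholder track-level features (e.g., length, height, speed)
y = rng.integers(0, 3, size=300)     # hypothetical encoding: 0 = vehicle, 1 = pedestrian, 2 = two-wheeler

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Feature scaling matters for KNN because the distance metric mixes different units.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```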

    Pedestrian Behavior Study to Advance Pedestrian Safety in Smart Transportation Systems Using Innovative LiDAR Sensors

    Pedestrian safety is critical to improving walkability in cities. Although walking trips have increased in the last decade, pedestrian safety remains a top concern. In 2020, 6,516 pedestrians were killed in traffic crashes, the most deaths since 1990 (NHTSA, 2020). Approximately 15% of these occurred at signalized intersections, where a variety of modes converge, leading to an increased propensity for conflicts. Current signal timing and detection technologies are heavily biased toward vehicular traffic, often leading to higher delays and insufficient walk times for pedestrians, which can result in risky behaviors such as noncompliance. Current detection systems for pedestrians at signalized intersections consist primarily of push buttons. Their limitations include the inability to provide feedback to pedestrians that they have been detected, especially with older devices, and the inability to dynamically extend walk times if pedestrians fail to clear the crosswalk. Smart transportation systems play a vital role in enhancing mobility and safety and provide innovative techniques to connect pedestrians, vehicles, and infrastructure. Most research on smart and connected technologies focuses on vehicles; however, there is a critical need to harness these technologies to study pedestrian behavior, as pedestrians are the most vulnerable users of the transportation system. While a few studies have used location technologies to detect pedestrians, their coverage is usually small and favors people with smartphones; the transportation system must consider the full spectrum of pedestrians and accommodate everyone. In this research, the investigators first review previous studies on pedestrian behavior data and sensing technologies. The research team then developed a pedestrian behavioral data collection system based on emerging LiDAR sensors. The system was deployed at two signalized intersections, and two studies were conducted: (a) a pedestrian behavior study at signalized intersections, analyzing pedestrian waiting time before crossing, generalized perception-reaction time to the WALK sign, and crossing speed; and (b) a novel dynamic flashing yellow arrow (D-FYA) solution to separate permissive left-turn vehicles from concurrently crossing pedestrians. The results reveal that pedestrian behaviors may have evolved compared with the behaviors recommended in pedestrian facility design guidelines (e.g., AASHTO’s “Green Book”). The D-FYA solution was also evaluated on a cabinet-in-the-loop simulation platform, and the improvements were promising. The findings of this study will advance the body of knowledge on equitable traffic safety, especially pedestrian safety, in the future.
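
    The crossing-behavior metrics lend themselves to a simple trajectory computation. Below is a minimal sketch of deriving perception-reaction time and crossing speed from one pedestrian's timestamped positions, assuming a hypothetical coordinate convention (crossing in the +y direction between two curb lines); it is not the study's actual processing pipeline.

```python
# Sketch: perception-reaction time and crossing speed from a timestamped
# pedestrian trajectory. The coordinate convention and argument names are
# illustrative assumptions, not the study's actual LiDAR output format.
import numpy as np

def crossing_metrics(t, x, y, walk_onset_t, near_curb_y, far_curb_y):
    """t: timestamps (s); x, y: positions (m); walk_onset_t: start of the WALK indication.
    Returns (reaction_time_s, crossing_speed_mps)."""
    t, x, y = map(np.asarray, (t, x, y))
    start = int(np.argmax(y > near_curb_y))   # first sample past the near curb
    end = int(np.argmax(y > far_curb_y))      # first sample past the far curb
    reaction_time = t[start] - walk_onset_t
    path_length = np.hypot(np.diff(x[start:end + 1]), np.diff(y[start:end + 1])).sum()
    return reaction_time, path_length / (t[end] - t[start])

# Example with synthetic 10 Hz samples crossing a 6 m wide crosswalk.
t = np.arange(0, 8, 0.1)
y = np.clip((t - 1.5) * 1.3, -1, 7)           # waits ~1.5 s, then walks at ~1.3 m/s
x = np.zeros_like(t)
print(crossing_metrics(t, x, y, walk_onset_t=0.0, near_curb_y=0.0, far_curb_y=6.0))
```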

    Exploring Data Driven Models of Transit Travel Time and Delay

    Transit travel time and operating speed influence service attractiveness, operating cost, system efficiency, and sustainability. The Tri-County Metropolitan Transportation District of Oregon (TriMet) provides public transportation service in the tri-county Portland metropolitan area. TriMet was one of the first transit agencies to implement a Bus Dispatch System (BDS) as part of its overall service control and management system, and it has had the foresight to fully archive the BDS automatic vehicle location and automatic passenger count data for all bus trips at the stop level since 1997. More recently, the BDS was upgraded to provide stop-level data plus 5-second resolution bus positions between stops. Rather than relying on prediction tools to infer bus trajectories (including stops and delays) between stops, the higher resolution data gives actual bus positions along each trip, from which bus travel speeds and intersection signal/queuing delays can be determined. This thesis examines the potential applications of higher resolution transit operations data for a bus route in Portland, Oregon, TriMet Route 14. BDS and 5-second resolution data from all trips during the month of October 2014 are used to evaluate candidate trip time models; comparisons are drawn between the models, and conclusions are offered regarding the utility of the higher resolution transit data. In previous research, inter-stop models were developed based on the average or maximum speed between stops, which does not represent realistic conditions such as stopping at a signal or crosswalk or traffic congestion along the link. A new inter-stop trip time model is therefore developed using the 5-second resolution data to determine the number of signals encountered by the bus along the route, since the variability in inter-stop time is likely driven by the delay imposed by the signals encountered. The newly developed model produced statistically significant results. This type of information is important to transit agencies looking to improve bus running times and reliability. These results, the benefits of archiving higher resolution data to understand bus movement between stops, and future research opportunities are also discussed.
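
    As a sketch of the kind of inter-stop model described, the following ordinary least squares regression includes the number of signals encountered on the link as a predictor; the column names and placeholder rows are assumptions, not TriMet's BDS schema or the thesis' estimated coefficients.

```python
# Sketch: an inter-stop travel-time regression with the number of signals
# encountered on the link as a predictor. All values below are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

links = pd.DataFrame({
    "travel_time_s": [95, 120, 80, 140, 110, 70, 130, 100],
    "distance_m":    [800, 900, 700, 950, 850, 650, 900, 820],
    "n_signals":     [1, 2, 0, 3, 2, 0, 3, 1],
    "dwell_s":       [10, 15, 5, 20, 12, 0, 18, 9],
})

model = smf.ols("travel_time_s ~ distance_m + n_signals + dwell_s", data=links).fit()
print(model.params)   # the n_signals coefficient acts as an average per-signal delay
```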

    Developing a Traffic Safety Diagnostics System for Unmanned Aerial Vehicles Using Deep Learning Algorithms

    This thesis presents an automated traffic safety diagnostics solution that uses deep learning techniques to process traffic videos captured by Unmanned Aerial Vehicles (UAVs). Mask R-CNN is employed to better detect vehicles in UAV videos after video stabilization. Vehicle trajectories are generated by tracking the detected vehicles with the Channel and Spatial Reliability Tracking (CSRT) algorithm. During the detection process, vehicles missed by the detector can still be tracked by identifying stopped vehicles and comparing the Intersection over Union (IoU) between the tracking results and the detection results. In addition, rotated bounding rectangles based on the pixel-level masks generated by Mask R-CNN are introduced to obtain precise vehicle size and location data. Moreover, surrogate safety measures (i.e., post-encroachment time, PET) are calculated for each conflict event at the pixel level, so conflicts can be identified by comparing PET values against a threshold, and conflict types including rear-end, head-on, sideswipe, and angle can be determined. A case study is presented at a typical signalized intersection, and the results indicate that the proposed framework notably improves the accuracy of the output data. Furthermore, by calculating the PET value for each conflict event, an automated traffic safety diagnostic for the studied intersection can be conducted. According to the research, rear-end conflicts are the most prevalent conflict type at the studied location, while one angle conflict was identified during the study period. It is expected that the proposed method can help diagnose safety problems efficiently with UAVs so that appropriate countermeasures can be proposed thereafter.
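
    A minimal sketch of the PET-based conflict flagging described above follows; the 5-second threshold and the example timestamps are illustrative, not values from the thesis.

```python
# Sketch: flagging a conflict event from post-encroachment time (PET).
# PET is the gap between the first road user leaving the shared conflict
# area and the second road user entering it. Threshold and times are illustrative.

def pet_seconds(first_exit_t: float, second_entry_t: float) -> float:
    """Time gap between the first user clearing the conflict area and the
    second user arriving at it."""
    return second_entry_t - first_exit_t

def is_conflict(pet: float, threshold_s: float = 5.0) -> bool:
    """A small non-negative PET indicates a near-miss conflict."""
    return 0.0 <= pet <= threshold_s

# Example: vehicle A clears a pixel region at t = 12.0 s, vehicle B enters at t = 13.5 s.
pet = pet_seconds(12.0, 13.5)
print(pet, is_conflict(pet))   # 1.5 True -> flagged as a conflict event
```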

    Vision-Based Intersection Monitoring: Behavior Analysis & Safety Issues

    The main objective of my dissertation is to provide a vision-based system that automatically understands traffic patterns and analyzes intersections. The system leverages existing traffic cameras to provide behavior and safety analysis of intersection participants. The first step is a robust detection and tracking system for vehicles and pedestrians in intersection videos. Appearance- and motion-based detectors are evaluated on test videos, and publicly available datasets are prepared and evaluated. Based on the evaluation results, a contextual fusion method is proposed for detecting pedestrians and a motion-based technique for vehicles. The detections are fed to the tracking system, which relies on the mutual cooperation of a bipartite graph and enhanced optical flow. The enhanced optical flow tracker handles the partial occlusion problem and cooperates with the detection module to provide long-term tracks of vehicles and pedestrians. The system evaluation shows 13% and 43% improvements in tracking of vehicles and pedestrians, respectively, when both participants are addressed by the proposed framework. Finally, the trajectories are assessed to provide a comprehensive analysis of the safety and behavior of intersection participants, including vehicles and pedestrians. Several important applications are addressed, such as turning movement counts, pedestrian crossing counts, turning speed, waiting time, queue length, and surrogate safety measurements. The contributions of the proposed methods are shown through comparison with ground truth for each mentioned application, and heat maps illustrate the benefits of the proposed system through a visual depiction of intersection usage.
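
    One common way to realize the bipartite detection-to-track association mentioned above is Hungarian assignment on an IoU cost matrix; the sketch below illustrates that generic formulation and is not necessarily the dissertation's exact method.

```python
# Sketch: bipartite assignment of detections to existing tracks using an
# IoU cost matrix and the Hungarian algorithm (generic formulation).
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Boxes as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def match(track_boxes, det_boxes, min_iou=0.3):
    """Return (track_index, detection_index) pairs whose IoU exceeds min_iou."""
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(21, 19, 31, 29), (1, 1, 11, 11)]
print(match(tracks, dets))   # [(0, 1), (1, 0)]
```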

    Development, Validation, and Integration of AI-Driven Computer Vision System and Digital-Twin System for Traffic Safety Diagnostics

    The use of data and deep learning algorithms in transportation research has become increasingly popular in recent years, and many studies rely on real-world data. Collecting accurate traffic data is crucial for analyzing traffic safety, yet traditional traffic data collection methods that rely on loop detectors and radar sensors are limited to macro-level data and may fail to capture complex driver behaviors such as lane changing and interactions between road users. With the development of new technologies such as in-vehicle cameras, Unmanned Aerial Vehicles (UAVs), and surveillance cameras, vehicle trajectory data can be extracted from recorded videos for more comprehensive and microscopic traffic safety analysis. This research presents the development, validation, and integration of three AI-driven computer vision systems for vehicle trajectory extraction and traffic safety research: 1) A.R.C.I.S, an automated framework for safety diagnosis utilizing multi-object detection and tracking algorithms for UAV videos; 2) N.M.E.D.S., a new framework able to detect and predict the key points of vehicles and provide more precise vehicle occupancy locations for traffic safety analysis; and 3) D.V.E.D.S., which applies deep learning models to extract information related to drivers' visual environment from Google Street View (GSV) images. Based on drone video collected and processed by A.R.C.I.S at various locations, CitySim, a new drone-recorded vehicle trajectory dataset that aims to facilitate safety research, was introduced. CitySim contains vehicle interaction trajectories extracted from 1,140 minutes of video recordings, providing large-scale naturalistic vehicle trajectories that cover a variety of locations, including basic freeway segments, freeway weaving segments, expressway segments, signalized intersections, stop-controlled intersections, and unique intersections without sign/signal control. The advantage of CitySim over other datasets is that it contains more critical safety events, in both quantity and severity, and provides supporting scenarios for safety-oriented research. In addition, CitySim provides digital twin features, including 3D base maps and signal timings, which enable a more comprehensive testing environment for safety research such as autonomous vehicle safety. Based on these digital twin features, we propose a Digital Twin framework for CV and pedestrian in-the-loop simulation built on CARLA-SUMO co-simulation and a Cave Automatic Virtual Environment (CAVE). The proposed framework is expected to guide future Digital Twin research, and the architecture we built can serve as a testbed for further research and development.
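
    To illustrate how a trajectory dataset of this kind supports surrogate-safety screening, the sketch below flags critical car-following events via time-to-collision (TTC); the column names and the simplified point-vehicle gap are assumptions for illustration, not CitySim's actual schema or event definition.

```python
# Sketch: screening a generic trajectory table for critical car-following
# events via time-to-collision (TTC). Column names and the point-vehicle
# gap (no vehicle lengths) are simplifying assumptions.
import pandas as pd

def ttc_flags(df: pd.DataFrame, ttc_threshold_s: float = 1.5) -> pd.DataFrame:
    """Return rows where a follower closes on its leader with TTC below the threshold."""
    flagged = []
    for _, grp in df.groupby(["frame", "lane_id"]):
        grp = grp.sort_values("position_m")
        leader = grp.shift(-1)                         # vehicle directly ahead in the lane
        gap = leader["position_m"] - grp["position_m"]
        closing = grp["speed_mps"] - leader["speed_mps"]
        ttc = gap.where(closing > 0) / closing         # NaN when not closing in
        flagged.append(grp[ttc < ttc_threshold_s])
    return pd.concat(flagged) if flagged else df.iloc[0:0]

# Example: in one frame, vehicle 2 closes on vehicle 1 with a 10 m gap at +8 m/s.
frame = pd.DataFrame({"frame": [0, 0], "lane_id": [1, 1],
                      "vehicle_id": [2, 1],
                      "position_m": [100.0, 110.0],
                      "speed_mps": [20.0, 12.0]})
print(ttc_flags(frame))   # TTC = 10 / 8 = 1.25 s < 1.5 s, so vehicle 2 is flagged
```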