602 research outputs found

    New Framework and Decision Support Tool to Warrant Detour Operations During Freeway Corridor Incident Management

    As reported in the literature, the mobility and reliability of highway systems in the United States have been significantly undermined by traffic delays on freeway corridors due to non-recurrent congestion. Many of these delays are caused by reduced capacity and overwhelming demand on critical metropolitan corridors, coupled with long incident durations. In most scenarios, if proper detour strategies could be implemented in time, motorists could circumvent the congested segments by detouring through parallel arterials, which would significantly improve the mobility of all vehicles in the corridor system. Nevertheless, before implementing any detour strategy, traffic managers need a set of well-justified warrants, as detour operations usually demand a substantial amount of resources and manpower. To contend with these issues, this study focuses on developing a new multi-criteria framework along with an advanced, computation-friendly tool for traffic managers to decide whether and when to implement corridor detour operations. The expected contributions of this study are:
    * Proposing a well-calibrated corridor simulation network and a comprehensive set of experimental scenarios that account for the many potential factors affecting a traffic manager's decision-making process and ensure the effectiveness of the proposed detour warrant tool;
    * Developing detour decision models, including a two-choice model and a multi-choice model, based on the optimal detour traffic flow rates generated for each scenario by a diversion control model, to allow responsible traffic managers to make the best detour decisions during real-time incident management; and
    * Estimating the resulting benefits for comparison with the operational costs, using the output from the diversion control model, to further validate the developed detour decision model from an overall societal perspective.
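    A two-choice detour decision model of the kind described above can be framed as a binary classifier trained on the output of the diversion control model. The following is a minimal sketch, assuming hypothetical incident features (duration, lanes blocked, mainline demand) and synthetic labels standing in for the optimal detour decisions; it is an illustration, not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical features per incident scenario:
# [incident duration (min), lanes blocked, mainline demand (veh/h/lane)]
X = rng.uniform([10, 0, 800], [120, 3, 2200], size=(500, 3))

# Synthetic "detour warranted" labels, standing in for the optimal
# decisions a diversion control model would generate per scenario.
severity = 0.02 * X[:, 0] + 0.8 * X[:, 1] + 0.002 * X[:, 2]
y = (severity > severity.mean()).astype(int)

# Two-choice (binary) decision model.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Probability that a detour is warranted for a new incident:
# 90 min duration, 2 lanes blocked, 2000 veh/h/lane demand.
p_detour = model.predict_proba([[90, 2, 2000]])[0, 1]
```

    A multi-choice variant would follow the same pattern with more than two decision classes (e.g. no detour, partial diversion, full diversion).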

    Multi-level Safety Performance Functions For High Speed Facilities

    High speed facilities are considered the backbone of any successful transportation system; Interstates, freeways, and expressways carry the majority of daily trips on the transportation network. Although these roads are considered relatively the safest among all road types, they still experience many crashes, many of them severe, which not only affect human lives but can also have tremendous economic and social impacts. These facts signify the necessity of enhancing the safety of these high speed facilities to ensure better and more efficient operation. Safety problems can be assessed through several approaches that help mitigate crash risk on a long- and short-term basis. Therefore, the main focus of the research in this dissertation is to provide a risk assessment framework to promote safety and enhance mobility on freeways and expressways. Multi-level Safety Performance Functions (SPFs) were developed at the aggregate level using historical crash data and the corresponding exposure and risk factors to identify and rank sites with promise (hot spots). Additionally, SPFs were developed at the disaggregate level utilizing real-time weather data collected from meteorological stations located at the freeway section, as well as traffic flow parameters collected from different detection systems such as Automatic Vehicle Identification (AVI) and Remote Traffic Microwave Sensors (RTMS). These disaggregate SPFs can identify real-time risks due to turbulent traffic conditions and their interactions with other risk factors. In this study, two main datasets were obtained from two different regions, comprising historical crash data, roadway geometric characteristics, aggregate weather and traffic parameters, as well as real-time weather and traffic data.
    At the aggregate level, Bayesian hierarchical models with spatial and random effects were compared to Poisson models to examine the safety effects of roadway geometrics on crash occurrence along freeway sections that feature mountainous terrain and adverse weather. At the disaggregate level, a main framework for a proactive safety management system was provided, using traffic data collected from AVI and RTMS, real-time weather, and geometric characteristics. Different statistical techniques were implemented, ranging from classical frequentist classification approaches, which explain the relationship between an event (crash) occurring at a given time and a set of real-time risk factors, to more advanced models. A Bayesian updating approach, which refines beliefs about parameter behavior using prior knowledge, was implemented to achieve more reliable estimation. A relatively recent and promising machine learning technique, Stochastic Gradient Boosting, was also utilized to calibrate several models using different datasets collected from mixed detection systems and real-time meteorological stations. The results from this study suggest that both levels of analysis are important: the aggregate level helps provide a good understanding of different safety problems and supports developing policies and countermeasures to reduce the total number of crashes, while at the disaggregate level, real-time safety functions help move toward a more proactive traffic management system that will not only enhance the performance of high speed facilities and the whole traffic network but also provide safer mobility for people and goods. In general, the proposed multi-level analyses are useful in providing roadway authorities with detailed information on where countermeasures must be implemented and when resources should be devoted.
    The study also proves that traffic data collected from different detection systems can be a useful asset that should be utilized appropriately, not only to alleviate traffic congestion but also to mitigate increased safety risks. The overall proposed framework can maximize the benefit of existing archived data for freeway authorities as well as for road users.
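    Of the techniques named above, Stochastic Gradient Boosting is the most direct to sketch: scikit-learn's GradientBoostingClassifier with subsample < 1 implements Friedman's stochastic variant. The example below is a hedged illustration using synthetic traffic and weather features (speed variance, visibility, precipitation) as stand-ins for the study's real-time data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Hypothetical real-time risk factors: average speed (mph),
# speed variance, visibility (mi), precipitation indicator.
speed = rng.uniform(20, 75, n)
speed_var = rng.uniform(0, 120, n)
visibility = rng.uniform(0.1, 10, n)
precip = rng.integers(0, 2, n)
X = np.column_stack([speed, speed_var, visibility, precip])

# Synthetic crash indicator: risk rises with traffic turbulence
# (speed variance) and worsening weather.
logit = -3 + 0.02 * speed_var - 0.2 * visibility + 1.0 * precip
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# subsample < 1.0 makes this *stochastic* gradient boosting.
gbm = GradientBoostingClassifier(n_estimators=200, subsample=0.7,
                                 random_state=0).fit(X_tr, y_tr)
risk = gbm.predict_proba(X_te)[:, 1]  # real-time crash risk scores
```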

    Utilizing Import Vector Machines to Identify Dangerous Pro-active Traffic Conditions

    Traffic accidents have become a severe issue in metropolises as traffic volumes grow. This paper explores the theory and application of a recently developed machine learning technique, namely Import Vector Machines (IVMs), in real-time crash risk analysis, an active research topic aimed at reducing traffic accidents. Historical crash data and corresponding traffic data from the Shanghai Urban Expressway System were employed and matched. Traffic conditions are labelled as dangerous (i.e. probably leading to a crash) or safe (i.e. a normal traffic condition) based on 5-minute measurements of average speed, volume, and occupancy. The IVM algorithm is trained to build the classifier, and its performance is compared to the popular and successfully applied technique of Support Vector Machines (SVMs). The main findings indicate that IVMs could successfully be employed in the real-time identification of dangerous pro-active traffic conditions. Furthermore, similar to the "support points" of the SVM, the IVM model uses only a fraction of the training data to index kernel basis functions, typically a much smaller fraction than the SVM, while its classification rates are similar to those of SVMs. This gives the IVM a computational advantage over the SVM, especially when the size of the training data set is large. Comment: 6 pages, 3 figures; 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC).
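    IVM implementations are not widely available in mainstream libraries, but the SVM baseline the paper compares against can be sketched with scikit-learn. The features (5-minute average speed, volume, occupancy) mirror those described above, while the data and the labelling rule are synthetic placeholders; the support-vector fraction computed at the end is the quantity the IVM is reported to shrink further:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 1000

# Hypothetical 5-minute aggregates: average speed (km/h),
# volume (veh), occupancy (%).
X = rng.uniform([10, 50, 2], [90, 400, 60], size=(n, 3))

# Synthetic rule: low speed + high occupancy ~ "dangerous" pre-crash state.
y = ((X[:, 0] < 40) & (X[:, 2] > 30)).astype(int)

# RBF-kernel SVM as the comparison baseline.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)

# Fraction of training points kept as support vectors -- the kernel
# basis functions the IVM indexes with a much smaller fraction.
sv_fraction = svm.named_steps["svc"].support_vectors_.shape[0] / n
```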

    A review of travel time estimation and forecasting for advanced traveler information systems

    Providing online travel time information to commuters has become an important issue for Advanced Traveler Information Systems and Route Guidance Systems in recent years, due to the increasing traffic volume and congestion in road networks. Travel time is one of the most useful traffic variables because it is more intuitive than other traffic variables such as flow, occupancy, or density, and it supports travelers' decision making. The aim of this paper is to present a global view of the literature on the modeling of travel time, introducing crucial concepts and giving a thorough classification of the existing techniques. Most of the attention will focus on travel time estimation and travel time prediction, which are generally not presented together. The main goals of these models, the study areas, and the methodologies used to carry out these tasks will be further explored and categorized.
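    The simplest form of travel time estimation surveyed by such reviews is the instantaneous method: sum each segment's length divided by its currently measured speed. A minimal sketch with made-up segment lengths and detector speeds:

```python
import numpy as np

# Hypothetical corridor: segment lengths (km) and current average
# speeds (km/h) from, e.g., loop detectors or probe vehicles.
lengths = np.array([1.2, 0.8, 2.5, 1.5])
speeds = np.array([80.0, 35.0, 60.0, 90.0])

# Instantaneous travel time estimate: sum of per-segment times, in minutes.
travel_time_min = float(np.sum(lengths / speeds) * 60.0)
```

    Prediction, by contrast, would replace the current speeds with forecast speeds for the future departure time.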

    Robust adaptive synchronization of a class of uncertain chaotic systems with unknown time-delay

    The pavement is a complex structure that is influenced by various environmental and loading conditions. Regular assessment of pavement performance is essential for road network maintenance. The International Roughness Index (IRI) and Pavement Condition Index (PCI) are well-known indices used for smoothness and surface condition assessment, respectively. Machine learning techniques have recently enabled significant advances in pavement engineering. This paper presents a novel roughness-distress study using random forest (RF). After determining the PCI and IRI values for the sample units, the PCI prediction process is advanced using RF and random forest trained with a genetic algorithm (RF-GA). The models are validated using the correlation coefficient (CC), scatter index (SI), and Willmott's index of agreement (WI) criteria. For the RF method, the values of these three parameters were −0.177, 0.296, and 0.281, respectively, whereas for the RF-GA method, values of −0.031, 0.238, and 0.297 were obtained. This paper aims to fill the gaps identified in the literature and help pavement engineers overcome the challenges of conventional pavement maintenance systems.
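    The three validation criteria named above have standard definitions: CC is the Pearson correlation coefficient, SI is the RMSE normalised by the observed mean, and WI is Willmott's index of agreement. A small sketch of those formulas, with made-up observed/predicted PCI values for illustration (not the paper's data):

```python
import numpy as np

def cc(obs, pred):
    """Pearson correlation coefficient between observed and predicted."""
    return float(np.corrcoef(obs, pred)[0, 1])

def si(obs, pred):
    """Scatter index: RMSE normalised by the observed mean."""
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    return float(rmse / np.mean(obs))

def wi(obs, pred):
    """Willmott's index of agreement (1 = perfect agreement)."""
    o_bar = np.mean(obs)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - o_bar) + np.abs(obs - o_bar)) ** 2)
    return float(1.0 - num / den)

# Hypothetical observed vs. predicted PCI values for a few sample units.
obs = np.array([72.0, 85.0, 60.0, 90.0, 78.0])
pred = np.array([70.0, 83.0, 65.0, 88.0, 80.0])
```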

    Data Driven Approach To Characterize And Forecast The Impact Of Freeway Work Zones On Mobility Using Probe Vehicle Data

    The presence of work zones on freeways causes traffic congestion and creates hazardous conditions for commuters and construction workers. Traffic congestion resulting from work zones causes negative impacts on traffic mobility (delay), the environment (vehicle emissions), and safety, as stopped or slowed vehicles become vulnerable to rear-end collisions. Addressing these concerns, a data-driven approach was utilized to develop methodologies to measure, predict, and characterize the impact work zones have on Michigan interstates. This study used probe vehicle data, collected from GPS devices in vehicles, as the primary source of mobility data. This data was used to fulfill three objectives: develop a systematic approach to characterize work zone mobility, predict the impact of future work zones, and develop a business intelligence support system to plan future work zones. Using probe vehicle data, a performance measurement framework was developed to characterize the spatiotemporal impact of work zones using various data visualization techniques. This framework also included summary statistics of mobility performance for each individual work zone. The result was the Work Zone Mobility Audit (WZMA), a template that summarizes these metrics into a two-page report that can be utilized for further monitoring and diagnostics of the mobility impact. A machine learning framework was developed to learn from historical projects and predict the spatiotemporal impact of future work zones on mobility. This approach utilized Random Forest, XGBoost, and Artificial Neural Network classification algorithms to determine the traffic speed range for highway segments during freeway lane closures. This framework used a distribution of speed for each freeway segment, as a substitute for hourly traffic volume, and was able to predict speed ranges for future scenarios with up to 85% accuracy.
    The ANN model reached up to 88% accuracy in predicting queueing conditions (speed less than 20 mph), which could be utilized to enhance queue warning systems and improve overall safety and mobility. Mobility data for more than 1,700 historical work zone projects in the state of Michigan were assessed to provide a comprehensive overview of the overall impact and of the significant factors affecting mobility. A Business Intelligence (BI) approach was utilized to analyze these work zones and present actionable information that helps work zone mobility executives make informed decisions while planning future work zones. The Pareto principle was utilized to identify the significant projects that accounted for the majority of the overall impact. The Chi-square Automatic Interaction Detector (CHAID) algorithm was also applied to discover the relationships between variables affecting mobility. This statistical method built several decision trees that can be used to determine the best, worst, and expected consequences of different work zone strategies.
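    The speed-range classification step can be illustrated with one of the three algorithms mentioned, Random Forest. Everything below — the features, the speed model, and the class bins — is a synthetic stand-in for the Michigan probe vehicle data, not the study's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 3000

# Hypothetical work-zone features: hour of day, lanes closed (of 3),
# historical free-flow speed (mph), weekend indicator.
hour = rng.integers(0, 24, n)
lanes_closed = rng.integers(1, 3, n)
ff_speed = rng.uniform(55, 75, n)
weekend = rng.integers(0, 2, n)
X = np.column_stack([hour, lanes_closed, ff_speed, weekend])

# Synthetic speed: drops with more closures and during weekday peaks.
peak = ((hour >= 7) & (hour <= 9)) | ((hour >= 16) & (hour <= 18))
speed = (ff_speed - 15 * lanes_closed
         - 20 * (peak & (weekend == 0))
         + rng.normal(0, 5, n))

# Discretise into speed-range classes (< 20 mph = queueing).
y = np.digitize(speed, bins=[20, 40, 60])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
accuracy = rf.score(X_te, y_te)
```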

    Design and validation of novel methods for long-term road traffic forecasting

    132 p. Road traffic management is a critical aspect of the design and planning of complex urban transport networks, for which vehicle flow forecasting is an essential component. As a testimony to its paramount relevance in transport planning and logistics, thousands of scientific research works have covered the traffic forecasting topic during the last 50 years. In the beginning, most approaches relied on autoregressive models and other analysis methods suited to time series data. During the last two decades, the development of new technology, platforms, and techniques for massive data processing under the Big Data umbrella, the availability of data from multiple sources fostered by the Open Data philosophy, and an ever-growing need of decision makers for accurate traffic predictions have shifted the spotlight to data-driven procedures. Even in this convenient context, with an abundance of open data to experiment with and advanced techniques to exploit them, most predictive models reported in the literature aim for short-term forecasts, and their performance degrades as the prediction horizon increases. Long-term forecasting strategies are scarcer, and are commonly based on the detection of patterns and the assignment of new observations to them. These approaches can perform reasonably well unless an unexpected event provokes unpredictable changes, or the allocation to a pattern is inaccurate. The main core of the work in this Thesis has revolved around data-driven traffic forecasting, ultimately pursuing long-term forecasts. This has broadly entailed a deep analysis and understanding of the state of the art, and dealing with incompleteness of data, among other lesser issues. Besides, the second part of this dissertation presents an application outlook of the developed techniques, providing methods and unexpected insights into the local impact of traffic on pollution. The obtained results reveal that the impact of vehicular emissions on pollution levels is overshadowed.
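    The pattern detection-and-assignment strategy described above can be sketched as clustering historical daily traffic profiles and forecasting a future day with its nearest pattern's centroid. The daily flow profiles below are synthetic (a hypothetical weekday double-peak and a flatter weekend shape), purely to illustrate the mechanism:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
hours = np.arange(24)

# Two hypothetical latent patterns: a weekday double peak and a
# flatter weekend profile (hourly flows, veh/h).
wd_profile = (400 + 300 * np.exp(-0.5 * ((hours - 8) / 2) ** 2)
              + 350 * np.exp(-0.5 * ((hours - 18) / 2) ** 2))
we_profile = 300 + 150 * np.exp(-0.5 * ((hours - 14) / 4) ** 2)

# Historical daily profiles: noisy copies of each pattern.
days = np.vstack([wd_profile + rng.normal(0, 20, (50, 24)),
                  we_profile + rng.normal(0, 20, (20, 24))])

# Pattern detection: cluster the historical profiles.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(days)

# Long-term forecast for a future (weekday-like) day: assign it to the
# nearest pattern and use that cluster's centroid as the forecast.
new_day = wd_profile + rng.normal(0, 20, 24)
label = km.predict(new_day[None, :])[0]
forecast = km.cluster_centers_[label]
```

    An unexpected event (the failure mode noted above) corresponds to a day whose profile matches neither centroid well.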

    Evaluation of machine learning algorithms as predictive tools in road safety analysis

    The Highway Safety Manual (HSM)'s road safety management process (RSMP) represents the state-of-the-practice procedure that transportation professionals employ to monitor and improve safety on existing roadway sites. The RSMP requires the development of safety performance functions (SPFs), the key regression tools in the RSMP used to predict crash frequency given a set of roadway and traffic factors. Although developing SPFs using traditional regression modeling has been proven to yield reliable tools for road safety predictive analytics, some limitations and constraints have been highlighted in the literature, such as the assumption of a probability distribution, the selection of a pre-defined functional form, possible correlation between independent variables, and possible transferability issues. An alternative to traditional regression models as predictive tools is the use of Machine Learning (ML) algorithms. Although ML provides a new modeling technique, it still has built-in assumptions, and its performance in collision frequency modeling needs to be studied. This research 1) compares the prediction performance of three well-known ML algorithms, i.e., Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF), to traditional SPFs, 2) conducts a sensitivity analysis and compares the ML algorithms with the functional form of the negative binomial (NB) model as the default traditional regression modeling technique, and 3) applies and validates the ML algorithms in network screening (hotspot identification), which is the first step in the RSMP. To achieve these objectives, a dataset of urban signalized and unsignalized intersections from two major municipalities in Saskatchewan (Canada) was considered as a case study. The results showed that the ML prediction accuracies are comparable with that of the NB model.
    Moreover, the sensitivity analysis showed that the ML algorithms' predictions are affected mostly by changes in traffic volume rather than by other roadway factors. Lastly, the consistency of the ML-based measures in identifying hotspots appeared comparable to that of SPF-based measures, e.g., the excess (predicted and expected) average crash frequency. Overall, the results of this research support the use of ML as a predictive tool in network screening, providing transportation practitioners with an alternative modeling approach to identify collision-prone locations where countermeasures aimed at reducing collision frequency at urban intersections can be installed.
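    The ML-versus-SPF comparison can be sketched as follows. A Poisson GLM on log traffic volumes stands in for the NB model here (the NB form adds an overdispersion parameter on top of the same log-linear structure), and the intersection data are synthetic, generated from a typical SPF functional form mu = exp(b0) * AADT_major^b1 * AADT_minor^b2 with made-up coefficients:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 1500

# Hypothetical intersections: major/minor approach AADT, signalized flag.
aadt_major = rng.uniform(5000, 40000, n)
aadt_minor = rng.uniform(500, 8000, n)
signal = rng.integers(0, 2, n)
X = np.column_stack([aadt_major, aadt_minor, signal])

# Synthetic crash counts from a Poisson process with an SPF-style mean.
mu = np.exp(-7.5) * aadt_major ** 0.6 * aadt_minor ** 0.3
y = rng.poisson(mu)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Traditional SPF stand-in: Poisson GLM on log traffic volumes.
glm = PoissonRegressor().fit(np.log(X_tr[:, :2]), y_tr)

# Machine-learning alternative.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Compare out-of-sample mean absolute error.
glm_mae = float(np.mean(np.abs(glm.predict(np.log(X_te[:, :2])) - y_te)))
rf_mae = float(np.mean(np.abs(rf.predict(X_te) - y_te)))
```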
