23,989 research outputs found

    A Bayesian Programming Approach to Car-following Model Calibration and Validation using Limited Data

    Full text link
    Traffic simulation software is used by transportation researchers and engineers to design and evaluate changes to roadways. These simulators are driven by models of microscopic driver behavior from which macroscopic measures like flow and congestion can be derived. Many models are designed for a subset of possible traffic scenarios and roadway configurations, while others have no explicit constraints on their application. Work zones (WZs) are one scenario for which no model to date has reproduced realistic driving behavior. This makes it difficult to optimize for safety and other metrics when designing a WZ. The Federal Highway Administration commissioned the USDOT Volpe Center to develop a car-following (CF) model for use in microscopic simulators that can capture and reproduce driver behavior accurately within and outside of WZs. Volpe also performed a naturalistic driving study to collect telematics data from vehicles driven on roads with WZs for use in model calibration. During model development, Volpe researchers observed difficulties in calibrating their model, leaving them to question whether there existed flaws in their model, in the data, or in the procedure used to calibrate the model using the data. In this thesis, I use Bayesian methods for data analysis and parameter estimation to explore and, where possible, address these questions. First, I use Bayesian inference to measure the sufficiency of the size of the data set. Second, I compare the procedure and results of the genetic algorithm-based calibration performed by the Volpe researchers with those of Bayesian calibration. Third, I explore the benefits of modeling CF hierarchically. Finally, I apply what was learned in the first three phases using an established CF model, Wiedemann 99, to the probabilistic modeling of the Volpe model. Validation is performed using information criteria as an estimate of predictive accuracy. Comment: Master's thesis, 64 pages, 10 tables, 9 figures.
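    As a rough illustration of what Bayesian calibration of a car-following model involves, the sketch below fits a simple Newell-style speed-spacing rule to synthetic data with a hand-rolled Metropolis sampler. The model, the priors, and the data are placeholders chosen for brevity; they are not the Volpe model, the NDS data, or the thesis's actual inference setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" spacing/speed pairs from a Newell-style rule
# v = (s - s0) / T plus measurement noise.  s0 (jam spacing) and T (time
# headway) are hypothetical parameters to recover; this is not the Volpe
# model or the NDS data set.
true_s0, true_T, noise_sd = 7.0, 1.4, 0.8
spacing = rng.uniform(10.0, 60.0, size=200)               # metres
speed_obs = (spacing - true_s0) / true_T + rng.normal(0.0, noise_sd, 200)

def log_posterior(s0, T, sigma):
    """Gaussian likelihood plus weakly informative priors (illustrative)."""
    if T <= 0.1 or sigma <= 0.01 or s0 <= 0.0:
        return -np.inf
    resid = speed_obs - (spacing - s0) / T
    log_lik = -0.5 * np.sum((resid / sigma) ** 2) - len(resid) * np.log(sigma)
    log_prior = (-0.5 * ((s0 - 5.0) / 5.0) ** 2
                 - 0.5 * ((T - 1.5) / 1.0) ** 2 - sigma / 5.0)
    return log_lik + log_prior

# Random-walk Metropolis: crude but dependency-free.
draws, current = [], np.array([5.0, 1.0, 1.0])
current_lp = log_posterior(*current)
for _ in range(20000):
    proposal = current + rng.normal(0.0, [0.2, 0.02, 0.05])
    lp = log_posterior(*proposal)
    if np.log(rng.uniform()) < lp - current_lp:
        current, current_lp = proposal, lp
    draws.append(current)

samples = np.array(draws[5000:])                          # drop burn-in
print("posterior mean of (s0, T, sigma):", samples.mean(axis=0).round(2))
print("posterior std of (s0, T, sigma): ", samples.std(axis=0).round(2))
```

    The widths of the resulting posteriors are the kind of quantity one can inspect, with more realistic models, to judge whether a limited data set actually constrains the parameters.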

    On-the-fly adaptivity for nonlinear twoscale simulations using artificial neural networks and reduced order modeling

    Get PDF
    A multi-fidelity surrogate model for highly nonlinear multiscale problems is proposed. It is based on the introduction of two different surrogate models and an adaptive on-the-fly switching. The two concurrent surrogates are built incrementally, starting from a moderate set of evaluations of the full order model. First, a reduced order model (ROM) is generated. Using a hybrid ROM-preconditioned FE solver, additional effective stress-strain data is simulated while the number of samples is kept to a moderate level by using a dedicated, physics-guided sampling technique. Machine learning (ML) is subsequently used to build the second surrogate by means of artificial neural networks (ANNs). Different ANN architectures are explored, and the features used as inputs of the ANN are fine-tuned in order to improve the overall quality of the ML model. Additional ANN surrogates for the stress errors are generated; to this end, conservative design guidelines for error surrogates are presented by adapting the loss functions of the ANN training in pure regression or pure classification settings. The error surrogates can be used as quality indicators in order to adaptively select the appropriate (i.e. efficient yet accurate) surrogate. Two strategies for the on-the-fly switching are investigated, and a practicable and robust algorithm is proposed that eliminates relevant technical difficulties attributed to model switching. The provided algorithms and ANN design guidelines can easily be adopted for different problem settings and thereby enable generalization of the machine learning techniques used to a wide range of applications. The resulting hybrid surrogate is employed in challenging multilevel FE simulations of a three-phase composite with pseudo-plastic micro-constituents. Numerical examples highlight the performance of the proposed approach.
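    The switching idea can be sketched in a few lines: query an error surrogate first, and dispatch to the cheap ANN surrogate only when its predicted error is acceptable, otherwise fall back to the ROM. Everything below (the stand-in surrogates, the error model, the tolerance) is hypothetical and only illustrates the control flow, not the paper's trained networks or FE coupling.

```python
import numpy as np

# Stand-ins for the two surrogates of the effective stress response.
# In the paper these are a reduced order model and trained ANNs; here they
# are placeholder callables so the switching logic itself stays readable.
def rom_stress(strain):                 # slower, trusted fallback
    return 210e3 * strain / (1.0 + 5.0 * np.abs(strain))

def ann_stress(strain):                 # fast surrogate, accuracy varies
    return 210e3 * strain / (1.0 + 5.2 * np.abs(strain))

def ann_error_estimate(strain):
    """Hypothetical error surrogate predicting the ANN's stress error.
    A conservative, classification-style variant would instead return a
    plain trustworthy / not-trustworthy flag."""
    return 1e3 * np.abs(strain) ** 2

def evaluate(strain, tol=5.0):
    """On-the-fly switching: use the ANN only where its predicted error
    is below tol, otherwise fall back to the ROM."""
    if ann_error_estimate(strain) < tol:
        return ann_stress(strain), "ANN"
    return rom_stress(strain), "ROM"

for eps in (0.001, 0.02, 0.08):
    stress, which = evaluate(eps)
    print(f"strain={eps:.3f}  stress={stress:9.2f}  surrogate={which}")
```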

    Issues and concerns of microscopic calibration process at different network levels : case study of Pacific Motorway

    Get PDF
    The calibration process in micro-simulation is extremely complicated. The difficulties are more pronounced when the process involves fitting both aggregate and disaggregate parameters, e.g. travel time and headway. Current practice in calibration operates mostly at the aggregate level, for example travel time comparison. Such practices are popular for assessing network performance. Though these applications are significant, there is another stream of micro-simulation calibration at the disaggregate level. This study focuses on such a micro-calibration exercise, which is key to better understanding motorway traffic risk levels and to managing variable speed limit (VSL) and ramp metering (RM) techniques. A selected section of the Pacific Motorway in Brisbane is used as a case study. The discussion primarily covers the critical issues encountered during the parameter adjustment exercise (e.g. vehicular and driving behaviour parameters) with reference to key traffic performance indicators such as speed, lane distribution and headway at specific motorway points. The endeavour is to highlight the utility and implications of such disaggregate-level simulation for improved traffic prediction studies. Calibrating for points, in comparison to calibrating the whole network, is also briefly addressed to examine critical issues such as the suitability of local calibration at a global scale. The paper will be of interest to transport professionals in Australia and New Zealand, where micro-simulation, in particular at the point level, is still a comparatively unexplored territory in motorway management.
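    To make the aggregate/disaggregate distinction concrete, the snippet below contrasts a network-style check (mean speed error) with a point-level check (a two-sample Kolmogorov-Smirnov distance between observed and simulated headway distributions) on synthetic detector data. The numbers are illustrative and are not Pacific Motorway measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic observed vs simulated data at one motorway detector.
obs_speed = rng.normal(95.0, 8.0, 600)                 # km/h
sim_speed = rng.normal(98.0, 10.0, 600)
obs_headway = rng.lognormal(np.log(2.0), 0.45, 600)    # seconds
sim_headway = rng.lognormal(np.log(2.3), 0.55, 600)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov distance between empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

# Aggregate check (network-level style): mean speed error.
print(f"mean speed error: {sim_speed.mean() - obs_speed.mean():.1f} km/h")
# Disaggregate check (point-level style): headway distribution mismatch.
print(f"headway KS distance: {ks_statistic(obs_headway, sim_headway):.3f}")
```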

    Calibration and Validation of a Shared Space Model: A Case Study

    Get PDF
    Shared space is an innovative streetscape design that seeks minimum separation between vehicle traffic and pedestrians. Urban design is moving toward space sharing as a means of increasing the community texture of street surroundings. Its unique features aim to balance priorities and allow cars and pedestrians to coexist harmoniously without the need to dictate behavior. There is, however, a need for a simulation tool to model future shared space schemes and to help judge whether they might represent suitable alternatives to traditional street layouts. This paper builds on the authors’ previously published work in which a shared space microscopic mixed traffic model based on the social force model (SFM) was presented, calibrated, and evaluated with data from the shared space link typology of New Road in Brighton, United Kingdom. Here, the goal is to explore the transferability of the authors’ model to a similar shared space typology and investigate the effect of flow and ratio of traffic modes. Data recorded from the shared space scheme of Exhibition Road, London, were collected and analyzed. The flow and speed of cars and segregation between pedestrians and cars are greater on Exhibition Road than on New Road. The rule-based SFM for shared space modeling is calibrated and validated with the real data. On the basis of the results, it can be concluded that shared space schemes are context dependent and that factors such as the infrastructural design of the environment and the flow and speed of pedestrians and vehicles affect the willingness to share space.
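    For readers unfamiliar with the underlying model, a minimal social force update is sketched below: each agent is driven toward its desired velocity and repelled exponentially by nearby agents. The parameter values and the single-vehicle scene are illustrative only; they are not the calibrated New Road or Exhibition Road models.

```python
import numpy as np

def social_force_step(pos, vel, goal, others, dt=0.1,
                      v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One explicit-Euler step of a basic social force model for a pedestrian.
    pos, vel, goal: 2D arrays; others: (n, 2) positions of nearby agents
    (pedestrians or vehicles).  v0, tau, A, B are illustrative constants."""
    # Driving force: relax toward the desired speed v0 in the goal direction.
    direction = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    force = (v0 * direction - vel) / tau
    # Repulsive forces: exponential in the distance to each other agent.
    for other in others:
        diff = pos - other
        dist = np.linalg.norm(diff) + 1e-9
        force += A * np.exp(-dist / B) * diff / dist
    vel = vel + dt * force
    pos = pos + dt * vel
    return pos, vel

# Tiny usage example: a pedestrian heading across while one other agent is near.
p, v = np.array([0.0, 0.0]), np.array([0.0, 0.0])
for _ in range(50):
    p, v = social_force_step(p, v, goal=np.array([10.0, 0.0]),
                             others=np.array([[3.0, 0.5]]))
print("position after 5 s:", np.round(p, 2))
```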

    Simulating the Impact of Traffic Calming Strategies

    Get PDF
    This study assessed the impact of traffic calming measures on the speed, travel times and capacity of residential roadways. The study focused on two types of speed tables, speed humps and a raised crosswalk. A moving test vehicle equipped with GPS receivers, allowing speeds to be calculated and speed profiles to be determined at 1 s intervals, was used. A multi-regime model was used to provide the best fit using steady-state equations; hence the corresponding speed-flow relationships were established for different calming scenarios. It was found that the capacities of residential roadway segments with calming features ranged from 640 to 730 vph. However, the capacity varied with the spacing of the calming features: spacing speed tables 1050 ft apart caused a 23% reduction in capacity, while 350-ft spacing reduced capacity by 32%. Analysis showed a linear decrease in capacity of approximately 20 vphpl, 37 vphpl and 34 vphpl when 17-ft-wide speed tables were spaced 350 ft, 700 ft, and 1050 ft apart, respectively. For speed hump calming features, spacing humps at 350 ft reduced capacity by about 33%, while a 700-ft spacing reduced capacity by 30%. The study concludes that speed tables are slightly better than speed humps in terms of preserving roadway capacity. Also, traffic calming measures significantly reduce vehicle speeds, and it is best to keep spacing at 630 ft or less to achieve desirable crossing speeds of 15 mph or less, especially on a street with schools nearby. A microscopic simulation model was developed to replicate the driving behavior of traffic on road-diet streets to analyze the influence of bus stops on traffic flow and safety. The safety impacts were assessed using surrogate measures of safety (SSAM). The study found that the presence of a bus stop with 10, 20 and 30 s dwell times reduces traffic speeds by almost 9.5%, 12%, and 20%, respectively, when a flow of 300 veh/hr is considered. A comparison of the speed reduction on an 11-ft-wide lane of a road diet caused by curbside stops versus bus bays was conducted for a dwell time with a mean of 30 s and a standard deviation of 5 s. Results showed that a bus stop bay with the stated dwell time causes an approximately 8% speed reduction at a flow level of about 1400 vph. Analysis of trajectories around bus stop locations showed that, at 0, 25, 50, 75, 100, 125, 150, and 175 feet from the intersection, the number of conflicts is affected by the presence and location of a curbside stop on a road-diet segment.
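    The capacity estimates above come from fitted speed-flow relationships. As a simplified illustration, the sketch below fits a single-regime Greenshields relation to synthetic speed-density data and reads off the capacity from the steady-state relation q = k v; the study itself used a multi-regime model, so this is only a stand-in for the procedure, and all values are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic speed-density observations for a calmed residential segment.
density = rng.uniform(5.0, 120.0, 300)                 # veh/mile
free_speed, jam_density = 28.0, 140.0                  # mph, veh/mile (illustrative)
speed = free_speed * (1.0 - density / jam_density) + rng.normal(0.0, 1.5, 300)

# Least-squares fit of v = a + b*k gives vf = a and kj = -a/b.
X = np.vstack([np.ones_like(density), density]).T
(a, b), *_ = np.linalg.lstsq(X, speed, rcond=None)
vf_hat, kj_hat = a, -a / b

# Greenshields capacity: q = k*v is maximised at k = kj/2, so q_max = vf*kj/4.
q_max = vf_hat * kj_hat / 4.0
print(f"fitted vf = {vf_hat:.1f} mph, kj = {kj_hat:.0f} veh/mile, "
      f"capacity about {q_max:.0f} vph")
```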

    A Bayesian Programming Approach to Car-following Model Calibration and Validation using Limited Data

    Get PDF
    Traffic simulation software is used by transportation researchers and engineers to design and evaluate changes to roadway networks. Underlying these simulators are mathematical models of microscopic driver behavior from which macroscopic measures of flow and congestion can be recovered. Many models are intended to apply to only a subset of possible traffic scenarios and roadway configurations, while others do not have any explicit constraint on their applicability. Work zones on highways are one scenario for which no model invented to date has been shown to accurately reproduce realistic driving behavior. This makes it difficult to optimize for safety and other metrics when designing a work zone. The Federal Highway Administration (FHWA) has commissioned the Volpe National Transportation Systems Center (Volpe) to develop a new car-following model, the Work Zone Driver Model (WZDM), for use in microscopic simulators that captures and reproduces driver behavior equally well within and outside of work zones. Volpe also performed a naturalistic driving study (NDS) to collect telematics data from vehicles driven on highways and urban roads that included work zones for use in model calibration. The data variables are relevant to the car-following model’s prediction task. During model development, Volpe researchers observed difficulties in calibrating their model, leaving them to question whether there existed flaws in their model, in the data, or in the procedure used to calibrate the model using the data. In this thesis, I use Bayesian methods for data analysis and parameter estimation to explore and, where possible, address these questions. First, I use Bayesian inference to measure the sufficiency of the size of the NDS data set. Second, I compare the procedure and results of the genetic algorithm-based calibration performed by the Volpe researchers with those of Bayesian calibration. Third, I explore the benefits of modeling car-following hierarchically. Finally, I apply what was learned in the first three phases using an established car-following model to the probabilistic modeling of WZDM. Validation is performed using information criteria as an estimate of predictive accuracy. A third model used for comparison with WZDM in the simulator, Wiedemann ’99, is also modeled probabilistically.
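    One way to see why hierarchical modeling helps with limited data is the partial-pooling effect: per-driver parameter estimates are shrunk toward the population mean in proportion to their uncertainty. The snippet below demonstrates this with an empirical-Bayes approximation on synthetic time headways; it is a simplified stand-in for the fully Bayesian hierarchical models discussed in the thesis, and every value in it is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-driver time headways: each driver has an individual mean drawn
# from a population distribution, which is the structure a hierarchical
# car-following model exploits (drivers are similar but not identical).
n_drivers, obs_per_driver = 15, 12
pop_mean, pop_sd, within_sd = 1.5, 0.3, 0.4
driver_means = rng.normal(pop_mean, pop_sd, n_drivers)
data = rng.normal(driver_means[:, None], within_sd, (n_drivers, obs_per_driver))

# No pooling: each driver's headway parameter is just that driver's sample mean.
unpooled = data.mean(axis=1)

# Partial pooling (empirical-Bayes shrinkage): noisy per-driver estimates are
# pulled toward the population mean in proportion to their uncertainty.
grand_mean = unpooled.mean()
within_var = data.var(axis=1, ddof=1).mean() / obs_per_driver
between_var = max(unpooled.var(ddof=1) - within_var, 1e-9)
weight = between_var / (between_var + within_var)
partially_pooled = weight * unpooled + (1.0 - weight) * grand_mean

print("RMSE, no pooling:     ", np.sqrt(np.mean((unpooled - driver_means) ** 2)))
print("RMSE, partial pooling:", np.sqrt(np.mean((partially_pooled - driver_means) ** 2)))
```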

    A formulation of the relaxation phenomenon for lane changing dynamics in an arbitrary car following model

    Full text link
    Lane changing dynamics are an important part of traffic microsimulation and are vital for modeling weaving sections and merge bottlenecks. However, there is often much more emphasis placed on car following and gap acceptance models, whereas lane changing dynamics such as tactical, cooperation, and relaxation models receive comparatively little attention. This paper develops a general relaxation model which can be applied to an arbitrary parametric or nonparametric microsimulation model. The relaxation model modifies car following dynamics after a lane change, when vehicles can be far from equilibrium. Relaxation prevents car following models from reacting too strongly to the changes in space headway caused by lane changing, leading to more accurate and realistic simulated trajectories. We also show that relaxation is necessary for correctly simulating traffic breakdown with realistic values of capacity drop.
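    The mechanism can be wrapped around any car-following model: after a lane change, offset the perceived space headway by the jump the maneuver caused and let that offset decay over a relaxation time. The sketch below does this around the Intelligent Driver Model purely as a familiar stand-in; the decay law, the parameter values, and the IDM choice are assumptions, not the specific formulation derived in the paper.

```python
import numpy as np

def idm_accel(v, dv, s, v0=33.0, T=1.0, a=1.5, b=2.0, s0=2.0):
    """Any car-following model works here; the IDM is just a familiar stand-in."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

class RelaxedFollower:
    """Wraps an arbitrary CF model with a simple relaxation mechanism:
    after a lane change the perceived headway is offset by the jump in
    spacing, and the offset decays linearly to zero over relax_time seconds.
    Generic sketch only, not the paper's formulation."""
    def __init__(self, cf_model, relax_time=15.0):
        self.cf_model, self.relax_time = cf_model, relax_time
        self.offset, self.rate = 0.0, 0.0

    def notify_lane_change(self, old_gap, new_gap):
        # Start from the old gap and relax toward the new one.
        self.offset = old_gap - new_gap
        self.rate = self.offset / self.relax_time

    def accel(self, v, dv, s, dt):
        s_perceived = s + self.offset          # soften the sudden gap change
        if abs(self.offset) > abs(self.rate) * dt:
            self.offset -= self.rate * dt      # decay the offset toward zero
        else:
            self.offset = 0.0
        return self.cf_model(v, dv, s_perceived)

# A cut-in halves the gap from 30 m to 15 m; relaxation damps the response.
follower = RelaxedFollower(idm_accel)
follower.notify_lane_change(old_gap=30.0, new_gap=15.0)
print("accel without relaxation:", round(idm_accel(25.0, 0.0, 15.0), 2))
print("accel with relaxation:   ", round(follower.accel(25.0, 0.0, 15.0, dt=0.5), 2))
```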

    A Framework for Developing and Integrating Effective Routing Strategies Within the Emergency Management Decision-Support System, Research Report 11-12

    Get PDF
    This report describes the modeling, calibration, and validation of a VISSIM traffic-flow simulation of the San José, California, downtown network and examines various evacuation scenarios and first-responder routings to assess strategies that would be effective in the event of a no-notice disaster. The modeled network required a large amount of data on network geometry, signal timings, signal coordination schemes, and turning-movement volumes. Turning-movement counts at intersections were used to validate the network with the empirical formula-based measure known as the GEH statistic. Once the base network was tested and validated, various scenarios were modeled to estimate evacuation and emergency vehicle arrival times. Based on these scenarios, a variety of emergency plans for San José’s downtown traffic circulation were tested and validated. The model could be used to evaluate scenarios in other communities by entering their community-specific data.
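    The GEH statistic mentioned above combines absolute and relative error in one number, GEH = sqrt(2 (M - C)^2 / (M + C)) for a modeled volume M and an observed count C, and GEH < 5 is commonly taken as acceptable for the large majority of count locations. A minimal check over a handful of invented turning-movement volumes:

```python
import math

def geh(modeled, counted):
    """GEH statistic comparing a modeled hourly volume with an observed count."""
    return math.sqrt(2.0 * (modeled - counted) ** 2 / (modeled + counted))

# (simulated, observed) turning-movement volumes in veh/h, purely illustrative.
pairs = [(410, 385), (1210, 1295), (95, 170), (760, 742)]
scores = [geh(m, c) for m, c in pairs]
for (m, c), g in zip(pairs, scores):
    print(f"modeled {m:4d}  observed {c:4d}  GEH = {g:.2f}")

# A common acceptance rule is GEH < 5 at most locations.
share_ok = sum(g < 5.0 for g in scores) / len(scores)
print(f"share of movements with GEH < 5: {share_ok:.0%}")
```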