
    Space-time modeling of traffic flow

    A key concern in transportation planning and traffic management is the ability to forecast traffic flows on a street network. Traffic flow forecasts can be transformed into travel time estimates, which in turn serve as input to travel demand models, dynamic route guidance and congestion management procedures. A variety of mathematical techniques have been proposed for modeling traffic flow on a street network. Briefly, the most widely used theories are:
    - kinetic models based on partial differential equations that describe waves of different traffic densities,
    - deterministic models that use nonlinear equations for the estimation of different car routes,
    - large-scale simulation models such as cellular automata, and
    - stochastic modeling of traffic density at distinct points in space.
    One problem with these approaches is that the traffic flow process is characterized by nonstationarities that cannot be taken into account by the vast majority of modeling strategies. However, recent advances in statistical modeling in fields such as econometrics and environmetrics enable us to overcome this problem. The aim of this work is to present how two statistical techniques, namely vector autoregressive modeling and dynamic space-time modeling, can be used to develop efficient and reliable forecasts of traffic flow. The former approach is encountered in the econometrics literature, whereas the latter is mostly used in environmetrics. For a purely statistical approach to travel time prediction one may consult Rice and van Zwet (2002). In that work, the authors employ a time-varying coefficients regression technique that is easy to implement computationally, but is sensitive to nonstationarities and does not take into account traffic flow information from neighboring points in the network that could significantly improve forecasts.
According to our approach, traffic flow measurements, that is, counts of vehicles and road occupancy obtained at constant time intervals through loop detectors located at various distinct points of a road network, form a multiple time series set. This set can be described by a vector autoregressive process that models each series as a linear combination of past observations of some (optimally selected) components of the vector; in our case the vector comprises the different measurement points of traffic flow. For a thorough technical discussion of vector autoregressive processes we refer to Lütkepohl (1987), whereas a number of applications can be found in Ooms (1994). Nowadays, these models are easily implemented in commercial software such as SAS or MATLAB; see for example LeSage (1999). The spatial distribution of the measurement locations and their neighboring relations cannot be incorporated in a vector autoregressive model. However, accounting for this information may improve model fitting and provide insight into spatial correlation structures that evolve through time. This can be accomplished by applying space-time modeling techniques. The main difference between the space-time models encountered in the literature and vector autoregressive ones lies in the inclusion of a weight matrix that defines the neighboring relations and places the appropriate restrictions on the model. For some early references on space-time models, one could consult Pfeifer and Deutsch (1980a,b); for a Bayesian approach that is insensitive to nonstationarities, we refer to Wikle, Berliner and Cressie (1998). In this work, we discuss how the space-time methodology can be applied to traffic flow modeling. The aforementioned modeling strategies are applied to a subset of traffic flow measurements collected every 15 minutes through loop detectors at 74 locations in the city of Athens. A comparative study in terms of model fitting and forecasting accuracy is performed.
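As an illustration of the vector autoregressive approach (with synthetic data standing in for the loop-detector counts; the coefficients and dimensions below are made up, not the Athens data), a VAR(1) model can be fitted by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: flows at 3 detector locations generated from a
# known VAR(1) process y_t = A y_{t-1} + e_t (coefficients made up).
A_true = np.array([[0.5, 0.2, 0.0],
                   [0.1, 0.4, 0.2],
                   [0.0, 0.3, 0.5]])
T = 2000
y = np.zeros((T, 3))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(size=3)

# VAR(1) fit by least squares: regress y_t on y_{t-1} across all locations,
# so each series is a linear combination of past observations of the vector.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# One-step-ahead forecast for every location from the last observation.
forecast = A_hat @ y[-1]
```

With enough data the estimated coefficient matrix recovers the true one; a space-time (STARMA-type) variant would additionally constrain the coefficient matrix through a neighbourhood weight matrix rather than leave it unrestricted.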
Univariate time series models are also fitted at each measurement location in order to investigate the relation between a model's dimension and its performance.
References:
- LeSage J. P. (1999). Applied Econometrics using MATLAB. Manuscript, Dept. of Economics, University of Toledo.
- Lütkepohl H. (1987). Forecasting Aggregated Vector ARMA Processes. Lecture Notes in Economics and Mathematical Systems. Springer-Verlag, Berlin Heidelberg.
- Ooms M. (1994). Empirical Vector Autoregressive Modeling. Springer-Verlag, Berlin Heidelberg.
- Pfeifer P. E. and Deutsch S. J. (1980a). A three-stage iterative procedure for space-time modeling. Technometrics, 22, 35-47.
- Pfeifer P. E. and Deutsch S. J. (1980b). Identification and interpretation of first-order space-time ARMA models. Technometrics, 22, 397-408.
- Rice J. and van Zwet E. (2002). A simple and effective method for predicting travel times on freeways. Manuscript, Dept. of Statistics, University of California at Berkeley.
- Wikle C. K., Berliner L. M. and Cressie N. (1998). Hierarchical Bayesian space-time models. Environmental and Ecological Statistics, 5, 117-154.

    Modelling and inference for the travel times in vehicle routing problems

    Every day, delivery companies need to select routes to deliver goods to their customers. A common method for formulating and solving for the best route is the vehicle routing problem (VRP). One of the key assumptions when solving a VRP is that the input values are correct. In the case of travel time along a section of road, these values must be predicted in advance; hence selecting the optimal solution requires accurate predictions. This thesis focuses on the prediction of travel time along links, such that the predictions can be used in the defined VRP. The road network is split into links, which are connected together to form routes in the VRP. Travel time predictions are generated for each link. We predict the general behaviour of the travel times for each link using time series forecasting models. These are tested both empirically, against the observed travel times, and theoretically, against the ideal characteristics of a VRP travel time input, including the resulting prediction uncertainty in the VRP. Small input variations are likely to have little impact upon the optimal solution. In contrast, infrequent and unpredicted large delays, e.g. from accidents, which occur outside the general travel time behaviour, can change the optimal routes. We study the delay behaviour and suggest a novel model consisting of three parts: the delay occurrence rate, length and size. We then suggest ways to input both the delay and the general travel time models to the VRP, which results in an optimal solution that is more robust to delays. Traffic flows from one link into the next through the network, so if one link is busier, the same traffic will flow on to the connecting links. We extend the single-link model to incorporate information from the surrounding links using a network model. This produces better predictions than the single-link models and hence better inputs for the VRP.
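An illustrative sketch (with assumed details and made-up parameter values, not the thesis' exact formulation) of the three-part delay description, occurrence rate, length and size, layered on top of a smooth daily travel time profile:

```python
import numpy as np

rng = np.random.default_rng(1)

# Link travel time = smooth "general behaviour" + rare large delays.
# All parameter values below are invented for illustration only.
n_periods = 10_000  # e.g. 15-minute periods
base = 300 + 60 * np.sin(2 * np.pi * np.arange(n_periods) / 96)  # daily cycle, seconds

occurrence_rate = 0.002  # probability a delay event starts in a period
mean_length = 8          # mean event duration, in periods
mean_size = 240          # mean extra travel time while delayed, seconds

delay = np.zeros(n_periods)
starts = np.flatnonzero(rng.random(n_periods) < occurrence_rate)
for s in starts:
    length = rng.geometric(1 / mean_length)  # how long the event lasts
    size = rng.exponential(mean_size)        # how big the extra delay is
    delay[s:s + length] += size

travel_time = base + rng.normal(0, 10, n_periods) + delay
```

A robustness-oriented VRP input could then be, for example, a high quantile of `travel_time` per link rather than its mean, so that routes remain feasible when one of the rare delay events occurs.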

    Spatio-temporal traffic anomaly detection for urban networks

    Urban road networks are often affected by disruptions such as accidents and roadworks, giving rise to congestion and delays, which can, in turn, create a wide range of negative impacts on the economy, environment, safety and security. Accurate detection of the onset of traffic anomalies, specifically Recurrent Congestion (RC) and Non-recurrent Congestion (NRC), in traffic networks is an important Intelligent Transport Systems (ITS) function that facilitates proactive intervention measures to reduce the severity of congestion. A substantial body of literature is dedicated to models of varying levels of complexity that attempt to identify such anomalies. Given the complexity of the problem, however, comparatively little effort has been dedicated to the development of methods that detect traffic anomalies using spatio-temporal features. Driven both by recent advances in deep learning techniques and by the development of Traffic Incident Management Systems (TIMS), the aim of this research is to develop novel traffic anomaly detection models that can incorporate both spatial and temporal traffic information to detect traffic anomalies at the network level. This thesis first reviews the state of the art in traffic anomaly detection techniques, including existing methods and emerging machine learning and deep learning methods, before identifying the gaps in the current understanding of traffic anomalies and their detection. One of the problems in adapting deep learning models to traffic anomaly detection is the translation of time series traffic data from multiple locations into the format necessary for the deep learning model to learn the spatial and temporal features effectively.
To address this challenging problem and build a systematic traffic anomaly detection method at the network level, this thesis proposes a methodological framework consisting of (a) a translation layer (designed to translate the time series traffic data from multiple locations over the road network into a desired format with spatial and temporal features), (b) detection methods and (c) localisation. This methodological framework is subsequently tested for early RC detection and for NRC detection. Three translation layers, namely the connectivity matrix, geographical grid translation and spatio-temporal translation, are presented and evaluated for both RC and NRC detection. The early RC detection approach is a deep learning-based method that combines Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks. The NRC detection, on the other hand, involves only the application of the CNN. The performance of the proposed approach is compared against other conventional congestion detection methods, using a comprehensive evaluation framework that includes metrics such as detection rates and false positive rates, and a sensitivity analysis of time windows as well as prediction horizons. The conventional congestion detection methods used for the comparison include the Multilayer Perceptron, Random Forest and Gradient Boosting Classifier, all of which are commonly used in the literature. Real-world traffic data from the City of Bath are used for the comparative analysis of RC detection, while traffic data in conjunction with incident data extracted from Central London are used for NRC detection. The results show that while the connectivity matrix may be capable of extracting the features of a small network, the increased sparsity of the matrix in a large network reduces its effectiveness in feature learning compared to geographical grid translation.
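One way to picture a geographical grid translation layer (a hypothetical sketch; the grid size, sensor layout and readings below are made up, not the Bath or London data) is to bin irregularly placed sensors into a regular grid, producing one image-like frame per time step for the CNN, with the time axis supplying the LSTM sequence:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up example: 20 sensors at random positions, 96 time steps of readings.
n_sensors, n_steps = 20, 96
H = W = 8  # target grid resolution, chosen arbitrarily here
coords = rng.random((n_sensors, 2))               # normalised (x, y) in [0, 1)
speeds = rng.uniform(10, 60, (n_steps, n_sensors))  # synthetic speed readings

# Map each sensor to a grid cell.
rows = np.minimum((coords[:, 0] * H).astype(int), H - 1)
cols = np.minimum((coords[:, 1] * W).astype(int), W - 1)

# One (H, W) frame per time step; cells average the sensors they contain.
frames = np.zeros((n_steps, H, W))
counts = np.zeros((H, W))
np.add.at(counts, (rows, cols), 1)
for t in range(n_steps):
    np.add.at(frames[t], (rows, cols), speeds[t])
frames /= np.maximum(counts, 1)  # empty cells stay 0
```

Stacking the frames over time yields the (time, H, W) tensor a CNN-LSTM pipeline consumes; a connectivity-matrix alternative would instead order the sensors by network adjacency, which becomes sparse for large networks.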
The results also indicate that the proposed deep learning method demonstrates superior detection accuracy compared to the alternative methods and that it can detect recurrent congestion as early as one hour ahead with acceptable accuracy. The proposed method can be implemented within a real-world ITS system making use of traffic sensor data, thereby providing a practically useful tool for road network managers to manage traffic proactively. In addition, the results demonstrate that a deep learning-based approach may improve the accuracy of incident detection and locate traffic anomalies precisely, especially in a large urban network. Finally, the framework is further tested for robustness in terms of network topology, sensor faults and missing data. The robustness analysis demonstrates that the proposed traffic anomaly detection approaches are transferable to different sizes of road networks, and that they are robust in the presence of sensor faults and missing data.

    A Long Short-Term Memory Recurrent Neural Network Framework for Network Traffic Matrix Prediction

    Network Traffic Matrix (TM) prediction is defined as the problem of estimating future network traffic from previous and archived network traffic data. It is widely used in network planning, resource management and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well-suited to learning from experience to classify, process and predict time series with time lags of unknown size. LSTMs have been shown to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose an LSTM RNN framework for predicting short- and long-term Traffic Matrices (TMs) in large networks. By validating our framework on real-world data from the GÉANT network, we show that our LSTM models converge quickly and give state-of-the-art TM prediction performance for relatively small models.
    Comment: Submitted for peer review. arXiv admin note: text overlap with arXiv:1402.1128 by other authors.
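The supervised set-up behind such a framework can be sketched as follows (the matrix size, window length and synthetic demand below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

# Each N x N traffic matrix is flattened to a vector; sliding windows of the
# past W matrices form the sequences from which an LSTM predicts TM_{t+1}.
# N, W, T and the synthetic demand are made-up illustration values.
N, W, T = 4, 8, 100
tms = np.abs(np.random.default_rng(3).normal(size=(T, N, N)))  # synthetic TMs

flat = tms.reshape(T, N * N)
X = np.stack([flat[t:t + W] for t in range(T - W)])  # (samples, W, N*N)
y = flat[W:]                                         # next-step targets
# X and y are the (sequence, target) pairs a recurrent model such as
# torch.nn.LSTM or keras.layers.LSTM would be trained on.
```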

    NeuTM: A Neural Network-based Framework for Traffic Matrix Prediction in SDN

    This paper presents NeuTM, a framework for network Traffic Matrix (TM) prediction based on Long Short-Term Memory Recurrent Neural Networks (LSTM RNNs). TM prediction is defined as the problem of estimating future network traffic matrices from previous and archived network traffic data. It is widely used in network planning, resource management and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well-suited to learning from data to classify or predict time series with time lags of unknown size. LSTMs have been shown to model long-range dependencies more accurately than conventional RNNs. NeuTM is an LSTM RNN-based framework for predicting TMs in large networks. By validating our framework on real-world data from the GÉANT network, we show that our model converges quickly and gives state-of-the-art TM prediction performance.
    Comment: Submitted to NOMS18. arXiv admin note: substantial text overlap with arXiv:1705.0569.