VDGCNeT: A novel network-wide Virtual Dynamic Graph Convolution Neural network and Transformer-based traffic prediction model

Abstract

We address the problem of traffic prediction on large-scale road networks. We propose a novel deep learning model, Virtual Dynamic Graph Convolution Neural Network and Transformer with Gate and Attention mechanisms (VDGCNeT), to comprehensively extract the complex, dynamic, and hidden spatial dependencies of road networks and thereby achieve high prediction accuracy. To this end, we advocate a virtual dynamic road graph that captures the dynamic and hidden spatial dependencies between road segments in real road networks, rather than relying purely on physical road connectivity. We further design a novel framework based on Graph Convolution Neural Network (GCN) and Transformer to analyze dynamic and hidden spatial–temporal features. A gate mechanism fuses the spatial and temporal features learned by the Spatial and Temporal Transformers, while an Attention-based Similarity module updates the dynamic road graph. Two real-world traffic datasets from large-scale road networks with different properties are used to train and test our model. We compare VDGCNeT against nine other well-known models from the literature. Our results demonstrate that the proposed VDGCNeT achieves highly accurate predictions: on average 96.77% and 91.68% accuracy on the PEMS-BAY and METR-LA datasets, respectively. Overall, VDGCNeT performs the best among all compared models.
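The abstract mentions a gate mechanism that fuses the features produced by the Spatial and Temporal Transformers. As a rough illustration only, the sketch below shows one common way such a gated fusion is written in PyTorch; the class name GatedFusion, the linear projections, and the convex-blend formulation are assumptions made for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Hypothetical sketch: fuse spatial and temporal features with a learned gate.

    The gate z in (0, 1) weights the contributions of the Spatial and
    Temporal Transformer outputs per node and per time step. This is one
    standard formulation, not necessarily the one used in VDGCNeT.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.w_s = nn.Linear(d_model, d_model, bias=False)
        self.w_t = nn.Linear(d_model, d_model, bias=True)

    def forward(self, h_spatial: torch.Tensor, h_temporal: torch.Tensor) -> torch.Tensor:
        # h_spatial, h_temporal: (batch, time, nodes, d_model)
        z = torch.sigmoid(self.w_s(h_spatial) + self.w_t(h_temporal))  # gate values in (0, 1)
        return z * h_spatial + (1.0 - z) * h_temporal                  # convex blend of the two branches


if __name__ == "__main__":
    fuse = GatedFusion(d_model=64)
    h_s = torch.randn(8, 12, 207, 64)  # 207 sensors, as in METR-LA
    h_t = torch.randn(8, 12, 207, 64)
    out = fuse(h_s, h_t)
    print(out.shape)  # torch.Size([8, 12, 207, 64])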