    Modeling Taxi Drivers' Behaviour for the Next Destination Prediction

    Full text link
    In this paper, we study how to model taxi drivers' behaviour and geographical information for an interesting and challenging task: next-destination prediction in a taxi journey. Predicting the next location is a well-studied problem in human mobility that finds several real-world applications, from optimizing the efficiency of electronic dispatching systems to predicting and reducing traffic jams. This task is normally modeled as a multiclass classification problem, where the goal is to select the next taxi destination from a set of already known locations. We present a Recurrent Neural Network (RNN) approach that models the taxi drivers' behaviour and encodes the semantics of visited locations using geographical information from Location-Based Social Networks (LBSNs). In particular, the RNNs are trained to predict the exact coordinates of the next destination, overcoming the limitation of outputting only locations seen during the training phase. The proposed approach was tested on the ECML/PKDD Discovery Challenge 2015 dataset, based on the city of Porto, obtaining better results than the competition winner while using less information, and on Manhattan and San Francisco datasets.
    Comment: preprint version of a paper submitted to IEEE Transactions on Intelligent Transportation Systems
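
    To make the regression formulation concrete, the following is a minimal sketch (in PyTorch) of an RNN that outputs destination coordinates directly instead of a distribution over known locations. All dimensions and the MSE loss are illustrative assumptions, not the paper's configuration; in practice a geodesic loss such as the haversine distance would be more appropriate.

    import torch
    import torch.nn as nn

    class NextDestinationRNN(nn.Module):
        def __init__(self, n_locations=1000, emb_dim=64, hidden_dim=128):
            super().__init__()
            # Embed visited locations; in the paper their semantics come from LBSN data.
            self.location_emb = nn.Embedding(n_locations, emb_dim)
            self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            # Regression head: (latitude, longitude), so the output is not
            # restricted to locations seen during training.
            self.head = nn.Linear(hidden_dim, 2)

        def forward(self, visited_location_ids):
            x = self.location_emb(visited_location_ids)  # (batch, seq, emb_dim)
            _, h = self.rnn(x)                           # (1, batch, hidden_dim)
            return self.head(h.squeeze(0))               # (batch, 2) coordinates

    model = NextDestinationRNN()
    journeys = torch.randint(0, 1000, (8, 20))  # 8 journeys, 20 visited cells each
    pred = model(journeys)                      # predicted (lat, lon) per journey
    loss = nn.functional.mse_loss(pred, torch.randn(8, 2))  # stand-in targets/loss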

    Learning Large-scale Location Embedding From Human Mobility Trajectories with Graphs

    Full text link
    An increasing amount of location-based service (LBS) data is being accumulated, helping to study urban dynamics and human mobility. GPS coordinates and other location indicators are normally low-dimensional and represent only spatial proximity, so they are difficult for machine learning models to utilize effectively in Geo-aware applications. Existing location embedding methods are mostly tailored to specific problems that take place within areas of interest. At the scale of a city or even a country, existing approaches suffer from extensive computational cost and significant data sparsity. Unlike existing studies, we propose to learn representations through a GCN-aided skip-gram model named GCN-L2V that considers both spatial connections and human mobility. Using a flow graph and a spatial graph, it embeds context information into vector representations. GCN-L2V is able to capture relationships among locations and provides a better notion of similarity in a spatial environment. Through quantitative experiments and case studies, we empirically demonstrate that the representations learned by GCN-L2V are effective. As far as we know, this is the first study to provide a fine-grained location embedding at the city level using only LBS records. GCN-L2V is a general-purpose embedding model with high flexibility and can be applied in downstream Geo-aware applications.
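
    A hedged sketch of the GCN-aided skip-gram idea: a graph convolution smooths location embeddings over a graph (here a single adjacency mixing the flow and spatial graphs), while a skip-gram loss with negative sampling pulls co-occurring locations together. The sizes, mixing weight, and single-layer GCN are illustrative assumptions, not GCN-L2V's actual design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    n_locations, emb_dim = 500, 64
    flow_adj = torch.rand(n_locations, n_locations)     # stand-in mobility-flow graph
    spatial_adj = torch.rand(n_locations, n_locations)  # stand-in spatial-proximity graph
    adj = 0.5 * flow_adj + 0.5 * spatial_adj
    adj = adj / adj.sum(dim=1, keepdim=True)            # row-normalize the mixed adjacency

    base_emb = nn.Parameter(torch.randn(n_locations, emb_dim) * 0.01)
    gcn_weight = nn.Parameter(torch.randn(emb_dim, emb_dim) * 0.01)
    context_emb = nn.Parameter(torch.randn(n_locations, emb_dim) * 0.01)

    def location_vectors():
        # One GCN layer: aggregate graph neighbours, then a linear map + nonlinearity.
        return torch.relu(adj @ base_emb @ gcn_weight)

    def skipgram_loss(centers, contexts, negatives):
        z = location_vectors()
        pos = (z[centers] * context_emb[contexts]).sum(-1)
        neg = (z[centers].unsqueeze(1) * context_emb[negatives]).sum(-1)
        return -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg).mean())

    # (center, context) pairs come from trajectories; negatives are random locations.
    centers = torch.randint(0, n_locations, (32,))
    contexts = torch.randint(0, n_locations, (32,))
    negatives = torch.randint(0, n_locations, (32, 5))
    loss = skipgram_loss(centers, contexts, negatives)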

    Towards Automated Urban Planning: When Generative and ChatGPT-like AI Meets Urban Planning

    Full text link
    The two fields of urban planning and artificial intelligence (AI) arose and developed separately. However, there is now cross-pollination and increasing interest in both fields to benefit from the advances of the other. In the present paper, we introduce the importance of urban planning from the sustainability, living, economic, disaster, and environmental perspectives. We review the fundamental concepts of urban planning and relate these concepts to crucial open problems of machine learning, including adversarial learning, generative neural networks, deep encoder-decoder networks, conversational AI, and geospatial and temporal machine learning, thereby examining how AI can contribute to modern urban planning. A central problem is automated land-use configuration, formulated as the generation of land uses and building configurations for a target area from surrounding geospatial, human-mobility, social-media, environmental, and economic activity data. Finally, we delineate some implications of AI for urban planning and propose key research areas at the intersection of both topics.
    Comment: TSAS Submission
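
    As a toy illustration of this land-use configuration formulation (not any specific model from the literature reviewed), one can condition a generator on a feature vector describing the surrounding context and have it emit a per-cell distribution over land-use categories for the target area; all shapes below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class LandUseGenerator(nn.Module):
        def __init__(self, context_dim=32, grid=10, n_categories=6):
            super().__init__()
            self.grid, self.n_categories = grid, n_categories
            self.net = nn.Sequential(
                nn.Linear(context_dim, 256), nn.ReLU(),
                nn.Linear(256, grid * grid * n_categories),
            )

        def forward(self, context):
            logits = self.net(context).view(-1, self.grid, self.grid, self.n_categories)
            # Per-cell distribution over land-use categories for the target area.
            return logits.softmax(dim=-1)

    gen = LandUseGenerator()
    context = torch.randn(4, 32)  # 4 target areas, each described by a context vector
    config = gen(context)         # (4, 10, 10, 6) land-use probabilities per grid cell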

    City2City: Translating Place Representations across Cities

    Full text link
    Large mobility datasets collected from various sources have allowed us to observe, analyze, predict, and solve a wide range of important urban challenges. In particular, studies have generated place representations (or embeddings) from mobility patterns, in a manner similar to word embeddings, to better understand the functionality of different places within a city. However, such representations have been generated for each city individually and have lacked an inter-city perspective, which has made it difficult to transfer the insights gained from place representations across different cities. In this study, we attempt to bridge this research gap by treating \textit{cities} and \textit{languages} analogously. We apply methods developed for unsupervised machine language translation tasks to translate place representations across different cities. Real-world mobility data collected from mobile phone users in two cities in Japan are used to test our place representation translation methods. Translated place representations are validated using land-use data, and results show that our methods were able to accurately translate place representations from one city to another.
    Comment: A short 4-page version of this work was accepted at the ACM SIGSPATIAL Conference 2019. This is the full version with details. In Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. ACM
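
    One standard recipe from the word-translation literature that fits this setting is the orthogonal Procrustes solution: given anchor places assumed to correspond across the two cities, solve for the orthogonal map that aligns one embedding space with the other. The sketch below shows the idea; the anchor pairs and dimensions are illustrative, and the paper's exact method may differ.

    import numpy as np

    d, n_anchors = 64, 200
    city_a = np.random.randn(n_anchors, d)  # place embeddings, city A (illustrative)
    city_b = np.random.randn(n_anchors, d)  # corresponding places, city B

    # Orthogonal map W minimizing ||city_a @ W - city_b||_F, obtained from the
    # SVD of the cross-covariance matrix (the Procrustes solution).
    u, _, vt = np.linalg.svd(city_a.T @ city_b)
    W = u @ vt

    translated = city_a @ W  # city-A places expressed in city B's embedding space

    With real anchor pairs, nearest-neighbour retrieval in the mapped space then yields the translated place for any location in city A.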

    Urban2Vec: Incorporating Street View Imagery and POIs for Multi-Modal Urban Neighborhood Embedding

    Full text link
    Understanding intrinsic patterns and predicting spatiotemporal characteristics of cities require a comprehensive representation of urban neighborhoods. Existing works relied on either inter- or intra-region connectivities to generate neighborhood representations but failed to fully utilize the informative yet heterogeneous data within neighborhoods. In this work, we propose Urban2Vec, an unsupervised multi-modal framework that incorporates both street view imagery and point-of-interest (POI) data to learn neighborhood embeddings. Specifically, we use a convolutional neural network to extract visual features from street view images while preserving geospatial similarity. Furthermore, we model each POI as a bag of words containing its category, rating, and review information. Analogous to document embedding in natural language processing, we establish semantic similarity between a neighborhood (the "document") and the words from its surrounding POIs in the vector space. By jointly encoding visual, textual, and geospatial information into the neighborhood representation, Urban2Vec achieves performance better than baseline models and comparable to fully supervised methods in downstream prediction tasks. Extensive experiments on three U.S. metropolitan areas also demonstrate the model's interpretability, generalization capability, and value in neighborhood similarity analysis.
    Comment: To appear in Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
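
    A minimal sketch of the multi-modal objective: each neighborhood has a learnable vector that is pulled toward (i) visual features of its street view images and (ii) embeddings of words drawn from its POIs, with negatives taken from other neighborhoods. The feature extractor, dimensions, and margin below are illustrative assumptions rather than Urban2Vec's exact configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    n_hoods, n_words, dim = 100, 5000, 64
    hood_emb = nn.Embedding(n_hoods, dim)  # neighborhood ("document") vectors
    word_emb = nn.Embedding(n_words, dim)  # POI category/rating/review words
    img_proj = nn.Linear(512, dim)         # projects pre-extracted CNN image features

    def triplet(anchor, positive, negative, margin=0.2):
        d_pos = 1 - F.cosine_similarity(anchor, positive)
        d_neg = 1 - F.cosine_similarity(anchor, negative)
        return F.relu(d_pos - d_neg + margin).mean()

    hoods = torch.randint(0, n_hoods, (16,))
    img_feats = torch.randn(16, 512)              # e.g. from a pretrained CNN
    neg_feats = torch.randn(16, 512)              # images from other neighborhoods
    pos_words = torch.randint(0, n_words, (16,))  # words from the hood's own POIs
    neg_words = torch.randint(0, n_words, (16,))  # words sampled from elsewhere

    a = hood_emb(hoods)
    loss = (triplet(a, img_proj(img_feats), img_proj(neg_feats))
            + triplet(a, word_emb(pos_words), word_emb(neg_words)))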

    Key challenges in agent-based modelling for geo-spatial simulation

    Get PDF
    Agent-based modelling (ABM) is fast becoming the dominant paradigm in social simulation, due primarily to a worldview that suggests that complex systems emerge from the bottom up, are highly decentralised, and are composed of a multitude of heterogeneous objects called agents. These agents act with some purpose, and their interaction, usually through time and space, generates emergent order, often at higher levels than those at which the agents operate. ABM, however, raises as many challenges as it seeks to resolve. It is the purpose of this paper to catalogue these challenges and to illustrate them using three somewhat different agent-based models applied to city systems. The seven challenges we pose involve: the purpose for which the model is built; the extent to which the model is rooted in independent theory; the extent to which the model can be replicated; the ways the model might be verified, calibrated, and validated; the way model dynamics are represented in terms of agent interactions; the extent to which the model is operational; and the way the model can be communicated and shared with others. Once catalogued, we illustrate these challenges with a pedestrian model for emergency evacuation in central London, a hypothetical model of residential segregation tuned to London data which elaborates the standard Schelling (1971) model, and an agent-based residential location model built according to spatial interaction principles and calibrated to trip data for Greater London. The ambiguities posed by this new style of modelling are drawn out as conclusions.
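
    For reference, a minimal sketch of the standard Schelling (1971) model that the residential segregation model above elaborates: agents of two types sit on a grid and relocate to a random empty cell whenever the fraction of like neighbours falls below a tolerance threshold. Grid size, threshold, and vacancy rate below are illustrative.

    import random

    SIZE, THRESHOLD, EMPTY_FRAC = 20, 0.4, 0.1

    def make_cell():
        # Each cell is empty (None) or holds an agent of one of two types.
        return None if random.random() < EMPTY_FRAC else random.choice(("red", "blue"))

    grid = [[make_cell() for _ in range(SIZE)] for _ in range(SIZE)]

    def unhappy(r, c):
        agent = grid[r][c]
        if agent is None:
            return False
        neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]  # toroidal Moore neighbourhood
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        occupied = [n for n in neighbours if n is not None]
        return bool(occupied) and sum(n == agent for n in occupied) / len(occupied) < THRESHOLD

    for _ in range(50):  # sweeps: every unhappy agent relocates to a random empty cell
        empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
        movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(r, c)]
        random.shuffle(movers)
        for r, c in movers:
            if not empties:
                break
            er, ec = empties.pop(random.randrange(len(empties)))
            grid[er][ec], grid[r][c] = grid[r][c], None
            empties.append((r, c))  # the vacated cell becomes available

    Even with high tolerance for unlike neighbours, repeated sweeps produce strongly segregated clusters, which is the emergent, bottom-up order the abstract describes.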

    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Get PDF
    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., can be conceived not only as sources of sensor information but also, through interaction with their users, as producers of highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected-car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web, and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
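
    To give a flavour of what a shared human-generated observation might look like on the Semantic Sensor Web, the sketch below encodes a driver's traffic report with the W3C SOSA/SSN vocabulary using rdflib. The URIs for the driver, road, and observed property are made-up examples, and the paper's actual framework exchanges observations through OGC SWE services on an OSGi gateway rather than this simplified RDF snippet.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    SOSA = Namespace("http://www.w3.org/ns/sosa/")
    EX = Namespace("http://example.org/connected-car/")  # illustrative namespace

    g = Graph()
    g.bind("sosa", SOSA)

    obs = EX["obs/42"]
    g.add((obs, RDF.type, SOSA.Observation))
    g.add((obs, SOSA.madeBySensor, EX["driver/alice"]))  # the human as the "sensor"
    g.add((obs, SOSA.observedProperty, EX["property/trafficCondition"]))
    g.add((obs, SOSA.hasFeatureOfInterest, EX["road/A6-km12"]))
    g.add((obs, SOSA.hasSimpleResult, Literal("heavy congestion")))
    g.add((obs, SOSA.resultTime, Literal("2024-05-01T08:30:00Z", datatype=XSD.dateTime)))

    print(g.serialize(format="turtle"))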