7 research outputs found

    Category-Aware Location Embedding for Point-of-Interest Recommendation

    Recently, point-of-interest (POI) recommendation has gained ever-increasing importance in various Location-Based Social Networks (LBSNs). With recent advances in neural models, much work has sought to leverage neural networks to learn embeddings in a pre-training phase, achieving improved representations of POIs and consequently better recommendations. However, previous studies fail to capture crucial information about POIs, such as their categorical information. In this paper, we propose a novel neural model that generates a POI embedding incorporating both sequential and categorical information. Our model consists of a check-in module and a category module. The check-in module captures the geographical influence of POIs derived from the sequence of users' check-ins, while the category module captures the characteristics of POIs derived from their category information. To validate the efficacy of the model, we experimented with two large-scale LBSN datasets. Our experimental results demonstrate that our approach significantly outperforms state-of-the-art POI recommendation methods.
    Comment: 4 pages, 1 figure
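
    A minimal sketch (not the authors' released code) of how a category-aware POI embedding with a check-in module and a category module could be wired up in PyTorch; the class and parameter names are hypothetical, and a negative-sampling skip-gram objective over users' check-in sequences is assumed.

```python
# Hypothetical sketch, assuming a skip-gram objective over check-in sequences;
# names and the exact way the two modules are combined are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CategoryAwarePOIEmbedding(nn.Module):
    def __init__(self, num_pois, num_categories, dim=64):
        super().__init__()
        self.poi_emb = nn.Embedding(num_pois, dim)        # check-in module: sequential/geographical signal
        self.cat_emb = nn.Embedding(num_categories, dim)  # category module: categorical signal
        self.ctx_emb = nn.Embedding(num_pois, dim)        # output embeddings for skip-gram contexts

    def forward(self, poi_ids, cat_ids):
        # Combine the two modules into a single POI representation.
        return self.poi_emb(poi_ids) + self.cat_emb(cat_ids)

    def skipgram_loss(self, center_poi, center_cat, context_poi, negative_poi):
        center = self(center_poi, center_cat)                                  # (B, d)
        pos = (center * self.ctx_emb(context_poi)).sum(-1)                     # (B,)
        neg = torch.einsum('bd,bkd->bk', center, self.ctx_emb(negative_poi))   # (B, K)
        return -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg).mean())
```

    During pre-training, center/context pairs would be drawn from each user's check-in sequence; the vectors produced by forward() can then be fed to a downstream recommender.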

    Learning Large-scale Location Embedding From Human Mobility Trajectories with Graphs

    An increasing amount of location-based service (LBS) data is being accumulated, helping to study urban dynamics and human mobility. GPS coordinates and other location indicators are normally low-dimensional and represent only spatial proximity, which makes them difficult for machine learning models to utilize effectively in Geo-aware applications. Existing location embedding methods are mostly tailored to specific problems that take place within areas of interest. At the scale of a city or even a country, existing approaches suffer from extensive computational cost and significant data sparsity. Different from existing studies, we propose to learn representations through a GCN-aided skip-gram model named GCN-L2V, which considers both spatial connections and human mobility. Using a flow graph and a spatial graph, it embeds context information into vector representations. GCN-L2V is able to capture relationships among locations and provide a better notion of similarity in a spatial environment. Across quantitative experiments and case studies, we empirically demonstrate that the representations learned by GCN-L2V are effective. As far as we know, this is the first study that provides a fine-grained location embedding at the city level using only LBS records. GCN-L2V is a general-purpose embedding model with high flexibility and can be applied in downstream Geo-aware applications.
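
    A rough sketch of the idea behind a GCN-aided skip-gram: location vectors are first smoothed over a combined spatial/flow graph and then trained with a skip-gram loss on trajectory co-occurrences. This is not the GCN-L2V code; the dense adjacency handling and single propagation layer are simplifying assumptions.

```python
# Illustrative sketch only: one dense GCN-style propagation over a normalized
# adjacency built from the spatial graph and the flow graph, followed by a
# negative-sampling skip-gram loss on trajectory co-occurrences.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNSkipGram(nn.Module):
    def __init__(self, num_locations, dim=128):
        super().__init__()
        self.in_emb = nn.Embedding(num_locations, dim)
        self.out_emb = nn.Embedding(num_locations, dim)

    def propagate(self, adj_norm):
        # adj_norm: (N, N) row-normalized adjacency mixing spatial and flow edges.
        return adj_norm @ self.in_emb.weight               # each location aggregates its neighbours

    def loss(self, adj_norm, center, context, negatives):
        h = self.propagate(adj_norm)                       # graph-smoothed location vectors
        pos = (h[center] * self.out_emb(context)).sum(-1)
        neg = torch.einsum('bd,bkd->bk', h[center], self.out_emb(negatives))
        return -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg).mean())
```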

    City2City: Translating Place Representations across Cities

    Large mobility datasets collected from various sources have allowed us to observe, analyze, predict, and solve a wide range of important urban challenges. In particular, studies have generated place representations (or embeddings) from mobility patterns, in a manner similar to word embeddings, to better understand the functionality of different places within a city. However, such studies have generated representations for individual cities in isolation and have lacked an inter-city perspective, which has made it difficult to transfer the insights gained from place representations across different cities. In this study, we attempt to bridge this research gap by treating cities and languages analogously. We apply methods developed for unsupervised machine language translation to translate place representations across different cities. Real-world mobility data collected from mobile phone users in two cities in Japan are used to test our place representation translation methods. Translated place representations are validated using land-use data, and results show that our methods were able to accurately translate place representations from one city to another.
    Comment: A short 4-page version of this work was accepted in ACM SIGSPATIAL Conference 2019. This is the full version with details. In Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM.
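
    The unsupervised embedding-translation methods this line of work draws on typically include an orthogonal-mapping (Procrustes) refinement step. The sketch below shows only that step, under the simplifying assumption that some anchor place pairs between the two cities are available; the paper itself derives correspondences without such supervision.

```python
# Sketch of an orthogonal Procrustes alignment between two cities' place
# embeddings; the anchor pairs and toy data are assumptions made for illustration.
import numpy as np

def procrustes_map(src, tgt):
    """Orthogonal W minimizing ||src @ W - tgt||_F, for src, tgt of shape (n, d)."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

# Toy data: city-B embeddings are an exact rotation of city-A embeddings.
rng = np.random.default_rng(0)
city_a = rng.normal(size=(100, 32))
rotation, _ = np.linalg.qr(rng.normal(size=(32, 32)))
city_b = city_a @ rotation

W = procrustes_map(city_a, city_b)
translated = city_a @ W            # city-A places expressed in city-B's embedding space
print(np.allclose(translated, city_b, atol=1e-6))   # True on this toy example
```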

    Spatial Object Recommendation with Hints: When Spatial Granularity Matters

    Existing spatial object recommendation algorithms generally treat objects identically when ranking them. However, spatial objects often cover different levels of spatial granularity and are therefore heterogeneous. For example, one user may prefer to be recommended a region (say Manhattan), while another user might prefer a venue (say a restaurant). Even for the same user, preferences can change at different stages of data exploration. In this paper, we study how to support top-k spatial object recommendation at varying levels of spatial granularity, allowing spatial objects at varying granularity, such as a city, suburb, or building, to be treated as a Point of Interest (POI). To solve this problem, we propose the use of a POI tree, which captures spatial containment relationships between POIs. We design a novel multi-task learning model called MPR (short for Multi-level POI Recommendation), where each task aims to return the top-k POIs at a certain spatial granularity level. Each task consists of two subtasks: (i) attribute-based representation learning and (ii) interaction-based representation learning. The first subtask learns feature representations for both users and POIs, capturing attributes directly from their profiles. The second subtask incorporates user-POI interactions into the model. Additionally, MPR can provide insights into why certain recommendations are being made to a user based on three types of hints: user-aspect, POI-aspect, and interaction-aspect. We empirically validate our approach using two real-life datasets and show promising performance improvements over several state-of-the-art methods.
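
    A hypothetical sketch of the multi-task idea: one scoring head per spatial granularity level in the POI tree, each combining an attribute-based representation with an interaction-based one. The names and the way the two subtask outputs are fused are assumptions, not the MPR implementation.

```python
# Illustrative multi-task sketch (not the MPR code): one task per granularity
# level (e.g. city, suburb, building), each mixing attribute-based and
# interaction-based representations of users and POIs.
import torch
import torch.nn as nn

class MultiLevelRecommender(nn.Module):
    def __init__(self, num_users, pois_per_level, user_attr_dim, poi_attr_dim, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)                      # interaction-based user part
        self.user_attr = nn.Linear(user_attr_dim, dim)                    # attribute-based user part
        self.poi_emb = nn.ModuleList([nn.Embedding(n, dim) for n in pois_per_level])
        self.poi_attr = nn.Linear(poi_attr_dim, dim)

    def score(self, level, user_ids, user_feats, poi_ids, poi_feats):
        u = self.user_emb(user_ids) + self.user_attr(user_feats)
        p = self.poi_emb[level](poi_ids) + self.poi_attr(poi_feats)
        return (u * p).sum(-1)                                            # higher score = stronger recommendation

def multitask_loss(model, batches):
    # batches: list indexed by level of ((user_ids, user_feats, poi_ids, poi_feats), labels).
    bce = nn.BCEWithLogitsLoss()
    return sum(bce(model.score(lvl, *inputs), labels)
               for lvl, (inputs, labels) in enumerate(batches))
```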