
    A Local-Global LDA Model for Discovering Geographical Topics from Social Media

    Micro-blogging services can track users' geo-locations when users check in at places or use geo-tagging, which implicitly reveals their locations. This geo-tracking can help find topics triggered by events in certain regions. However, discovering such topics is very challenging because of the large amount of noisy messages (e.g., daily conversations). This paper proposes a method for modeling geographical topics that filters out irrelevant words by weighting them differently in local and global contexts. Our method is based on the Latent Dirichlet Allocation (LDA) model, but each word is generated from either a local or a global topic distribution according to its generation probabilities. We evaluated our model with data collected from Weibo, currently the most popular micro-blogging service in China. The evaluation results demonstrate that our method outperforms baseline methods on several metrics, including model perplexity, two kinds of entropy, and the KL-divergence of the discovered topics.
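    The abstract's core twist on LDA is that each word is routed to either a local (regional) or a global topic distribution by a per-word switch. Below is a minimal sketch of that generative step, assuming Dirichlet-distributed topic mixtures and a fixed routing probability; all names and hyperparameters are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K_LOCAL, K_GLOBAL = 1000, 10, 20  # vocabulary size and topic counts (assumed)
phi_local = rng.dirichlet(np.ones(V), size=K_LOCAL)    # local topic-word distributions
phi_global = rng.dirichlet(np.ones(V), size=K_GLOBAL)  # global topic-word distributions

def generate_word(theta_local, theta_global, p_local):
    """Draw one word: a Bernoulli switch picks the local or global route,
    then a topic is sampled and a word drawn from that topic's distribution."""
    if rng.random() < p_local:                    # local route
        z = rng.choice(K_LOCAL, p=theta_local)
        return rng.choice(V, p=phi_local[z])
    z = rng.choice(K_GLOBAL, p=theta_global)      # global route
    return rng.choice(V, p=phi_global[z])

# One document in one region; document-topic proportions assumed Dirichlet.
theta_l = rng.dirichlet(np.ones(K_LOCAL))
theta_g = rng.dirichlet(np.ones(K_GLOBAL))
doc = [generate_word(theta_l, theta_g, p_local=0.3) for _ in range(50)]
```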

    Scaling in Words on Twitter

    Scaling properties of language are a useful tool for understanding generative processes in texts. We investigate scaling relations in city-wise Twitter corpora from the Metropolitan and Micropolitan Statistical Areas of the United States. We observe slightly superlinear urban scaling of the total volume of tweets and words created in a city with the city population. We then find that a certain core vocabulary follows the same scaling relation as the bulk text, but most words are sensitive to city size, exhibiting super- or sublinear urban scaling. For both regimes we offer a plausible explanation based on the meaning of the words. We also show that the parameters of Zipf's law and Heaps' law on Twitter differ from those of other texts, and that the exponent of Zipf's law changes with city size.
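    Urban scaling relations of the form Y ~ c * N^beta are conventionally estimated by a linear fit in log-log space; beta > 1 (superlinear) and beta < 1 (sublinear) are the two regimes the abstract contrasts. A brief sketch with synthetic data, where the populations, volumes, and beta = 1.1 are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = rng.uniform(1e4, 1e7, size=200)               # city populations (synthetic)
Y = 0.05 * N**1.1 * rng.lognormal(0.0, 0.2, 200)  # noisy tweet volumes (synthetic)

# OLS fit of log Y = beta * log N + log c recovers the scaling exponent.
beta, log_c = np.polyfit(np.log(N), np.log(Y), deg=1)
print(f"estimated exponent beta = {beta:.3f}")     # superlinear if > 1
```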

    Describing and Understanding Neighborhood Characteristics through Online Social Media

    Geotagged data can be used to describe regions in the world and discover local themes. However, not all data produced within a region is necessarily specifically descriptive of that area. To surface the content that is characteristic of a region, we present the geographical hierarchy model (GHM), a probabilistic model based on the assumption that data observed in a region is a random mixture of content pertaining to different levels of a hierarchy. We apply the GHM to a dataset of 8 million Flickr photos in order to discriminate between content (i.e., tags) that specifically characterizes a region (e.g., a neighborhood) and content that characterizes surrounding areas or more general themes. Knowledge of the discriminative and non-discriminative terms used throughout the hierarchy enables us to quantify the uniqueness of a given region and to compare similar but distant regions. Our evaluation demonstrates that our model improves upon traditional Naive Bayes classification by 47% and hierarchical TF-IDF by 27%. We further highlight the differences and commonalities with human reasoning about what is locally characteristic of a neighborhood, distilled from ten interviews and a survey covering themes such as time, events, and prior regional knowledge.
    Comment: Accepted at WWW 2015, Florence, Italy
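    The GHM itself is a probabilistic mixture over hierarchy levels; as a stand-in for the intuition only, the hedged sketch below scores how characteristic a tag is for a hierarchy node (e.g., a neighborhood) relative to its parent (e.g., the city) with a simple smoothed probability ratio. The tag counts are invented for illustration.

```python
from collections import Counter

def tag_score(tag, node_tags, parent_tags, alpha=1.0):
    """Smoothed ratio of tag frequency at a node vs. its parent.
    Values > 1 suggest the tag is locally characteristic of the node."""
    p_node = (node_tags[tag] + alpha) / (sum(node_tags.values()) + alpha)
    p_parent = (parent_tags[tag] + alpha) / (sum(parent_tags.values()) + alpha)
    return p_node / p_parent

# Hypothetical tag counts for a neighborhood and its enclosing city.
mission = Counter({"mural": 40, "tacos": 25, "goldengate": 2})
san_francisco = Counter({"goldengate": 300, "fog": 90, "mural": 60, "tacos": 50})
print(tag_score("mural", mission, san_francisco))       # characteristic locally
print(tag_score("goldengate", mission, san_francisco))  # characterizes the city
```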

    Accessibility-based reranking in multimedia search engines

    Traditional multimedia search engines retrieve results based mostly on the query submitted by the user, or use a log of previous searches to provide personalized results, but do not consider the accessibility of the results for users with vision or other types of impairments. In this paper, a novel approach is presented which incorporates the accessibility of images for users with various vision impairments, such as color blindness, cataracts, and glaucoma, in order to rerank the results of an image search engine. The accessibility of individual images is measured through vision simulation filters. Multi-objective optimization techniques utilizing the image accessibility scores are used to handle users with multiple vision impairments, while the impairment profile of a specific user is used to select one of the Pareto-optimal solutions. The proposed approach has been tested with two image datasets, using both simulated and real impaired users, and the results verify its applicability. Although the proposed method has been used for vision accessibility-based reranking, it can also be extended to other types of personalization context.
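    The multi-objective step can be illustrated as follows: each image carries one accessibility score per impairment, the Pareto-optimal set keeps the images no other image dominates, and the user's impairment profile then picks one of them. The scores and the weighted tie-break below are assumptions for illustration, not the paper's exact formulation.

```python
def pareto_optimal(scores):
    """scores: list of tuples, one accessibility score per impairment.
    Returns indices of images not dominated by any other image."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [i for i, s in enumerate(scores)
            if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]

# Three images scored for (color blindness, cataract, glaucoma) accessibility.
scores = [(0.9, 0.4, 0.6), (0.5, 0.8, 0.7), (0.4, 0.3, 0.2)]
front = pareto_optimal(scores)          # image 2 is dominated and drops out

# The user's impairment profile weights pick one Pareto-optimal image.
profile = (0.7, 0.1, 0.2)
best = max(front, key=lambda i: sum(w * s for w, s in zip(profile, scores[i])))
```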

    Using Statistical Methods to Determine Geolocation Via Twitter

    With the ever-expanding usage of social media websites such as Twitter, it is possible to use statistical methods to infer a person's geographic location solely from the content of their tweets. In a 2010 study, Zhiyuan Cheng was able to place a Twitter user within 100 miles of their actual location 51% of the time. While this may seem like a significant result, the study was done while Twitter was still finding its footing: in 2010, Twitter had 75 million registered users, while as of March 2013 it had around 500 million. In this thesis, I collected my own dataset and, using Excel macros, compared my results with Cheng's to see whether the results have changed in the three years since his study. If Cheng's 51% can be matched more efficiently using a simpler methodology, this could have a significant impact on homeland security and cybersecurity measures.

    A Survey of Location Prediction on Twitter

    Locations, e.g., countries, states, cities, and points of interest, are central to news, emergency events, and people's daily lives. Automatic identification of locations associated with or mentioned in documents has been explored for decades. As one of the most popular online social network platforms, Twitter has attracted a large number of users who send millions of tweets on a daily basis. Due to the worldwide coverage of its users and the real-time freshness of tweets, location prediction on Twitter has gained significant attention in recent years. Research efforts have been devoted to the new challenges and opportunities brought by the noisy, short, and context-rich nature of tweets. In this survey, we aim to offer an overall picture of location prediction on Twitter. Specifically, we concentrate on the prediction of user home locations, tweet locations, and mentioned locations. We first define the three tasks and review the evaluation metrics. By summarizing Twitter network, tweet content, and tweet context as potential inputs, we then structurally highlight how the problems depend on these inputs. Each dependency is illustrated by a comprehensive review of the corresponding strategies adopted in state-of-the-art approaches. In addition, we briefly review two related problems, i.e., semantic location prediction and point-of-interest recommendation. Finally, we list future research directions.
    Comment: Accepted to TKDE. 30 pages, 1 figure
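    Two evaluation metrics recur throughout this literature: accuracy within a distance threshold (the 100-mile figure quoted in several entries here) and mean or median error distance. A self-contained sketch of both, built on the haversine great-circle distance:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in miles."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def evaluate(predicted, truth, threshold=100.0):
    """Accuracy within `threshold` miles and median error distance."""
    errs = sorted(haversine_miles(*p, *t) for p, t in zip(predicted, truth))
    accuracy = sum(e <= threshold for e in errs) / len(errs)
    median_error = errs[len(errs) // 2]
    return accuracy, median_error
```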

    Geo Sensitive Word Discovery

    Among geolocation-related information, geo-sensitive words are among the most critical components. A geo-sensitive word can be a word or phrase naming a landmark, a city or county name, an abbreviated name of a local sports team, or a common word or phrase with a special meaning in a local region. In this thesis, we propose and evaluate an effective and efficient framework for discovering geo-sensitive words hidden in tweets. This framework overcomes both the lack of a suitable dataset and the embedding alignment problem. The proposed framework makes three key contributions: (i) a publicly available dataset containing geo-tagged English tweets from 27 cities in the United States; (ii) a concrete approach to aligning separately trained word embeddings with Orthogonal Procrustes; and (iii) a well-rounded evaluation framework for geo-sensitive words. The system discovers over 3,000 geo-sensitive words in three cities and classifies these words into their corresponding cities with a high accuracy of 95.32%. We also find two key factors that affect classification performance: (i) the feature vector dimension; and (ii) the choice of learning algorithm.
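    The alignment step named in contribution (ii) has a standard closed-form solution: Orthogonal Procrustes finds the rotation R minimizing ||A R - B||_F between two embedding spaces over shared anchor words. A minimal sketch using SciPy, with toy matrices standing in for real embeddings:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(2)
B = rng.normal(size=(500, 100))                 # anchor-word vectors in space B
true_R = np.linalg.qr(rng.normal(size=(100, 100)))[0]  # a random rotation
A = B @ true_R.T                                # same anchors in a rotated space A

# Solve min ||A R - B||_F over orthogonal R, then map space A into space B.
R, _ = orthogonal_procrustes(A, B)
aligned = A @ R
print(np.allclose(aligned, B))                  # True: the rotation is recovered
```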

    A location-query-browse graph for contextual recommendation

    Traditionally, recommender systems have modelled the physical and cyber contextual influences on people's moving, querying, and browsing behaviours in isolation. Yet these behaviours are intricately linked, especially indoors. Here, we introduce a tripartite location-query-browse graph (LQB) for nuanced contextual recommendations. The LQB graph consists of three kinds of nodes: locations, queries, and Web domains. Directed connections only between heterogeneous nodes represent the contextual influences, while connections between homogeneous nodes are inferred from the contextual influences of the other nodes. This tripartite LQB graph is more reliable than any monopartite or bipartite graph for contextual location, query, and Web content recommendations. We validate the LQB graph in an indoor retail scenario with an extensive dataset of three logs collected from over 120,000 anonymized, opt-in users over a one-year period in a large inner-city mall in Sydney, Australia. We characterize the contextual influences that correspond to the arcs in the LQB graph, and evaluate the usefulness of the LQB graph for location, query, and Web content recommendations. The experimental results show that the LQB graph successfully captures the contextual influence and significantly outperforms the state of the art in these applications.
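    A minimal sketch of the tripartite structure the abstract describes: three node types, with directed edges only between heterogeneous types, from which simple recommendations can be read off. Node names and edge weights are invented for illustration.

```python
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from(["store_A", "food_court"], kind="location")
G.add_nodes_from(["running shoes"], kind="query")
G.add_nodes_from(["sportsbrand.example.com"], kind="domain")

# Contextual influences: location -> query, query -> domain, domain -> location.
G.add_edge("store_A", "running shoes", weight=12)
G.add_edge("running shoes", "sportsbrand.example.com", weight=7)
G.add_edge("sportsbrand.example.com", "food_court", weight=3)

# A naive recommendation: queries most strongly influenced by the user's location.
recs = sorted(G.successors("store_A"),
              key=lambda q: G["store_A"][q]["weight"], reverse=True)
```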

    @Phillies Tweeting from Philly? Predicting Twitter User Locations with Spatial Word Usage

    We study the problem of predicting the home locations of Twitter users using the contents of their tweet messages. Using three probability models for locations, we compare the Gaussian Mixture Model (GMM) and Maximum Likelihood Estimation (MLE). In addition, we propose two novel unsupervised methods based on the notions of Non-Localness and Geometric-Localness to prune noisy data from tweet messages. In the experiments, our unsupervised approach improves the baselines significantly and shows results comparable with the supervised state-of-the-art method. For the 5,113 Twitter users in the test set, on average, our approach with only 250 selected local words or fewer is able to predict their home locations (within 100 miles) with an accuracy of 0.499, or an average error distance of 509.3 miles at best.
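    A hedged sketch of the MLE flavor of content-based home-location prediction: estimate per-word city counts from training users, then score each candidate city for a test user by summing smoothed log-probabilities over their words. The training data and smoothing constant are illustrative assumptions, and the paper's Non-Localness and Geometric-Localness pruning is not reproduced here.

```python
import math
from collections import defaultdict

def train(tagged_tweets):
    """tagged_tweets: iterable of (city, words) pairs from training users."""
    counts = defaultdict(lambda: defaultdict(int))
    for city, words in tagged_tweets:
        for w in words:
            counts[w][city] += 1
    return counts

def predict(words, counts, cities, alpha=0.01):
    """Pick the city maximizing the sum of smoothed log word-city counts."""
    def score(city):
        return sum(math.log(counts[w][city] + alpha) for w in words if w in counts)
    return max(cities, key=score)

data = [("Philadelphia", ["phillies", "cheesesteak"]), ("Boston", ["redsox"])]
counts = train(data)
print(predict(["phillies"], counts, ["Philadelphia", "Boston"]))  # Philadelphia
```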