Context-Based Cultural Visits
Over the last two decades, tremendous advances in mobile technologies have increased interest in studying and developing mobile augmented reality (MAR) systems, especially in the field of Cultural Heritage. Nowadays, people rely ever more on smartphones, for example when visiting a new city, to search for information about monuments and landmarks, and visitors expect precise information tailored to their needs.
Therefore, researchers have started to investigate innovative approaches for presenting and suggesting digital content related to cultural and historical places around a city, incorporating contextual information about visitors and their needs. This document presents a novel mobile augmented reality application, NearHeritage, developed within the scope of a master's thesis in Electrical and Computer Engineering at the Faculty of Engineering of the University of Porto (FEUP), in collaboration with INESC TEC.
The research focused on the importance of utilising modern technologies to assist visitors in finding and exploring Cultural Heritage. The application provides not only the nearby points-of-interest (POIs) of a city but also detailed information about each POI. The solution uses the built-in sensors and hardware of Android devices and takes advantage of several APIs (the Foursquare API, Google Maps API and IntelContextSensing) to retrieve information about the landmarks and the visitor's context. These sensors are also crucial for realising the full potential of augmented reality tools and for creating innovative content that enriches the overall user experience. All experiments were conducted in Porto, Portugal, and the final results show that a MAR application can improve the user experience of discovering and learning about Cultural Heritage around the world, creating an interactive, enjoyable and unforgettable adventure.
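The nearby-POI lookup the abstract describes can be illustrated with a simple great-circle distance filter. The sketch below is not the thesis code, and the landmark names and coordinates are only illustrative; it returns the landmarks within a given radius of the visitor, nearest first.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_pois(user_lat, user_lon, pois, radius_km=1.0):
    """Return the POIs within radius_km of the user, nearest first."""
    scored = [(haversine_km(user_lat, user_lon, p["lat"], p["lon"]), p) for p in pois]
    return [p for d, p in sorted(scored, key=lambda t: t[0]) if d <= radius_km]

# Hypothetical landmarks in Porto, Portugal (coordinates are approximate,
# for illustration only).
pois = [
    {"name": "Clérigos Tower", "lat": 41.1457, "lon": -8.6146},
    {"name": "Livraria Lello", "lat": 41.1467, "lon": -8.6149},
    {"name": "Serralves Museum", "lat": 41.1594, "lon": -8.6599},
]
print([p["name"] for p in nearby_pois(41.1460, -8.6140, pois, radius_km=1.0)])
```

A real deployment would fetch the candidate POI list from a places API rather than a hard-coded list; the distance filter and nearest-first ordering stay the same.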
A Survey of Location Prediction on Twitter
Locations, e.g., countries, states, cities, and points-of-interest, are
central to news, emergency events, and people's daily lives. Automatic
identification of locations associated with or mentioned in documents has been
explored for decades. As one of the most popular online social network
platforms, Twitter has attracted a large number of users who send millions of
tweets on a daily basis. Due to the world-wide coverage of its users and
real-time freshness of tweets, location prediction on Twitter has gained
significant attention in recent years. Research efforts are spent on dealing
with new challenges and opportunities brought by the noisy, short, and
context-rich nature of tweets. In this survey, we aim at offering an overall
picture of location prediction on Twitter. Specifically, we concentrate on the
prediction of user home locations, tweet locations, and mentioned locations. We
first define the three tasks and review the evaluation metrics. By summarizing
Twitter network, tweet content, and tweet context as potential inputs, we then
structurally highlight how the problems depend on these inputs. Each dependency
is illustrated by a comprehensive review of the corresponding strategies
adopted in state-of-the-art approaches. In addition, we also briefly review two
related problems, i.e., semantic location prediction and point-of-interest
recommendation. Finally, we list future research directions.
Comment: Accepted to TKDE. 30 pages, 1 figure.
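One of the three tasks above, predicting a user's home location from tweet content, can be illustrated with a deliberately small sketch. The cities, tweets, and smoothed unigram scorer below are invented for illustration and are far simpler than the state-of-the-art approaches the survey reviews.

```python
import math
from collections import Counter, defaultdict

# Toy training data: (tweet text, home city). Real systems train on
# millions of geotagged tweets.
train = [
    ("bagels and the subway again", "new_york"),
    ("traffic on the 405 all morning", "los_angeles"),
    ("catching the subway to brooklyn", "new_york"),
    ("beach day then tacos", "los_angeles"),
]

def fit(examples):
    """Collect per-city word counts."""
    counts = defaultdict(Counter)
    for text, city in examples:
        counts[city].update(text.split())
    return counts

def predict(counts, text):
    """Pick the city maximizing an add-one-smoothed unigram log-likelihood."""
    vocab = {w for c in counts.values() for w in c}
    best_city, best_score = None, -math.inf
    for city, c in counts.items():
        total = sum(c.values())
        score = sum(math.log((c[w] + 1) / (total + len(vocab))) for w in text.split())
        if score > best_score:
            best_city, best_score = city, score
    return best_city

model = fit(train)
print(predict(model, "stuck on the subway"))  # → "new_york"
```

The survey's point is precisely that real tweets are noisy, short, and context-rich, so competitive systems combine content signals like these with the Twitter network and tweet context.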
Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset
Virtual assistants such as Google Assistant, Alexa and Siri provide a
conversational interface to a large number of services and APIs spanning
multiple domains. Such systems need to support an ever-increasing number of
services with possibly overlapping functionality. Furthermore, some of these
services have little to no training data available. Existing public datasets
for task-oriented dialogue do not sufficiently capture these challenges since
they cover few domains and assume a single static ontology per domain. In this
work, we introduce the Schema-Guided Dialogue (SGD) dataset, containing
over 16k multi-domain conversations spanning 16 domains. Our dataset exceeds
the existing task-oriented dialogue corpora in scale, while also highlighting
the challenges associated with building large-scale virtual assistants. It
provides a challenging testbed for a number of tasks including language
understanding, slot filling, dialogue state tracking and response generation.
Along the same lines, we present a schema-guided paradigm for task-oriented
dialogue, in which predictions are made over a dynamic set of intents and
slots, provided as input, using their natural language descriptions. This
allows a single dialogue system to easily support a large number of services
and facilitates simple integration of new services without requiring additional
training data. Building upon the proposed paradigm, we release a model for
dialogue state tracking capable of zero-shot generalization to new APIs, while
remaining competitive in the regular setting.
Comment: To appear at AAAI 2020.
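The schema-guided idea of predicting over a dynamic set of slots given only their natural language descriptions can be sketched very simply: score each slot description against the utterance and pick the best match. The sketch below uses token overlap purely for illustration (the released model uses learned neural encoders), and the restaurant schema is hypothetical.

```python
def tokens(text):
    """Lower-cased word set of a string."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match_slot(utterance, slot_descriptions):
    """Return the slot whose natural language description best matches
    the utterance, so unseen schemas need no retraining."""
    ctx = tokens(utterance)
    return max(slot_descriptions, key=lambda s: jaccard(ctx, tokens(slot_descriptions[s])))

# Hypothetical schema for an unseen restaurant service: slot names paired
# with natural language descriptions, as in the schema-guided paradigm.
schema = {
    "party_size": "number of people for the restaurant reservation",
    "cuisine": "type of food served by the restaurant",
    "city": "city where the restaurant is located",
}
print(match_slot("how many people will join for the reservation", schema))  # → "party_size"
```

Because the slot inventory is an input rather than a fixed output vocabulary, adding a new service only requires writing its schema, which is the zero-shot property the abstract highlights.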
Online Tensor Methods for Learning Latent Variable Models
We introduce an online tensor decomposition based approach for two latent
variable modeling problems, namely (1) community detection, in which we learn
the latent communities to which the social actors in social networks belong, and
(2) topic modeling, in which we infer hidden topics of text articles. We
consider decomposition of moment tensors using stochastic gradient descent. We
conduct optimization of multilinear operations in SGD and avoid directly
forming the tensors, to save computational and storage costs. We present
optimized algorithms for two platforms. Our GPU-based implementation exploits the
parallelism of SIMD architectures to allow for maximum speed-up by a careful
optimization of storage and data transfer, whereas our CPU-based implementation
uses efficient sparse matrix computations and is suitable for large sparse
datasets. For the community detection problem, we demonstrate accuracy and
computational efficiency on Facebook, Yelp and DBLP datasets, and for the topic
modeling problem, we also demonstrate good performance on the New York Times
dataset. We compare our results to state-of-the-art algorithms such as the
variational method, and report a gain in accuracy and a speed-up of several orders
of magnitude in execution time.
Comment: JMLR 2015.
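The core trick of running SGD on sampled entries of an implicit tensor, without ever forming it, can be sketched in a few lines. This toy fits a rank-1 CP model to a synthetic rank-1 tensor and is far simpler than the paper's moment-tensor machinery; the tensor, learning rate, and step count are illustrative choices.

```python
import random

def sgd_cp(entry, shape, rank=1, lr=0.05, steps=20000, seed=0):
    """Fit a rank-`rank` CP model by SGD on randomly sampled entries of an
    implicit 3-way tensor; `entry(i, j, k)` computes values on demand, so
    the full tensor is never materialized."""
    rng = random.Random(seed)
    I, J, K = shape
    A = [[rng.uniform(0.5, 1.0) for _ in range(rank)] for _ in range(I)]
    B = [[rng.uniform(0.5, 1.0) for _ in range(rank)] for _ in range(J)]
    C = [[rng.uniform(0.5, 1.0) for _ in range(rank)] for _ in range(K)]
    for _ in range(steps):
        # Sample one entry and take a gradient step on its squared error.
        i, j, k = rng.randrange(I), rng.randrange(J), rng.randrange(K)
        pred = sum(A[i][r] * B[j][r] * C[k][r] for r in range(rank))
        err = pred - entry(i, j, k)
        for r in range(rank):
            a, b, c = A[i][r], B[j][r], C[k][r]
            A[i][r] -= lr * err * b * c
            B[j][r] -= lr * err * a * c
            C[k][r] -= lr * err * a * b
    return A, B, C

# Implicit rank-1 ground-truth tensor T[i, j, k] = u[i] * v[j] * w[k].
u, v, w = [1.0, 0.5, 1.5], [0.8, 1.2, 1.0], [1.0, 0.6, 1.4]
entry = lambda i, j, k: u[i] * v[j] * w[k]
A, B, C = sgd_cp(entry, (3, 3, 3), rank=1)
```

The paper's implementations add the pieces this sketch omits: moment tensors estimated from data, higher ranks with whitening, and SIMD/sparse optimizations for the GPU and CPU platforms.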