Extracting user spatio-temporal profiles from location based social networks
Location Based Social Networks (LBSN) like Twitter or Instagram are a good source of user spatio-temporal behavior. These social networks provide a low-rate sampling of users' location information over long intervals of time that can be used to discover complex behaviors, including mobility profiles, points of interest, and unusual events. This information is valuable in several domains, such as mobility route planning, tourist recommendation systems, and city planning.
Other approaches have used data from LBSN to categorize areas of a city by the categories of the places people visit, or to discover user behavioral patterns from their visits. The aim of this paper is to analyze how the spatio-temporal behavior of a large number of users in a well-delimited geographical area can be segmented into different profiles. These behavioral profiles are obtained by means of clustering algorithms and reveal the different behaviors people exhibit when living in and visiting a city.
The data analyzed were obtained from the public data feeds of Twitter and Instagram inside the area of the city of Barcelona over a period of several months. The analysis of these data shows that this kind of algorithm can be successfully applied to data from any city (or any general area) to discover useful profiles that can be described in terms of the city's singular places and areas and their temporal relationships. These profiles can serve as a basis for decision making in different application domains, especially those related to mobility inside and outside a city.
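The clustering step described above can be illustrated with a minimal sketch: build a temporal check-in profile per user and group the profiles with plain k-means. The hour-bin features, data layout, and all names here are hypothetical simplifications for illustration, not the paper's actual feature set or algorithm.

```python
import random
from collections import defaultdict

def user_profiles(checkins, n_hour_bins=4):
    """Build a normalized temporal profile per user from (user, hour) check-ins.

    Each profile is a vector of check-in fractions over coarse hour bins
    (night / morning / afternoon / evening) -- a toy stand-in for the
    spatio-temporal features the abstract refers to."""
    counts = defaultdict(lambda: [0.0] * n_hour_bins)
    for user, hour in checkins:
        counts[user][hour * n_hour_bins // 24] += 1.0
    profiles = {}
    for user, vec in counts.items():
        total = sum(vec)
        profiles[user] = [v / total for v in vec]
    return profiles

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means on lists of floats."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[j] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return centers
```

Morning-active and evening-active users end up in different clusters, which is the kind of behavioral segmentation the paper targets at city scale.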
Towards Scalable Network Delay Minimization
Reduction of end-to-end network delays is an optimization task with
applications in multiple domains. Low delays enable improved information flow
in social networks, quick spread of ideas in collaboration networks, low travel
times for vehicles on road networks and increased rate of packets in the case
of communication networks. Delay reduction can be achieved both by improving
the propagation capabilities of individual nodes and by adding edges to the
network. One of the main challenges in such design problems is that the
effects of local changes are not independent, and as a consequence, there is a
combinatorial search-space of possible improvements. Thus, minimizing the
cumulative propagation delay requires novel scalable and data-driven
approaches.
In this paper, we consider the problem of network delay minimization via node
upgrades. Although the problem is NP-hard, we show that probabilistic
approximation for a restricted version can be obtained. We design scalable and
high-quality techniques for the general setting based on sampling and targeted
to different models of delay distribution. Our methods scale almost linearly
with the graph size and consistently outperform competitors in quality.
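The node-upgrade setting above can be sketched with a toy greedy baseline: estimate the average delay from a few sampled sources with Dijkstra (charging each node its propagation delay), then repeatedly upgrade the node whose upgrade most reduces that estimate. This is an illustrative stand-in under assumed definitions, not the paper's approximation algorithm.

```python
import heapq

def avg_delay(adj, node_delay, sources):
    """Average shortest-path delay from sampled sources; traversing a node
    costs that node's propagation delay (edges themselves cost nothing here)."""
    total, pairs = 0.0, 0
    for s in sources:
        dist = {s: node_delay[s]}
        heap = [(dist[s], s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v in adj[u]:
                nd = d + node_delay[v]
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        total += sum(dist.values())
        pairs += len(dist)
    return total / pairs

def greedy_upgrade(adj, node_delay, budget, upgraded_delay, sources):
    """Upgrade up to `budget` nodes (set their delay to `upgraded_delay`),
    each time picking the node whose upgrade most reduces the sampled
    average delay -- a naive sampling-based baseline, not the paper's method."""
    delay = dict(node_delay)
    chosen = []
    for _ in range(budget):
        best, best_val = None, avg_delay(adj, delay, sources)
        for u in delay:
            if u in chosen or delay[u] <= upgraded_delay:
                continue
            trial = dict(delay)
            trial[u] = upgraded_delay
            val = avg_delay(adj, trial, sources)
            if val < best_val:
                best, best_val = u, val
        if best is None:
            break
        delay[best] = upgraded_delay
        chosen.append(best)
    return chosen, delay
```

On a star graph with a slow hub, the greedy correctly upgrades the hub first, since every path passes through it; the combinatorial interaction between upgrades is exactly what makes the general problem hard.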
Quick Detection of High-degree Entities in Large Directed Networks
In this paper, we address the problem of quick detection of high-degree
entities in large online social networks. Practical importance of this problem
is attested by a large number of companies that continuously collect and update
statistics about popular entities, usually using the degree of an entity as an
approximation of its popularity. We suggest a simple, efficient, and
easy-to-implement two-stage randomized algorithm that provides highly accurate
solutions for this problem. For instance, our algorithm needs only one thousand
API requests in order to find the top-100 most followed users in Twitter, a
network with approximately a billion registered users, with more than 90%
precision. Our algorithm significantly outperforms existing methods and serves
many different purposes, such as finding the most popular users or the most
popular interest groups in social networks. An important contribution of this
work is the analysis of the proposed algorithm using Extreme Value Theory -- a
branch of probability that studies extreme events and properties of largest
order statistics in random samples. Using this theory, we derive an accurate
prediction for the algorithm's performance and show that the number of API
requests for finding the top-k most popular entities is sublinear in the number
of entities. Moreover, we formally show that the high variability among the
entities, expressed through heavy-tailed distributions, is the reason for the
algorithm's efficiency, and we quantify this phenomenon in a rigorous
mathematical way.
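The two-stage idea can be sketched as follows: stage one samples random users and tallies how often each account appears in their followee lists (heavy-tailed popularity means the top accounts surface quickly); stage two queries exact degrees only for the shortlisted candidates. The `following` map stands in for an API, and all names are hypothetical; this mimics the idea in the abstract, not the authors' exact algorithm.

```python
import random
from collections import Counter

def top_k_by_degree(following, k, n_sample, seed=0):
    """Two-stage randomized sketch for finding high in-degree accounts.

    Stage 1: sample users and count appearances in their followee lists.
    Stage 2: fetch exact in-degrees for a small shortlist of candidates.
    `following` maps user -> list of accounts that user follows."""
    rng = random.Random(seed)
    users = list(following)
    hits = Counter()
    for u in rng.sample(users, min(n_sample, len(users))):  # stage 1
        hits.update(following[u])
    candidates = [c for c, _ in hits.most_common(3 * k)]    # shortlist
    # Stage 2: exact in-degrees (here computed locally; an API call in practice).
    indeg = Counter(v for fs in following.values() for v in fs)
    candidates.sort(key=lambda c: -indeg[c])
    return candidates[:k]
```

Because popular accounts appear in almost every sample, only a small number of "API requests" (samples plus shortlist lookups) are needed, which is the sublinearity the Extreme Value Theory analysis makes precise.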
Network Sampling: From Static to Streaming Graphs
Network sampling is integral to the analysis of social, information, and
biological networks. Since many real-world networks are massive in size,
continuously evolving, and/or distributed in nature, the network structure is
often sampled in order to facilitate study. For these reasons, a more thorough
and complete understanding of network sampling is critical to support the field
of network science. In this paper, we outline a framework for the general
problem of network sampling, by highlighting the different objectives,
population and units of interest, and classes of network sampling methods. In
addition, we propose a spectrum of computational models for network sampling
methods, ranging from the traditionally studied model based on the assumption
of a static domain to a more challenging model that is appropriate for
streaming domains. We design a family of sampling methods based on the concept
of graph induction that generalize across the full spectrum of computational
models (from static to streaming) while efficiently preserving many of the
topological properties of the input graphs. Furthermore, we demonstrate how
traditional static sampling algorithms can be modified for graph streams for
each of the three main classes of sampling methods: node, edge, and
topology-based sampling. Our experimental results indicate that our proposed
family of sampling methods more accurately preserves the underlying properties
of the graph for both static and streaming graphs. Finally, we study the impact
of network sampling algorithms on the parameter estimation and performance
evaluation of relational classification algorithms.
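One way to sketch the graph-induction idea in the streaming setting: grow a sampled node set from the edge stream up to a budget, then keep every later edge whose endpoints are both already sampled (the induction step). This is a deliberate simplification of partially-induced edge sampling, not the paper's exact algorithm, and the names are illustrative.

```python
def induced_edge_sample(edge_stream, max_nodes):
    """One-pass edge sampling with graph induction (simplified sketch).

    Edges are admitted while the node budget allows; once both endpoints of
    a streamed edge are already in the sample, the edge is always kept --
    this induction step recovers topology among sampled nodes."""
    nodes, edges = set(), []
    for u, v in edge_stream:
        if u in nodes and v in nodes:
            edges.append((u, v))          # induction: keep internal edges
        elif len(nodes | {u, v}) <= max_nodes:
            nodes.update((u, v))
            edges.append((u, v))
    return nodes, edges
```

The induction step is what helps preserve topological properties such as clustering: without it, a streaming edge sample of a budgeted node set would miss most edges among the sampled nodes.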
When Hashes Met Wedges: A Distributed Algorithm for Finding High Similarity Vectors
Finding similar user pairs is a fundamental task in social networks, with
numerous applications in ranking and personalization tasks such as link
prediction and tie strength detection. A common manifestation of user
similarity is based upon network structure: each user is represented by a
vector that represents the user's network connections, where pairwise cosine
similarity among these vectors defines user similarity. The predominant task
for user similarity applications is to discover all similar pairs that have a
pairwise cosine similarity value larger than a given threshold. In contrast to
previous work, where the threshold is assumed to be quite close to 1, we focus
on recommendation applications where the threshold is small, but still
meaningful. The all-pairs cosine similarity problem is computationally
challenging on networks with billions of edges, and especially so for small
thresholds. To the best of our knowledge, there is no practical solution for
computing all user pairs at such small thresholds on large social networks,
even using the power of distributed algorithms.
Our work directly addresses this challenge by introducing a new algorithm ---
WHIMP --- that solves this problem efficiently in the MapReduce model. The key
insight in WHIMP is to combine the "wedge-sampling" approach of Cohen-Lewis for
approximate matrix multiplication with the SimHash random projection techniques
of Charikar. We provide a theoretical analysis of WHIMP, proving that it has
near optimal communication costs while maintaining computation cost comparable
with the state of the art. We also empirically demonstrate WHIMP's scalability
by computing all highly similar pairs on four massive data sets, and show that
it accurately finds high similarity pairs. In particular, we note that WHIMP
successfully processes the entire Twitter network, which has tens of billions
of edges.
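The SimHash half of WHIMP's recipe can be sketched in a few lines: each vector gets a signature of sign bits against random hyperplanes, and the Hamming distance between two signatures estimates the angle, hence the cosine. This shows only Charikar's random-projection ingredient, not the wedge-sampling combination or the MapReduce implementation; the sparse-vector layout is an assumption.

```python
import math
import random

def simhash(vec, planes):
    """b-bit SimHash signature: one sign bit per random hyperplane.

    `vec` is a sparse vector {index: value}; `planes` is a list of dense
    Gaussian hyperplanes covering the vector's dimension."""
    return [1 if sum(p[i] * x for i, x in vec.items()) >= 0 else 0
            for p in planes]

def estimated_cosine(sig_a, sig_b):
    """Estimate cosine similarity from signatures: the probability two bits
    differ equals angle/pi, so cosine ~ cos(pi * hamming / b)."""
    b = len(sig_a)
    hamming = sum(x != y for x, y in zip(sig_a, sig_b))
    return math.cos(math.pi * hamming / b)
```

With enough bits the estimate concentrates, so candidate pairs can be filtered cheaply by signature before any exact dot product is computed, which is what keeps communication costs low at small similarity thresholds.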
Beyond Triangles: A Distributed Framework for Estimating 3-profiles of Large Graphs
We study the problem of approximating the 3-profile of a large graph.
3-profiles are generalizations of triangle counts that specify the number of
times a small graph appears as an induced subgraph of a large graph. Our
algorithm uses the novel concept of 3-profile sparsifiers: sparse graphs that
can be used to approximate the full 3-profile counts for a given large graph.
Further, we study the problem of estimating local and ego 3-profiles, two
graph quantities that characterize the local neighborhood of each vertex of a
graph.
Our algorithm is distributed and operates as a vertex program over the
GraphLab PowerGraph framework. We introduce the concept of edge pivoting, which
allows us to collect 2-hop information without maintaining an explicit
2-hop neighborhood list at each vertex. This enables the computation of all
the local 3-profiles in parallel with minimal communication.
We test our implementation in several experiments on Amazon EC2, scaling
across many cores. We find that our algorithm can estimate the 3-profile of a
graph in approximately the same time as triangle counting. For the harder
problem of ego 3-profiles, we introduce an algorithm that can estimate the
profiles of hundreds of thousands of vertices in parallel, on a timescale of
minutes.
Comment: To appear in part at KDD'1
Influence Maximization Meets Efficiency and Effectiveness: A Hop-Based Approach
Influence Maximization is an extensively studied problem that aims to select
a set of initial seed nodes in Online Social Networks (OSNs) so as to spread
influence as widely as possible. However, it remains an open challenge to
design fast and accurate algorithms to find solutions in large-scale OSNs.
Prior Monte-Carlo-simulation-based methods are slow and not scalable, while
other heuristic algorithms lack any theoretical guarantee and have been shown
to produce poor solutions in many cases. In this paper, we propose hop-based
algorithms that can easily scale to
millions of nodes and billions of edges. Unlike previous heuristics, our
proposed hop-based approaches can provide certain theoretical guarantees.
Experimental evaluations with real OSN datasets demonstrate the efficiency and
effectiveness of our algorithms.
Comment: Extended version of the conference paper at ASONAM 2017, 11 pages.
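The flavor of a hop-based estimate can be sketched with a one-hop bound under the independent cascade model: a seed set's spread is approximated as the seeds themselves plus the expected directly-activated out-neighbors, and seeds are picked greedily by marginal gain. This toy additive estimate ignores neighbor overlap and carries none of the paper's guarantees; names and the edge-probability layout are assumptions.

```python
def one_hop_spread(seeds, prob):
    """One-hop influence estimate under independent cascade: each seed counts
    itself, plus the activation probability of each non-seed out-neighbor
    (an additive approximation that ignores overlapping activations).
    `prob[u]` maps out-neighbor v -> activation probability p(u, v)."""
    score = float(len(seeds))
    seed_set = set(seeds)
    for u in seeds:
        for v, p in prob.get(u, {}).items():
            if v not in seed_set:
                score += p
    return score

def greedy_seeds(prob, nodes, k):
    """Greedily add the node with the largest marginal one-hop gain --
    a toy illustration of the hop-based idea, not the paper's algorithm."""
    seeds = []
    for _ in range(k):
        best = max((n for n in nodes if n not in seeds),
                   key=lambda n: one_hop_spread(seeds + [n], prob))
        seeds.append(best)
    return seeds
```

Because the estimate touches only each candidate's immediate neighborhood, it avoids Monte-Carlo simulation entirely, which is why hop-limited approximations scale to graphs with billions of edges.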