373 research outputs found
One Table to Count Them All: Parallel Frequency Estimation on Single-Board Computers
Sketches are probabilistic data structures that can provide approximate
results within mathematically proven error bounds while using orders of
magnitude less memory than traditional approaches. They are tailored for
streaming data analysis, even on architectures with limited memory such as
the single-board computers widely used for IoT and edge computing.
Since these devices offer multiple cores, efficient parallel sketching
schemes enable them to manage high volumes of data streams. However, since
their caches are relatively small, careful parallelization is required. In
this work, we focus on the frequency estimation problem and evaluate the
performance of a high-end server, a 4-core Raspberry Pi and an 8-core Odroid.
As a sketch, we employed the widely used Count-Min Sketch. To hash the stream
in parallel and in a cache-friendly way, we applied a novel tabulation approach
and rearranged the auxiliary tables into a single one. To parallelize the
process efficiently, we modified the workflow and applied a form of
buffering between hash computations and sketch updates. Today, many
single-board computers have heterogeneous processors in which slow and fast
cores are combined. To utilize all these cores to their full potential, we
proposed a dynamic load-balancing mechanism that significantly increased the
performance of frequency estimation.
Comment: 12 pages, 4 figures, 3 algorithms, 1 table, submitted to EuroPar'1
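As background, here is a minimal, single-threaded Count-Min Sketch in Python. It is only an illustrative sketch of the data structure the paper builds on: the salted-MD5 row hashing, table sizes, and driver loop are placeholder assumptions, and the paper's tabulation hashing, single-table layout, buffering, and load balancing are not reproduced.

    import hashlib

    class CountMinSketch:
        """Minimal Count-Min Sketch: depth rows of width counters."""

        def __init__(self, width=1024, depth=4):
            self.width, self.depth = width, depth
            self.table = [[0] * width for _ in range(depth)]

        def _hash(self, item, row):
            # Placeholder hashing via salted MD5; the paper uses a novel
            # tabulation-based scheme instead.
            digest = hashlib.md5(f"{row}:{item}".encode()).digest()
            return int.from_bytes(digest[:8], "little") % self.width

        def update(self, item, count=1):
            for row in range(self.depth):
                self.table[row][self._hash(item, row)] += count

        def estimate(self, item):
            # Never underestimates the true frequency; overestimates are
            # bounded with high probability.
            return min(self.table[row][self._hash(item, row)]
                       for row in range(self.depth))

    cms = CountMinSketch()
    for token in ["a", "b", "a", "c", "a"]:
        cms.update(token)
    print(cms.estimate("a"))  # at least 3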
An Improved Interactive Streaming Algorithm for the Distinct Elements Problem
The exact computation of the number of distinct elements (the frequency
moment $F_0$) is a fundamental problem in the study of data streaming
algorithms. We denote the length of the stream by $n$, where each symbol is
drawn from a universe of size $m$. While it is well known that frequency
moments can be approximated by efficient streaming algorithms, it is easy to
see that exact computation of $F_0$ requires space $\Omega(m)$. In previous
work, Cormode et al. therefore considered a model where the data stream is
also processed by a powerful helper, who provides an interactive proof of the
result. They gave such protocols with a polylogarithmic number of rounds of
communication between helper and verifier for all functions in NC. This
number of rounds can quickly make such protocols impractical.
Cormode et al. also gave a protocol with $O(\log m)$ rounds for the exact
computation of $F_0$ in which the space complexity is polylogarithmic but the
total communication is not. They managed to give $O(\log m)$-round protocols
with polylogarithmic complexity for many other interesting problems,
including $F_2$, Inner product, and Range-sum, but computing $F_0$ exactly
with polylogarithmic space and communication in $O(\log m)$ rounds remained
open.
In this work, we give a streaming interactive protocol with $O(\log m)$
rounds for the exact computation of $F_0$ using a polylogarithmic number of
bits of space and polylogarithmic total communication. The update time of
the verifier per symbol received is polylogarithmic.
Comment: Submitted to ICALP 201
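To make the lower-bound claim concrete, the following toy Python snippet computes $F_0$ exactly in the naive way; its set can grow to hold every distinct symbol, which is exactly the $\Omega(m)$ space cost that the helper model is designed to avoid. It illustrates the problem, not the paper's protocol.

    def exact_f0(stream):
        # Exact distinct-element count: the set may end up holding one
        # entry per distinct symbol, i.e., up to m entries, which is why
        # exact F0 needs linear space without a helper.
        seen = set()
        for symbol in stream:
            seen.add(symbol)
        return len(seen)

    print(exact_f0([3, 1, 4, 1, 5, 9, 2, 6, 5, 3]))  # 7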
Almost Optimal Streaming Algorithms for Coverage Problems
Maximum coverage and minimum set cover problems -- collectively called
coverage problems -- have been studied extensively in streaming models.
However, previous work not only achieves sub-optimal approximation factors
and space complexities, but also studies a restricted set-arrival model that
makes an explicit or implicit assumption of oracle access to the sets,
ignoring the cost of reading and storing a whole set at once. In this paper,
we address the above shortcomings and present algorithms with improved
approximation factors and improved space complexity, and prove that our
results are almost tight. Moreover, unlike most previous work, our results
hold in a more general edge-arrival model. More specifically, we present
(almost) optimal approximation algorithms for the maximum coverage and
minimum set cover problems in the streaming model with an (almost) optimal
space complexity of $\tilde{O}(n)$, where $n$ is the number of sets; i.e.,
the space is independent of the size of the sets and the size of the ground
set of elements. These results not only improve over the best known
algorithms for the set-arrival model, but are also the first such algorithms
for the more powerful edge-arrival model. In order to achieve the above
results, we introduce a new general sketching technique for coverage
functions: this sketching scheme can be applied to convert an
$\alpha$-approximation algorithm for a coverage problem into a
$(1-\epsilon)\alpha$-approximation algorithm for the same problem in the
streaming or RAM models. We show the significance of our sketching technique
by ruling out the possibility of solving coverage problems via black-box
access to a $(1 \pm \epsilon)$-approximate oracle (e.g., a sketch function)
that estimates the coverage function on any subfamily of the sets.
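For context, here is the classic greedy $(1-1/e)$-approximation for maximum coverage in the RAM model, in Python. This is the kind of $\alpha$-approximation algorithm that, per the abstract, the sketching scheme could lift to a $(1-\epsilon)\alpha$-approximation in streaming; the sketch construction itself is not shown here.

    def greedy_max_coverage(sets, k):
        """Greedy (1 - 1/e)-approximation for maximum coverage:
        repeatedly pick the set covering the most uncovered elements."""
        covered, chosen = set(), []
        for _ in range(k):
            best = max(sets, key=lambda s: len(s - covered))
            if not best - covered:
                break  # nothing new can be covered
            chosen.append(best)
            covered |= best
        return chosen, covered

    family = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
    picked, covered = greedy_max_coverage(family, k=2)
    print(len(covered))  # 7: picks {4, 5, 6, 7}, then {1, 2, 3}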
Densest Subgraph in Dynamic Graph Streams
In this paper, we consider the problem of approximating the densest subgraph
in the dynamic graph stream model. In this model of computation, the input
graph is defined by an arbitrary sequence of edge insertions and deletions,
and the goal is to analyze properties of the resulting graph given memory
that is sub-linear in the size of the stream. We present a single-pass
algorithm that returns a $(1+\epsilon)$-approximation of the maximum density
with high probability; the algorithm uses
$O(\epsilon^{-2} n\ \mathrm{polylog}\ n)$ space, processes each stream update
in $\mathrm{polylog}(n)$ time, and uses $\mathrm{poly}(n)$ post-processing
time, where $n$ is the number of nodes. The space used by our algorithm
matches the lower bound of Bahmani et al. (PVLDB 2012) up to a
poly-logarithmic factor for constant $\epsilon$. The best existing results
for this problem were established recently by Bhattacharya et al. (STOC
2015). They presented a $(2+\epsilon)$-approximation algorithm using similar
space and another algorithm that both processed each update and maintained a
$(4+\epsilon)$-approximation of the current maximum density in
$\mathrm{polylog}(n)$ time per update.
Comment: To appear in MFCS 201
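As background for the density objective $|E|/|V|$, here is Charikar's classic greedy peeling, a 2-approximation in the RAM model, in Python. It is offline and insertion-only, so it is only a reference point; the paper's sketch-based single-pass algorithm is not reproduced here.

    def densest_subgraph_peel(edges):
        """Greedy peeling (Charikar): repeatedly delete a minimum-degree
        node and remember the densest intermediate subgraph, where
        density = |E| / |V|. A classic 2-approximation in the RAM model."""
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        m = len(edges)
        best_density, best_nodes = 0.0, set(adj)
        while adj:
            density = m / len(adj)
            if density > best_density:
                best_density, best_nodes = density, set(adj)
            u = min(adj, key=lambda x: len(adj[x]))  # minimum-degree node
            for v in adj[u]:
                adj[v].discard(u)
            m -= len(adj[u])
            del adj[u]
        return best_density, best_nodes

    # A 4-clique with a pendant node: the clique is the densest subgraph.
    edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]
    print(densest_subgraph_peel(edges))  # (1.5, {1, 2, 3, 4})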
Addressing Item-Cold Start Problem in Recommendation Systems using Model Based Approach and Deep Learning
Traditional recommendation systems rely on past usage data in order to
generate new recommendations. These approaches fail to generate sensible
recommendations for users and items newly introduced into the system because
no information about their past interactions is available. In this paper, we
propose a solution to the item cold-start problem that combines a
model-based approach with recent advances in deep learning. In particular,
we use a latent factor model for recommendation and predict the latent
factors from items' descriptions using a convolutional neural network when
they cannot be obtained from usage data. Latent factors obtained by applying
matrix factorization to the available usage data are used as ground truth to
train the convolutional neural network. To create latent factor
representations for new items, the convolutional neural network uses their
textual descriptions. The results of the experiments reveal that the
proposed approach significantly outperforms several baseline estimators.
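A minimal sketch of this idea in Python with PyTorch: a small 1-D convolutional network maps an item's tokenized description to a latent factor vector, trained against factors produced by matrix factorization. The architecture, dimensions, and the random stand-in batch are illustrative assumptions, not the paper's exact setup.

    import torch
    import torch.nn as nn

    VOCAB, EMBED, FACTORS, SEQ_LEN = 5000, 64, 32, 100

    class TextToFactors(nn.Module):
        """1-D CNN regressing an item description onto latent factors."""

        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, EMBED)
            self.conv = nn.Conv1d(EMBED, 128, kernel_size=3, padding=1)
            self.out = nn.Linear(128, FACTORS)

        def forward(self, tokens):                  # (batch, seq_len) ids
            x = self.embed(tokens).transpose(1, 2)  # (batch, EMBED, seq_len)
            x = torch.relu(self.conv(x)).max(dim=2).values  # pool over time
            return self.out(x)                      # predicted latent factors

    model = TextToFactors()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Stand-in batch: random token ids; in the paper's setting the targets
    # would be the item factors learned by matrix factorization.
    tokens = torch.randint(0, VOCAB, (8, SEQ_LEN))
    targets = torch.randn(8, FACTORS)
    loss = nn.functional.mse_loss(model(tokens), targets)
    loss.backward()
    optimizer.step()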
Space-optimal Heavy Hitters with Strong Error Bounds
The problem of finding heavy hitters and approximating the frequencies of items is at the heart of many problems in data stream analysis. It has been observed that several proposed solutions to this problem can outperform their worst-case guarantees on real data. This leads to the question of whether some stronger bounds can be guaranteed. We answer this in the positive by showing that a class of "counter-based algorithms" (including the popular and very space-efficient FREQUENT and SPACESAVING algorithms) provide much stronger approximation guarantees than previously known. Specifically, we show that errors in the approximation of individual elements do not depend on the frequencies of the most frequent elements, but only on the frequency of the remaining "tail." This shows that counter-based methods are the most space-efficient (in fact, space-optimal) algorithms having this strong error bound.
This tail guarantee allows these algorithms to solve the "sparse recovery" problem. Here, the goal is to recover a faithful representation of the vector of frequencies, $f$. We prove that using space $O(k)$, the algorithms construct an approximation $f^*$ to the frequency vector $f$ so that the $L_1$ error $\|f - f^*\|_1$ is close to the best possible error $\min_{f'} \|f' - f\|_1$, where $f'$ ranges over all vectors with at most $k$ non-zero entries. This improves the previously best known space bound of about $O(k \log n)$ for streams without element deletions (where $n$ is the size of the domain from which stream elements are drawn). Other consequences of the tail guarantees are results for skewed (Zipfian) data and guarantees for the accuracy of merging multiple summarized streams.
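A minimal Python sketch of SPACESAVING, one of the counter-based algorithms the abstract analyzes: when the table is full, the smallest counter is reassigned to the incoming item and incremented, so each tracked count upper-bounds the true frequency by at most the mass of the evicted "tail".

    def space_saving(stream, k):
        """SPACESAVING with k counters: estimates never undershoot the
        true frequency, and the error is bounded by the evicted count."""
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k:
                counters[item] = 1
            else:
                victim = min(counters, key=counters.get)   # smallest counter
                counters[item] = counters.pop(victim) + 1  # inherit and bump
        return counters

    print(space_saving(list("abracadabra"), k=3))
    # {'a': 5, 'b': 3, 'r': 3}: 'a' is exact (5), others overshoot slightly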
Online Self-Indexed Grammar Compression
Although several grammar-based self-indexes have been proposed thus far,
their applicability is limited to offline settings where the whole input
text is prepared in advance, so index structures must be rebuilt when
additional input arrives, which is often the case in the big data era. In
this paper, we present the first online self-indexed grammar compression,
named OESP-index, which can gradually build the index structure by reading
input characters one by one. This property has the further advantage of
saving working space during construction, because input texts do not need to
be kept in memory. We experimentally test OESP-index on its ability to build
index structures and search for query texts, and we show its efficiency,
especially its space efficiency when building index structures.
Comment: To appear in the Proceedings of the 22nd edition of the
International Symposium on String Processing and Information Retrieval
(SPIRE2015)
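For intuition about grammar compression generally, here is a toy offline Re-Pair-style compressor in Python: it repeatedly replaces the most frequent adjacent pair of symbols with a fresh nonterminal rule. OESP-index itself works online and builds a self-index, which this toy does not attempt.

    from collections import Counter

    def repair_compress(text):
        """Toy Re-Pair-style grammar compression (offline; NOT OESP-index):
        replace the most frequent adjacent pair with a new nonterminal
        until no pair occurs twice."""
        seq, rules, next_id = list(text), {}, 0
        while True:
            pairs = Counter(zip(seq, seq[1:]))
            if not pairs or pairs.most_common(1)[0][1] < 2:
                break
            pair = pairs.most_common(1)[0][0]
            symbol = f"R{next_id}"
            next_id += 1
            rules[symbol] = pair
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                    out.append(symbol)
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            seq = out
        return seq, rules

    seq, rules = repair_compress("abababab")
    print(seq)    # ['R1', 'R1']
    print(rules)  # {'R0': ('a', 'b'), 'R1': ('R0', 'R0')}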
Cancer genetics services in the UK
No abstract
The Frequent Items Problem in Online Streaming under Various Performance Measures
In this paper, we strengthen the competitive analysis results obtained for a
fundamental online streaming problem, the Frequent Items Problem.
Additionally, we contribute a more detailed analysis of this problem using
alternative performance measures, supplementing the insight gained from
competitive analysis. The results also contribute to the general study of
performance measures for online algorithms. It has long been known that
competitive analysis suffers from drawbacks in certain situations, and many
alternative measures have been proposed. However, more systematic
comparative studies of performance measures have only recently been
initiated, and we continue this work, applying competitive analysis,
relative interval analysis, and relative worst order analysis to the
Frequent Items Problem.
Comment: IMADA-preprint-c
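For readers unfamiliar with the underlying problem, here is the classic FREQUENT (Misra-Gries) summary in Python. It is background for the frequent items setting only; the paper analyzes online algorithms for this problem under several performance measures rather than proposing this particular summary.

    def misra_gries(stream, k):
        """FREQUENT (Misra-Gries) with k counters: any item occurring more
        than n/(k+1) times in a stream of length n survives to the end."""
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k:
                counters[item] = 1
            else:
                # Decrement every counter; drop counters that reach zero.
                for key in list(counters):
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return counters

    print(misra_gries(list("abracadabra"), k=2))
    # {'a': 2}: 'a' occurs 5 > 11/3 times, so it is guaranteed to survive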