Recognizing well-parenthesized expressions in the streaming model
Motivated by a concrete problem and with the goal of understanding the sense
in which the complexity of streaming algorithms is related to the complexity of
formal languages, we investigate the problem Dyck(s) of checking matching
parentheses, with s different types of parentheses.
We present a one-pass randomized streaming algorithm for Dyck(2) with space
O(√n log n), time polylog(n) per letter, and one-sided error.
We prove that this one-pass algorithm is optimal, up to a polylog(n) factor,
even when two-sided error is allowed. For the lower bound, we prove a direct
sum result on hard instances by following the "information cost" approach, but
with a few twists. Indeed, we play a subtle game between public and private
coins. This mixture between public and private coins results from a balancing
act between the direct sum result and a combinatorial lower bound for the base
case.
Surprisingly, the space requirement shrinks drastically if we have access to
the input stream in reverse. We present a two-pass randomized streaming
algorithm for Dyck(2) with space O((log n)^2), time polylog(n) and
one-sided error, where the second pass is in the reverse direction. Both
algorithms can be extended to Dyck(s) since this problem is reducible to
Dyck(2) for a suitable notion of reduction in the streaming model.
Comment: 20 pages, 5 figures
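The reduction from Dyck(s) to Dyck(2) mentioned in the last sentence can be illustrated by the standard binary-encoding morphism. The sketch below is an offline toy version (function names are mine); since each letter is encoded independently, the same map can be applied letter by letter in a stream.

```python
def encode_dyck_s_to_2(word, s):
    """Encode a word over s bracket types as a word over 2 bracket types.

    Each letter is a pair (t, is_open) with 0 <= t < s. An opening
    bracket of type t becomes the binary code of t written with the
    opening types '(' and '['; the matching closer is the same code,
    reversed, written with ')' and ']'. The image is in Dyck(2) iff
    the original word is in Dyck(s).
    """
    k = max(1, (s - 1).bit_length())  # bits needed to encode a type
    out = []
    for t, is_open in word:
        bits = [(t >> j) & 1 for j in range(k)]
        if is_open:
            out.extend("(["[b] for b in bits)
        else:
            out.extend(")]"[b] for b in reversed(bits))
    return "".join(out)

def is_dyck2(w):
    """Offline stack-based membership test for Dyck(2)."""
    match = {")": "(", "]": "["}
    stack = []
    for c in w:
        if c in "([":
            stack.append(c)
        elif not stack or stack.pop() != match[c]:
            return False
    return not stack
```

The encoding stretches the input by a factor of ⌈log₂ s⌉, which is the price of working over only two bracket types.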
Improved Streaming Algorithm for Dyck(s) Recognition
Since any context-free language can be mapped to a subset of the Dyck languages, and in view of the various database applications of Dyck, chiefly verifying the well-formedness of XML files, we study randomized streaming algorithms for recognizing Dyck(s) languages with s different types of parentheses. The main motivation for this work is the well-known space lower bound of Ω(√n/T) for any T-pass streaming algorithm.
Let x be an input stream of length n with maximum height h_max. We first present a single-pass randomized streaming algorithm that decides membership of x in Dyck(s) using a Counting Bloom filter (CBF) with O(h_max) bits of space and polylog(n) time per letter. The error is two-sided, owing to the false positives and false negatives of the Counting Bloom filter. This algorithm avoids the streaming reduction of Dyck(s) to Dyck(2) and thereby saves a further factor of O(log s) in space compared with algorithms that use the reduction.
We also present an improved single-pass randomized streaming algorithm for recognizing Dyck(2) with O(√n) bits of space, matching the proven lower bound. The time bound is polylog(n), as for existing algorithms, and the error is one-sided. Here we extend the existing approach of periodically compressing stack information: where the existing approach uses two stacks and a linear hash function, we use three stacks with the same linear hash function to achieve the O(√n) space lower bound.
We also present another single-pass streaming algorithm with O(h_max) space that uses a Counting Bloom filter and acts directly on Dyck(s).
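The Counting-Bloom-filter idea can be sketched as follows. This is my own simplified rendering with fixed filter parameters, not the paper's O(h_max)-bit construction: each opening bracket of type t at height h becomes an insertion of the pair (t, h), and each closing bracket becomes a query-and-delete of the same pair.

```python
import hashlib

class CountingBloomFilter:
    """A toy Counting Bloom filter: k hash positions per item,
    each backed by a small counter (here an unbounded Python int)."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.counts = [0] * m

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{item}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "big") % self.m

    def insert(self, item):
        for p in self._positions(item):
            self.counts[p] += 1

    def maybe_contains(self, item):  # may return false positives
        return all(self.counts[p] > 0 for p in self._positions(item))

    def delete(self, item):
        for p in self._positions(item):
            self.counts[p] -= 1

def check_dyck_s(stream):
    """One-pass membership sketch: stream is a sequence of
    (type, is_open) pairs. Opening brackets insert (type, height)
    into the CBF; closing brackets query and delete the same pair.
    CBF errors make the overall test two-sided."""
    cbf = CountingBloomFilter(m=1024, k=4)
    height = 0
    for t, is_open in stream:
        if is_open:
            cbf.insert((t, height))
            height += 1
        else:
            height -= 1
            if height < 0 or not cbf.maybe_contains((t, height)):
                return False
            cbf.delete((t, height))
    return height == 0 and all(c == 0 for c in cbf.counts)
```

A matched pair is always found (its own counters are still positive when the closer arrives), while a type mismatch is caught unless all k hash positions collide.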
Quantum Chebyshev's Inequality and Applications
In this paper we provide new quantum algorithms with polynomial speed-up for
a range of problems for which no such results were known, or we improve
previous algorithms. First, we consider the approximation of the frequency
moments F_k of order k ≥ 3 in the multi-pass streaming model with
updates (turnstile model). We design a P-pass quantum streaming algorithm
with memory M satisfying a tradeoff of P^2 M = Õ(n^(1-2/k)),
whereas the best classical algorithm requires P M = Θ(n^(1-2/k)). Then,
we study the problem of estimating the number m of edges and the number t
of triangles given query access to an n-vertex graph. We describe optimal
quantum algorithms that perform Õ(√n/m^(1/4)) and
Õ(√n/t^(1/6) + m^(3/4)/√t) queries respectively. This is
a quadratic speed-up compared to the classical complexity of these problems.
For this purpose we develop a new quantum paradigm that we call Quantum
Chebyshev's inequality. Namely we demonstrate that, in a certain model of
quantum sampling, one can approximate with relative error ε the mean of any
random variable with a number of quantum samples that is linear in the ratio of
the square root of the variance to the mean. Classically the dependency is
quadratic. Our algorithm subsumes a previous result of Montanaro [Mon15]. This
new paradigm is based on a refinement of the Amplitude Estimation algorithm of
Brassard et al. [BHMT02] and of previous quantum algorithms for the mean
estimation problem. We show that this speed-up is optimal, and we identify
another common model of quantum sampling where it cannot be obtained. For our
applications, we also adapt the variable-time amplitude amplification technique
of Ambainis [Amb10] into a variable-time amplitude estimation algorithm.
Comment: 27 pages; v3: better presentation, lower bound in Theorem 4.3 is ne
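The mean-estimation speed-up at the heart of the paper can be summarized as follows; the ε-dependence shown is the one from the standard statements of such bounds, since the abstract itself only asserts linear versus quadratic dependence on σ/μ:

```latex
% Estimating the mean \mu > 0 of a random variable with standard
% deviation \sigma, up to relative error \epsilon: number of samples
\[
  n_{\text{classical}}
    = \Theta\!\left(\frac{1}{\epsilon^{2}} \cdot \frac{\sigma^{2}}{\mu^{2}}\right)
  \qquad\text{vs.}\qquad
  n_{\text{quantum}}
    = \widetilde{O}\!\left(\frac{1}{\epsilon} \cdot \frac{\sigma}{\mu}\right)
\]
```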
Incidence Geometries and the Pass Complexity of Semi-Streaming Set Cover
Set cover, over a universe of size n, may be modelled as a data-streaming
problem, where the m sets that comprise the instance are to be read one by
one. A semi-streaming algorithm is allowed only O(n poly(log n, log m)) space to process this stream. For each p ≥ 1, we give a very
simple deterministic algorithm that makes p passes over the input stream and
returns an appropriately certified (p+1)n^(1/(p+1))-approximation to the
optimum set cover. More importantly, we proceed to show that this approximation
factor is essentially tight, by showing that a factor better than
c·n^(1/(p+1))/(p+1)^2, for an absolute constant c, is unachievable for a p-pass semi-streaming
algorithm, even allowing randomisation. In particular, this implies that
achieving a Θ(log n)-approximation requires Ω(log n/log log n)
passes, which is tight up to the log log n factor. These results extend to a
relaxation of the set cover problem where we are allowed to leave an ε
fraction of the universe uncovered: the tight bounds on the best
approximation factor achievable in p passes turn out to be
Θ_p(min{n^(1/(p+1)), ε^(-1/p)}). Our lower bounds are based
on a construction of a family of high-rank incidence geometries, which may be
thought of as vast generalisations of affine planes. This construction, based
on algebraic techniques, appears flexible enough to find other applications and
is therefore interesting in its own right.
Comment: 20 pages
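A threshold-based multi-pass algorithm of the kind described can be rendered as follows. This is a simplified sketch of mine: the threshold schedule n^((p-j)/p) differs slightly from the paper's, which achieves the certified (p+1)n^(1/(p+1)) factor, and no certificate is produced.

```python
def multipass_set_cover(sets, n, p):
    """p sequential passes over the stream of sets; in pass j, keep any
    set that covers at least n^((p-j)/p) still-uncovered elements.
    The final pass has threshold 1, so the output is a full cover
    whenever the instance admits one."""
    uncovered = set(range(n))
    cover = []
    for j in range(1, p + 1):
        threshold = n ** ((p - j) / p)
        for i, s in enumerate(sets):  # one pass over the input stream
            gain = uncovered & s
            if len(gain) >= threshold:
                cover.append(i)
                uncovered -= gain
    return cover, uncovered
```

Early passes keep only sets with large marginal coverage, so few sets are taken per pass; the geometric decay of the thresholds is what yields an approximation factor polynomial in n^(1/p).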
Automata Theory on Sliding Windows
In a recent paper we analyzed the space complexity of streaming algorithms whose goal is to decide membership of a sliding window in a fixed language. For the class of regular languages we proved a space trichotomy theorem: for every regular language, the optimal space bound is either constant, logarithmic, or linear. In this paper we continue this line of research: we present natural characterizations of the constant- and logarithmic-space classes and establish tight relationships to the concept of language growth. We also analyze the space complexity with respect to automaton size and prove almost matching lower and upper bounds. Finally, we consider the decision problem of whether a language given by a DFA/NFA admits a sliding window algorithm using logarithmic/constant space.
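A toy instance of the logarithmic class (an illustration of mine, not the paper's characterization): for the language a* over a fixed-size window of length n, it suffices to remember the position of the most recent non-'a' symbol, which takes O(log n) bits.

```python
class WindowAllAs:
    """O(log n)-space sliding window membership for L = a* over a
    fixed-size window of length n: store only two counters, the
    number of symbols read and the position of the latest non-'a'.
    Assumes at least n symbols have been read before querying."""
    def __init__(self, n):
        self.n = n
        self.pos = 0          # number of symbols read so far
        self.last_bad = None  # position of the most recent non-'a'

    def read(self, c):
        if c != "a":
            self.last_bad = self.pos
        self.pos += 1

    def window_in_language(self):
        # the window is the last n symbols; it is all 'a' iff the
        # latest non-'a' symbol has already slid out of the window
        return self.last_bad is None or self.last_bad < self.pos - self.n
```

The two counters need O(log n) bits in total, far below the n bits a verbatim copy of the window would require.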
Low-Latency Sliding Window Algorithms for Formal Languages
Low-latency sliding window algorithms for regular and context-free languages
are studied, where latency refers to the worst-case time spent for a single
window update or query. For every regular language it is shown that there
exists a constant-latency solution that supports adding and removing symbols
independently on both ends of the window (the so-called two-way variable-size
model). We prove that this result extends to all visibly pushdown languages.
For deterministic 1-counter languages we present an O(log n)-latency
sliding window algorithm for the two-way variable-size model, where n
refers to the window size. We complement these results with a conditional lower
bound: there exists a fixed real-time deterministic context-free language L
such that, assuming the OMV (online matrix-vector multiplication) conjecture,
there is no sliding window algorithm for L with latency O(n^(1/2-ε))
for any ε > 0, even in the most restricted sliding window model (one-way
fixed-size model). The above-mentioned results all refer to the unit-cost RAM
model with logarithmic word size. For regular languages we also present a
refined picture using word sizes O(1), O(log log n), and O(log n).
Comment: A short version will be presented at the conference FSTTCS 202
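The constant-latency result for regular languages can be approximated (amortized rather than worst-case, which is where the paper's de-amortization work lies) by the classic two-stack queue over the DFA's transition monoid: each stack element carries a per-symbol transition function together with the composition of everything beneath it.

```python
class SlidingWindowDFA:
    """Amortized O(1)-latency sliding window membership for a regular
    language, in the one-way fixed-size model. delta[symbol] is a
    tuple mapping each DFA state to its successor state."""
    def __init__(self, delta, start, finals, n):
        self.delta, self.start, self.finals, self.n = delta, start, finals, n
        self.q = len(delta[next(iter(delta))])
        self.ident = tuple(range(self.q))
        self.front = []  # top = oldest window symbol
        self.back = []   # top = newest window symbol

    def _front_push(self, f):
        below = self.front[-1][1] if self.front else self.ident
        # aggregate: apply f first, then everything below it (newer)
        self.front.append((f, tuple(below[f[s]] for s in range(self.q))))

    def _back_push(self, f):
        below = self.back[-1][1] if self.back else self.ident
        # aggregate: everything below it (older) first, then f
        self.back.append((f, tuple(f[below[s]] for s in range(self.q))))

    def read(self, symbol):
        """Slide the window one step: drop the oldest symbol (once the
        window is full) and append `symbol`."""
        if len(self.front) + len(self.back) == self.n:
            if not self.front:           # amortized-O(1) refill step
                while self.back:
                    self._front_push(self.back.pop()[0])
            self.front.pop()             # evict the oldest symbol
        self._back_push(self.delta[symbol])

    def window_accepted(self):
        f_agg = self.front[-1][1] if self.front else self.ident
        b_agg = self.back[-1][1] if self.back else self.ident
        # compose oldest-to-newest: front aggregate, then back aggregate
        return b_agg[f_agg[self.start]] in self.finals
```

Each update touches O(1) stack entries amortized, and a query composes two cached aggregates; stored aggregates stay valid after pops because each depends only on the entries beneath it.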