Improved approximation for Fréchet distance on c-packed curves matching conditional lower bounds
The Fréchet distance is a well-studied and very popular measure of
similarity of two curves. The best known algorithms have quadratic time
complexity, which has recently been shown to be optimal assuming the Strong
Exponential Time Hypothesis (SETH) [Bringmann FOCS'14].
To overcome the worst-case quadratic time barrier, restricted classes of
curves have been studied that attempt to capture realistic input curves. The
most popular such class is that of c-packed curves, for which the Fréchet
distance has a $(1+\epsilon)$-approximation in time $O(cn/\epsilon + cn \log n)$
[Driemel et al. DCG'12]. In dimension $d \ge 5$ this cannot be improved to
$O((cn/\sqrt{\epsilon})^{1-\delta})$ for any $\delta > 0$ unless SETH fails
[Bringmann FOCS'14].
In this paper, exploiting properties that prevent stronger lower bounds, we
present an improved algorithm with runtime $\tilde{O}(cn/\sqrt{\epsilon})$.
This is optimal in high dimensions apart from lower order factors unless SETH
fails. Our main new ingredients are as follows: For filling the classical
free-space diagram we project short subcurves onto a line, which yields
one-dimensional separated curves with roughly the same pairwise distances
between vertices. Then we tackle this special case in near-linear time by
carefully extending a greedy algorithm for the Fréchet distance of
one-dimensional separated curves.
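For concreteness, the following is a minimal sketch (not taken from the paper) of
the textbook quadratic-time dynamic program for the discrete Fréchet distance. It
illustrates only the quadratic baseline that the improved c-packed algorithm
avoids; the paper's algorithm, which fills the continuous free-space diagram and
projects short subcurves onto a line, is substantially more involved and is not
reproduced here.

```python
# Textbook O(|P|*|Q|) dynamic program for the *discrete* Frechet distance.
# This is only the classical quadratic baseline, not the c-packed algorithm.
from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    """P, Q: polygonal curves given as lists of coordinate tuples."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

# Example: two short polygonal curves in the plane.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0
```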
Improved Protocols and Hardness Results for the Two-Player Cryptogenography Problem
The cryptogenography problem, introduced by Brody, Jakobsen, Scheder, and
Winkler (ITCS 2014), is to collaboratively leak a piece of information known to
only one member of a group (i)~without revealing who was the origin of this
information and (ii)~without any private communication, neither during the
process nor before. Despite several deep structural results, even the smallest
case of leaking one bit of information present at one of two players is not
well understood. Brody et al. gave a 2-round protocol enabling the two players
to succeed with probability $1/3$ and showed the hardness result that no
protocol can give a success probability of more than $3/8$.
In this work, we show that neither bound is tight. Our new hardness result,
obtained by a different application of the concavity method used also in the
previous work, states that a success probability better than 0.3672 is not
possible. Using both theoretical and numerical approaches, we improve the lower
bound to $0.3384$, that is, give a protocol leading to this success
probability. To ease the design of new protocols, we prove an equivalent
formulation of the cryptogenography problem as a solitaire vector splitting game.
Via an automated game tree search, we find good strategies for this game. We
then translate the splits that occurred in this strategy into inequalities
relating position values and use an LP solver to find an optimal solution for
these inequalities. This gives slightly better game values, but more
importantly, it gives a more compact representation of the protocol and a way
to easily verify the claimed quality of the protocol.
These improved bounds, as well as the large sizes and depths of the improved
protocols we find, suggest that finding good protocols for the
cryptogenography problem, as well as understanding their structure, is harder
than the simple problem formulation suggests.
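Since the abstract does not spell out the rules of the solitaire vector
splitting game, the following is only a generic sketch of the kind of memoized
game-tree search used to find good strategies; the `moves` and `terminal_value`
arguments are hypothetical placeholders that a concrete implementation of the
game would have to supply.

```python
# Generic memoized search for a solitaire (single-player) game: the player
# repeatedly picks the successor state maximizing the achievable value.
# States must be hashable; `moves` and `terminal_value` are placeholders.
from functools import lru_cache

def best_value(state, moves, terminal_value):
    @lru_cache(maxsize=None)
    def value(s):
        successors = list(moves(s))
        if not successors:                     # no split possible: terminal position
            return terminal_value(s)
        return max(value(t) for t in successors)
    return value(state)

# Toy usage (not the cryptogenography game): states are integers, the only move
# decrements, and the terminal value is the parity of the final state (prints 0).
print(best_value(5, lambda s: [s - 1] if s > 0 else [], lambda s: s % 2))
```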
Multivariate Fine-Grained Complexity of Longest Common Subsequence
We revisit the classic combinatorial pattern matching problem of finding a
longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a
textbook algorithm solves LCS in time $O(n^2)$, but although much effort has
been spent, no $O(n^{2-\epsilon})$-time algorithm is known for any constant
$\epsilon > 0$. Recent work
indeed shows that such an algorithm would refute the Strong Exponential Time
Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann,
Künnemann FOCS'15].
Despite the quadratic-time barrier, for over 40 years an enduring scientific
interest continued to produce fast algorithms for LCS and its variations.
Particular attention was put into identifying and exploiting input parameters
that yield strongly subquadratic time algorithms for special cases of interest,
e.g., differential file comparison. This line of research was successfully
pursued until 1990, at which time significant improvements came to a halt. In
this paper, using the lens of fine-grained complexity, our goal is to (1)
justify the lack of further improvements and (2) determine whether some special
cases of LCS admit faster algorithms than currently known.
To this end, we provide a systematic study of the multivariate complexity of
LCS, taking into account all parameters previously discussed in the literature:
the input size $n := \max\{|x|,|y|\}$, the length of the shorter string
$m := \min\{|x|,|y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of
deletions $\delta := m - L$ and $\Delta := n - L$, the alphabet size $|\Sigma|$,
as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of
instances defined by fixing each parameter individually to a polynomial in
terms of the input size, we prove a SETH-based lower bound matching one of
three known algorithms. Specifically, we determine the optimal running time for
LCS under SETH as $(n + \min\{d, \delta \Delta, \delta m\})^{1 \pm o(1)}$.
[...] Comment: Presented at SODA'18. Full version, 66 pages.
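As a small illustration of the parameters involved (not of the paper's
algorithms), the sketch below computes the LCS length with the textbook
quadratic dynamic program together with a few of the parameters listed above;
dominant pairs are omitted for brevity.

```python
# Textbook O(n*m) LCS dynamic program plus some of the parameters above:
# n (longer length), m (shorter length), L (LCS length), the deletion numbers
# delta = m - L and Delta = n - L, the alphabet size, and matching pairs M.
def lcs_parameters(x, y):
    if len(x) < len(y):            # ensure x is the longer string
        x, y = y, x
    n, m = len(x), len(y)
    prev = [0] * (m + 1)
    for i in range(1, n + 1):
        cur = [0] * (m + 1)
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                cur[j] = prev[j - 1] + 1
            else:
                cur[j] = max(prev[j], cur[j - 1])
        prev = cur
    L = prev[m]
    M = sum(x.count(ch) * y.count(ch) for ch in set(x))   # number of matching pairs
    return {"n": n, "m": m, "L": L, "delta": m - L, "Delta": n - L,
            "alphabet": len(set(x + y)), "M": M}

print(lcs_parameters("differential", "difference"))
```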
Automated analysis of security protocols with global state
Security APIs, key servers, and protocols that need to keep track of the status
of transactions require maintaining a global, non-monotonic state, e.g., in the
form of a database or register. However, most existing automated verification
tools do not support the analysis of such stateful security protocols, sometimes
for fundamental reasons such as the encoding of the protocol
as Horn clauses, which are inherently monotonic. A notable exception is the
recent tamarin prover which allows specifying protocols as multiset rewrite
(msr) rules, a formalism expressive enough to encode state. As multiset
rewriting is a "low-level" specification language with no direct support for
concurrent message passing, encoding protocols correctly is a difficult and
error-prone process. We propose a process calculus which is a variant of the
applied pi calculus with constructs for manipulation of a global state by
processes running in parallel. We show that this language can be translated to
msr rules whilst preserving all security properties expressible in a dedicated
first-order logic for security properties. The translation has been implemented
in a prototype tool which uses the tamarin prover as a backend. We apply the
tool to several case studies, among them a simplified fragment of PKCS#11, the
Yubikey security token, and an optimistic contract signing protocol.
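To illustrate why multiset rewriting can encode a mutable global store, here is
a toy Python sketch of a single rewriting step (this is only the underlying
idea, not tamarin's actual input language): a rule fires by consuming its
premise facts from the current state and adding its conclusion facts.

```python
# Toy multiset-rewriting step: a state is a multiset of ground facts, and a rule
# (premises, conclusions) fires by consuming the premises and adding the
# conclusions. This models a non-monotonic global store such as a key register.
from collections import Counter

def apply_rule(state, premises, conclusions):
    """Return the successor state, or None if the rule is not enabled."""
    if any(state[f] < k for f, k in premises.items()):
        return None
    return state - premises + conclusions    # consume premises, add conclusions

# Hypothetical register update: overwrite the value stored under key "k1".
state = Counter({("Store", "k1", "old"): 1})
updated = apply_rule(state,
                     Counter({("Store", "k1", "old"): 1}),
                     Counter({("Store", "k1", "new"): 1}))
print(updated)   # Counter({('Store', 'k1', 'new'): 1})
```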
Quasirandom Rumor Spreading: An Experimental Analysis
We empirically analyze two versions of the well-known "randomized rumor
spreading" protocol to disseminate a piece of information in networks. In the
classical model, in each round each informed node informs a random neighbor. In
the recently proposed quasirandom variant, each node has a (cyclic) list of its
neighbors. Once informed, it starts at a random position of the list, but from
then on informs its neighbors in the order of the list. While for sparse random
graphs a better performance of the quasirandom model could be proven, all other
results show that, independent of the structure of the lists, the same
asymptotic performance guarantees hold as for the classical model. In this
work, we compare the two models experimentally. This not only shows that the
quasirandom model generally is faster, but also that the runtime is more
concentrated around the mean. This is surprising given that far fewer random
bits are used in the quasirandom process. These advantages are also observed in
a lossy communication model, where each transmission does not reach its target
with a certain probability, and in an asynchronous model, where nodes send at
random times drawn from an exponential distribution. We also show that
typically the particular structure of the lists has little influence on the
efficiency. Comment: 14 pages, appeared in ALENEX'09.
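A minimal simulation sketch of the two push protocols compared in this study
(the experimental setup in the paper is more elaborate): in the classical model
an informed node pushes to a uniformly random neighbor each round, while in the
quasirandom model it walks through its neighbor list cyclically from a random
starting position.

```python
# Simulate push rumor spreading on a graph given as adjacency lists and return
# the number of rounds until every node is informed.
import random

def rounds_to_inform(adj, start, quasirandom=False):
    informed = {start}
    pos = {start: random.randrange(len(adj[start]))}   # list positions (quasirandom)
    rounds = 0
    while len(informed) < len(adj):
        rounds += 1
        for v in list(informed):                        # newly informed nodes start next round
            if quasirandom:
                u = adj[v][pos[v] % len(adj[v])]        # next neighbor on v's cyclic list
                pos[v] += 1
            else:
                u = random.choice(adj[v])               # classical: uniform random neighbor
            if u not in informed:
                informed.add(u)
                pos[u] = random.randrange(len(adj[u]))  # random starting position
    return rounds

# Example: complete graph on 64 nodes, both models.
K = {v: [u for u in range(64) if u != v] for v in range(64)}
print(rounds_to_inform(K, 0), rounds_to_inform(K, 0, quasirandom=True))
```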
On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress
Motivated by studying the power of randomness, certifying algorithms and
barriers for fine-grained reductions, we investigate the question whether the
multiplication of two $n \times n$ matrices can be performed in near-optimal
nondeterministic time $\tilde{O}(n^2)$. Since a classic algorithm due to
Freivalds verifies correctness of matrix products probabilistically in time
$O(n^2)$, our question is a relaxation of the open problem of derandomizing
Freivalds' algorithm. We discuss consequences of a positive or negative
resolution of this problem and provide potential avenues towards resolving it.
Particularly, we show that sufficiently fast deterministic verifiers for 3SUM
or univariate polynomial identity testing yield faster deterministic verifiers
for matrix multiplication. Furthermore, we present the partial algorithmic
progress that distinguishing whether an integer matrix product is correct or
contains between 1 and $n$ erroneous entries can be performed in time
$\tilde{O}(n^2)$; interestingly, the difficult case of deterministic matrix
product verification is not a problem of "finding a needle in the haystack",
but rather one of cancellation effects in the presence of many errors. Our main
technical contribution is a deterministic algorithm that corrects an integer
matrix product containing at most $t$ errors in time $\tilde{O}(\sqrt{t}\,n^2 + t^2)$.
To obtain this result, we show how to compute an integer matrix product with at
most $t$ nonzeroes in the same running time. This improves upon known
deterministic output-sensitive integer matrix multiplication algorithms for
$t = \Omega(n^{2/3})$ nonzeroes, which is of independent interest.
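For reference, a minimal sketch of Freivalds' probabilistic verifier mentioned
above (the paper's deterministic correction algorithm is not reproduced here):
to test whether $AB = C$, multiply both sides by a random 0/1 vector, which
takes quadratic time per repetition and catches a wrong product with
probability at least 1/2.

```python
# Freivalds' verifier: check A*B == C probabilistically in O(n^2) time per round.
import random

def freivalds(A, B, C, repetitions=20):
    n = len(A)
    for _ in range(repetitions):
        r = [random.randint(0, 1) for _ in range(n)]
        Br  = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr  = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False          # certainly not the correct product
    return True                   # a wrong product slips through with prob. <= 2**(-repetitions)

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
print(freivalds(A, B, [[19, 22], [43, 50]]))   # True: the genuine product
print(freivalds(A, B, [[19, 22], [43, 51]]))   # almost surely False
```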