### Counting Permutations Modulo Pattern-Replacement Equivalences for Three-Letter Patterns

We study a family of equivalence relations on $S_n$, the group of
permutations on $n$ letters, created in a manner similar to that of the Knuth
relation and the forgotten relation. For our purposes, two permutations are in
the same equivalence class if one can be reached from the other through a
series of pattern-replacements using patterns whose order permutations are in
the same part of a predetermined partition of $S_c$.
When the partition is of $S_3$ and has one nontrivial part and that part is
of size greater than two, we provide formulas for the number of classes created
in each previously unsolved case. When the partition is of $S_3$ and has two
nontrivial parts, each of size two (as do the Knuth and forgotten relations),
we enumerate the classes for $13$ of the $14$ unresolved cases. In two of these
cases, the enumerations that arise are the same as those yielded by the Knuth
and forgotten relations. The reasons for this phenomenon are still largely a
mystery.
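For small $n$, the class counts in question can be checked by brute force. The sketch below is an illustration, not the paper's method: it takes as the predetermined partition of $S_3$ the two size-two parts $\{213, 231\}$ and $\{132, 312\}$ of the Knuth relation, and flood-fills equivalence classes under replacements in windows of three consecutive letters. For $n = 4$ it recovers the classical count of $10$ Knuth classes, one per standard Young tableau with four cells.

```python
from itertools import permutations

# The two nontrivial parts of the partition of S_3 used by the Knuth
# relation: a window matching one pattern in a part may be rearranged
# into the other pattern of the same part.
PARTS = [((2, 1, 3), (2, 3, 1)), ((1, 3, 2), (3, 1, 2))]

def pattern(window):
    """Relative order of a window, as a tuple over {1, ..., len(window)}."""
    ranks = sorted(window)
    return tuple(ranks.index(v) + 1 for v in window)

def neighbors(perm):
    """All permutations reachable by one pattern replacement."""
    for i in range(len(perm) - 2):
        w = perm[i:i + 3]
        p = pattern(w)
        for a, b in PARTS:
            if p in (a, b):
                target = b if p == a else a
                vals = sorted(w)
                new_w = tuple(vals[t - 1] for t in target)
                yield perm[:i] + new_w + perm[i + 3:]

def count_classes(n):
    """Count equivalence classes of S_n by flood fill."""
    seen, classes = set(), 0
    for start in permutations(range(1, n + 1)):
        if start in seen:
            continue
        classes += 1
        stack = [start]
        seen.add(start)
        while stack:
            cur = stack.pop()
            for nb in neighbors(cur):
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
    return classes
```

Swapping other subsets of $S_3$ into `PARTS` gives the other cases treated in the paper, at least for the modest values of $n$ a brute-force search can reach.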

### Dynamic Time Warping in Strongly Subquadratic Time: Algorithms for the Low-Distance Regime and Approximate Evaluation

Dynamic time warping distance (DTW) is a widely used distance measure between
time series. The best known algorithms for computing DTW run in near quadratic
time, and conditional lower bounds prohibit the existence of significantly
faster algorithms. These lower bounds do not, however, rule out faster
algorithms for the special case in which the DTW is small. For an arbitrary metric space
$\Sigma$ with distances normalized so that the smallest non-zero distance is
one, we present an algorithm which computes $\operatorname{dtw}(x, y)$ for two
strings $x$ and $y$ over $\Sigma$ in time $O(n \cdot \operatorname{dtw}(x,
y))$. We also present an approximation algorithm which computes
$\operatorname{dtw}(x, y)$ within a factor of $O(n^\epsilon)$ in time
$\tilde{O}(n^{2 - \epsilon})$ for $0 < \epsilon < 1$. The algorithm allows for
the strings $x$ and $y$ to be taken over an arbitrary well-separated tree
metric with logarithmic depth and at most exponential aspect ratio. Extending
our techniques further, we also obtain the first approximation algorithm for
edit distance to work with characters taken from an arbitrary metric space,
providing an $n^\epsilon$-approximation in time $\tilde{O}(n^{2 - \epsilon})$,
with high probability. Additionally, we present a simple reduction from
computing edit distance to computing DTW. Applying our reduction to a
conditional lower bound of Bringmann and K\"unnemann pertaining to edit
distance over $\{0, 1\}$, we obtain a conditional lower bound for computing DTW
over a three-letter alphabet (with distances of zero and one). This improves on
a previous result of Abboud, Backurs, and Williams. With a similar approach, we
prove a reduction from computing edit distance to computing the longest common
subsequence (LCS) length. This means that one can recover conditional lower
bounds for LCS directly from those for edit distance, which was not previously
thought to be the case.
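For reference, the near-quadratic baseline that these results improve on is the textbook dynamic program for DTW. A minimal sketch, taking the metric as a `dist` callback so the characters can come from an arbitrary metric space (the callback interface is an assumption of this sketch, not fixed by the paper):

```python
def dtw(x, y, dist):
    # Classic O(len(x) * len(y)) dynamic program for DTW: each character
    # of one string is matched to a run of characters of the other, and
    # D[i][j] is the optimal warping cost of the prefixes x[:i], y[:j].
    INF = float("inf")
    n, m = len(x), len(y)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(x[i - 1], y[j - 1]) + min(
                D[i - 1][j - 1],  # advance in both strings
                D[i - 1][j],      # extend the run matched to y[j-1]
                D[i][j - 1],      # extend the run matched to x[i-1]
            )
    return D[n][m]
```

For the zero/one metric on $\{0, 1\}$, e.g., `dtw("0011", "0101", lambda a, b: 0 if a == b else 1)` evaluates to $1$, since the middle `01` of $y$ can be warped against the `01` of $x$ at cost one.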

### A Hash Table Without Hash Functions, and How to Get the Most Out of Your Random Bits

This paper considers the basic question of how strong a probabilistic
guarantee a hash table, storing $n$ $(1 + \Theta(1)) \log n$-bit key/value
pairs, can offer. Past work on this question has been bottlenecked by
limitations of the known families of hash functions: the only hash tables to
achieve failure probabilities less than $1 / 2^{\operatorname{polylog} n}$
require access to fully-random hash functions -- if the same hash tables are
implemented using the known explicit families of hash functions, their failure
probabilities become $1 / \operatorname{poly}(n)$.
To get around these obstacles, we show how to construct a randomized data
structure that has the same guarantees as a hash table, but that \emph{avoids
the direct use of hash functions}. Building on this, we are able to construct a
hash table using $O(n)$ random bits that achieves failure probability $1 /
n^{n^{1 - \epsilon}}$ for an arbitrary positive constant $\epsilon$.
In fact, we show that this guarantee can even be achieved by a \emph{succinct
dictionary}, that is, by a dictionary that uses space within a $1 + o(1)$
factor of the information-theoretic optimum.
Finally, we also construct a succinct hash table whose probabilistic
guarantees fall on a different extreme, offering a failure probability of
$1 / \operatorname{poly}(n)$ while using only $\tilde{O}(\log n)$ random bits.
This latter result matches (up to low-order terms) a guarantee previously
achieved by Dietzfelbinger et al., but with increased space efficiency and with
several surprising technical components.
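As context for the "known explicit families of hash functions" referred to above, a standard example is the Carter--Wegman pairwise-independent family, which can be sampled with only $O(\log)$-many random bits per function. The sketch below illustrates such a family; it is background, not a component of the paper's construction:

```python
import random

def make_pairwise_hash(m, p=(1 << 61) - 1):
    # Carter-Wegman pairwise-independent family over integer keys < p:
    #   h(x) = ((a*x + b) mod p) mod m,
    # where p is a prime exceeding the largest key (2^61 - 1 is prime).
    # Sampling a and b consumes only O(log p) random bits, in contrast
    # to the fully-random hash functions assumed by stronger guarantees.
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m
```

Pairwise independence suffices for classical expected-time bounds (e.g., for chaining), but, as the abstract notes, hash tables built from such explicit families have only been shown to achieve failure probability $1 / \operatorname{poly}(n)$.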