Labelings vs. Embeddings: On Distributed Representations of Distances
We investigate for which metric spaces the performance of distance labeling
and of $\ell_\infty$-embeddings differ, and how significant this difference
can be. Recall that a distance labeling is a distributed representation of
distances in a metric space $(X,d)$, where each point $x\in X$ is assigned a
succinct label, such that the distance between any two points $x,y\in X$ can
be approximated given only their labels. A highly structured special case is an
embedding into $\ell_\infty$, where each point $x\in X$ is assigned a vector
$f(x)$ such that $\|f(x)-f(y)\|_\infty$ is approximately $d(x,y)$. The
performance of a distance labeling or an $\ell_\infty$-embedding is measured
via its distortion and its label-size/dimension.
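As a toy illustration of these definitions (a sketch of my own, not taken from the paper: the pivot choice, the line metric, and all names are illustrative assumptions), labels formed by distances to a few fixed pivots can be decoded to estimate distances, and the same vectors form a Fréchet-style embedding in which the maximum coordinate difference never exceeds the true distance:

```python
# Toy metric: points on the real line with d(x, y) = |x - y|.
points = [0.0, 1.5, 4.0, 9.0, 10.5]

def dist(x, y):
    return abs(x - y)

# Label each point by its distances to a few pivots. The labels double
# as coordinates of an embedding: by the triangle inequality, the max
# coordinate difference lower-bounds d(x, y), and it equals d(x, y)
# whenever one of the two points is itself a pivot.
pivots = [0.0, 9.0]

def label(x):
    return tuple(dist(x, p) for p in pivots)

def estimate(lx, ly):
    # Decode an estimate of d(x, y) from the two labels alone.
    return max(abs(a - b) for a, b in zip(lx, ly))
```

Here the label size (two numbers) equals the dimension of the embedding, which is exactly the label-size/dimension trade-off measured above.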
We also study the analogous question for the prioritized versions of these
two measures. Here, a priority order $(x_1,\dots,x_n)$ of the point set
is given, and higher-priority points should have shorter labels. Formally, a
distance labeling has prioritized label-size $\alpha(\cdot)$ if every $x_j$ has
label size at most $\alpha(j)$. Similarly, an embedding has prioritized
dimension $\alpha(\cdot)$ if $f(x_j)$ is non-zero only in the first $\alpha(j)$
coordinates. In addition, we compare these prioritized measures to their
classical (worst-case) versions.
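The prioritized-dimension condition is simple to state operationally. A minimal sketch in Python, where the budget function `alpha` and the example vectors are my own illustrative choices, not values from the paper:

```python
import math

def has_prioritized_dimension(vectors, alpha):
    """Check that the j-th highest-priority vector (1-indexed) is
    non-zero only in its first alpha(j) coordinates."""
    for j, v in enumerate(vectors, start=1):
        if any(coord != 0 for coord in v[alpha(j):]):
            return False
    return True

# Illustrative budget: alpha(j) grows logarithmically in the priority rank.
alpha = lambda j: math.ceil(math.log2(j)) + 1 if j > 1 else 1

vectors = [
    (3, 0, 0, 0),  # x_1: only the first coordinate may be used
    (1, 2, 0, 0),  # x_2: alpha(2) = 2
    (0, 5, 0, 0),  # x_3: alpha(3) = 3
    (1, 1, 2, 4),  # x_4: alpha(4) = 3, so the 4th coordinate violates it
]
```

The first three vectors satisfy the condition, while the fourth violates it, since its last coordinate lies outside the `alpha(4) = 3` allowed coordinates.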
We answer these questions in several scenarios, uncovering a surprisingly
diverse range of behaviors. First, in some cases labelings and embeddings have
very similar worst-case performance, but in other cases there is a huge
disparity. However, in the prioritized setting, we most often find a strict
separation between the performance of labelings and embeddings. Finally,
when comparing the classical and prioritized settings, we find that a
worst-case bound on label size often ``translates'' into a prioritized one,
but we also uncover a surprising exception to this rule.
Online Duet between Metric Embeddings and Minimum-Weight Perfect Matchings
Low-distortion metric embeddings are a crucial component of the modern
algorithmic toolkit. In an online metric embedding, points arrive sequentially
and the goal is to embed them into a simple space irrevocably, while minimizing
the distortion. Our first result is a deterministic online embedding of a
general metric into Euclidean space with a distortion guarantee (which improves
when the metric has bounded doubling dimension), solving a conjecture by
Newman and Rabinovich (2020), and quadratically improving the dependence on
the aspect ratio from Indyk et al.\ (2010). Our second result is a stochastic
embedding of a metric space into trees with bounded expected distortion,
generalizing previous results (Indyk et al.\ (2010), Bartal et al.\ (2020)).
Next, we study the \emph{online minimum-weight perfect matching} problem,
where a sequence of metric points arrives in pairs, and one has to maintain
a perfect matching at all times. We allow recourse (as otherwise the order of
arrival determines the matching). The goal is to return a perfect matching that
approximates the \emph{minimum-weight} perfect matching at all times, while
minimizing the recourse. Our third result is a randomized algorithm with
guarantees on both the competitive ratio and the recourse against an
oblivious adversary; this result is obtained via our new stochastic online
embedding. Our fourth result is a deterministic algorithm against an adaptive
adversary that, using limited recourse, maintains a matching of weight at
most a bounded factor times the weight of the MST, i.e., a matching of
bounded lightness. We complement our upper bounds with a strategy for an
oblivious adversary that establishes a lower bound, as a function of the
allowed recourse, on both the competitive ratio and the lightness.

Comment: 53 pages, 8 figures, to be presented at the ACM-SIAM Symposium on
Discrete Algorithms (SODA 2024).
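To make the online matching model concrete, here is a small sketch, assuming a baseline strategy of my own devising (recompute the optimum after each arriving pair, by brute force), not one of the paper's algorithms. It shows how recourse is accounted for: the matching is maintained at all times, and each replaced pair counts toward the recourse.

```python
def min_weight_perfect_matching(pts, dist):
    """Brute-force minimum-weight perfect matching (tiny inputs only)."""
    best, best_pairs = float("inf"), set()

    def rec(remaining, weight, pairs):
        nonlocal best, best_pairs
        if not remaining:
            if weight < best:
                best, best_pairs = weight, set(pairs)
            return
        # Fix the smallest unmatched index and try every partner for it.
        i = remaining[0]
        for j in remaining[1:]:
            rest = [k for k in remaining if k not in (i, j)]
            rec(rest, weight + dist(pts[i], pts[j]), pairs + [(i, j)])

    rec(list(range(len(pts))), 0.0, [])
    return best, best_pairs

def online_matching(stream, dist):
    """Points arrive in pairs; maintain a perfect matching at all times.
    Recourse counts the matched pairs that were removed along the way."""
    pts, matching, recourse = [], set(), 0
    for a, b in stream:
        pts.extend([a, b])
        _, new_matching = min_weight_perfect_matching(pts, dist)
        recourse += len(matching - new_matching)  # pairs torn up this round
        matching = new_matching
    return matching, recourse
```

For instance, with points on a line arriving as the pairs (0.0, 10.0) and then (0.1, 10.1), the first pair must initially be matched together, and the optimum afterwards rematches everything, costing one unit of recourse; the trade-off between such recourse and the quality of the maintained matching is exactly what the results above quantify.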