
    Labelings vs. Embeddings: On Distributed Representations of Distances

    We investigate for which metric spaces the performance of distance labeling and of $\ell_\infty$-embeddings differ, and how significant this difference can be. Recall that a distance labeling is a distributed representation of distances in a metric space $(X,d)$, where each point $x\in X$ is assigned a succinct label, such that the distance between any two points $x,y \in X$ can be approximated given only their labels. A highly structured special case is an embedding into $\ell_\infty$, where each point $x\in X$ is assigned a vector $f(x)$ such that $\|f(x)-f(y)\|_\infty$ is approximately $d(x,y)$. The performance of a distance labeling or an $\ell_\infty$-embedding is measured via its distortion and its label-size/dimension. We also study the analogous question for the prioritized versions of these two measures. Here, a priority order $\pi=(x_1,\dots,x_n)$ of the point set $X$ is given, and higher-priority points should have shorter labels. Formally, a distance labeling has prioritized label-size $\alpha(\cdot)$ if every $x_j$ has label size at most $\alpha(j)$. Similarly, an embedding $f: X \to \ell_\infty$ has prioritized dimension $\alpha(\cdot)$ if $f(x_j)$ is non-zero only in the first $\alpha(j)$ coordinates. In addition, we compare these prioritized measures to their classical (worst-case) versions. We answer these questions in several scenarios, uncovering a surprisingly diverse range of behaviors. First, in some cases labelings and embeddings have very similar worst-case performance, but in other cases there is a huge disparity. However, in the prioritized setting, we most often find a strict separation between the performance of labelings and embeddings. Finally, when comparing the classical and prioritized settings, we find that the worst-case bound for label size often ``translates'' to a prioritized one, but we also find a surprising exception to this rule.
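
    As a minimal illustration of these definitions (not taken from the paper), the classical Frechet embedding maps each point to its vector of distances to all points; this is an isometric embedding into $\ell_\infty^n$, and the resulting vectors double as (large) distance labels. The Python sketch below, with hypothetical helper names frechet_embedding and linf_estimate, shows how $\|f(x)-f(y)\|_\infty$ recovers $d(x,y)$ on a toy metric.

    # Illustrative sketch only: the classical Frechet embedding of a finite metric.
    # Each point x is mapped to f(x) = (d(x, p_1), ..., d(x, p_n)); by the triangle
    # inequality, max_p |d(x, p) - d(y, p)| = d(x, y), so the l_infinity distance
    # between the two labels recovers d(x, y) with distortion 1.
    import itertools

    def frechet_embedding(points, dist):
        """Embed a finite metric (points, dist) isometrically into l_infinity^n."""
        return {x: [dist(x, p) for p in points] for x in points}

    def linf_estimate(label_x, label_y):
        """Estimate d(x, y) given only the two labels."""
        return max(abs(a - b) for a, b in zip(label_x, label_y))

    if __name__ == "__main__":
        pts = [0, 1, 2, 3]                                # vertices of a 4-cycle
        d = lambda x, y: min((x - y) % 4, (y - x) % 4)    # shortest-path distance
        f = frechet_embedding(pts, d)
        for x, y in itertools.combinations(pts, 2):
            assert linf_estimate(f[x], f[y]) == d(x, y)   # exact recovery

    Of course, this embedding spends $n$ coordinates per point; the trade-off studied in the paper is precisely how far this label size/dimension can be reduced, and at what cost in distortion.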

    Scattering and Sparse Partitions, and Their Applications


    Online Duet between Metric Embeddings and Minimum-Weight Perfect Matchings

    Low-distortion metric embeddings are a crucial component of the modern algorithmic toolkit. In an online metric embedding, points arrive sequentially and the goal is to embed them into a simple space irrevocably, while minimizing the distortion. Our first result is a deterministic online embedding of a general metric into Euclidean space with distortion $O(\log n)\cdot\min\{\sqrt{\log\Phi},\sqrt{n}\}$ (or $O(d)\cdot\min\{\sqrt{\log\Phi},\sqrt{n}\}$ if the metric has doubling dimension $d$), resolving a conjecture by Newman and Rabinovich (2020) and quadratically improving the dependence on the aspect ratio $\Phi$ from Indyk et al. (2010). Our second result is a stochastic embedding of a metric space into trees with expected distortion $O(d\cdot \log\Phi)$, generalizing previous results (Indyk et al. (2010), Bartal et al. (2020)). Next, we study the \emph{online minimum-weight perfect matching} problem, where a sequence of $2n$ metric points arrives in pairs, and one has to maintain a perfect matching at all times. We allow recourse (as otherwise the order of arrival determines the matching). The goal is to maintain a perfect matching that approximates the \emph{minimum-weight} perfect matching at all times, while minimizing the recourse. Our third result is a randomized algorithm with competitive ratio $O(d\cdot \log \Phi)$ and recourse $O(\log \Phi)$ against an oblivious adversary; this result is obtained via our new stochastic online embedding. Our fourth result is a deterministic algorithm against an adaptive adversary, using $O(\log^2 n)$ recourse, that maintains a matching of weight at most $O(\log n)$ times the weight of the MST, i.e., a matching of lightness $O(\log n)$. We complement our upper bounds with a strategy for an oblivious adversary that, with recourse $r$, establishes a lower bound of $\Omega(\frac{\log n}{r \log r})$ on both the competitive ratio and the lightness.

    Comment: 53 pages, 8 figures, to be presented at the ACM-SIAM Symposium on Discrete Algorithms (SODA24)
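
    To make the matching objective concrete, here is a toy sketch (not the paper's algorithm; the brute-force helper min_weight_perfect_matching is a hypothetical name introduced only for illustration): points on the real line arrive in pairs, and the zero-recourse strategy of matching each arriving pair to itself can be far heavier than the offline minimum-weight perfect matching, which is why recourse is allowed at all.

    # Illustrative sketch only (not the paper's algorithm): points arrive in pairs,
    # and a perfect matching must be maintained at all times. With zero recourse the
    # only option is to match each arriving pair to itself, which can be much heavier
    # than the offline minimum-weight perfect matching.

    def min_weight_perfect_matching(points, dist):
        """Brute-force offline optimum (exponential time; fine for tiny examples)."""
        pts = list(points)
        if not pts:
            return 0.0
        x, best = pts[0], float("inf")
        for y in pts[1:]:
            rest = [p for p in pts if p not in (x, y)]
            best = min(best, dist(x, y) + min_weight_perfect_matching(rest, dist))
        return best

    if __name__ == "__main__":
        arrivals = [(0.0, 100.0), (0.5, 100.5)]               # two pairs on the line
        dist = lambda a, b: abs(a - b)
        naive = sum(dist(a, b) for a, b in arrivals)          # 200.0, zero recourse
        opt = min_weight_perfect_matching(
            [p for pair in arrivals for p in pair], dist)     # 1.0: 0-0.5 and 100-100.5
        print(f"no-recourse weight = {naive}, offline optimum = {opt}")

    The paper's algorithms allow a bounded amount of re-matching (recourse) precisely to close this kind of gap, measured either against the offline optimum (competitive ratio) or against the MST weight (lightness).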