
    Distributed PCP Theorems for Hardness of Approximation in P

    We present a new distributed model of probabilistically checkable proofs (PCP). A satisfying assignment $x \in \{0,1\}^n$ to a CNF formula $\varphi$ is shared between two parties, where Alice knows $x_1, \dots, x_{n/2}$, Bob knows $x_{n/2+1}, \dots, x_n$, and both parties know $\varphi$. The goal is to have Alice and Bob jointly write a PCP that $x$ satisfies $\varphi$, while exchanging little or no information. Unfortunately, this model as-is does not allow for nontrivial query complexity. Instead, we focus on a non-deterministic variant, where the players are helped by Merlin, a third party who knows all of $x$. Using our framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P. In particular, under SETH we show that there are no truly-subquadratic approximation algorithms for Bichromatic Maximum Inner Product over $\{0,1\}$-vectors, Bichromatic LCS Closest Pair over permutations, Approximate Regular Expression Matching, and Diameter in Product Metric. All our inapproximability factors are nearly tight. In particular, for the first two problems we obtain nearly-polynomial factors of $2^{(\log n)^{1-o(1)}}$; only $(1+o(1))$-factor lower bounds (under SETH) were known before.
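    As a concrete point of reference (our illustration, not from the paper), the sketch below spells out the Bichromatic Maximum Inner Product problem that the first lower bound concerns: given red and blue sets of $\{0,1\}$-vectors, find the red/blue pair with the largest inner product. The exhaustive search shown runs in quadratic time in the number of vectors; the abstract's SETH-based result rules out truly-subquadratic algorithms even for approximating this maximum within a $2^{(\log n)^{1-o(1)}}$ factor.

```python
# Naive quadratic baseline for Bichromatic Maximum Inner Product over
# {0,1}-vectors (illustration only): given sets A and B of d-dimensional
# 0/1 vectors, find the pair (a, b) in A x B maximizing <a, b>.
from itertools import product

def bichromatic_max_inner_product(A, B):
    """Exhaustive search over all red/blue pairs: O(|A| * |B| * d) time."""
    best_pair, best_value = None, -1
    for a, b in product(A, B):
        value = sum(x & y for x, y in zip(a, b))  # inner product of 0/1 vectors
        if value > best_value:
            best_pair, best_value = (a, b), value
    return best_pair, best_value

# Example usage on hypothetical toy data:
A = [(1, 0, 1, 1), (0, 1, 0, 0)]
B = [(1, 1, 1, 0), (0, 0, 1, 1)]
print(bichromatic_max_inner_product(A, B))  # ((1, 0, 1, 1), (1, 1, 1, 0)), 2
```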

    Multi-Embedding of Metric Spaces

    Metric embedding has become a common technique in the design of algorithms. Its applicability often depends on how large the embedding's distortion is. For example, embedding a finite metric space into a tree may require distortion that is linear in its size. Using probabilistic metric embeddings, the bound on the distortion reduces to logarithmic in the size. We take a step towards bypassing the lower bound on the distortion in terms of the size of the metric. We define "multi-embeddings" of metric spaces, in which a point is mapped onto a set of points, while keeping the target metric of polynomial size and preserving the distortion of paths. The distortion obtained with such multi-embeddings into ultrametrics is at most $O(\log \Delta \log\log \Delta)$, where $\Delta$ is the aspect ratio of the metric. In particular, for expander graphs we obtain constant-distortion embeddings into trees, in contrast with the $\Omega(\log n)$ lower bound for all previous notions of embeddings. We demonstrate the algorithmic application of the new embeddings for two optimization problems: group Steiner tree and metrical task systems.
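    For concreteness (our illustration, not the paper's construction), the helper below computes the two quantities the stated bound is phrased in: the aspect ratio $\Delta$ of a finite metric (largest divided by smallest nonzero pairwise distance) and the distortion of a candidate embedding, defined as the product of its worst-case expansion and contraction.

```python
# Helpers (illustration only) for the quantities in the abstract's bound:
# aspect ratio Delta of a finite metric, and the distortion of a map f
# between two finite metric spaces.

def aspect_ratio(points, dist):
    """Delta = max pairwise distance / min nonzero pairwise distance."""
    ds = [dist(p, q) for p in points for q in points if p != q]
    return max(ds) / min(ds)

def distortion(points, dist, embedded_dist, f):
    """Distortion = worst-case expansion * worst-case contraction of f.
    Assumes f maps distinct points to points at nonzero embedded distance."""
    expansion = max(embedded_dist(f(p), f(q)) / dist(p, q)
                    for p in points for q in points if p != q)
    contraction = max(dist(p, q) / embedded_dist(f(p), f(q))
                      for p in points for q in points if p != q)
    return expansion * contraction
```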

    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives. A "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference. Comment: 47 pp., 3 figures. Preliminary version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
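    As a toy illustration of the secure-sketch idea (a simplified code-offset construction in the spirit of the paper, here instantiated with a 3-bit repetition code rather than a strong error-correcting code), the sketch below publishes the offset of the input w from a random codeword and later recovers w exactly from any sufficiently close w'.

```python
# Toy code-offset secure sketch for Hamming distance (illustration only).
# SS(w) = w XOR C(k) for a random message k; Rec(w', s) decodes w' XOR s,
# re-encodes, and XORs back with s, recovering w whenever w' is within the
# code's error-correction radius of w.
import secrets

R = 3  # repetition factor: corrects up to 1 bit-flip per 3-bit block

def encode(bits):
    """Repetition-code encoder: repeat each bit R times."""
    return [b for b in bits for _ in range(R)]

def decode(bits):
    """Majority-vote decoder for the repetition code."""
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sketch(w):
    """SS(w): offset of w from a random codeword (public helper data)."""
    k = [secrets.randbelow(2) for _ in range(len(w) // R)]
    return xor(w, encode(k))

def recover(w_noisy, s):
    """Rec(w', s): decode the noisy offset and shift back by s."""
    return xor(encode(decode(xor(w_noisy, s))), s)

# Example: flip one bit of w and still recover it exactly from the sketch.
w = [1, 0, 1, 1, 0, 0]            # original noisy secret (e.g. a biometric)
s = sketch(w)                     # public helper data
w_noisy = w.copy(); w_noisy[2] ^= 1
assert recover(w_noisy, s) == w
```

    A fuzzy extractor can then be obtained, as the paper describes, by combining such a sketch with a randomness extractor applied to the recovered w.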

    Parallel Metric Tree Embedding based on an Algebraic View on Moore-Bellman-Ford

    A metric tree embedding of expected stretch $\alpha \geq 1$ maps a weighted $n$-node graph $G = (V, E, \omega)$ to a weighted tree $T = (V_T, E_T, \omega_T)$ with $V \subseteq V_T$ such that, for all $v, w \in V$, $\operatorname{dist}(v, w, G) \leq \operatorname{dist}(v, w, T)$ and $\operatorname{E}[\operatorname{dist}(v, w, T)] \leq \alpha \operatorname{dist}(v, w, G)$. Such embeddings are highly useful for designing fast approximation algorithms, as many hard problems are easy to solve on tree instances. However, to date the best parallel $(\operatorname{polylog} n)$-depth algorithm that achieves an asymptotically optimal expected stretch of $\alpha \in O(\log n)$ requires $\Omega(n^2)$ work and a metric as input. In this paper, we show how to achieve the same guarantees using $\operatorname{polylog} n$ depth and $\tilde{O}(m^{1+\epsilon})$ work, where $m = |E|$ and $\epsilon > 0$ is an arbitrarily small constant. Moreover, one may further reduce the work to $\tilde{O}(m + n^{1+\epsilon})$ at the expense of increasing the expected stretch to $O(\epsilon^{-1} \log n)$. Our main tool in deriving these parallel algorithms is an algebraic characterization of a generalization of the classic Moore-Bellman-Ford algorithm. We consider this framework, which subsumes a variety of previous "Moore-Bellman-Ford-like" algorithms, to be of independent interest and discuss it in depth. In our tree embedding algorithm, we leverage it to provide efficient query access to an approximate metric that allows sampling the tree using $\operatorname{polylog} n$ depth and $\tilde{O}(m)$ work. We illustrate the generality and versatility of our techniques by various examples and a number of additional results.
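    As a minimal sketch of the algebraic viewpoint (our illustration, using only the standard min-plus semiring formulation, not the paper's generalized framework), the code below expresses Moore-Bellman-Ford as repeated application of the weight matrix to a distance vector, where semiring "addition" is min and "multiplication" is +.

```python
# Moore-Bellman-Ford as repeated min-plus matrix-vector products
# (illustration of the algebraic view; assumes no negative cycles).
INF = float("inf")

def min_plus_step(weights, dist):
    """One round: dist'[v] = min(dist[v], min_u dist[u] + w(u, v))."""
    n = len(dist)
    return [min([dist[v]] + [dist[u] + weights[u][v] for u in range(n)])
            for v in range(n)]

def moore_bellman_ford(weights, source):
    """n-1 min-plus iterations yield exact single-source shortest-path distances."""
    n = len(weights)
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        dist = min_plus_step(weights, dist)
    return dist

# Example on a 4-node weighted digraph (weights[u][v] = INF means no edge):
W = [[INF, 1, 4, INF],
     [INF, INF, 2, 6],
     [INF, INF, INF, 3],
     [INF, INF, INF, INF]]
print(moore_bellman_ford(W, 0))  # [0, 1, 3, 6]
```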