
    Alternative parameterizations of Metric Dimension

    A set of vertices $W$ in a graph $G$ is called resolving if for any two distinct $x, y \in V(G)$, there is $v \in W$ such that ${\rm dist}_G(v,x) \neq {\rm dist}_G(v,y)$, where ${\rm dist}_G(u,v)$ denotes the length of a shortest path between $u$ and $v$ in the graph $G$. The metric dimension ${\rm md}(G)$ of $G$ is the minimum cardinality of a resolving set. The Metric Dimension problem, i.e. deciding whether ${\rm md}(G) \le k$, is NP-complete even for interval graphs (Foucaud et al., 2017). We study Metric Dimension (for arbitrary graphs) from the lens of parameterized complexity. The problem parameterized by $k$ was proved to be $W[2]$-hard by Hartung and Nichterlein (2013), and we study the dual parameterization, i.e., the problem of whether ${\rm md}(G) \le n - k$, where $n$ is the order of $G$. We prove that the dual parameterization admits (a) a kernel with at most $3k^4$ vertices and (b) an algorithm of runtime $O^*(4^{k+o(k)})$. Hartung and Nichterlein (2013) also observed that Metric Dimension is fixed-parameter tractable when parameterized by the vertex cover number ${\rm vc}(G)$ of the input graph. We complement this observation by showing that it does not admit a polynomial kernel even when parameterized by ${\rm vc}(G) + k$. Our reduction also gives evidence for the non-existence of polynomial Turing kernels.
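
    To make the definitions above concrete, here is a minimal brute-force sketch (not from the paper, and exponential-time by design) that checks whether a vertex set $W$ resolves a small connected graph and finds the metric dimension by trying subsets of increasing size; the adjacency-dictionary graph representation is purely illustrative.

```python
from collections import deque
from itertools import combinations

def bfs_distances(graph, source):
    """Hop distances from source in an unweighted graph given as {vertex: [neighbours]}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_resolving(graph, W):
    """W resolves G iff the vectors of distances to W are pairwise distinct."""
    dists = {w: bfs_distances(graph, w) for w in W}
    seen = set()
    for x in graph:
        vec = tuple(dists[w].get(x, float("inf")) for w in W)
        if vec in seen:
            return False
        seen.add(vec)
    return True

def metric_dimension(graph):
    """Smallest k such that some k-subset of V(G) is resolving (exponential time)."""
    vertices = list(graph)
    for k in range(1, len(vertices) + 1):
        for W in combinations(vertices, k):
            if is_resolving(graph, W):
                return k, W

# Path on 4 vertices: a single endpoint already resolves the graph, so md = 1.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(metric_dimension(path))   # (1, (1,))
```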

    Cryptography from Information Loss

    © Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod. Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is "lossy" reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into "useful" hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information satisfies I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions: 1. We say that a language L has an f-reduction to a language L0 for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1, ..., x_m), with each x_i ∈ {0,1}^n, and outputs a string z such that, with high probability, L0(z) = f(L(x_1), L(x_2), ..., L(x_m)). Suppose a language L has an f-reduction C to L0 that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds: (i) f is the OR function, t ≤ m/100, and L0 is the same as L; (ii) f is the Majority function and t ≤ m/100; (iii) f is the OR function, t ≤ O(m log n), and the reduction has no error. This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which result in auxiliary-input one-way functions. 2. Our second result is about the stronger notion of t-compressing f-reductions, i.e. reductions that output only t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses). Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above and which may be of independent interest.
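
    The core definition above (a reduction C is t-lossy if I(X; C(X)) ≤ t for every input distribution X) can be illustrated numerically. The sketch below is a toy example, not anything from the paper: it computes the mutual information between a uniform 4-bit input and the output of a deterministic "reduction" that just ORs the bits. Because the map is deterministic, I(X; C(X)) equals the entropy of the output, roughly 0.34 bits, so the map is heavily lossy.

```python
import math
from collections import Counter
from itertools import product

def mutual_information(joint):
    """I(X; Y) in bits for a joint distribution given as {(x, y): probability}."""
    px, py = Counter(), Counter()
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Toy "reduction": C maps a 4-bit input to the OR of its bits. Under the uniform
# input distribution, I(X; C(X)) equals H(C(X)) because C is deterministic; that
# is about 0.34 bits here, far below the 4 input bits, so C is t-lossy for small t.
n = 4
inputs = list(product([0, 1], repeat=n))
joint = {(x, int(any(x))): 1 / len(inputs) for x in inputs}
print(mutual_information(joint))   # ~0.337
```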

    Dynamic planar embedding is in DynFO

    A planar embedding is a drawing of a graph in the plane such that edges do not intersect each other except at shared endpoints. Testing the planarity of a graph and computing an embedding (if one exists) can be done efficiently, both sequentially [John E. Hopcroft and Robert Endre Tarjan, 1974] and in parallel [Vijaya Ramachandran and John H. Reif, 1994], when the entire graph is presented as input. In the dynamic setting, the input graph changes one edge at a time through insertions and deletions, and planarity testing/embedding has to be updated after every change. By storing auxiliary information we can improve the complexity of dynamic planarity testing/embedding over the obvious recomputation from scratch. In the sequential dynamic setting, there has been a series of works [David Eppstein et al., 1996; Giuseppe F. Italiano et al., 1993; Jacob Holm et al., 2018; Jacob Holm and Eva Rotenberg, 2020], culminating in the breakthrough polylog(n) amortized sequential time planarity testing algorithm of Holm and Rotenberg [Jacob Holm and Eva Rotenberg, 2020]. In this paper we study planar embedding through the lens of DynFO, a parallel dynamic complexity class introduced by Patnaik and Immerman [Sushant Patnaik and Neil Immerman, 1997] (see also [Guozhu Dong et al., 1995]). We show that it is possible to maintain, in DynFO, whether an edge can be inserted into a planar graph without causing non-planarity. We extend this to show how to maintain an embedding of a planar graph under both edge insertions and deletions, while rejecting edge insertions that violate planarity. Our main idea is to maintain embeddings of only the triconnected components, together with a special two-colouring of separating pairs, which enables us to side-step cascading flips when the embedding of a biconnected planar graph changes, a major issue for sequential dynamic algorithms [Jacob Holm and Eva Rotenberg, 2020; Jacob Holm and Eva Rotenberg, 2020].
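
    As a point of reference for the dynamic setting described above, the following sketch (an illustration, not the paper's DynFO construction) implements the naive "recompute from scratch" baseline: to insert an edge, tentatively add it, rerun a static planarity test, and roll back if planarity is violated. It assumes the networkx library, whose check_planarity routine returns a planarity flag and, for planar graphs, a combinatorial embedding.

```python
import networkx as nx

def try_insert_edge(G, u, v):
    """Naive 'recompute from scratch' baseline that dynamic algorithms improve on:
    tentatively add (u, v), rerun the static planarity test, and roll back if the
    insertion would make the graph non-planar."""
    G.add_edge(u, v)
    is_planar, embedding = nx.check_planarity(G)
    if not is_planar:
        G.remove_edge(u, v)
        return False, None
    return True, embedding

G = nx.cycle_graph(5)                  # C5 is planar
ok, emb = try_insert_edge(G, 0, 2)     # a chord keeps it planar
print(ok)                              # True

H = nx.complete_graph(5)               # K5 is non-planar
H.remove_edge(0, 1)                    # K5 minus an edge is planar
ok, _ = try_insert_edge(H, 0, 1)       # re-adding the edge violates planarity
print(ok)                              # False; the insertion is rejected
```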

    Towards Multiparty Computation Withstanding Coercion of All Parties

    Incoercible multi-party computation (Canetti-Gennaro '96) allows parties to engage in secure computation with the additional guarantee that the public transcript of the computation cannot be used by a coercive outsider to verify representations made by the parties regarding their inputs, outputs, and local random choices. That is, it is guaranteed that the only deductions regarding the truthfulness of such representations, made by an outsider who has witnessed the communication among the parties, are the ones that can be drawn just from the represented inputs and outputs alone. To date, all incoercible secure computation protocols withstand coercion of only a fraction of the parties, or else assume that all parties use an execution environment that makes some crucial parts of their local states physically inaccessible even to themselves. We consider, for the first time, the setting where all parties are coerced, and the coercer expects to see the entire history of the computation. We allow both protocol participants and external attackers to access a common reference string which is generated once and for all by an incorruptible trusted party. In this setting we construct: - A general multi-party function evaluation protocol, for any number of parties, that withstands coercion of all parties, as long as all parties use the prescribed "faking algorithm" upon coercion. This holds even if the inputs and outputs represented by coerced parties are globally inconsistent with the evaluated function. - A general two-party function evaluation protocol that withstands even the "mixed" case where some of the coerced parties do not follow the prescribed faking algorithm. (For instance, these parties might collude with the coercer and disclose their true local states.) This protocol is limited to functions where the input of at least one of the parties is taken from a small (poly-size) domain. It uses fully deniable encryption with public deniability for one of the parties; when instantiated using the fully deniable encryption of Canetti, Park, and Poburinnaya (Crypto '20), it takes 3 rounds of communication. Both protocols operate in the common reference string model, and use fully bideniable encryption (Canetti, Park, and Poburinnaya, Crypto '20) and sub-exponential indistinguishability obfuscation. Finally, we show that protocols with a certain communication pattern cannot be incoercible, even in a weaker setting where only some parties are coerced.

    New Approximation Bounds for Small-Set Vertex Expansion

    The vertex expansion of a graph is a fundamental graph parameter. Given a graph $G=(V,E)$ and a parameter $\delta \in (0,1/2]$, its $\delta$-Small-Set Vertex Expansion (SSVE) is defined as $\min_{S : |S| = \delta |V|} \frac{|\partial^V(S)|}{\min\{|S|, |S^c|\}}$, where $\partial^V(S)$ is the vertex boundary of a set $S$. The SSVE problem, in addition to being of independent interest as a natural graph partitioning problem, is also of interest due to its connections to the Strong Unique Games problem. We give a randomized algorithm running in time $n^{{\sf poly}(1/\delta)}$, which outputs a set $S$ of size $\Theta(\delta n)$ having vertex expansion at most $\max\left(O(\sqrt{\phi^* \log d \log(1/\delta)}),\ \tilde{O}(d \log^2(1/\delta)) \cdot \phi^*\right)$, where $d$ is the largest vertex degree of the graph and $\phi^*$ is the optimal $\delta$-SSVE. The previous best-known guarantees for this were the bi-criteria bounds of $\tilde{O}(1/\delta)\sqrt{\phi^* \log d}$ and $\tilde{O}(1/\delta)\phi^* \sqrt{\log n}$ due to Louis-Makarychev [TOC'16]. Our algorithm uses the basic SDP relaxation of the problem augmented with ${\rm poly}(1/\delta)$ rounds of the Lasserre/SoS hierarchy. Our rounding algorithm is a combination of the rounding algorithms of Raghavendra-Tan [SODA'12] and Austrin-Benabbas-Georgiou [SODA'13]. A key component of our analysis is a novel Gaussian rounding lemma for hyperedges, which might be of independent interest.
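
    For concreteness, the sketch below (illustrative only, assuming the networkx library and taking the vertex boundary $\partial^V(S)$ to be the outer boundary $N(S)\setminus S$) evaluates the objective above for a given set and finds the exact optimum on a tiny graph by brute force; the paper's contribution is the polynomial-time SDP-based approximation, not this exponential-time reference.

```python
from itertools import combinations
import networkx as nx

def vertex_expansion(G, S):
    """Objective from the abstract: |dV(S)| / min(|S|, |S^c|), with the vertex
    boundary taken to be the outer boundary N(S) \\ S (illustrative convention)."""
    S = set(S)
    boundary = set(nx.node_boundary(G, S))   # vertices outside S with a neighbour in S
    return len(boundary) / min(len(S), G.number_of_nodes() - len(S))

def delta_ssve_brute_force(G, delta):
    """Exponential-time reference for the delta-SSVE optimum phi*: try every set of
    size delta*n. Only feasible on tiny graphs; the paper's algorithm is the
    n^{poly(1/delta)} SDP-based one, not this."""
    k = round(delta * G.number_of_nodes())
    return min((vertex_expansion(G, S), S) for S in combinations(G.nodes, k))

G = nx.grid_2d_graph(4, 4)                   # 16-vertex grid graph
phi_star, S = delta_ssve_brute_force(G, 0.25)
print(phi_star, S)                           # optimal small-set expansion and a witnessing set
```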

    Approximation Algorithms and Hardness for $n$-Pairs Shortest Paths and All-Nodes Shortest Cycles

    We study the approximability of two related problems on graphs with $n$ nodes and $m$ edges: $n$-Pairs Shortest Paths ($n$-PSP), where the goal is to find a shortest path between each of $O(n)$ prespecified pairs of nodes, and All-Nodes Shortest Cycles (ANSC), where the goal is to find the shortest cycle passing through each node. Approximate $n$-PSP has been previously studied, mostly in the context of distance oracles. We ask whether approximate $n$-PSP can be solved faster than by using distance oracles or All Pairs Shortest Paths (APSP). ANSC has also been studied previously, but only in terms of exact algorithms, rather than approximation. We provide a thorough study of the approximability of $n$-PSP and ANSC, providing a wide array of algorithms and conditional lower bounds that trade off between running time and approximation ratio. A highlight of our conditional lower bounds is that for any integer $k \ge 1$, under the combinatorial $4k$-clique hypothesis, there is no combinatorial algorithm for unweighted undirected $n$-PSP with approximation ratio better than $1+1/k$ that runs in $O(m^{2-2/(k+1)} n^{1/(k+1)-\epsilon})$ time. This nearly matches an upper bound implied by the result of Agarwal (2014). A highlight of our algorithmic results is that one can solve both $n$-PSP and ANSC in $\tilde{O}(m + n^{3/2+\epsilon})$ time with approximation factor $2+\epsilon$ (and additive error that is a function of $\epsilon$), for any constant $\epsilon > 0$. For $n$-PSP, our conditional lower bounds imply that this approximation ratio is nearly optimal for any subquadratic-time combinatorial algorithm. We further extend these algorithms for $n$-PSP and ANSC to obtain a time/accuracy trade-off that includes near-linear time algorithms.
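
    As a baseline for the two problems defined above, the following sketch (illustrative, assuming unweighted graphs and the networkx library; it is not one of the paper's algorithms) solves both exactly in the naive way: one BFS per distinct source for $n$-PSP, and, for ANSC, the standard observation that the shortest cycle through $v$ has length $1 + \min_{(v,u)\in E} {\rm dist}_{G-(v,u)}(v,u)$.

```python
import networkx as nx

def n_psp_exact(G, pairs):
    """Naive exact baseline for n-PSP on an unweighted graph: one BFS per distinct
    source among the prespecified pairs."""
    by_source = {}
    for s, t in pairs:
        by_source.setdefault(s, []).append(t)
    out = {}
    for s, targets in by_source.items():
        dist = nx.single_source_shortest_path_length(G, s)   # BFS distances from s
        for t in targets:
            out[(s, t)] = dist.get(t, float("inf"))
    return out

def ansc_exact(G):
    """Naive exact baseline for ANSC: the shortest cycle through v has length
    1 + min over edges (v, u) of dist(v, u) in G with that edge removed."""
    best = {}
    for v in G.nodes:
        shortest = float("inf")
        for u in list(G.neighbors(v)):
            G.remove_edge(v, u)
            try:
                shortest = min(shortest, 1 + nx.shortest_path_length(G, v, u))
            except nx.NetworkXNoPath:
                pass
            G.add_edge(v, u)             # restore the graph
        best[v] = shortest
    return best

G = nx.petersen_graph()                  # girth 5 and vertex-transitive
print(ansc_exact(G)[0])                  # 5: every vertex lies on a 5-cycle
print(n_psp_exact(G, [(0, 7), (1, 6)]))
```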

    Quantum Garbled Circuits

    We present a garbling scheme for quantum circuits, thus achieving a decomposable randomized encoding scheme for quantum computation. Specifically, we show how to compute an encoding of a given quantum circuit and quantum input, from which it is possible to derive the output of the computation and nothing else. In the classical setting, garbled circuits (and randomized encodings in general) are a versatile cryptographic tool with many applications such as secure multiparty computation, delegated computation, depth-reduction of cryptographic primitives, complexity lower bounds, and more. However, a quantum analogue for garbling general circuits was not known prior to this work. We hope that our quantum randomized encoding scheme can similarly be useful for applications in quantum computing and cryptography. To illustrate the usefulness of quantum randomized encoding, we use it to design a conceptually simple zero-knowledge (ZK) proof system for the complexity class $\mathbf{QMA}$. Our protocol has the so-called $\Sigma$ format with a single-bit challenge, and allows the inputs to be delayed to the last round. The only previously known ZK $\Sigma$-protocol for $\mathbf{QMA}$ is due to Broadbent and Grilo (FOCS 2020), which does not have the aforementioned properties.