
    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification of problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.
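
    The "direct product feasible" notion above is essentially an interface: aggregate k instances, then split an optimal solution of the aggregate back into optimal solutions of the parts. Below is a minimal, hedged Python sketch of that contract; the class names and the Max-SAT disjoint-union example are illustrative assumptions, not the paper's constructions.

    from abc import ABC, abstractmethod
    from typing import Any, List

    class DirectProductFeasible(ABC):
        """Contract from the abstract: k instances can be efficiently aggregated
        into one, and an optimal solution to the aggregate can be efficiently
        split back into optimal solutions of the k parts."""

        @abstractmethod
        def aggregate(self, instances: List[Any]) -> Any:
            """Combine k instances into one larger instance."""

        @abstractmethod
        def split_solution(self, aggregated: Any, optimal_solution: Any) -> List[Any]:
            """Recover optimal solutions to all k original instances."""

    class MaxSATDisjointUnion(DirectProductFeasible):
        """Toy illustration: taking the union of Max-SAT instances over disjoint
        variable sets is direct product feasible, because an optimal assignment
        of the union restricts to an optimal assignment of each part.
        (Illustrative only; the paper's aggregation schemes may differ.)"""

        def aggregate(self, instances):
            # Each instance: (num_vars, clauses) with DIMACS-style 1-indexed signed literals.
            clauses, offsets, total = [], [], 0
            for n_vars, cls in instances:
                offsets.append(total)
                clauses.extend([[lit + total if lit > 0 else lit - total for lit in c]
                                for c in cls])
                total += n_vars
            return (total, clauses, offsets, [n for n, _ in instances])

        def split_solution(self, aggregated, optimal_solution):
            # optimal_solution: list of booleans, indexed 0..total-1 over all variables.
            _, _, offsets, sizes = aggregated
            return [optimal_solution[off:off + n] for off, n in zip(offsets, sizes)]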

    An Efficient Representation for Filtrations of Simplicial Complexes

    A filtration over a simplicial complex K is an ordering of the simplices of K such that all prefixes in the ordering are subcomplexes of K. Filtrations are at the core of Persistent Homology, a major tool in Topological Data Analysis. In order to represent the filtration of a simplicial complex, the entire filtration can be appended to any data structure that explicitly stores all the simplices of the complex, such as the Hasse diagram or the recently introduced Simplex Tree [Algorithmica '14]. However, with the popularity of various computational methods that need to handle simplicial complexes, and with the rapidly increasing size of the complexes, the task of finding a compact data structure that can still support efficient queries is of great interest. In this paper, we propose a new data structure called the Critical Simplex Diagram (CSD), which is a variant of the Simplex Array List (SAL) [Algorithmica '17]. Our data structure allows one to store in a compact way the filtration of a simplicial complex, and allows for the efficient implementation of a large range of basic operations. Moreover, we prove that our data structure is essentially optimal with respect to the requisite storage space. Finally, we show that the CSD representation admits fast construction algorithms for Flag complexes and relaxed Delaunay complexes. Comment: A preliminary version appeared in SODA 201
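
    Since everything in this line of work builds on the prefix property of a filtration, here is a small, hedged Python check of that property (plain brute force over faces, purely illustrative; it is not the CSD or SAL data structure):

    from itertools import combinations

    def is_filtration(order):
        """Check that an ordering of simplices (each a tuple of vertex ids) is a
        filtration: every simplex appears after all of its proper faces, so every
        prefix of the ordering is a subcomplex."""
        seen = set()
        for simplex in order:
            s = tuple(sorted(simplex))
            for k in range(1, len(s)):          # all proper faces of s
                if any(face not in seen for face in combinations(s, k)):
                    return False
            seen.add(s)
        return True

    # A filtration of a filled triangle: vertices, then edges, then the 2-simplex.
    print(is_filtration([(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]))  # True
    # Not a filtration: the edge (0, 1) appears before its vertices.
    print(is_filtration([(0, 1), (0,), (1,)]))                                   # False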

    Building Efficient and Compact Data Structures for Simplicial Complexes

    The Simplex Tree (ST) is a recently introduced data structure that can represent abstract simplicial complexes of any dimension and allows efficient implementation of a large range of basic operations on simplicial complexes. In this paper, we show how to optimally compress the Simplex Tree while retaining its functionalities. In addition, we propose two new data structures called the Maximal Simplex Tree (MxST) and the Simplex Array List (SAL). We analyze the compressed Simplex Tree, the Maximal Simplex Tree, and the Simplex Array List under various settings. Comment: An extended abstract appeared in the proceedings of SoCG 201
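
    For readers unfamiliar with the baseline structure being compressed, the sketch below shows the core Simplex Tree idea, a trie over sorted vertex labels in which every root-to-node path spells a simplex. It is a minimal illustration under that assumption, not the compressed ST, MxST, or SAL of the paper.

    from itertools import combinations

    class _Node:
        __slots__ = ("children",)
        def __init__(self):
            self.children = {}          # vertex label -> child node

    class SimplexTree:
        """Trie over sorted vertex labels: each root-to-node path is a simplex."""
        def __init__(self):
            self.root = _Node()

        def insert(self, simplex):
            """Insert a simplex together with all of its faces, so that the set of
            stored paths is closed under taking faces (i.e., it is a complex)."""
            verts = sorted(simplex)
            for k in range(1, len(verts) + 1):
                for face in combinations(verts, k):
                    node = self.root
                    for v in face:
                        node = node.children.setdefault(v, _Node())

        def contains(self, simplex):
            node = self.root
            for v in sorted(simplex):
                if v not in node.children:
                    return False
                node = node.children[v]
            return True

    st = SimplexTree()
    st.insert((0, 1, 2))           # the triangle and all of its faces
    print(st.contains((0, 2)))     # True
    print(st.contains((1, 3)))     # False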

    On the Sensitivity Conjecture for Disjunctive Normal Forms

    The sensitivity conjecture of Nisan and Szegedy [CC'94] asks whether, for any Boolean function f, the maximum sensitivity s(f) is polynomially related to its block sensitivity bs(f), and hence to other major complexity measures. Despite major advances in the analysis of Boolean functions over the last decade, the problem remains widely open. In this paper, we consider a restriction on the class of Boolean functions through a model of computation (DNF), and refer to the functions adhering to this restriction as admitting the Normalized Block property. We prove that for any function f admitting the Normalized Block property, bs(f) <= 4 * s(f)^2. We note that (almost) all the functions mentioned in the literature that achieve a quadratic separation between sensitivity and block sensitivity admit the Normalized Block property. Recently, Gopalan et al. [ITCS'16] showed that every Boolean function f is uniquely specified by its values on a Hamming ball of radius at most 2 * s(f). We extend this result and also construct examples of Boolean functions which provide the matching lower bounds.
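
    To make the two measures concrete, here is a hedged brute-force Python sketch that computes s(f) and bs(f) by exhaustive search (exponential time, small n only; purely illustrative, not from the paper):

    from itertools import product

    def sensitivity(f, n):
        """Max over inputs x of the number of coordinates whose flip changes f(x)."""
        best = 0
        for x in product((0, 1), repeat=n):
            flips = sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n))
            best = max(best, flips)
        return best

    def block_sensitivity(f, n):
        """Max over inputs x of the largest number of pairwise disjoint blocks
        B_1, ..., B_k such that flipping each block changes f(x)."""
        def max_packing(blocks, used, start):
            out = 0
            for j in range(start, len(blocks)):
                if blocks[j] & used == 0:
                    out = max(out, 1 + max_packing(blocks, used | blocks[j], j + 1))
            return out
        best = 0
        for x in product((0, 1), repeat=n):
            sensitive = []
            for mask in range(1, 1 << n):
                y = tuple(x[i] ^ ((mask >> i) & 1) for i in range(n))
                if f(y) != f(x):
                    sensitive.append(mask)
            best = max(best, max_packing(sensitive, 0, 0))
        return best

    OR = lambda x: int(any(x))
    print(sensitivity(OR, 3), block_sensitivity(OR, 3))   # 3 3 (equal for OR)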

    Ham Sandwich is Equivalent to Borsuk-Ulam

    The Borsuk-Ulam theorem is a fundamental result in algebraic topology, with applications to various areas of Mathematics. A classical application of the Borsuk-Ulam theorem is the Ham Sandwich theorem: The volumes of any n compact sets in R^n can always be simultaneously bisected by an (n-1)-dimensional hyperplane. In this paper, we demonstrate the equivalence between the Borsuk-Ulam theorem and the Ham Sandwich theorem. The main technical result we show towards establishing the equivalence is the following: For every odd polynomial f: S^n -> R restricted to the hypersphere, there exists a compact set A in R^{n+1}, such that for every x in S^n we have f(x) = vol(A cap H^+) - vol(A cap H^-), where H is the oriented hyperplane containing the origin with x as the normal. A noteworthy aspect of the proof of the above result is the use of hyperspherical harmonics. Finally, using the above result we prove that there exist constants n_0, epsilon_0 > 0 such that for every n >= n_0 and epsilon <= epsilon_0/sqrt{48n}, any query algorithm to find an epsilon-bisecting (n-1)-dimensional hyperplane of n compact sets in [-n^4.51, n^4.51]^n, even with success probability 2^-Omega(n), requires 2^Omega(n) queries.
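
    For context on the direction not spelled out above, the classical derivation of Ham Sandwich from Borsuk-Ulam is the textbook argument sketched below (in LaTeX); this is standard background, not the paper's contribution, and the notation is assumed rather than taken from the paper.

    % Given compact sets A_1, ..., A_n in R^n, view a direction as a point of
    % S^n in R^{n+1} and associate to it a half-space of R^n:
    \[
      H^{+}(x) \;=\; \Bigl\{\, y \in \mathbb{R}^n \;:\; \textstyle\sum_{i=1}^{n} x_i y_i \le x_{n+1} \,\Bigr\},
      \qquad
      g(x)_i \;=\; \mathrm{vol}\bigl(A_i \cap H^{+}(x)\bigr).
    \]
    % The map g : S^n -> R^n is continuous and H^+(-x) is the opposite
    % half-space H^-(x), so Borsuk--Ulam gives some x* with g(x*) = g(-x*):
    \[
      \mathrm{vol}\bigl(A_i \cap H^{+}(x^*)\bigr) \;=\; \mathrm{vol}\bigl(A_i \cap H^{-}(x^*)\bigr)
      \quad \text{for all } i = 1, \dots, n,
    \]
    % i.e., the hyperplane determined by x* simultaneously bisects all n sets
    % (degenerate directions with x_1 = ... = x_n = 0 are handled separately).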

    Deterministic Replacement Path Covering

    In this article, we provide a unified and simplified approach to derandomize central results in the area of fault-tolerant graph algorithms. Given a graph G, a vertex pair (s,t) ∈ V(G) × V(G), and a set of edge faults F ⊆ E(G), a replacement path P(s,t,F) is an s-t shortest path in G ∖ F. For integer parameters L, f, a replacement path covering (RPC) is a collection of subgraphs of G, denoted by G_{L,f} = {G_1, ..., G_r}, such that for every set F of at most f faults (i.e., |F| ≤ f) and every replacement path P(s,t,F) of at most L edges, there exists a subgraph G_i ∈ G_{L,f} that contains all the edges of P and does not contain any of the edges of F. The covering value of the RPC G_{L,f} is then defined to be the number of subgraphs in G_{L,f}. We present efficient deterministic constructions of (L,f)-RPCs whose covering values almost match the randomized ones, for a wide range of parameters. Our time and value bounds improve considerably over the previous construction of Parter (DISC 2019). We also provide an almost matching lower bound for the value of these coverings. A key application of our deterministic constructions is the derandomization of the algebraic construction of the distance sensitivity oracle by Weimann and Yuster (FOCS 2010). The preprocessing and query times of our deterministic algorithm nearly match the randomized bounds. This resolves the open problem of Alon, Chechik and Cohen (ICALP 2019).
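
    As a hedged illustration of what is being derandomized: the standard randomized way to obtain an (L, f)-RPC is to subsample edges, as in the Python sketch below. The sampling scheme and parameters here are the folklore randomized construction with indicative constants, assumed for illustration; they are not the paper's deterministic construction or its exact bounds.

    import math
    import random

    def randomized_rpc(edges, n, L, f, c=4.0):
        """Sample r subgraphs, each keeping every edge independently with
        probability 1 - 1/L.  For a fixed replacement path P (|P| <= L) and fault
        set F (|F| <= f), one sample keeps P and avoids F with probability at
        least (1 - 1/L)^L * L^(-f) = Omega(L^(-f)), so r = O(L^f * f * log n)
        samples cover all such pairs with high probability (union bound)."""
        r = int(c * (L ** f) * (f + 1) * math.log(max(n, 2))) + 1
        return [{e for e in edges if random.random() > 1.0 / L} for _ in range(r)]

    # The covering value is simply the number of sampled subgraphs:
    # covering = randomized_rpc(edges=graph_edges, n=num_vertices, L=8, f=2)
    # value = len(covering)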

    On Inapproximability of Reconfiguration Problems: PSPACE-Hardness and some Tight NP-Hardness Results

    The field of combinatorial reconfiguration studies search problems with a focus on transforming one feasible solution into another. Recently, Ohsaka [STACS'23] put forth the Reconfiguration Inapproximability Hypothesis (RIH), which roughly asserts that for some ε > 0, given as input a k-CSP instance (for some constant k) over some constant sized alphabet, and two satisfying assignments ψ_s and ψ_t, it is PSPACE-hard to find a sequence of assignments starting from ψ_s and ending at ψ_t such that every assignment in the sequence satisfies at least (1 - ε) fraction of the constraints and also that every assignment in the sequence is obtained by changing its immediately preceding assignment (in the sequence) on exactly one variable. Assuming RIH, many important reconfiguration problems have been shown to be PSPACE-hard to approximate by Ohsaka [STACS'23; SODA'24]. In this paper, we prove RIH and establish the first (constant factor) PSPACE-hardness of approximation results for many reconfiguration problems, resolving an open question posed by Ito et al. [TCS'11]. Our proof uses known constructions of Probabilistically Checkable Proofs of Proximity (in a black-box manner) to create the gap. Independently, Hirahara and Ohsaka [STOC'24] have also proved RIH. We also prove that the aforementioned k-CSP Reconfiguration problem is NP-hard to approximate to within a factor of 1/2 + ε (for any ε > 0) when k = 2. We complement this with a (1/2 - ε)-approximation polynomial time algorithm, which improves upon a (1/4 - ε)-approximation algorithm of Ohsaka [2023] (again for any ε > 0). Finally, we show that Set Cover Reconfiguration is NP-hard to approximate to within a factor of 2 - ε for any constant ε > 0, which matches the simple linear-time 2-approximation algorithm by Ito et al. [TCS'11].
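
    A minimal Python sketch of the object RIH talks about, to make the approximation requirement concrete: a sequence of assignments is checked for the "one variable changed per step" rule and the "(1 - eps)-satisfying" threshold. The representation (constraints as Python predicates) is an assumption made for illustration, not the paper's formalism.

    def is_valid_reconfiguration(constraints, assignments, eps):
        """constraints: list of predicates over an assignment (dict: var -> value).
        assignments: proposed sequence of assignments from psi_s to psi_t.
        Valid iff every assignment satisfies at least a (1 - eps) fraction of the
        constraints and consecutive assignments differ on exactly one variable."""
        threshold = (1.0 - eps) * len(constraints)
        for i, a in enumerate(assignments):
            if sum(1 for c in constraints if c(a)) < threshold:
                return False
            if i > 0 and sum(1 for v in a if a[v] != assignments[i - 1][v]) != 1:
                return False
        return True

    # Toy 2-CSP over Boolean variables x, y with constraints (x != y) and (x or y).
    cons = [lambda a: a["x"] != a["y"], lambda a: bool(a["x"] or a["y"])]
    seq = [{"x": 0, "y": 1}, {"x": 1, "y": 1}, {"x": 1, "y": 0}]
    print(is_valid_reconfiguration(cons, seq, eps=0.5))   # True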

    On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic

    Given a set of n points in R^d, the (monochromatic) Closest Pair problem asks to find a pair of distinct points in the set that are closest in the l_p-metric. Closest Pair is a fundamental problem in Computational Geometry and understanding its fine-grained complexity in the Euclidean metric when d = omega(log n) was raised as an open question in recent works (Abboud-Rubinstein-Williams [FOCS'17], Williams [SODA'18], David-Karthik-Laekhanukit [SoCG'18]). In this paper, we show that for every p in R_{>= 1} cup {0}, under the Strong Exponential Time Hypothesis (SETH), for every epsilon > 0, the following holds:
    - No algorithm running in time O(n^{2-epsilon}) can solve the Closest Pair problem in d = (log n)^{Omega_epsilon(1)} dimensions in the l_p-metric.
    - There exist delta = delta(epsilon) > 0 and c = c(epsilon) >= 1 such that no algorithm running in time O(n^{1.5-epsilon}) can approximate the Closest Pair problem to a factor of (1+delta) in d >= c log n dimensions in the l_p-metric.
    In particular, our first result is shown by establishing the computational equivalence of the bichromatic Closest Pair problem and the (monochromatic) Closest Pair problem (up to an n^{epsilon} factor in the running time) for d = (log n)^{Omega_epsilon(1)} dimensions. Additionally, under SETH, we rule out nearly-polynomial factor approximation algorithms running in subquadratic time for the (monochromatic) Maximum Inner Product problem, where we are given a set of n points in n^{o(1)}-dimensional Euclidean space and are required to find a pair of distinct points in the set that maximize the inner product. At the heart of all our proofs is the construction of a dense bipartite graph with low contact dimension, i.e., we construct a balanced bipartite graph on n vertices with n^{2-epsilon} edges whose vertices can be realized as points in a (log n)^{Omega_epsilon(1)}-dimensional Euclidean space such that every pair of vertices which have an edge in the graph are at distance exactly 1 and every other pair of vertices are at distance greater than 1. This graph construction is inspired by the construction of locally dense codes introduced by Dumer-Micciancio-Sudan [IEEE Trans. Inf. Theory'03].
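
    To fix notation for the two problems whose fine-grained equivalence is established above, here is a hedged brute-force Python sketch; both routines run in O(n^2 * d) time, which is precisely the quadratic regime the SETH lower bounds address (illustrative code, not the reductions from the paper).

    from itertools import combinations, product

    def lp_dist(a, b, p=2):
        """l_p distance; p = 0 is read as Hamming distance, matching the
        abstract's p in R_{>= 1} cup {0}."""
        if p == 0:
            return sum(x != y for x, y in zip(a, b))
        return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

    def closest_pair(points, p=2):
        """Monochromatic: closest pair of distinct points within one set."""
        return min(combinations(points, 2), key=lambda q: lp_dist(*q, p=p))

    def bichromatic_closest_pair(reds, blues, p=2):
        """Bichromatic: closest red-blue pair across two sets."""
        return min(product(reds, blues), key=lambda q: lp_dist(*q, p=p))

    A = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0)]
    B = [(2.0, 2.0), (10.0, 0.0)]
    print(closest_pair(A))                   # ((0.0, 0.0), (1.0, 1.0))
    print(bichromatic_closest_pair(A, B))    # ((1.0, 1.0), (2.0, 2.0))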