
    Hardness of Sparse Sets and Minimal Circuit Size Problem

    We develop a polynomial method on finite fields to amplify the hardness of sparse sets in nondeterministic time complexity classes on a randomized streaming model. One of our results shows that if there exists a $2^{n^{o(1)}}$-sparse set in $\mathrm{NTIME}(2^{n^{o(1)}})$ that does not have any randomized streaming algorithm with $n^{o(1)}$ updating time and $n^{o(1)}$ space, then $\mathrm{NEXP} \neq \mathrm{BPP}$, where an $f(n)$-sparse set is a language that has at most $f(n)$ strings of length $n$. We also show that if MCSP is $\mathrm{ZPP}$-hard under polynomial-time truth-table reductions, then $\mathrm{EXP} \neq \mathrm{ZPP}$.
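
    The sparsity notion above is purely combinatorial, so a small brute-force check makes it concrete. The sketch below is a minimal Python illustration, assuming a hypothetical membership predicate `toy_language` and bound `f` that are not from the paper; it simply counts accepted strings of each length and verifies the f(n)-sparse condition.

```python
from itertools import product

def is_f_sparse(membership, f, max_n):
    """Brute-force check that a language (given by a membership predicate
    on binary strings) has at most f(n) strings of each length n up to
    max_n, i.e. that it is f(n)-sparse on that range."""
    for n in range(1, max_n + 1):
        count = sum(
            1
            for bits in product("01", repeat=n)
            if membership("".join(bits))
        )
        if count > f(n):
            return False
    return True

def toy_language(w):
    # Hypothetical example language {0^k 1^k}: at most one string per length.
    k = len(w) // 2
    return len(w) % 2 == 0 and w == "0" * k + "1" * k

print(is_f_sparse(toy_language, f=lambda n: n, max_n=10))  # True
```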


    Counting Value Sets: Algorithm and Complexity

    Let $p$ be a prime. Given a polynomial in $\mathbb{F}_{p^m}[x]$ of degree $d$ over the finite field $\mathbb{F}_{p^m}$, one can view it as a map from $\mathbb{F}_{p^m}$ to $\mathbb{F}_{p^m}$, and examine the image of this map, also known as the value set. In this paper, we present the first non-trivial algorithm and the first complexity result on computing the cardinality of this value set. We show an elementary connection between this cardinality and the number of points on a family of varieties in affine space. We then apply Lauder and Wan's $p$-adic point-counting algorithm to count these points, resulting in a non-trivial algorithm for calculating the cardinality of the value set. The running time of our algorithm is $(pmd)^{O(d)}$. In particular, this is a polynomial-time algorithm for fixed $d$ if $p$ is reasonably small. We also show that the problem is #P-hard when the polynomial is given in a sparse representation, $p = 2$, and $m$ is allowed to vary, or when the polynomial is given as a straight-line program, $m = 1$, and $p$ is allowed to vary. Additionally, we prove that it is NP-hard to decide whether a polynomial represented by a straight-line program has a root in a prime-order finite field, thus resolving an open problem proposed by Kaltofen and Koiran [Kaltofen03, KaltofenKo05].
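
    The quantity being computed can be illustrated directly from the definition. Below is a minimal Python sketch that evaluates the cardinality of the value set of a polynomial over a small prime field $\mathbb{F}_p$ by exhaustive enumeration; this is only an illustration of the object, not the Lauder-Wan $p$-adic point-counting approach used in the paper, and the example polynomial is hypothetical.

```python
def value_set_size(coeffs, p):
    """Cardinality of the value set {f(x) : x in F_p} of the polynomial
    f(x) = coeffs[0] + coeffs[1]*x + ... over the prime field F_p,
    computed by direct enumeration of all p field elements."""
    values = set()
    for x in range(p):
        fx = 0
        for c in reversed(coeffs):  # Horner's rule, reduced mod p
            fx = (fx * x + c) % p
        values.add(fx)
    return len(values)

# Hypothetical example: f(x) = x^2 over F_7 hits only 0 and the quadratic
# residues, so its value set has (7 + 1) // 2 = 4 elements.
print(value_set_size([0, 0, 1], 7))  # 4
```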

    Readiness of Quantum Optimization Machines for Industrial Applications

    There have been multiple attempts to demonstrate that quantum annealing and, in particular, quantum annealing on quantum annealing machines, has the potential to outperform current classical optimization algorithms implemented on CMOS technologies. The benchmarking of these devices has been controversial. Initially, random spin-glass problems were used; however, these were quickly shown to be ill suited for detecting any quantum speedup. Subsequently, benchmarking shifted to carefully crafted synthetic problems designed to highlight the quantum nature of the hardware while (often) ensuring that classical optimization techniques do not perform well on them. Even worse, to date a true sign of improved scaling with the number of problem variables remains elusive when compared to classical optimization techniques. Here, we analyze the readiness of quantum annealing machines for real-world application problems. These are typically not random and have an underlying structure that is hard to capture in synthetic benchmarks, thus posing unexpected challenges for optimization techniques, both classical and quantum alike. We present a comprehensive computational scaling analysis of fault diagnosis in digital circuits, considering architectures beyond D-Wave quantum annealers. We find that the instances generated from real data in multiplier circuits are harder than other representative random spin-glass benchmarks with a comparable number of variables. Although our results show that transverse-field quantum annealing is outperformed by state-of-the-art classical optimization algorithms, these benchmark instances are hard and small in the size of the input, therefore representing the first industrial application ideally suited for testing near-term quantum annealers and other quantum algorithmic strategies for optimization problems. Comment: 22 pages, 12 figures. Content updated according to the Phys. Rev. Applied version.
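
    For readers unfamiliar with the problem format, the sketch below shows the kind of Ising spin-glass minimization such benchmarks reduce to, with a brute-force exact solver of the sort used as a baseline on small instances. The random couplings here are hypothetical and stand in for, rather than reproduce, the circuit fault-diagnosis instances studied in the paper.

```python
import itertools
import random

def ising_energy(spins, h, J):
    """Energy of a +/-1 spin configuration: E = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

def brute_force_ground_state(n, h, J):
    """Exhaustively search all 2^n spin assignments for the minimum energy.
    This is the exact baseline that heuristic solvers (classical or quantum
    annealers) are compared against on small instances."""
    best = min(itertools.product([-1, 1], repeat=n),
               key=lambda s: ising_energy(s, h, J))
    return best, ising_energy(best, h, J)

# Hypothetical toy instance with random +/-1 fields and couplings.
random.seed(0)
n = 10
h = [random.choice([-1, 1]) for _ in range(n)]
J = {(i, j): random.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}
print(brute_force_ground_state(n, h, J))
```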

    Hardness of Exact Distance Queries in Sparse Graphs Through Hub Labeling

    A distance labeling scheme is an assignment of bit-labels to the vertices of an undirected, unweighted graph such that the distance between any pair of vertices can be decoded solely from their labels. An important class of distance labeling schemes is that of hub labelings, where a node $v \in G$ stores its distances to the so-called hubs $S_v \subseteq V$, chosen so that for any $u, v \in V$ there is $w \in S_u \cap S_v$ belonging to some shortest $uv$ path. Notice that for most existing graph classes, the best known distance labeling constructions use, at some point, a hub labeling scheme at least as a key building block. Our interest lies in hub labelings of sparse graphs, i.e., those with $|E(G)| = O(n)$, for which we show a lower bound of $\frac{n}{2^{O(\sqrt{\log n})}}$ on the average size of the hubsets. Additionally, we show a hub labeling construction for sparse graphs of average size $O(\frac{n}{RS(n)^{c}})$ for some $0 < c < 1$, where $RS(n)$ is the so-called Ruzsa-Szemerédi function, linked to the structure of induced matchings in dense graphs. This implies that further improving the lower bound on hub labeling size to $\frac{n}{2^{(\log n)^{o(1)}}}$ would require a breakthrough in the study of lower bounds on $RS(n)$, which have resisted substantial improvement in the last 70 years. For general distance labeling of sparse graphs, we show a lower bound of $\frac{1}{2^{O(\sqrt{\log n})}} \mathrm{SumIndex}(n)$, where $\mathrm{SumIndex}(n)$ is the communication complexity of the Sum-Index problem over $\mathbb{Z}_n$. Our results suggest that the best achievable hub label size and distance label size in sparse graphs may be $\Theta(\frac{n}{2^{(\log n)^c}})$ for some $0 < c < 1$.
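
    The decoding step of a hub labeling can be shown in a few lines: given the two hub sets with stored distances, the 2-hop cover property guarantees that minimizing $d(u,w) + d(w,v)$ over common hubs $w$ recovers the exact distance. The Python sketch below assumes a small hypothetical path graph with hand-picked hub sets, not a construction from the paper.

```python
def hub_distance(label_u, label_v):
    """Decode dist(u, v) from hub labels: each label maps hub -> distance,
    and the 2-hop cover property guarantees some common hub w lies on a
    shortest u-v path, so the minimum of d(u,w) + d(w,v) is exact."""
    common = set(label_u) & set(label_v)
    if not common:
        return float("inf")  # no common hub: u and v are disconnected
    return min(label_u[w] + label_v[w] for w in common)

# Hypothetical labels for the path graph a - b - c - d, using b and c as hubs.
labels = {
    "a": {"a": 0, "b": 1, "c": 2},
    "b": {"b": 0, "c": 1},
    "c": {"b": 1, "c": 0},
    "d": {"c": 1, "b": 2, "d": 0},
}
print(hub_distance(labels["a"], labels["d"]))  # 3
```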

    Static Data Structure Lower Bounds Imply Rigidity

    We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of $t \geq \omega(\log^2 n)$ on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space $(s = (1+\varepsilon)n)$, would already imply a semi-explicit ($\mathbf{P}^{\mathbf{NP}}$) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy and Yekhanin, 2009). Our results further assert that polynomial ($t \geq n^{\delta}$) data structure lower bounds against near-optimal space would imply super-linear circuit lower bounds for log-depth linear circuits (a four-decade open question). In the succinct space regime $(s = n + o(n))$, we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our results rely on a new connection between the "inner" and "outer" dimensions of a matrix (Paturi and Pudlak, 2006), and on a new reduction from worst-case to average-case rigidity, which is of independent interest.
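
    As a reminder of the notion at stake, the rigidity $R_M(r)$ of a matrix $M$ is the minimum number of entries that must be changed to bring its rank down to $r$. The Python sketch below computes this by brute force over GF(2) for tiny matrices; it is only an illustration of the definition (here on a hypothetical 4x4 example), not of the semi-explicit constructions discussed above.

```python
import itertools
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]       # swap pivot row into place
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                   # eliminate (XOR = addition mod 2)
        rank += 1
    return rank

def rigidity(M, r):
    """R_M(r): the minimum number of entries that must be changed in M to
    bring its GF(2) rank down to at most r (brute force, tiny matrices only)."""
    n, m = M.shape
    if rank_gf2(M) <= r:
        return 0
    cells = list(itertools.product(range(n), range(m)))
    for k in range(1, n * m + 1):
        for flips in itertools.combinations(cells, k):
            N = M.copy()
            for (i, j) in flips:
                N[i, j] ^= 1                      # flip k chosen entries
            if rank_gf2(N) <= r:
                return k
    return n * m

# Hypothetical example: the 4x4 identity needs 2 changed entries to reach rank 2.
I4 = np.eye(4, dtype=int)
print(rigidity(I4, 2))  # 2
```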

    The Complexity of Kings

    A king in a directed graph is a node from which each node in the graph can be reached via paths of length at most two. There is a broad literature on tournaments (completely oriented digraphs), and it has been known for more than half a century that all tournaments have at least one king [Lan53]. Recently, kings have proven useful in theoretical computer science, in particular in the study of the complexity of the semifeasible sets [HNP98, HT05] and in the study of the complexity of reachability problems [Tan01, NT02]. In this paper, we study the complexity of recognizing kings. For each succinctly specified family of tournaments, the king problem is known to belong to $\Pi_2^p$ [HOZZ]. We prove that this bound is optimal: we construct a succinctly specified tournament family whose king problem is $\Pi_2^p$-complete. It follows easily from our proof approach that the problem of testing kingship in succinctly specified graphs (which need not be tournaments) is $\Pi_2^p$-complete. We also obtain $\Pi_2^p$-completeness results for $k$-kings in succinctly specified $j$-partite tournaments, $k, j \geq 2$, and we generalize our main construction to show that $\Pi_2^p$-completeness holds for testing $k$-kingship in succinctly specified families of tournaments for all $k \geq 2$.
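
    The kingship condition itself is easy to test when the tournament is given explicitly (the $\Pi_2^p$-hardness above concerns succinctly specified tournaments, where the exponentially large graph is described by a circuit). The Python sketch below checks kings in a small, explicitly given hypothetical tournament.

```python
def is_king(adj, v):
    """True if every node is reachable from v by a path of length <= 2
    in the directed graph given by adjacency sets adj[u]."""
    reach = {v} | adj[v] | {w for u in adj[v] for w in adj[u]}
    return reach == set(adj)

def kings(adj):
    return [v for v in adj if is_king(adj, v)]

# Hypothetical 4-node tournament (exactly one directed edge per pair).
tournament = {
    0: {1, 2},
    1: {2, 3},
    2: {3},
    3: {0},
}
print(kings(tournament))  # [0, 1, 3]
```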