    Hardness of Easy Problems: Basing Hardness on Popular Conjectures such as the Strong Exponential Time Hypothesis (Invited Talk)

    Algorithmic research strives to develop fast algorithms for fundamental problems. Despite its many successes, many problems still do not have very efficient algorithms. For years researchers have explained the hardness of key problems by proving NP-hardness, using polynomial-time reductions to base the hardness of those problems on the famous conjecture P != NP. For problems that already have polynomial-time algorithms, however, it does not seem that one can show any sort of hardness based on P != NP. Nevertheless, we would like to provide evidence that a problem A with a running time O(n^k) that has not been improved in decades also requires n^{k-o(1)} time, thus explaining the lack of progress on the problem. Such unconditional time lower bounds unfortunately seem very difficult to obtain. Recent work has concentrated on an approach mimicking NP-hardness: (1) select a few key problems that are conjectured to require T(n) time to solve, (2) use special, fine-grained reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of the key problems. In this abstract we outline the approach, give some examples of hardness results based on the Strong Exponential Time Hypothesis, and present an overview of some of the recent work on the topic.
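
    As a concrete illustration (not part of the abstract): the canonical key problem in this program is Orthogonal Vectors (OV). A minimal sketch of its textbook quadratic algorithm follows; under SETH, OV admits no n^{2-epsilon} * poly(d) time algorithm for d = omega(log n), and fine-grained reductions transfer this hardness to many problems in P. The function name is ours, for illustration.

```python
def orthogonal_pair(A, B):
    """Brute-force Orthogonal Vectors: given two lists of d-dimensional
    0/1 vectors, return an orthogonal pair (a, b) or None.
    Runs in O(n^2 * d) time; under SETH, no n^(2-eps) * poly(d)
    algorithm exists once d = omega(log n)."""
    for a in A:
        for b in B:
            if all(x * y == 0 for x, y in zip(a, b)):
                return a, b
    return None

# Example: (1,0,1) and (0,1,0) are orthogonal.
A = [(1, 0, 1), (1, 1, 1)]
B = [(0, 1, 0), (1, 1, 0)]
print(orthogonal_pair(A, B))
```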

    Conditional Lower Bounds for Space/Time Tradeoffs

    In recent years much effort has been concentrated on achieving polynomial time lower bounds for algorithms solving various well-known problems. A useful technique for showing such lower bounds is to prove them conditionally, based on well-studied hardness assumptions such as 3SUM, APSP, and SETH. This line of research helps to obtain a better understanding of the complexity inside P. A related question asks to prove conditional space lower bounds on data structures that are constructed to solve certain algorithmic tasks after an initial preprocessing stage. This question received little attention in previous research even though it has potentially strong impact. In this paper we address this question and show that, surprisingly, many of the well-studied hard problems that are known to have conditional polynomial time lower bounds are also hard with respect to space. This hardness is shown as a tradeoff between the space consumed by the data structure and the time needed to answer queries. The tradeoff may be either smooth or admit one or more singularity points. We reveal interesting connections between different space hardness conjectures and present matching upper bounds. We also apply these hardness conjectures to both static and dynamic problems and prove their conditional space hardness. We believe that this novel framework of polynomial space conjectures can play an important role in expressing polynomial space lower bounds for many important algorithmic problems. Moreover, it seems that it can also help in achieving a better understanding of the hardness of the corresponding problems in terms of time.
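
    To make the preprocessing/query setting concrete, here is a toy sketch (ours, not from the paper) of the two trivial extremes for a SetDisjointness-style data structure, the kind of problem for which such space/time tradeoffs are studied. The class name and interface are illustrative assumptions.

```python
class DisjointnessDS:
    """Toy space/time tradeoff for SetDisjointness-style queries:
    given a family of m sets, answer "do S_i and S_j intersect?".

    Extreme 1 (all_pairs=True): precompute every answer --
        O(m^2) space, O(1) query.
    Extreme 2 (all_pairs=False): store the sets as given --
        O(total size) space, O(min(|S_i|, |S_j|)) query.
    Conditional lower bounds of the kind in the paper say that such
    tradeoff curves cannot be improved much under popular conjectures.
    """
    def __init__(self, sets, all_pairs=False):
        self.sets = [frozenset(s) for s in sets]
        self.table = None
        if all_pairs:
            m = len(self.sets)
            self.table = [[bool(self.sets[i] & self.sets[j])
                           for j in range(m)] for i in range(m)]

    def intersect(self, i, j):
        if self.table is not None:
            return self.table[i][j]          # O(1) query, quadratic space
        small, big = sorted((self.sets[i], self.sets[j]), key=len)
        return any(x in big for x in small)  # slower query, linear space

ds = DisjointnessDS([{1, 2}, {3, 4}, {2, 3}], all_pairs=True)
print(ds.intersect(0, 1), ds.intersect(0, 2))  # False True
```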

    Multi-Party Protocols, Information Complexity and Privacy

    We introduce the new measure of Public Information Complexity (PIC) as a tool for the study of multi-party computation protocols and of quantities such as their communication complexity or the amount of randomness they require in the context of information-theoretically private computations. We are able to use this measure directly in the natural asynchronous message-passing peer-to-peer model and show a number of interesting properties and applications of our new notion: the Public Information Complexity is a lower bound on the Communication Complexity and an upper bound on the Information Complexity; the difference between the Public Information Complexity and the Information Complexity provides a lower bound on the amount of randomness used in a protocol; any communication protocol can be compressed to its Public Information Cost; and we give an explicit calculation of the zero-error Public Information Complexity of the k-party, n-bit Parity function, where a player outputs the bit-wise parity of the inputs. The latter result establishes that the amount of randomness needed for a private protocol that computes this function is Omega(n).
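
    For intuition about why parity needs randomness for privacy, here is the standard textbook ring protocol for XOR (a classic illustration, not taken from the paper): player 0 masks its bit with a random bit, the masked value circulates with each player XORing in its own input, and player 0 removes the mask at the end. Every intermediate message is uniformly random from any single player's view. The function below is our illustrative simulation of that protocol.

```python
import secrets

def private_parity(inputs):
    """Simulate the ring protocol for k-party XOR of one-bit inputs.
    Player 0 masks its bit with a uniform random bit r; each player
    XORs in its own bit as the message passes around the ring; player 0
    unmasks. Each message a player sees is a uniformly random bit, so no
    single player learns more than the output. For n-bit inputs (bit-wise
    parity, as in the abstract), repeating this per coordinate uses n
    random bits -- matching the paper's Omega(n) lower bound."""
    r = secrets.randbits(1)
    msg = inputs[0] ^ r
    for x in inputs[1:]:
        msg ^= x          # next player XORs in its input
    return msg ^ r        # player 0 removes the mask

print(private_parity([1, 0, 1, 1]))  # 1
```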

    Small-depth Multilinear Formula Lower Bounds for Iterated Matrix Multiplication, with Applications

    The complexity of Iterated Matrix Multiplication is a central theme in Computational Complexity theory, as the problem is closely related to the problem of separating various complexity classes within P. In this paper, we study the algebraic formula complexity of multiplying d many 2x2 matrices, denoted IMM_d, and show that the well-known divide-and-conquer algorithm cannot be significantly improved at any depth, as long as the formulas are multilinear. Formally, for each depth Delta <= log d, we show that any product-depth Delta multilinear formula for IMM_d must have size exp(Omega(Delta d^{1/Delta})). It also follows that any multilinear circuit of product-depth Delta for the same polynomial must have size exp(Omega(d^{1/Delta})). In particular, any polynomial-sized multilinear formula for IMM_d must have depth Omega(log d), and any polynomial-sized multilinear circuit for IMM_d must have depth Omega(log d/log log d). Both these bounds are tight up to constant factors. Our lower bound has the following consequences for multilinear formula complexity. Depth-reduction: a well-known result of Brent (JACM 1974) implies that any formula of size s can be converted to one of size s^{O(1)} and depth O(log s); further, this reduction continues to hold for multilinear formulas. On the other hand, our lower bound implies that any depth-reduction in the multilinear setting cannot reduce the depth to o(log s) without a superpolynomial blow-up in size. Separations from general formulas: Shpilka and Yehudayoff (FnTTCS 2010) asked whether general formulas can be more efficient than multilinear formulas for computing multilinear polynomials. Our result, along with a non-trivial upper bound for IMM_d implied by a result of Gupta, Kamath, Kayal and Saptharishi (SICOMP 2016), shows that for any size s and product-depth Delta = o(log s), general formulas of size s and product-depth Delta cannot be converted to multilinear formulas of size s^{O(1)} and product-depth Delta, when the underlying field has characteristic zero.
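
    The divide-and-conquer algorithm that the lower bound shows is optimal (for multilinear formulas, up to constants) is the straightforward balanced recursion: split the product of d matrices in half, recurse, and multiply the two results, giving product-depth O(log d). A minimal numerical sketch (function name ours):

```python
def imm(matrices):
    """Balanced divide-and-conquer product of d 2x2 matrices
    (tuples of tuples). Halving the product at each level gives
    product-depth O(log d), which the paper shows is optimal for
    multilinear formulas up to constant factors."""
    d = len(matrices)
    if d == 1:
        return matrices[0]
    left = imm(matrices[:d // 2])
    right = imm(matrices[d // 2:])
    (a, b), (c, e) = left
    (f, g), (h, i) = right
    return ((a * f + b * h, a * g + b * i),
            (c * f + e * h, c * g + e * i))

I = ((1, 0), (0, 1))
X = ((0, 1), (1, 0))
print(imm([I, X, X, I]))  # ((1, 0), (0, 1)), since X * X = I
```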

    Conditional Lower Bounds for Dynamic Geometric Measure Problems

    Communication with Partial Noiseless Feedback

    We introduce the notion of one-way communication schemes with partial noiseless feedback. In this setting, Alice wishes to communicate a message to Bob by using a communication scheme that involves sending a sequence of bits over a channel while receiving feedback bits from Bob for a delta fraction of the transmissions. An adversary is allowed to corrupt up to a constant fraction of Alice's transmissions, while the feedback is always uncorrupted. Motivated by questions related to coding for interactive communication, we seek to determine the maximum error rate, as a function of 0 <= delta <= 1, such that Alice can send a message to Bob via some protocol with a delta fraction of noiseless feedback. The case delta = 1 corresponds to full feedback, in which the result of Berlekamp ['64] implies that the maximum tolerable error rate is 1/3, while the case delta = 0 corresponds to no feedback, in which the maximum tolerable error rate is 1/4, achievable by use of a binary error-correcting code. In this work, we show that for any delta in (0,1] and gamma in [0, 1/3), there exists a randomized communication scheme with noiseless delta-feedback, such that the probability of miscommunication is low, as long as no more than a gamma fraction of the rounds are corrupted. Moreover, we show that for any delta in (0, 1] and gamma < f(delta), there exists a deterministic communication scheme with noiseless delta-feedback that always decodes correctly as long as no more than a gamma fraction of rounds are corrupted. Here f is a monotonically increasing, piecewise linear, continuous function with f(0) = 1/4 and f(1) = 1/3. Also, the rate of communication in both cases is constant (dependent on delta and gamma but independent of the input length).
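
    The delta = 0 endpoint (error rate approaching 1/4 via a binary error-correcting code) can be illustrated with the Hadamard code, whose relative distance is exactly 1/2, so nearest-codeword decoding corrects any error fraction below 1/4. The choice of code and the function names below are ours, for illustration; the sketch ignores rate, which the paper's schemes keep constant.

```python
from itertools import product

def hadamard_encode(msg_bits):
    """Encode k message bits as the 2^k inner products <m, x> mod 2.
    Any two distinct codewords differ in exactly half the positions
    (relative distance 1/2), so any error fraction below 1/4 is
    correctable by nearest-codeword decoding."""
    k = len(msg_bits)
    return [sum(m & x for m, x in zip(msg_bits, pt)) % 2
            for pt in product((0, 1), repeat=k)]

def hadamard_decode(received, k):
    """Brute-force nearest-codeword decoding over all 2^k messages."""
    best = min(product((0, 1), repeat=k),
               key=lambda m: sum(r != c for r, c in
                                 zip(received, hadamard_encode(m))))
    return list(best)

msg = [1, 0, 1]
word = hadamard_encode(msg)      # length 8; distance 4 to every other codeword
word[0] ^= 1                     # adversary corrupts 1 of 8 bits (< 1/4)
print(hadamard_decode(word, 3))  # [1, 0, 1]
```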

    Conditional Hardness of Earth Mover Distance

    The Earth Mover Distance (EMD) between two sets of points A, B subseteq R^d with |A| = |B| is the minimum total Euclidean distance of any perfect matching between A and B. One of its generalizations is asymmetric EMD, which is the minimum total Euclidean distance of any matching of size |A| between sets of points A, B subseteq R^d with |A| <= |B|. The problems of computing EMD and asymmetric EMD are well-studied and have many applications in computer science, some of which also ask for the EMD-optimal matching itself. Unfortunately, all known algorithms require at least quadratic time to compute EMD exactly. Approximation algorithms with nearly linear time complexity in n are known (even for finding approximately optimal matchings), but suffer from exponential dependence on the dimension. In this paper we show that significant improvements in exact and approximate algorithms for EMD would contradict conjectures in fine-grained complexity. In particular, we prove the following results: - Under the Orthogonal Vectors Conjecture, there is some c>0 such that EMD in Omega(c^{log^* n}) dimensions cannot be computed in truly subquadratic time. - Under the Hitting Set Conjecture, for every delta>0, no truly subquadratic time algorithm can find a (1 + 1/n^delta)-approximate EMD matching in omega(log n) dimensions. - Under the Hitting Set Conjecture, for every eta = 1/omega(log n), no truly subquadratic time algorithm can find a (1 + eta)-approximate asymmetric EMD matching in omega(log n) dimensions.
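
    The exact baseline the lower bounds address is the reduction to minimum-cost perfect matching, i.e. an assignment problem on the Euclidean cost matrix. A short sketch (ours, not the paper's code) using SciPy's assignment solver; note that merely forming the cost matrix already takes quadratic time.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd(A, B):
    """Exact EMD between equal-size point sets A, B of shape (n, d):
    build the n x n matrix of pairwise Euclidean distances and solve
    the assignment problem. Building the matrix alone is quadratic,
    which is what the conditional lower bounds say is hard to beat."""
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
print(emd(A, B))  # 2.0: each point matches the point directly above it
```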

    How Fast Can We Play Tetris Greedily With Rectangular Pieces?

    Consider a variant of Tetris played on a board of width w and infinite height, where the pieces are axis-aligned rectangles of arbitrary integer dimensions, the pieces can only be moved before letting them drop, and a row does not disappear once it is full. Suppose we want to follow a greedy strategy: let each rectangle fall where it will end up the lowest given the current state of the board. To do so, we want a data structure which can always suggest a greedy move. In other words, we want a data structure which maintains a set of O(n) rectangles, supports queries which return where to drop the rectangle, and updates which insert a rectangle dropped at a certain position and return the height of the highest point in the updated set of rectangles. We show via a reduction to the Multiphase problem [Pătraşcu, 2010] that on a board of width w = Theta(n), if the OMv conjecture [Henzinger et al., 2015] is true, then both operations cannot be supported in time O(n^{1/2-epsilon}) simultaneously. The reduction also implies polynomial bounds from the 3-SUM conjecture and the APSP conjecture. On the other hand, we show that there is a data structure supporting both operations in O(n^{1/2} log^{3/2} n) time on boards of width n^{O(1)}, matching the lower bound up to an n^{o(1)} factor.
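
    Both operations have an easy O(w)-per-operation baseline that maintains the column heights directly; the paper's contribution is the O(n^{1/2} polylog) structure and the matching conditional lower bound. A naive sketch of the baseline (class and method names ours, for illustration):

```python
class GreedyTetris:
    """Naive board of width w storing per-column heights.
    query(a):        leftmost drop column minimizing the resting height
                     of a width-a rectangle -- O(w * a) time here.
    update(x, a, b): drop an a x b rectangle at column x; return the
                     new maximum height of the board -- O(w) time.
    The paper shows roughly O(sqrt(n)) per operation is achievable and,
    under the OMv conjecture, essentially optimal."""
    def __init__(self, w):
        self.h = [0] * w

    def query(self, a):
        w = len(self.h)
        rests = [max(self.h[x:x + a]) for x in range(w - a + 1)]
        return rests.index(min(rests))

    def update(self, x, a, b):
        top = max(self.h[x:x + a])
        for i in range(x, x + a):
            self.h[i] = top + b
        return max(self.h)

board = GreedyTetris(6)
x = board.query(3)            # 0: the board is flat
print(board.update(x, 3, 2))  # 2
print(board.query(2))         # 3: columns 3..4 are still empty
```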