    Space-Time Tradeoffs for Subset Sum: An Improved Worst Case Algorithm

    The technique of Schroeppel and Shamir (SICOMP, 1981) has long been the most efficient way to trade space against time for the SUBSET SUM problem. In the random-instance setting, however, improved tradeoffs exist. In particular, the recently discovered dissection method of Dinur et al. (CRYPTO 2012) yields a significantly improved space-time tradeoff curve for instances with strong randomness properties. Our main result is that these strong randomness assumptions can be removed, obtaining the same space-time tradeoffs in the worst case. We also show that for small space usage the dissection algorithm can be almost fully parallelized. Our strategy for dealing with arbitrary instances is instead to inject the randomness into the dissection process itself, by working over a carefully selected but random composite modulus, and to introduce explicit space-time controls into the algorithm by means of a "bailout mechanism".
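
    The residue-filtering ingredient can be illustrated with a toy meet-in-the-middle sketch: bucket one half's subset sums by residue modulo a random M, match the other half through residues, and verify exact sums. This is only an illustration (the modulus choice and helper names below are ours); the paper's dissection algorithm and bailout mechanism are substantially more involved.

        import random
        from collections import defaultdict

        def subset_sums(items):
            # Enumerate all 2^len(items) (sum, chosen-mask) pairs.
            sums = [(0, 0)]
            for i, x in enumerate(items):
                sums += [(s + x, mask | (1 << i)) for s, mask in sums]
            return sums

        def subset_sum_mitm(items, target):
            # Toy meet-in-the-middle SUBSET SUM with residue bucketing.
            # The random modulus stands in, very loosely, for the paper's
            # carefully selected random composite modulus.
            n = len(items)
            left, right = items[:n // 2], items[n // 2:]
            M = random.randrange(2, max(3, abs(target) + 2))  # illustrative choice
            buckets = defaultdict(list)
            for s, mask in subset_sums(left):
                buckets[s % M].append((s, mask))
            for s, mask in subset_sums(right):
                for ls, lmask in buckets[(target - s) % M]:
                    if ls + s == target:  # residues only filter; verify exactly
                        return ([x for i, x in enumerate(left) if lmask >> i & 1] +
                                [x for i, x in enumerate(right) if mask >> i & 1])
            return None

        print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))  # [4, 5] sums to 9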

    Deterministic Time-Space Tradeoffs for k-SUM

    Given a set of numbers, the $k$-SUM problem asks for a subset of $k$ numbers that sums to zero. When the numbers are integers, the time and space complexity of $k$-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point. We present a time- and space-efficient deterministic self-reduction for the $k$-SUM problem which holds for both models and has many interesting consequences. To illustrate:
    * 3-SUM is in deterministic time $O(n^2 \lg\lg(n)/\lg(n))$ and space $O\left(\sqrt{\frac{n \lg(n)}{\lg\lg(n)}}\right)$. In general, any polylogarithmic-time improvement over quadratic time for 3-SUM can be converted into an algorithm with an identical time improvement but low space complexity as well.
    * 3-SUM is in deterministic time $O(n^2)$ and space $O(\sqrt{n})$, derandomizing an algorithm of Wang.
    * A popular conjecture states that 3-SUM requires $n^{2-o(1)}$ time on the word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the (seemingly weaker) conjecture that every $O(n^{0.51})$-space algorithm for 3-SUM requires at least $n^{2-o(1)}$ time on the word-RAM.
    * For $k \ge 4$, $k$-SUM is in deterministic $O(n^{k-2+2/k})$ time and $O(\sqrt{n})$ space.
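
    For context, the quadratic baseline these results improve on is the textbook two-pointer 3-SUM algorithm; a minimal sketch (ours, not the paper's self-reduction):

        def three_sum(nums):
            # Classic O(n^2)-time 3-SUM on a sorted array: fix a pivot,
            # then scan the remainder with two pointers. This is the
            # quadratic baseline that the self-reduction above improves on.
            nums = sorted(nums)
            n = len(nums)
            for i in range(n - 2):
                lo, hi = i + 1, n - 1
                while lo < hi:
                    s = nums[i] + nums[lo] + nums[hi]
                    if s == 0:
                        return nums[i], nums[lo], nums[hi]
                    if s < 0:
                        lo += 1
                    else:
                        hi -= 1
            return None

        print(three_sum([8, -25, 4, 10, 21, -2, -6]))  # (-25, 4, 21)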

    Alternation-Trading Proofs, Linear Programming, and Lower Bounds

    A fertile area of recent research has demonstrated concrete polynomial-time lower bounds for solving natural hard problems on restricted computational models. Among these problems are Satisfiability, Vertex Cover, Hamilton Path, Mod6-SAT, Majority-of-Majority-SAT, and Tautologies, to name a few. The proofs of these lower bounds follow a certain proof-by-contradiction strategy that we call alternation-trading. An important open problem is to determine how powerful such proofs can possibly be. We propose a methodology for studying these proofs that makes them amenable to both formal analysis and automated theorem proving. We prove that the search for better lower bounds can often be turned into a problem of solving a large series of linear programming instances. Implementing a small-scale theorem prover based on this result, we extract new human-readable time lower bounds for several problems. This framework can also be used to prove concrete limitations on the current techniques. Comment: To appear in STACS 2010, 12 pages.
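
    To make the search strategy concrete, here is a schematic sketch of "binary search over exponents, one LP feasibility test per candidate". The LP below is a hypothetical one-variable placeholder (feasible iff c <= 1.801), not the paper's actual encoding of alternation-trading proofs:

        from scipy.optimize import linprog

        def proof_lp(c):
            # Hypothetical stand-in: in the real framework the LP's variables
            # describe an alternation-trading proof, and the LP is feasible
            # iff a proof of an n^c lower bound exists. This placeholder
            # system (-x <= -c and x <= 1.801) is feasible iff c <= 1.801.
            return [[-1.0], [1.0]], [-c, 1.801]

        def best_exponent(lo=1.0, hi=2.0, iters=40):
            # Binary-search the largest feasible exponent, mirroring the
            # "solve a large series of LP instances" strategy above.
            for _ in range(iters):
                mid = (lo + hi) / 2
                A_ub, b_ub = proof_lp(mid)
                res = linprog(c=[0.0], A_ub=A_ub, b_ub=b_ub,
                              bounds=[(None, None)], method="highs")
                lo, hi = (mid, hi) if res.success else (lo, mid)
            return lo

        print(round(best_exponent(), 3))  # ~1.801 for the toy system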

    Faster space-efficient algorithms for Subset Sum, k-Sum, and related problems

    We present randomized algorithms that solve subset sum and knapsack instances with $n$ items in $O^*(2^{0.86n})$ time, where the $O^*(\cdot)$ notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve binary integer programming on $n$ variables with few constraints in a similar running time. We also show that for any constant $k \ge 2$, random instances of $k$-Sum can be solved using $O(n^{k-0.5}\,\mathrm{polylog}(n))$ time and $O(\log n)$ space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length $n$ with integers bounded by a polynomial in $n$ share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using $O(\log n)$ space significantly faster than the trivial $O(n^2)$-time algorithm if no value occurs too often in the same list.
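
    The "trivial $O(n^2)$-time algorithm" for the common-value problem is just a pairwise scan using O(1) extra words beyond the read-only inputs; a minimal sketch for contrast (ours, as a baseline only):

        def share_common_value(a, b):
            # Quadratic baseline: compare every pair, O(1) words of extra
            # space. The paper beats this, under random read-only access to
            # random bits, when no value repeats too often within one list.
            for x in a:
                for y in b:
                    if x == y:
                        return True
            return False

        print(share_common_value([3, 1, 4, 1, 5], [9, 2, 6, 5]))  # True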

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.

    Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials

    We consider the problem of representing Boolean functions exactly by "sparse" linear combinations (over $\mathbb{R}$) of functions from some "simple" class $\mathcal{C}$. In particular, given $\mathcal{C}$ we are interested in finding low-complexity functions lacking sparse representations. When $\mathcal{C}$ is the set of PARITY functions or the set of conjunctions, this sort of problem has a well-understood answer; the problem becomes interesting when $\mathcal{C}$ is "overcomplete" and the set of functions is not linearly independent. We focus on the cases where $\mathcal{C}$ is the set of linear threshold functions, the set of rectified linear units (ReLUs), and the set of low-degree polynomials over a finite field, all of which are well-studied in different contexts. We provide generic tools for proving lower bounds on representations of this kind. Applying these, we give several new lower bounds for "semi-explicit" Boolean functions. For example, we show there are functions in nondeterministic quasi-polynomial time that require super-polynomial size:
    * Depth-two neural networks with sign activation function, a special case of depth-two threshold circuit lower bounds.
    * Depth-two neural networks with ReLU activation function.
    * $\mathbb{R}$-linear combinations of $O(1)$-degree $\mathbb{F}_p$-polynomials, for every prime $p$ (related to problems regarding Higher-Order "Uncertainty Principles"). We also obtain a function in $E^{NP}$ requiring $2^{\Omega(n)}$ linear combinations.
    * $\mathbb{R}$-linear combinations of $ACC \circ THR$ circuits of polynomial size (further generalizing the recent lower bounds of Murray and the author).
    (The above is a shortened abstract. For the full abstract, see the paper.)
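
    The "well-understood" PARITY case can be made concrete: every Boolean function has a unique expansion over the parity basis (its Fourier expansion), so the sparsity of the best representation is just the number of nonzero coefficients. A small sketch (our illustration, not from the paper):

        from itertools import product

        def fourier_coefficients(f, n):
            # Expand f: {0,1}^n -> {-1,+1} in the PARITY basis
            # chi_S(x) = (-1)^{sum_{i in S} x_i}. The basis is orthonormal,
            # so the coefficients (and hence the sparsity) are unique.
            points = list(product([0, 1], repeat=n))
            coeffs = {}
            for S in product([0, 1], repeat=n):  # indicator vector of S
                chi = [(-1) ** sum(s * xi for s, xi in zip(S, x)) for x in points]
                coeffs[S] = sum(f(x) * c for x, c in zip(points, chi)) / len(points)
            return coeffs

        # Example: AND on two bits (+1 iff both bits set) needs all 4 parities.
        AND = lambda x: 1 if x[0] and x[1] else -1
        print({S: c for S, c in fourier_coefficients(AND, 2).items() if c != 0})
        # {(0, 0): -0.5, (0, 1): -0.5, (1, 0): -0.5, (1, 1): 0.5}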

    Integer factorization as subset-sum problem

    This paper elaborates on a sieving technique that was first applied in 2018 for improving bounds on deterministic integer factorization. We will generalize the sieve in order to obtain a polynomial-time reduction from integer factorization to a specific instance of the multiple-choice subset-sum problem. As an application, we will improve upon special-purpose factorization algorithms for integers composed of divisors with small difference. In particular, we will refine the runtime complexity of Fermat's factorization algorithm by a large subexponential factor. Our first procedure is deterministic, rigorous, easy to implement, and has negligible space complexity. Our second procedure is heuristically faster than the first, but has non-negligible space complexity. Comment: 22 pages (including appendix).
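
    For reference, the classic Fermat method that the paper refines searches for a representation n = a^2 - b^2 = (a-b)(a+b), which is fast exactly when the two divisors are close; a minimal sketch of the textbook method (the paper's subexponential speedup is not reflected here):

        from math import isqrt

        def fermat_factor(n):
            # Fermat factorization of an odd composite n: increase a from
            # ceil(sqrt(n)) until a^2 - n is a perfect square b^2, then
            # n = (a - b) * (a + b). Fast when the divisors are close.
            a = isqrt(n)
            if a * a < n:
                a += 1
            while True:
                b2 = a * a - n
                b = isqrt(b2)
                if b * b == b2:
                    return a - b, a + b
                a += 1

        print(fermat_factor(5959))  # (59, 101)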

    Time-Space Lower Bounds for Simulating Proof Systems with Quantum and Randomized Verifiers

    A line of work initiated by Fortnow in 1997 has proven model-independent time-space lower bounds for the $\mathsf{SAT}$ problem and related problems within the polynomial-time hierarchy. For example, for the $\mathsf{SAT}$ problem, the state of the art is that the problem cannot be solved by random-access machines in $n^c$ time and $n^{o(1)}$ space simultaneously for $c < 2\cos(\frac{\pi}{7}) \approx 1.801$. We extend this lower bound approach to the quantum and randomized domains. Combining Grover's algorithm with components from $\mathsf{SAT}$ time-space lower bounds, we show that there are problems verifiable in $O(n)$ time with quantum Merlin-Arthur protocols that cannot be solved in $n^c$ time and $n^{o(1)}$ space simultaneously for $c < \frac{3+\sqrt{3}}{2} \approx 2.366$, a super-quadratic time lower bound. This result and the prior work on $\mathsf{SAT}$ can both be viewed as consequences of a more general formula for time lower bounds against small-space algorithms, whose asymptotics we study in full. We also show lower bounds against randomized algorithms: there are problems verifiable in $O(n)$ time with (classical) Merlin-Arthur protocols that cannot be solved in $n^c$ randomized time and $n^{o(1)}$ space simultaneously for $c < 1.465$, improving a result of Diehl. For quantum Merlin-Arthur protocols, the lower bound in this setting can be improved to $c < 1.5$. Comment: 38 pages, 5 figures. To appear in ITCS 2021.
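
    For quick reference, the two closed-form exponents quoted above evaluate numerically as follows (a plain arithmetic check, nothing paper-specific):

        from math import cos, pi, sqrt

        # Numerical values of the exponents quoted in the abstract.
        print(2 * cos(pi / 7))    # 1.8019... (SAT time-space lower bound)
        print((3 + sqrt(3)) / 2)  # 2.3660... (quantum Merlin-Arthur bound)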