Space--Time Tradeoffs for Subset Sum: An Improved Worst Case Algorithm
The technique of Schroeppel and Shamir (SICOMP, 1981) has long been the most
efficient way to trade space against time for the SUBSET SUM problem. In the
random-instance setting, however, improved tradeoffs exist. In particular, the
recently discovered dissection method of Dinur et al. (CRYPTO 2012) yields a
significantly improved space--time tradeoff curve for instances with strong
randomness properties. Our main result is that these strong randomness
assumptions can be removed, obtaining the same space--time tradeoffs in the
worst case. We also show that for small space usage the dissection algorithm
can be almost fully parallelized. Our strategy for dealing with arbitrary
instances is to instead inject the randomness into the dissection process
itself by working over a carefully selected but random composite modulus, and
to introduce explicit space--time controls into the algorithm by means of a
"bailout mechanism"
Deterministic Time-Space Tradeoffs for k-SUM
Given a set of numbers, the k-SUM problem asks for a subset of k numbers that sums to zero. When the numbers are integers, the time and space complexity of k-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point.
We present a time- and space-efficient deterministic self-reduction for the k-SUM problem which holds for both models and has many interesting consequences. To illustrate:

* 3-SUM is solvable in deterministic quadratic time and sublinear space. In general, any polylogarithmic-time improvement over quadratic time for 3-SUM can be converted into an algorithm with an identical time improvement but low space complexity as well.
* k-SUM admits a space-efficient deterministic algorithm, derandomizing an algorithm of Wang.
* A popular conjecture states that 3-SUM requires essentially quadratic time on the word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the (seemingly weaker) conjecture that every small-space algorithm for 3-SUM requires essentially quadratic time on the word-RAM.
* For larger k, k-SUM likewise admits a deterministic algorithm that is simultaneously time- and space-efficient.
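For contrast with the space-efficient algorithms above, the textbook deterministic quadratic-time 3-SUM routine (sort plus two pointers) uses linear extra space for the sorted copy; the paper's point is matching such time bounds with far less space. A sketch:

```python
def three_sum(nums):
    """Deterministic O(n^2)-time 3-SUM via sorting and two pointers.

    Returns a triple summing to zero, or None. Uses O(n) extra space.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return (a[i], a[lo], a[hi])
            if s < 0:
                lo += 1   # sum too small: advance the left pointer
            else:
                hi -= 1   # sum too large: retreat the right pointer
    return None
```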
Alternation-Trading Proofs, Linear Programming, and Lower Bounds
A fertile area of recent research has demonstrated concrete polynomial time
lower bounds for solving natural hard problems on restricted computational
models. Among these problems are Satisfiability, Vertex Cover, Hamilton Path,
Mod6-SAT, Majority-of-Majority-SAT, and Tautologies, to name a few. The proofs
of these lower bounds follow a certain proof-by-contradiction strategy that we
call alternation-trading. An important open problem is to determine how
powerful such proofs can possibly be.
We propose a methodology for studying these proofs that makes them amenable
to both formal analysis and automated theorem proving. We prove that the search
for better lower bounds can often be turned into a problem of solving a large
series of linear programming instances. Implementing a small-scale theorem
prover based on this result, we extract new human-readable time lower bounds
for several problems. This framework can also be used to prove concrete
limitations on the current techniques.

Comment: To appear in STACS 2010, 12 pages.
Faster space-efficient algorithms for Subset Sum, k-Sum, and related problems
We present randomized algorithms that solve subset sum and knapsack instances with n items in O*(2^(0.86n)) time, where the O*(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve binary integer programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^(k-0.5) polylog(n)) time and O(log n) space, without the assumption of random access to random bits.

Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^2)-time algorithm if no value occurs too often in the same list.
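The trivial O(n^2)-time algorithm mentioned above can be made explicit: with read-only input lists, two nested index loops need only O(log n) bits of workspace (the two indices). A sketch of this baseline:

```python
def have_common_value(xs, ys):
    """Do two read-only lists share a value?

    Trivial O(n^2)-time baseline using only O(log n) bits of workspace:
    the input is never modified or copied, only indexed.
    """
    for i in range(len(xs)):
        for j in range(len(ys)):
            if xs[i] == ys[j]:
                return True
    return False
```

Sorting or hashing would be faster but needs Ω(n) writable space, which is exactly what the low-space setting forbids.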
Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.
Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials
We consider the problem of representing Boolean functions exactly by "sparse" linear combinations (over the reals) of functions from some "simple" class C. In particular, given C we are interested in finding low-complexity functions lacking sparse representations. When C is the set of PARITY functions or the set of conjunctions, this sort of problem has a well-understood answer; the problem becomes interesting when C is "overcomplete" and the set of functions is not linearly independent. We focus on the cases where C is the set of linear threshold functions, the set of rectified linear units (ReLUs), and the set of low-degree polynomials over a finite field, all of which are well-studied in different contexts.
We provide generic tools for proving lower bounds on representations of this kind. Applying these, we give several new lower bounds for "semi-explicit" Boolean functions. For example, we show there are functions in nondeterministic quasi-polynomial time that require super-polynomial size:

* depth-two neural networks with sign activation function, a special case of depth-two threshold circuit lower bounds;
* depth-two neural networks with ReLU activation function;
* real linear combinations of low-degree polynomials over the p-element field, for every prime p (related to problems regarding Higher-Order "Uncertainty Principles"); we also obtain a harder function requiring an even larger number of such linear combinations;
* real linear combinations of ACC∘THR circuits of polynomial size (further generalizing the recent lower bounds of Murray and the author).

(The above is a shortened abstract. For the full abstract, see the paper.)
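As a toy illustration (not from the paper) of what a sparse real linear combination of ReLUs looks like, the 2-bit PARITY function XOR can be written exactly as a 2-term combination on {0,1} inputs; the paper asks when such sparse representations are provably impossible.

```python
def relu(z):
    """Rectified linear unit: max(z, 0)."""
    return max(z, 0.0)

def xor_via_relus(x1, x2):
    # On {0,1} inputs: XOR(x1, x2) = ReLU(x1 + x2) - 2 * ReLU(x1 + x2 - 1).
    # A 2-term real linear combination of ReLUs representing parity exactly.
    return relu(x1 + x2) - 2.0 * relu(x1 + x2 - 1)
```

The combination works because the two "hinges" cancel the linear growth once both inputs are on; lower bounds of the kind in the paper show that for harder functions no comparably sparse combination exists.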
Integer factorization as subset-sum problem
This paper elaborates on a sieving technique that has first been applied in
2018 for improving bounds on deterministic integer factorization. We will
generalize the sieve in order to obtain a polynomial-time reduction from
integer factorization to a specific instance of the multiple-choice subset-sum
problem. As an application, we will improve upon special purpose factorization
algorithms for integers composed of divisors with small difference. In
particular, we will refine the runtime complexity of Fermat's factorization
algorithm by a large subexponential factor. Our first procedure is
deterministic, rigorous, easy to implement and has negligible space complexity.
Our second procedure is heuristically faster than the first, but has
non-negligible space complexity.

Comment: 22 pages (including appendix).
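Fermat's factorization algorithm, whose runtime the paper refines, can be sketched as follows; it is fastest precisely when n has two divisors with small difference.

```python
from math import isqrt

def fermat_factor(n):
    """Classic Fermat factorization for odd composite n.

    Searches for a with a^2 - n = b^2 a perfect square; then
    n = (a - b)(a + b). Runtime is driven by the gap between
    n's divisors, hence the special-purpose nature.
    """
    a = isqrt(n)
    if a * a < n:
        a += 1            # start at ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:   # a^2 - n is a perfect square
            return a - b, a + b
        a += 1
```

For n = 101 * 103 the loop succeeds after a single step beyond ceil(sqrt(n)), illustrating the small-difference sweet spot.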
Time-Space Lower Bounds for Simulating Proof Systems with Quantum and Randomized Verifiers
A line of work initiated by Fortnow in 1997 has proven model-independent time-space lower bounds for the SAT problem and related problems within the polynomial-time hierarchy. For example, the state of the art for SAT is that the problem cannot be solved by random-access machines in n^c time and n^(o(1)) space simultaneously for c < 2cos(π/7) ≈ 1.801.
We extend this lower bound approach to the quantum and randomized domains. Combining Grover's algorithm with components from SAT time-space lower bounds, we show that there are problems verifiable in linear time with quantum Merlin-Arthur protocols that cannot be solved in n^c time and n^(o(1)) space simultaneously for any c below a constant strictly greater than 2, a super-quadratic time lower bound. This result and the prior work on SAT can both be viewed as consequences of a more general formula for time lower bounds against small-space algorithms, whose asymptotics we study in full.
We also show lower bounds against randomized algorithms: there are problems verifiable in linear time with (classical) Merlin-Arthur protocols that cannot be solved in randomized time n^c and n^(o(1)) space simultaneously for c below a constant greater than 1, improving a result of Diehl. For quantum Merlin-Arthur protocols, the lower bound in this setting can be improved further still.

Comment: 38 pages, 5 figures. To appear in ITCS 2021.
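The quadratic speedup of Grover's algorithm, the quantum ingredient above, can be illustrated with a small classical state-vector simulation (a toy sketch, not code from the paper):

```python
from math import pi, sqrt, floor

def grover_success_probability(n_items, marked):
    """Simulate Grover search over n_items basis states, one marked item.

    Pure-Python state-vector simulation: the amplitude list plays the
    role of the quantum state, so no quantum libraries are needed.
    """
    amp = [1.0 / sqrt(n_items)] * n_items       # uniform superposition
    iterations = floor(pi / 4 * sqrt(n_items))  # ~ (pi/4) * sqrt(N) steps
    for _ in range(iterations):
        amp[marked] = -amp[marked]              # oracle: phase-flip marked item
        mean = sum(amp) / n_items               # diffusion: inversion about mean
        amp = [2 * mean - a for a in amp]
    return amp[marked] ** 2                     # measurement probability
```

With N = 8 the simulation runs 2 iterations and finds the marked item with probability about 0.945, versus 1/8 for a single classical guess; this O(√N) query count is what drives the super-quadratic lower bound above.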