NP-hardness of circuit minimization for multi-output functions
Can we design efficient algorithms for finding fast algorithms? This question is captured by various circuit minimization problems, and algorithms for the corresponding tasks have significant practical applications. Following the work of Cook and Levin in the early 1970s, a central question is whether minimizing the circuit size of an explicitly given function is NP-complete. While this is known to hold in restricted models such as DNFs, making progress with respect to more expressive classes of circuits has been elusive.
In this work, we establish the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits. More precisely, we show that computing the minimum circuit size of a given multi-output Boolean function f : {0,1}^n → {0,1}^m is NP-hard under many-one polynomial-time randomized reductions. Our argument builds on a simpler NP-hardness proof for the circuit minimization problem for (single-output) Boolean functions under an extended set of generators.
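At toy scale, the quantity in question can be computed outright. The sketch below (hypothetical code, not from the paper) does a brute-force search over circuits in the basis {AND, OR, NOT}: truth tables over n = 2 inputs are encoded as 4-bit masks, and a breadth-first search over sets of available truth tables finds the fewest gates needed to make every output of a multi-output function available.

```python
# Brute-force minimum circuit size over {AND, OR, NOT}; illustrative only.
from collections import deque

N = 2
MASK = (1 << (1 << N)) - 1           # truth tables over 2 inputs fit in 4 bits

# Truth tables of the input variables: bit i of a mask is the value on input i,
# where input i encodes x1 = i & 1 and x2 = (i >> 1) & 1.
x1 = sum(((i >> 0) & 1) << i for i in range(1 << N))   # 0b1010
x2 = sum(((i >> 1) & 1) << i for i in range(1 << N))   # 0b1100

def min_circuit_size(targets):
    """Fewest AND/OR/NOT gates that make every target truth table available."""
    start = frozenset([x1, x2])
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        avail, gates = queue.popleft()          # BFS => first hit is minimal
        if set(targets) <= avail:
            return gates
        candidates = {(~t) & MASK for t in avail}
        candidates |= {a & b for a in avail for b in avail}
        candidates |= {a | b for a in avail for b in avail}
        for t in candidates - avail:            # add one new gate output
            nxt = avail | {t}
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, gates + 1))
    return None

XOR, AND = 0b0110, 0b1000
print(min_circuit_size([XOR]))        # 4: e.g. (x1 OR x2) AND NOT(x1 AND x2)
print(min_circuit_size([XOR, AND]))   # also 4: the AND gate is shared
```

The multi-output case at the end shows why sharing matters: the circuit for XOR already computes AND as an intermediate value, so the two-output function costs no extra gates.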
Complementing these results, we investigate the computational hardness of minimizing communication. We establish that several variants of this problem are NP-hard under deterministic reductions. In particular, unless P = NP, no polynomial-time computable function can approximate the deterministic two-party communication complexity of a partial Boolean function up to a polynomial. This has consequences for the class of structural results that one might hope to show about the communication complexity of partial functions.
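For very small functions, the quantity shown hard to approximate can be computed exactly by brute force. The sketch below (hypothetical code, not from the paper) minimizes protocol-tree depth for a total function: each transmitted bit splits the speaking player's current input set in two, and a leaf is reached once the remaining combinatorial rectangle is monochromatic.

```python
# Exact deterministic communication complexity of a tiny function by
# exhaustive search over protocol trees; names and example are invented.
from functools import lru_cache
from itertools import combinations

n = 2
X = Y = tuple(range(2 ** n))
f = lambda x, y: int(x == y)              # Equality on n-bit inputs

def splits(s):
    """All ways one bit of communication can split input set s in two."""
    items = list(s)
    for r in range(1, len(items)):
        for a in combinations(items, r):
            yield a, tuple(sorted(set(items) - set(a)))

@lru_cache(maxsize=None)
def cc(rows, cols):
    """Minimum protocol-tree depth for the rectangle rows x cols."""
    if len({f(x, y) for x in rows for y in cols}) == 1:
        return 0                          # monochromatic rectangle: a leaf
    best = len(rows) + len(cols)          # crude upper bound to start from
    for a, b in splits(rows):             # Alice speaks next
        best = min(best, 1 + max(cc(a, cols), cc(b, cols)))
    for a, b in splits(cols):             # Bob speaks next
        best = min(best, 1 + max(cc(rows, a), cc(rows, b)))
    return best

print(cc(X, Y))                           # 3, matching D(EQ_n) = n + 1
```

The search is doubly exponential in the input length, which is consistent with the hardness result above: for Equality on n = 2 bits it recovers the known value n + 1 = 3.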
Derandomizing from Random Strings
In this paper we show that BPP is truth-table reducible to the set of Kolmogorov random strings R_K. It was previously known that PSPACE, and hence BPP, is Turing-reducible to R_K. The earlier proof relied on the adaptivity of the Turing reduction to find a Kolmogorov-random string of polynomial length using the set R_K as an oracle. Our new non-adaptive result relies on a new fundamental fact about the set R_K, namely that no initial segment of the characteristic sequence of R_K is compressible by recursive means. As a partial converse to our claim, we show that strings of high Kolmogorov complexity, when used as advice, are not much more useful than randomly chosen strings.
Smoothed analysis of deterministic discounted and mean-payoff games
We devise a policy-iteration algorithm for deterministic two-player discounted and mean-payoff games that runs in polynomial time with high probability on any input where each payoff is chosen independently from a sufficiently random distribution.
This includes the case where an arbitrary set of payoffs has been perturbed by a Gaussian, showing for the first time that deterministic two-player games can be solved efficiently in the sense of smoothed analysis.
More generally, we devise a condition number for deterministic discounted and mean-payoff games, and show that our algorithm runs in time polynomial in this condition number.
Our result confirms a previous conjecture of Boros et al., which was claimed as a theorem and later retracted. It stands in contrast with a recent counter-example by Christ and Yannakakis, showing that Howard's policy-iteration algorithm does not run in smoothed polynomial time on stochastic single-player mean-payoff games.
Our approach is inspired by the analysis of random optimal assignment instances by Frieze and Sorkin, and the analysis of bias-induced policies for mean-payoff games by Akian, Gaubert and Hochart.
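The discounted-game setting can be made concrete with a toy sketch. The graph, payoffs, and discount factor below are invented, and plain value iteration (a simpler fixed-point method, not the paper's policy-iteration algorithm) is used to solve the optimality equations v(s) = max/min over edges (s, t) of payoff(s, t) + λ·v(t).

```python
# Toy deterministic two-player discounted game; everything here is invented.
# Vertices in MAX_STATES belong to the maximizer, the rest to the minimizer.
LAM = 0.9                                   # discount factor, 0 <= LAM < 1

SUCC = {0: [1, 2], 1: [0], 2: [0]}          # edges s -> t
PAYOFF = {(0, 1): 1.0, (0, 2): 3.0, (1, 0): 0.0, (2, 0): -5.0}
MAX_STATES = {0}

def value_iteration(tol=1e-9):
    """Iterate v(s) <- max/min over edges (s, t) of PAYOFF[s, t] + LAM * v(t)."""
    v = {s: 0.0 for s in SUCC}
    while True:
        nv = {}
        for s, ts in SUCC.items():
            pick = max if s in MAX_STATES else min
            nv[s] = pick(PAYOFF[(s, t)] + LAM * v[t] for t in ts)
        if max(abs(nv[s] - v[s]) for s in SUCC) < tol:
            return nv
        v = nv

values = value_iteration()
# At state 0 the maximizer prefers the edge to 1: value 1/(1 - LAM**2) ~ 5.263
```

Since LAM < 1, the update is a contraction and converges geometrically; here the maximizer forgoes the immediate payoff of 3 because that edge leads through a costly -5.0 edge.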
The Church–Turing thesis
According to the Church–Turing thesis, if a calculation can be carried out in an automated way (by a given method, in a finite number of steps), then it can also be carried out by a Turing machine. This article gives a brief introduction to the Church–Turing thesis and to the historical context of its formulation. It includes an annotated translation of part of Alan Turing's 1936–37 paper, "On computable numbers, with an application to the Entscheidungsproblem" [24], in which the origin of the Turing machine can be understood.
Lower Bounds for Elimination via Weak Regularity
We consider the problem of elimination in communication complexity, which was first raised by Ambainis et al. and later studied by Beimel et al. for its connection to the famous direct sum question. In this problem, let f: {0,1}^2n -> {0,1} be any Boolean function. Alice and Bob get k inputs x_1, ..., x_k and y_1, ..., y_k respectively, with x_i, y_i in {0,1}^n. They want to output a k-bit vector v such that there exists one index i for which v_i is not equal to f(x_i, y_i). We prove a general result lower bounding the randomized communication complexity of the elimination problem for f using its discrepancy. Consequently, we obtain strong lower bounds for the functions Inner-Product and Greater-Than that work for exponentially larger values of k than the best previous bounds.
To prove our result, we use a pseudo-random notion called regularity that was first used by Raz and Wigderson. We show that functions with small discrepancy are regular. We also observe that a weaker notion, which we call weak regularity, already implies hardness of elimination. Finally, we give a different proof, borrowing ideas from Viola, to show that Greater-Than is weakly regular.
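The elimination task itself is easy to state in code. The sketch below (invented instances, not from the paper) checks validity of an output vector v for k instances of Inner-Product: v is valid iff it disagrees with the true answers in at least one coordinate.

```python
# Validity check for the elimination task; instances are invented examples.

def inner_product(x, y):
    """Inner-Product over GF(2): parity of the bitwise AND of x and y."""
    return bin(x & y).count("1") % 2

def is_valid_elimination(f, xs, ys, v):
    """True iff some index i has v[i] != f(xs[i], ys[i])."""
    return any(vi != f(x, y) for vi, x, y in zip(v, xs, ys))

# k = 2 instances of Inner-Product on n = 3 bits:
xs, ys = [0b101, 0b110], [0b011, 0b110]
answers = [inner_product(x, y) for x, y in zip(xs, ys)]     # [1, 0]
print(is_valid_elimination(inner_product, xs, ys, [0, 0]))   # True: index 0 differs
print(is_valid_elimination(inner_product, xs, ys, answers))  # False: no index differs
```

Note that only the all-correct vector is invalid, which is what makes elimination so much easier than computing f on every instance, and why lower bounds for it are strong statements.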
- …