    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification for problems in various classes, such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication, and even problems in TFNP such as Factoring and computing a Nash equilibrium.
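
    As an illustration of what direct product feasibility means for a concrete problem, here is a minimal Python sketch for Max-SAT, assuming a clause is a list of signed 1-based variable indices and that aggregation is a disjoint union with shifted variable names; the helper names aggregate and split_solution are hypothetical, not the paper's.

    # Illustrative sketch only: aggregation of k Max-SAT instances by disjoint
    # union, and recovery of optimal per-instance solutions from an optimal
    # solution of the aggregated instance (names and encoding are assumptions).
    def aggregate(instances):
        # Combine k Max-SAT instances into one instance over disjoint variables.
        big, offsets, offset = [], [], 0
        for clauses in instances:
            offsets.append(offset)
            n_vars = max(abs(lit) for clause in clauses for lit in clause)
            for clause in clauses:
                big.append([lit + offset if lit > 0 else lit - offset for lit in clause])
            offset += n_vars
        return big, offsets

    def split_solution(assignment, instances, offsets):
        # Project an optimal 0/1 assignment (dict keyed by variable index) of the
        # big instance back onto each block; because the union is disjoint,
        # maximizing satisfied clauses globally maximizes them block by block.
        solutions = []
        for clauses, offset in zip(instances, offsets):
            n_vars = max(abs(lit) for clause in clauses for lit in clause)
            solutions.append({v: assignment[v + offset] for v in range(1, n_vars + 1)})
        return solutions

    small = [[[1, -2], [2]], [[1], [-1, 2]]]   # two toy instances
    big, offsets = aggregate(small)            # variables of the second instance become 3 and 4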

    Interactive Coding with Constant Round and Communication Blowup


    Algorithms and Lower Bounds for Cycles and Walks: Small Space and Sparse Graphs


    Cryptography from Information Loss

    © Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan, and Prashant Nalini Vasudevan. Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is “lossy” reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into “useful” hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions: 1. We say that a language L has an f-reduction to a language L' for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x1, . . ., xm), with each xi ∈ {0, 1}^n, and outputs a string z such that, with high probability, L'(z) = f(L(x1), L(x2), . . ., L(xm)). Suppose a language L has an f-reduction C to L' that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds: (i) f is the OR function, t ≤ m/100, and L' is the same as L; (ii) f is the Majority function, and t ≤ m/100; or (iii) f is the OR function, t ≤ O(m log n), and the reduction has no error. This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which yield only auxiliary-input one-way functions. 2. Our second result is about the stronger notion of t-compressing f-reductions, that is, reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, STOC 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses). Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, and which may be of independent interest.
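
    To make the f-reduction syntax and the t-lossy condition concrete, here is a minimal Python sketch under my own assumptions: the language L is the toy set of strings containing a '1', the OR-reduction C simply outputs the single bit OR of the memberships (trivially compressing for this easy language), and I(X; C(X)) is estimated with a plug-in estimator over samples. None of this is a construction from the paper.

    # Toy sketch: a one-bit OR-reduction for an easy language, plus an empirical
    # plug-in estimate of the mutual information I(X; C(X)).  All names and the
    # chosen input distribution are assumptions made for illustration.
    import random
    from collections import Counter
    from math import log2

    def L(x):                 # membership in the toy language: "contains a '1'"
        return '1' in x

    def C(xs):                # OR-reduction: a single output bit z with L'(z) = OR_i L(x_i)
        return '1' if any(L(x) for x in xs) else '0'

    def mutual_information(samples):
        # Plug-in estimate of I(X; Z) from a list of (x, z) pairs.
        joint = Counter(samples)
        px = Counter(x for x, _ in samples)
        pz = Counter(z for _, z in samples)
        n = len(samples)
        return sum((c / n) * log2((c / n) / ((px[x] / n) * (pz[z] / n)))
                   for (x, z), c in joint.items())

    random.seed(0)
    draws = []
    for _ in range(20000):
        xs = tuple(''.join(random.choice('01') for _ in range(4)) for _ in range(3))
        draws.append((xs, C(xs)))
    print(mutual_information(draws))   # bounded by 1 bit, since C outputs a single bit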

    New Query Lower Bounds for Submodular Function Minimization

    We consider submodular function minimization in the oracle model: given black-box access to a submodular set function f : 2^[n] → R, find an element of argmin_S {f(S)} using as few queries to f(·) as possible. State-of-the-art algorithms succeed with Õ(n^2) queries [LeeSW15], yet the best-known lower bound has never been improved beyond n [Harvey08]. We provide a query lower bound of 2n for submodular function minimization, a 3n/2 - 2 query lower bound for the non-trivial minimizer of a symmetric submodular function, and an (n choose 2) query lower bound for the non-trivial minimizer of an asymmetric submodular function. Our 3n/2 - 2 lower bound results from a connection between SFM lower bounds and a novel concept we term the cut dimension of a graph. Interestingly, this yields a 3n/2 - 2 cut-query lower bound for finding the global mincut in an undirected, weighted graph, but we also prove it cannot yield a lower bound better than n + 1 for s-t mincut, even in a directed, weighted graph.
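
    For concreteness, here is a minimal Python sketch of the oracle model in which these query bounds are stated, assuming the submodular function is the cut function of a small weighted graph; the CountingOracle wrapper and the brute-force minimizer are illustrative assumptions and not how the cited algorithms or lower bounds work.

    # Sketch of the SFM oracle model: the cut function of a weighted graph is
    # symmetric and submodular; we count oracle queries while minimizing it by
    # brute force over non-trivial subsets (illustration only).
    from itertools import combinations

    class CountingOracle:
        def __init__(self, f):
            self.f, self.queries = f, 0
        def __call__(self, S):
            self.queries += 1
            return self.f(frozenset(S))

    def cut_function(weighted_edges):
        def cut(S):
            return sum(w for u, v, w in weighted_edges if (u in S) != (v in S))
        return cut

    n = 4
    edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.5)]
    oracle = CountingOracle(cut_function(edges))

    # Every non-empty proper subset is queried once: 2^n - 2 queries in total.
    best_value, best_set = min((oracle(S), S) for k in range(1, n)
                               for S in combinations(range(n), k))
    print(best_value, best_set, oracle.queries)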

    Anatomy and computational modeling of networks underlying cognitive-emotional interaction

    The classical dichotomy between cognition and emotion equated the first with rationality or logic and the second with irrational behaviors. The idea that cognition and emotion are separable, antagonistic forces competing for dominance of mind has been hard to displace despite abundant evidence to the contrary. For instance, it is now known that a pathological absence of emotion leads to profound impairment of decision making. Behavioral observations of this kind are corroborated at the mechanistic level: neuroanatomical studies reveal that brain areas typically described as underlying either cognitive or emotional processes are linked in ways that imply complex interactions that do not resemble a simple mutual antagonism. Instead, physiological studies and network simulations suggest that top-down signals from prefrontal cortex realize "cognitive control" in part by either suppressing or promoting emotional responses controlled by the amygdala, in a way that facilitates adaptation to changing task demands. Behavioral, anatomical, and physiological data suggest that emotion and cognition are equal partners in enabling a continuum or matrix of flexible behaviors that are subserved by multiple brain regions acting in concert. Here we focus on neuroanatomical data that highlight circuitry that structures cognitive-emotional interactions by directly or indirectly linking prefrontal areas with the amygdala. We also present an initial computational circuit model, based on anatomical, physiological, and behavioral data, to explicitly frame the learning and performance mechanisms by which cognition and emotion interact to achieve flexible behavior.
    R01 MH057414 - NIMH NIH HHS; R01 NS024760 - NINDS NIH HHS

    On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?


    Optimal Single-Choice Prophet Inequalities from Samples

    We study the single-choice Prophet Inequality problem when the gambler is given access to samples. We show that the optimal competitive ratio of 1/2 can be achieved with a single sample from each distribution. When the distributions are identical, we show that for any constant ε > 0, O(n) samples from the distribution suffice to achieve the optimal competitive ratio (≈ 0.745) within a factor of (1 + ε), resolving an open problem of Correa, Dütting, Fischer, and Schewior.
    Comment: Appears in Innovations in Theoretical Computer Science (ITCS) 2020
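
    One natural single-sample rule (used here purely as an illustration; the abstract does not spell out the paper's algorithm) is to draw one sample from each distribution and accept the first online value exceeding the maximum of those samples. The Python sketch below estimates the ratio of the gambler's expected value to the prophet's expected maximum under some assumed exponential distributions.

    # Simulation sketch: single-sample max-threshold rule vs. the prophet.
    # The distributions and the threshold rule are assumptions for illustration.
    import random

    def run_trial(dists):
        threshold = max(d() for d in dists)                 # one sample per distribution
        values = [d() for d in dists]                       # the online sequence
        gambler = next((v for v in values if v > threshold), 0.0)
        return gambler, max(values)                         # gambler's value, prophet's value

    random.seed(1)
    dists = [lambda s=s: random.expovariate(1.0 / s) for s in (1.0, 2.0, 0.5, 3.0)]
    g = p = 0.0
    for _ in range(100000):
        a, b = run_trial(dists)
        g, p = g + a, p + b
    print("empirical ratio of expectations:", g / p)        # roughly 1/2 or better in this setup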

    Local-To-Global Agreement Expansion via the Variance Method

    Agreement expansion is concerned with set systems for which local assignments to the sets with almost perfect pairwise consistency (i.e., most overlapping pairs of sets agree on their intersections) imply the existence of a global assignment to the ground set (from which the sets are defined) that agrees with most of the local assignments. It is currently known that if a set system forms a two-sided or a partite high dimensional expander then agreement expansion is implied. However, it was not known whether agreement expansion is implied for one-sided high dimensional expanders. In this work we show that agreement expansion can be deduced for one-sided high dimensional expanders assuming that all the vertices' links (i.e., the neighborhoods of the vertices) are agreement expanders. Thus, for a one-sided high dimensional expander, agreement expansion of the large, complicated complex can be deduced from agreement expansion of its small, simple links. Using our result, we settle the open question of whether the well-studied Ramanujan complexes are agreement expanders. These complexes are neither partite nor two-sided high dimensional expanders. However, they are one-sided high dimensional expanders whose links are partite and hence are agreement expanders. Thus, our result implies that Ramanujan complexes are agreement expanders, answering the aforementioned open question affirmatively. The local-to-global agreement expansion that we prove is based on the variance method that we develop. We show that for a high dimensional expander, if we define a function on its top faces and consider its local averages over the links, then the variance of these local averages is much smaller than the global variance of the original function. This decrease in the variance enables us to construct one global agreement function that ties together all the local agreement functions.
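
    As a toy numerical illustration of the two quantities the variance method compares (a sketch under my own assumptions, not the paper's construction or proof), the Python snippet below places a random function on the triangles of a random graph, averages it over each vertex's link, and compares the variance of those local averages with the global variance of the function.

    # Toy numerical check only: variance of link-averages vs. global variance of
    # a random function on top faces.  The random graph stands in for a complex
    # and is not a high dimensional expander in general.
    import random
    from statistics import mean, pvariance

    random.seed(2)
    n, p = 40, 0.4
    edges = {(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < p}
    triangles = [(i, j, k) for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n)
                 if {(i, j), (j, k), (i, k)} <= edges]

    f = {t: random.gauss(0.0, 1.0) for t in triangles}       # function on the top faces

    local_averages = []
    for v in range(n):
        link_values = [f[t] for t in triangles if v in t]    # triangles through v
        if link_values:
            local_averages.append(mean(link_values))

    print("global variance:         ", pvariance(list(f.values())))
    print("variance of local means: ", pvariance(local_averages))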