Hardness Amplification of Optimization Problems
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products.
We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows:
If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships between α(n) and t(n), there is a distribution D′ over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D′.
As a consequence of the above theorem, we show hardness amplification for problems in various classes: NP-hard problems such as Max-Clique, Knapsack, and Max-SAT; problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even problems in TFNP such as Factoring and computing Nash equilibria.
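To make the direct product feasibility condition concrete, here is a toy sketch (our illustration, not the paper's construction) using Max-SAT: k instances on disjoint variable blocks can be concatenated into one instance, and an optimal assignment to the concatenation restricts to an optimal assignment for each block. All helper names are invented for this example.

```python
from itertools import product

# A CNF instance: list of clauses; a clause is a list of signed 1-based
# literals over variables 1..n (negative = negated).

def satisfied(clauses, assignment):
    """Number of clauses satisfied by assignment (dict var -> bool)."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def best_assignment(clauses, n):
    """Brute-force optimal Max-SAT assignment over variables 1..n."""
    best = None
    for bits in product([False, True], repeat=n):
        a = dict(enumerate(bits, start=1))
        if best is None or satisfied(clauses, a) > satisfied(clauses, best):
            best = a
    return best

def aggregate(instances, n):
    """Concatenate k instances on disjoint variable blocks of size n."""
    big = []
    for i, clauses in enumerate(instances):
        shift = i * n
        big.extend([[lit + shift if lit > 0 else lit - shift for lit in clause]
                    for clause in clauses])
    return big

# Two toy instances over variables 1..2 each.
inst1 = [[1], [-1, 2]]
inst2 = [[-1], [1, -2]]
n = 2
big = aggregate([inst1, inst2], n)
opt = best_assignment(big, 2 * n)

# Project the optimal joint assignment back onto each block: since the
# blocks share no variables, each projection is optimal for its instance.
proj1 = {v: opt[v] for v in (1, 2)}
proj2 = {v: opt[v + n] for v in (1, 2)}
assert satisfied(inst1, proj1) == satisfied(inst1, best_assignment(inst1, n))
assert satisfied(inst2, proj2) == satisfied(inst2, best_assignment(inst2, n))
```

The disjointness of the variable blocks is what makes the projection step work; for problems with shared structure, the aggregation must be designed more carefully.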
Cryptography from Information Loss
© Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan. Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is “lossy” reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into “useful” hardness, namely cryptography.

Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006).

We then proceed to show several consequences of lossy reductions:

1. We say that a language L has an f-reduction to a language L′ for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1, …, x_m), with each x_i ∈ {0,1}^n, and outputs a string z such that with high probability, L′(z) = f(L(x_1), L(x_2), …, L(x_m)). Suppose a language L has an f-reduction C to L′ that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds: f is the OR function, t ≤ m/100, and L′ is the same as L; f is the Majority function, and t ≤ m/100; or f is the OR function, t ≤ O(m log n), and the reduction has no error. This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which yield only auxiliary-input one-way functions.

2. Our second result is about the stronger notion of t-compressing f-reductions, reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, STOC 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).

Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, and which may be of independent interest.
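Since for a deterministic reduction C the mutual information I(X; C(X)) equals the output entropy H(C(X)), lossiness can be checked numerically in toy cases. The following sketch (our illustration, not from the paper) shows that aggregating m uniform bits through OR is extremely lossy, with t ≤ 1, while the identity map is maximally non-lossy:

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy (bits) of a dict mapping value -> probability."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def output_distribution(C, m):
    """Distribution of C(X) when X is uniform over {0,1}^m."""
    dist = {}
    for x in product([0, 1], repeat=m):
        y = C(x)
        dist[y] = dist.get(y, 0.0) + 2.0 ** -m
    return dist

m = 10
# For deterministic C, I(X; C(X)) = H(C(X)).
i_or = entropy(output_distribution(lambda x: max(x), m))  # OR of the bits
i_id = entropy(output_distribution(lambda x: x, m))       # identity map

# OR reveals well under 1 bit about the m input bits; identity reveals all m.
assert i_or <= 1.0
assert abs(i_id - m) < 1e-9
```

The same exhaustive computation works for Majority or any other small aggregation function, though for randomized reductions one must account for the reduction's own randomness rather than just the output entropy.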
New Query Lower Bounds for Submodular Function Minimization
We consider submodular function minimization in the oracle model: given black-box access to a submodular set function f : 2^[n] → R, find an element of arg min f using as few queries to f as possible. State-of-the-art algorithms succeed with Õ(n^2) queries [LeeSW15], yet the best-known lower bound has never been improved beyond n [Harvey08].

We provide a query lower bound of 2n for submodular function minimization, a 3n/2 - 2 query lower bound for the non-trivial minimizer of a symmetric submodular function, and an (n choose 2) query lower bound for the non-trivial minimizer of an asymmetric submodular function.

Our lower bound results from a connection between SFM lower bounds and a novel concept we term the cut dimension of a graph. Interestingly, this yields a 3n/2 - 2 cut-query lower bound for finding the global mincut in an undirected, weighted graph, but we also prove it cannot yield a lower bound better than n + 1 for s-t mincut, even in a directed, weighted graph.
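The object behind these cut-query bounds is the cut function of a weighted graph, f(S) = total weight of edges with exactly one endpoint in S, which is submodular and symmetric. A minimal numerical check of both facts (our illustration; the paper's hard instances are of course more delicate):

```python
import itertools
import random

def cut_value(edges, S):
    """Weight of edges crossing the vertex set S (the graph cut function)."""
    S = set(S)
    return sum(w for u, v, w in edges if (u in S) != (v in S))

# A small random weighted graph on n vertices.
random.seed(0)
n = 6
vertices = range(n)
edges = [(u, v, random.randint(1, 10))
         for u, v in itertools.combinations(vertices, 2)
         if random.random() < 0.5]

# Submodularity: f(A) + f(B) >= f(A | B) + f(A & B) for all A, B.
subsets = [set(s) for r in range(n + 1)
           for s in itertools.combinations(vertices, r)]
for A, B in itertools.product(subsets, repeat=2):
    assert (cut_value(edges, A) + cut_value(edges, B)
            >= cut_value(edges, A | B) + cut_value(edges, A & B))

# Symmetry: f(S) = f(V \ S), so the empty set is always a (trivial) minimizer;
# the global mincut is the minimum of f over non-trivial S.
mincut = min(cut_value(edges, S) for S in subsets if 0 < len(S) < n)
```

An SFM algorithm sees only query answers f(S), which is exactly why graph cuts serve as hard instances: many different graphs are consistent with few cut queries.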
Optimal Single-Choice Prophet Inequalities from Samples
We study the single-choice Prophet Inequality problem when the gambler is given access to samples. We show that the optimal competitive ratio of 1/2 can be achieved with a single sample from each distribution. When the distributions are identical, we show that for any constant ε > 0, O(n) samples from the distribution suffice to achieve the optimal competitive ratio (≈ 0.745) within a factor of (1 - ε), resolving an open problem of Correa, Dütting, Fischer, and Schewior.

Comment: Appears in Innovations in Theoretical Computer Science (ITCS) 2020
Aversion and attraction to harmful plant secondary compounds jointly shape the foraging ecology of a specialist herbivore.
Most herbivorous insect species are restricted to a narrow taxonomic range of host plant species. Herbivore species that feed on mustard plants and their relatives in the Brassicales have evolved highly efficient detoxification mechanisms that prevent toxic mustard oils from forming in the bodies of the animals. However, these mechanisms likely were not present during the initial stages of specialization on mustard plants ~100 million years ago. The herbivorous fly Scaptomyza nigrita (Drosophilidae) is a specialist on a single mustard species, bittercress (Cardamine cordifolia; Brassicaceae), and is in a fly lineage that evolved to feed on mustards only in the past 10-20 million years. In contrast to many mustard specialists, S. nigrita does not prevent formation of toxic breakdown products (mustard oils) arising from glucosinolates (GLS), the primary defensive compounds in mustard plants. Therefore, it is an appealing model for dissecting the early stages of host specialization. Because mustard oils actually form in the bodies of S. nigrita, we hypothesized that in lieu of a specialized detoxification mechanism, S. nigrita may mitigate exposure to high GLS levels within plant tissues through behavioral avoidance. Here, we report that jasmonic acid (JA) treatment increased GLS biosynthesis in bittercress, repelled adult female flies, and reduced larval growth. S. nigrita larval damage also induced foliar GLS, especially in apical leaves, which correspondingly displayed the least S. nigrita damage in controlled feeding trials and field surveys. Paradoxically, flies preferred to feed and oviposit on GLS-producing Arabidopsis thaliana despite larvae performing worse on these plants than on non-GLS-producing mutants. GLS may be feeding cues for S. nigrita despite their deterrent and defensive properties, which underscores the diverse relationship a mustard specialist has with its host when lacking a specialized means of mustard oil detoxification.
The Random-Query Model and the Memory-Bounded Coupon Collector
We study a new model of space-bounded computation, the random-query model. The model is based on a branching program over input variables x_1,…,x_n. In each time step, the branching program gets as input a random index i ∈ {1,…,n}, together with the input variable x_i (rather than querying an input variable of its choice, as in the case of a standard (oblivious) branching program). We motivate the new model in various ways and study time-space tradeoff lower bounds in this model.

Our main technical result is a quadratic time-space lower bound for zero-error computations in the random-query model, for XOR, Majority and many other functions. More precisely, a zero-error computation is a computation that stops with high probability and such that, conditioned on the event that the computation stopped, the output is correct with probability 1. We prove that for any Boolean function f: {0,1}^n → {0,1} with sensitivity k, any zero-error computation with time T and space S satisfies T ⋅ (S + log n) ≥ Ω(n⋅k). We note that the best time-space lower bounds for standard oblivious branching programs are only slightly super-linear, and improving these bounds is an important long-standing open problem.

To prove our results, we study a memory-bounded variant of the coupon-collector problem that seems to us of independent interest and, to the best of our knowledge, has not been studied before. We consider a zero-error version of the coupon-collector problem, in which the coupon collector may explicitly choose to stop upon being sure, with zero error, that all coupons have already been collected. We prove that any zero-error coupon collector that stops with high probability in time T and uses space S satisfies T ⋅ (S + log n) ≥ Ω(n^2), where n is the number of different coupons.
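For calibration, the classical full-memory collector (S = n bits, one per coupon) stops with zero error after about n⋅H_n ≈ n ln n draws, so T ⋅ (S + log n) is on the order of n² log n, comfortably above the Ω(n²) bound; the interesting regime in the paper is when S must be much smaller than n. A quick simulation of this baseline (our illustration, not the paper's construction):

```python
import random

def full_memory_collector(n, rng):
    """Zero-error collector with one bit of memory per coupon: stop
    exactly when every coupon has been seen. Returns the stopping time T."""
    seen = [False] * n          # S = n bits of state
    remaining, t = n, 0
    while remaining > 0:
        t += 1
        c = rng.randrange(n)    # a uniformly random coupon each step
        if not seen[c]:
            seen[c] = True
            remaining -= 1
    return t

rng = random.Random(1)
n = 200
runs = 200
avg_T = sum(full_memory_collector(n, rng) for _ in range(runs)) / runs

# Expected stopping time is n * H_n (the n-th harmonic number) ~ n ln n;
# for n = 200 this is roughly 1.2e3, and avg_T should land close to it.
expected = n * sum(1 / k for k in range(1, n + 1))
print(avg_T, expected)
```

The zero-error property here comes from the n bits of memory: the collector stops precisely when it is certain all coupons have appeared, never early.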
Local-To-Global Agreement Expansion via the Variance Method
Agreement expansion is concerned with set systems for which local assignments to the sets with almost perfect pairwise consistency (i.e., most overlapping pairs of sets agree on their intersections) implies the existence of a global assignment to the ground set (from which the sets are defined) that agrees with most of the local assignments.
It is currently known that if a set system forms a two-sided or a partite high dimensional expander then agreement expansion is implied. However, it was not known whether agreement expansion can be implied for one-sided high dimensional expanders.
In this work we show that agreement expansion can be deduced for one-sided high dimensional expanders assuming that all the vertices' links (i.e., the neighborhoods of the vertices) are agreement expanders. Thus, for one-sided high dimensional expanders, agreement expansion of a large complicated complex can be deduced from agreement expansion of its small simple links.
Using our result, we settle the open question of whether the well-studied Ramanujan complexes are agreement expanders. These complexes are neither partite nor two-sided high dimensional expanders. However, they are one-sided high dimensional expanders whose links are partite and hence are agreement expanders. Thus, our result implies that Ramanujan complexes are agreement expanders, answering the aforementioned open question affirmatively.
The local-to-global agreement expansion that we prove is based on the variance method that we develop. We show that for a high dimensional expander, if we define a function on its top faces and consider its local averages over the links, then the variance of these local averages is much smaller than the global variance of the original function. This decrease in variance enables us to construct one global agreement function that ties together all the local agreement functions.
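The variance contraction under local averaging can be seen already in a much simpler toy setting (our illustration, far simpler than the high-dimensional expander argument): put independent values on the edges of a complete graph and compare the variance of the vertex-local averages with the variance of the edge values themselves.

```python
import itertools
import random
import statistics

random.seed(2)
n = 40
vertices = range(n)
edges = list(itertools.combinations(vertices, 2))

# A function on the top faces (here: the edges of the complete graph).
f = {e: random.gauss(0.0, 1.0) for e in edges}

# Local average at a vertex v: the mean of f over the edges touching v
# (the edges of v's link, in the language of the abstract).
local_avg = [statistics.mean(f[e] for e in edges if v in e) for v in vertices]

global_var = statistics.pvariance(list(f.values()))
local_var = statistics.pvariance(local_avg)

# Averaging over links contracts the variance sharply: each local average
# pools n-1 independent edge values, so its fluctuations are much smaller.
print(local_var, "<<", global_var)
assert local_var < global_var / 4
```

In the complete graph the contraction follows from plain concentration of averages; the content of the variance method is that a comparable contraction holds for sparse high dimensional expanders, where each link sees only a small fraction of the faces.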