Average-Case Fine-Grained Hardness
We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL '94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure.
We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems -- namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) -- in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2-o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3-o(1)} time.
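For concreteness, the first of the three source problems can be stated and solved by brute force as follows. This is an illustrative sketch (not code from the paper): the quadratic baseline that the OV conjecture asserts cannot be beaten by truly-subquadratic algorithms.

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Brute-force Orthogonal Vectors (OV): given two lists of d-dimensional
    0/1-vectors, decide whether some a in A and b in B satisfy <a, b> = 0.
    Runs in O(n^2 * d) time; the OV conjecture asserts that no n^{2-eps}
    algorithm exists for dimension d = omega(log n)."""
    for a, b in product(A, B):
        if all(x * y == 0 for x, y in zip(a, b)):
            return True
    return False

# Example: (1,0,1) and (0,1,0) have inner product 0, so a pair exists.
A = [(1, 0, 1), (1, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(A, B))  # True
```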
Using the same techniques we also obtain a conditional average-case time hierarchy of functions.
Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO '03).
Distributed PCP Theorems for Hardness of Approximation in P
We present a new distributed model of probabilistically checkable proofs
(PCP). A satisfying assignment x = (x_1, ..., x_n) to a CNF formula φ is
shared between two parties, where Alice knows x_1, ..., x_{n/2}, Bob knows
x_{n/2+1}, ..., x_n, and both parties know φ. The goal is to have
Alice and Bob jointly write a PCP that x satisfies φ, while
exchanging little or no information. Unfortunately, this model as-is does not
allow for nontrivial query complexity. Instead, we focus on a non-deterministic
variant, where the players are helped by Merlin, a third party who knows all of
x.
Using our framework, we obtain, for the first time, PCP-like reductions from
the Strong Exponential Time Hypothesis (SETH) to approximation problems in P.
In particular, under SETH we show that there are no truly-subquadratic
approximation algorithms for Bichromatic Maximum Inner Product over
{0,1}-vectors, Bichromatic LCS Closest Pair over permutations, Approximate
Regular Expression Matching, and Diameter in Product Metric. All our
inapproximability factors are nearly-tight. In particular, for the first two
problems we obtain nearly-polynomial factors of 2^{(log n)^{1-o(1)}}; only
(1+o(1))-factor lower bounds (under SETH) were known before.
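As a point of reference for the first of these problems, the trivial exact algorithm runs in quadratic time; the result above says that, under SETH, even approximating its answer within a nearly-polynomial factor requires essentially the same time. A minimal sketch of that quadratic baseline (an illustration, not code from the paper):

```python
def bichromatic_max_ip(A, B):
    """Exact Bichromatic Maximum Inner Product over {0,1}-vectors:
    return max over a in A, b in B of <a, b>. Runs in O(n^2 * d) time;
    under SETH, even approximating this value within a large factor
    admits no truly-subquadratic algorithm."""
    return max(sum(x * y for x, y in zip(a, b)) for a in A for b in B)

A = [(1, 1, 0, 1), (0, 0, 1, 0)]
B = [(1, 0, 0, 1), (1, 1, 1, 1)]
print(bichromatic_max_ip(A, B))  # (1,1,0,1) . (1,1,1,1) = 3
```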
Blockchain moderated by empty blocks to reduce the energetic impact of crypto-moneys
While cryptocurrencies and blockchain applications continue to gain
popularity, their energy cost is evidently becoming unsustainable. In most
instances, the main cost comes from the required amount of energy for the
Proof-of-Work, and this cost is inherent to the design. In addition, useless
costs from discarded work (e.g., the so-called Forks) and lack of scalability
(in number of users and in rapid transactions) limit their practical
effectiveness.
In this paper, we present an innovative scheme which eliminates the nonce and
thus the burden of the Proof-of-Work which is the main cause of the energy
waste in cryptocurrencies such as Bitcoin. We prove that our scheme guarantees
a tunable and bounded average number of simultaneous minings, whatever the size
of the competing population; by making nonce-based techniques unnecessary, it
therefore achieves scalability without the cost of consuming a large volume of
energy. The technique used in the proof of our scheme is based on an analogy
with the analysis of a green leader election. The other difference from
Proof-of-Work schemes (beyond the suppression of the nonce field, which
triggers most of the waste) is the introduction of what we denote as "empty
blocks", whose aim is to call regular blocks following a staircase set of
values. Our scheme reduces the risk of Forks and provides tunable scalability
in the number of users and the speed of block generation.
We also prove using game theoretical analysis that our scheme is resilient to
unfair competitive investments (e.g., the "51 percent" attack) and to block nursing.
Comment: a preliminary version appeared in CryBlock 2019, the IEEE 2nd Workshop
on Cryptocurrencies and Blockchains for Distributed Systems (co-located with
INFOCOM 2019), April 29th, 2019, Paris, France, as "Green Mining: toward a less
energetic impact of cryptocurrencies", P. Jacquet and B. Mans, IEEE Press, 6
pages.
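To make concrete what the scheme above eliminates, here is a minimal sketch of the classical nonce-grinding Proof-of-Work loop (a generic Bitcoin-style illustration, not the authors' construction): the miner expends energy scanning nonces until a hash falls below a difficulty target, and it is exactly this exhaustive search that a nonce-free design avoids.

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty_bits: int) -> int:
    """Classical nonce grinding: find a nonce such that
    SHA-256(block_data || nonce) begins with `difficulty_bits` zero bits.
    Expected work is 2**difficulty_bits hash evaluations -- the energy
    cost that nonce-free schemes aim to suppress."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Low difficulty for illustration; verification takes a single hash.
nonce = proof_of_work(b"example block", difficulty_bits=12)
print(nonce)
```

Note the asymmetry: finding the nonce is expensive, while checking it is one hash evaluation, which is what makes the work a verifiable (but wasteful) certificate.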
On the Hardness of Average-Case k-SUM
In this work, we show the first worst-case to average-case reduction for the classical k-SUM problem. A k-SUM instance is a collection of m integers, and the goal of the k-SUM problem is to find a subset of k integers that sums to 0. In the average-case version, the m elements are chosen uniformly at random from some interval [-u,u].
We consider the total setting where m is sufficiently large (with respect to u and k), so that we are guaranteed (with high probability) that solutions must exist. In particular, m = u^{Ω(1/k)} suffices for totality. Much of the appeal of k-SUM, in particular its connections to problems in computational geometry, extends to the total setting.
The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum, Kalai, and Wasserman), and achieves a running time of u^{Θ(1/log k)} when m = u^{Θ(1/log k)}. This beats the known (conditional) lower bounds for worst-case k-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound, by showing a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case k-SUM on m elements in time u^{o(1/log k)} will give a super-polynomial improvement in the complexity of algorithms for lattice problems.
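For illustration, an average-case k-SUM instance in the total setting, together with the naive exhaustive solver, can be sketched as follows (a toy sketch under the paper's setup; the concrete parameter values below are chosen only to keep the example small).

```python
import random
from itertools import combinations

def average_case_ksum_instance(m, u, k=None, seed=0):
    """Sample an average-case k-SUM instance: m integers drawn uniformly
    at random from the interval [-u, u]. In the total setting m is taken
    large enough (roughly u^{Omega(1/k)}) that a zero-sum k-subset
    exists with high probability."""
    rng = random.Random(seed)
    return [rng.randint(-u, u) for _ in range(m)]

def solve_ksum(elems, k):
    """Naive O(m^k) search: return indices of a k-subset summing to 0,
    or None if this particular sample happens to have no solution."""
    for idx in combinations(range(len(elems)), k):
        if sum(elems[i] for i in idx) == 0:
            return idx
    return None

elems = average_case_ksum_instance(m=40, u=100, k=3)
sol = solve_ksum(elems, 3)
if sol is not None:
    assert sum(elems[i] for i in sol) == 0
```

The result above says that, for average-case instances of this form, beating the u^{Θ(1/log k)} running time would imply a super-polynomial speedup for lattice problems.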