
    Computationally Data-Independent Memory Hard Functions

    Memory hard functions (MHFs) are an important cryptographic primitive used to design egalitarian proofs of work and to construct moderately expensive key-derivation functions resistant to brute-force attacks. Broadly speaking, MHFs can be divided into two categories: data-dependent memory hard functions (dMHFs) and data-independent memory hard functions (iMHFs). iMHFs are resistant to certain side-channel attacks, as the memory access pattern induced by the honest evaluation algorithm is independent of the potentially sensitive input, e.g., a password. While dMHFs are potentially vulnerable to side-channel attacks (the induced memory access pattern might leak useful information to a brute-force attacker), they can achieve higher cumulative memory complexity (CMC) than an iMHF. In particular, any iMHF that can be evaluated in N steps on a sequential machine has CMC at most O((N^2 log log N)/log N). By contrast, the dMHF scrypt achieves maximal CMC Ω(N^2), though the CMC of scrypt would be reduced to just O(N) after a side-channel attack. In this paper, we introduce the notion of computationally data-independent memory hard functions (ciMHFs). Intuitively, we require that the memory access pattern induced by the (randomized) ciMHF evaluation algorithm appears to be data-independent from the standpoint of a computationally bounded eavesdropping attacker, even if the attacker selects the initial input. We then ask whether it is possible to circumvent the known upper bound for iMHFs and build a ciMHF with CMC Ω(N^2). Surprisingly, we answer the question in the affirmative when the ciMHF evaluation algorithm is executed on a two-tiered memory architecture (RAM/cache). We introduce the notion of a k-restricted dynamic graph to quantify the continuum between unrestricted dMHFs (k = N) and iMHFs (k = 1). For any ε > 0 we show how to construct a k-restricted dynamic graph with k = Ω(N^(1-ε)) that provably achieves maximum cumulative pebbling cost Ω(N^2). We can use k-restricted dynamic graphs to build a ciMHF provided that the cache is large enough to hold k hash outputs and the dynamic graph satisfies a certain property that we call "amenable to shuffling". In particular, we prove that the induced memory access pattern is indistinguishable to a polynomial-time attacker who can monitor the locations of read/write requests to RAM, but not cache. We also show that when k = o(N^(1/log log N)), any k-restricted graph with constant indegree has cumulative pebbling cost o(N^2). Our results almost completely characterize the spectrum of k-restricted dynamic graphs.
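
    As a toy illustration of the dMHF/iMHF distinction (our sketch, not code from the paper), the following Python snippet contrasts a data-dependent access pattern, where each address is derived from the current hash label and therefore varies with the password, with a data-independent one, where the address schedule is fixed in advance. The round structure and address derivation are invented for illustration only.

        import hashlib

        def H(x: bytes) -> bytes:
            return hashlib.sha256(x).digest()

        def dmhf_access_pattern(password: bytes, n: int = 8):
            # Data-dependent: the next address depends on the current label,
            # so the observable access pattern leaks information about the input.
            labels, pattern = [H(password)], []
            for i in range(1, n):
                addr = int.from_bytes(labels[-1][:4], "big") % i
                pattern.append(addr)
                labels.append(H(labels[addr] + labels[-1]))
            return pattern

        def imhf_access_pattern(password: bytes, n: int = 8):
            # Data-independent: a fixed address schedule, identical for every input.
            labels, pattern = [H(password)], []
            for i in range(1, n):
                addr = i // 2
                pattern.append(addr)
                labels.append(H(labels[addr] + labels[-1]))
            return pattern

        print(dmhf_access_pattern(b"hunter2"))  # varies with the password
        print(dmhf_access_pattern(b"letmein"))
        print(imhf_access_pattern(b"hunter2"))  # identical for every password
        print(imhf_access_pattern(b"letmein"))

    A ciMHF sits in between: its addresses may depend on the data, but only in a way that a computationally bounded observer of RAM traffic cannot distinguish from an input-independent pattern.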

    Approximating Cumulative Pebbling Cost Is Unique Games Hard

    The cumulative pebbling complexity of a directed acyclic graph $G$ is defined as $\mathsf{cc}(G) = \min_P \sum_i |P_i|$, where the minimum is taken over all legal (parallel) black pebblings of $G$ and $|P_i|$ denotes the number of pebbles on the graph during round $i$. Intuitively, $\mathsf{cc}(G)$ captures the amortized Space-Time complexity of pebbling $m$ copies of $G$ in parallel. The cumulative pebbling complexity of a graph $G$ is of particular interest in the field of cryptography, as $\mathsf{cc}(G)$ is tightly related to the amortized Area-Time complexity of the Data-Independent Memory-Hard Function (iMHF) $f_{G,H}$ [AS15] defined using a constant-indegree directed acyclic graph (DAG) $G$ and a random oracle $H(\cdot)$. A secure iMHF should have amortized Space-Time complexity as high as possible, e.g., to deter a brute-force password attacker who wants to find $x$ such that $f_{G,H}(x) = h$. Thus, to analyze the (in)security of a candidate iMHF $f_{G,H}$, it is crucial to estimate the value $\mathsf{cc}(G)$, but currently, upper and lower bounds for leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou recently showed that it is $\mathsf{NP}$-Hard to compute $\mathsf{cc}(G)$, but their techniques do not even rule out an efficient $(1+\varepsilon)$-approximation algorithm for any constant $\varepsilon > 0$. We show that for any constant $c > 0$, it is Unique Games hard to approximate $\mathsf{cc}(G)$ to within a factor of $c$. (See the paper for the full abstract.)
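
    As a toy illustration of the definition (our worked example, not from the paper), consider the directed path $P_n$ on nodes $v_1 \to v_2 \to \cdots \to v_n$:

        % Pebbling rules: a pebble may be placed on v only when all of v's
        % parents hold pebbles; pebbles may be removed at any time; the sink
        % v_n must eventually be pebbled.
        % Strategy: in round i, place a pebble on v_i and discard the pebble
        % on v_{i-1}; every round holds at most two pebbles, and the path
        % forces n sequential placement rounds with at least one pebble each:
        \[
          n \;\le\; \mathsf{cc}(P_n) \;=\; \min_P \sum_i |P_i| \;\le\; 2n,
          \qquad\text{so}\qquad \mathsf{cc}(P_n) = \Theta(n).
        \]

    Graphs used in iMHF candidates are designed so that no pebbling can keep the per-round pebble count this small, pushing $\mathsf{cc}(G)$ toward the $\Omega(n^2/\log n)$ regime.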

    The Power of Negative Reasoning

    Semialgebraic proof systems have been studied extensively in proof complexity since the late 1990s to understand the power of Gröbner basis computations, linear and semidefinite programming hierarchies, and other methods. Such proof systems are defined alternately with only the original variables of the problem and with special formal variables for positive and negative literals, but there seems to have been no study of how these different definitions affect the power of the proof systems. We show for Nullstellensatz, polynomial calculus, Sherali-Adams, and sums-of-squares that adding formal variables for negative literals makes the proof systems exponentially stronger, with respect to the number of terms in the proofs. These separations are witnessed by CNF formulas that are easy for resolution, which establishes that polynomial calculus, Sherali-Adams, and sums-of-squares cannot efficiently simulate resolution without having access to variables for negative literals.
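
    To make the role of negative-literal variables concrete, here is a standard encoding example (ours, following the usual conventions, not taken from the paper). Over Boolean variables, the clause $C = x \lor \lnot y$ is falsified exactly when $x = 0$ and $y = 1$:

        % Without negative literals, C is encoded by the polynomial constraint
        \[
          (1 - x)\,y = 0 .
        \]
        % With a formal twin variable \bar{x} for the negative literal, the
        % clause becomes a single monomial, linked to x by a twin axiom:
        \[
          \bar{x}\,y = 0, \qquad x + \bar{x} - 1 = 0 .
        \]
        % A wide clause with k negated variables expands to 2^k terms in the
        % first encoding but stays one monomial in the second; this term-count
        % gap is the kind of quantity the separations concern.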

    Depth-Robust Graphs and Their Cumulative Memory Complexity

    Data-Independent Memory Hard Functions (iMHFs) are finding a growing number of applications in security, especially in the domain of password hashing. An important property of a concrete iMHF is specified by fixing a directed acyclic graph (DAG) $G_n$ on $n$ nodes. The quality of that iMHF is then captured by the following two pebbling complexities of $G_n$:
    - The parallel cumulative pebbling complexity $\Pi^{\parallel}_{cc}(G_n)$ must be as high as possible (to ensure that the amortized cost of computing the function on dedicated hardware is dominated by the cost of memory).
    - The sequential space-time pebbling complexity $\Pi_{st}(G_n)$ should be as close as possible to $\Pi^{\parallel}_{cc}(G_n)$ (to ensure that using many cores in parallel and amortizing over many instances does not give much of an advantage).
    In this paper we construct a family of DAGs with best possible parameters in an asymptotic sense, i.e., where $\Pi^{\parallel}_{cc}(G_n) = \Omega(n^2/\log(n))$ (which matches a known upper bound) and $\Pi_{st}(G_n)$ is within a constant factor of $\Pi^{\parallel}_{cc}(G_n)$. Our analysis relies on a new connection between the pebbling complexity of a DAG and its depth-robustness (DR), a well-studied combinatorial property. We show that high DR is sufficient for high $\Pi^{\parallel}_{cc}$. Alwen and Blocki (CRYPTO'16) showed that high DR is necessary and so, together, these results fully characterize DAGs with high $\Pi^{\parallel}_{cc}$ in terms of DR. Complementing these results, we provide new upper and lower bounds on the $\Pi^{\parallel}_{cc}$ of several important candidate iMHFs from the literature. We give the first lower bounds on the memory hardness of the Catena and Balloon Hashing functions in a parallel model of computation and we give the first lower bounds of any kind for (a version of) Argon2i. Finally we describe a new class of pebbling attacks improving on those of Alwen and Blocki (CRYPTO'16). By instantiating these attacks we upper-bound the $\Pi^{\parallel}_{cc}$ of the Password Hashing Competition winner Argon2i and one of the Balloon Hashing functions by $O(n^{1.71})$. We also show an upper bound of $O(n^{1.625})$ for the Catena functions and the two remaining Balloon Hashing functions.
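
    For intuition, a DAG $G$ is $(e, d)$-depth-robust if deleting any $e$ nodes leaves a directed path on $d$ nodes. The following brute-force checker (our illustration; real constructions have millions of nodes and need indirect arguments) makes the definition concrete:

        from itertools import combinations

        def longest_path(n, edges, removed):
            # Longest path, counted in nodes, among the surviving nodes.
            # Nodes are 0..n-1 in topological order; every edge goes low -> high.
            best = [0 if v in removed else 1 for v in range(n)]
            for u, v in sorted(edges):
                if u not in removed and v not in removed:
                    best[v] = max(best[v], best[u] + 1)
            return max(best)

        def is_depth_robust(n, edges, e, d):
            return all(longest_path(n, edges, set(S)) >= d
                       for S in combinations(range(n), e))

        # The path 0 -> 1 -> ... -> 7 is not (1, 7)-depth-robust: deleting a
        # middle node shatters it.
        path = [(i, i + 1) for i in range(7)]
        print(is_depth_robust(8, path, 1, 7))      # False

        # The complete DAG on 4 nodes is (1, 3)-depth-robust: any 3 surviving
        # nodes still form a path.
        complete = [(i, j) for i in range(4) for j in range(i + 1, 4)]
        print(is_depth_robust(4, complete, 1, 3))  # True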

    Static-Memory-Hard Functions, and Modeling the Cost of Space vs. Time

    A series of recent research results, starting with Alwen and Serbinenko (STOC 2015), has deepened our understanding of the notion of memory-hardness in cryptography — a useful property of hash functions for deterring large-scale password-cracking attacks — and has shown memory-hardness to have intricate connections with the theory of graph pebbling. Definitions of memory-hardness are not yet unified in this somewhat nascent field, however, and the guarantees proven to date are with respect to a range of proposed definitions. In this paper, we observe two significant and practical considerations that are not analyzed by existing models of memory-hardness, and propose new models to capture them, accompanied by constructions based on new hard-to-pebble graphs. Our contribution is two-fold, as follows. First, existing measures of memory-hardness only account for dynamic memory usage (i.e., memory read/written at runtime), and do not consider static memory usage (e.g., memory on disk). Among other things, this means that memory requirements considered by prior models are inherently upper-bounded by a hash function's runtime; in contrast, counting static memory would potentially allow quantification of much larger memory requirements, decoupled from runtime. We propose a new definition of static-memory-hard function (SHF) which takes static memory into account: we model static memory usage by oracle access to a large preprocessed string, which may be considered part of the hash function description. Static memory requirements are complementary to dynamic memory requirements: neither can replace the other, and to deter large-scale password-cracking attacks, a hash function will benefit from being both dynamic-memory-hard and static-memory-hard. We give two SHF constructions based on pebbling. To prove static-memory-hardness, we define a new pebble game ("black-magic pebble game") and new graph constructions with optimal complexity under our proposed measure. Moreover, we provide a prototype implementation of our first SHF construction (which is based on pebbling of a simple "cylinder" graph), providing an initial demonstration of practical feasibility for a limited range of parameter settings. Second, existing memory-hardness models implicitly assume that the costs of space and time are more or less on par: they consider only linear ratios between the costs of time and space. We propose a new model to capture nonlinear time-space trade-offs: e.g., how is the adversary impacted when space is quadratically more expensive than time? We prove that nonlinear tradeoffs can in fact cause adversaries to employ different strategies from linear tradeoffs. Finally, as an additional contribution of independent interest, we present an asymptotically tight graph construction that achieves the best possible space complexity up to log log n factors for an existing memory-hardness measure called cumulative complexity in the sequential pebbling model.
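
    The following minimal sketch (ours, under a drastically simplified model) conveys the flavor of static memory usage as oracle access to a large preprocessed string: evaluation repeatedly queries positions of a table R that was computed once and stored, so the memory requirement is decoupled from any single evaluation's runtime. The table size, query count, and address derivation are placeholders, not the paper's construction.

        import hashlib

        def H(x: bytes) -> bytes:
            return hashlib.sha256(x).digest()

        # One-time preprocessing: a large static string R, modeled as a table
        # of hash labels (kept tiny here so the sketch runs instantly).
        R = [H(i.to_bytes(8, "big")) for i in range(1 << 10)]

        def shf(x: bytes, queries: int = 16) -> bytes:
            # Each evaluation touches `queries` positions of R, so the
            # evaluator needs oracle access to the preprocessed string.
            acc = H(x)
            for _ in range(queries):
                idx = int.from_bytes(acc[:4], "big") % len(R)
                acc = H(acc + R[idx])
            return acc

        print(shf(b"correct horse battery staple").hex())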

    On Locally Decodable Codes in Resource Bounded Channels

    Constructions of locally decodable codes (LDCs) have one of two undesirable properties: low rate or high locality (polynomial in the length of the message). In settings where the encoder/decoder have already exchanged cryptographic keys and the channel is a probabilistic polynomial time (PPT) algorithm, it is possible to circumvent these barriers and design LDCs with constant rate and small locality. However, the assumption that the encoder/decoder have exchanged cryptographic keys is often prohibitive. We thus consider the problem of designing explicit and efficient LDCs in settings where the channel is slightly more constrained than the encoder/decoder with respect to some resource, e.g., space or (sequential) time. Given an explicit function f that the channel cannot compute, we show how the encoder can transmit a random secret key to the local decoder using f(·) and a random oracle H(·). We then bootstrap the private key LDC construction of Ostrovsky, Pandey and Sahai (ICALP, 2007), thereby answering an open question posed by Guruswami and Smith (FOCS 2010) of whether such bootstrapping techniques are applicable to LDCs in channel models weaker than just PPT algorithms. Specifically, in the random oracle model we show how to construct explicit constant-rate LDCs with locality polylogarithmic in the security parameter against various resource-constrained channels.
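
    A heavily simplified sketch of how we read the key-transmission idea (ours; the function f, chain length, and XOR masking below are placeholder choices, not the paper's construction): both endpoints can afford to evaluate f, the resource-bounded channel cannot, so H(f(seed)) is effectively random from the channel's perspective and can mask the key material that the private-key LDC of Ostrovsky, Pandey and Sahai needs.

        import hashlib, os

        def H(x: bytes) -> bytes:            # stand-in for the random oracle
            return hashlib.sha256(x).digest()

        def f(seed: bytes, t: int = 100_000) -> bytes:
            # Placeholder "hard" function: a long sequential hash chain that a
            # channel with bounded sequential time cannot finish.
            acc = seed
            for _ in range(t):
                acc = H(acc)
            return acc

        def send_key(sk: bytes):
            seed = os.urandom(16)
            pad = H(f(seed))                 # beyond the channel's budget
            return seed, bytes(a ^ b for a, b in zip(sk, pad))

        def recover_key(seed: bytes, masked: bytes) -> bytes:
            pad = H(f(seed))                 # the decoder can afford this
            return bytes(a ^ b for a, b in zip(masked, pad))

        sk = os.urandom(32)
        seed, masked = send_key(sk)
        assert recover_key(seed, masked) == sk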

    Proofs of Catalytic Space

    Proofs of space (PoS) [Dziembowski et al., CRYPTO'15] are proof systems where a prover can convince a verifier that he "wastes" disk space. PoS were introduced as a more ecological and economical replacement for proofs of work, which are currently used to secure blockchains like Bitcoin. In this work we investigate extensions of PoS which allow the prover to embed useful data into the dedicated space, which can later be recovered. Our first contribution is a security proof for the original PoS from CRYPTO'15 in the random oracle model (the original proof only applied to a restricted class of adversaries which can store a subset of the data an honest prover would store). When this PoS is instantiated with recent constructions of maximally depth-robust graphs, our proof implies basically optimal security. As a second contribution we show three different extensions of this PoS where useful data can be embedded into the space required by the prover. Our security proof for the PoS extends (non-trivially) to these constructions. We discuss how some of these variants can be used as proofs of catalytic space (PoCS), a notion we put forward in this work, which is basically a PoS where most of the space required by the prover can be used to back up useful data. Finally, we discuss how one of the extensions is a candidate construction for a proof of replication (PoR), a proof system recently suggested in the Filecoin whitepaper.
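
    For intuition, the core of such a PoS is graph labeling: the prover fills disk with labels computed from a random oracle over a (depth-robust) DAG, and the verifier spot-checks label consistency. The toy sketch below (ours) omits the Merkle commitment, challenge protocol, and parameter choices of the real scheme; the DAG is an arbitrary stand-in.

        import hashlib, random

        def H(*parts: bytes) -> bytes:
            return hashlib.sha256(b"".join(parts)).digest()

        n = 1 << 10
        parents = {v: [v - 1, v // 2] if v > 0 else [] for v in range(n)}  # toy DAG

        def prover_fill(seed: bytes):
            # The labels are the "wasted" space the prover dedicates.
            labels = {}
            for v in range(n):
                labels[v] = H(seed, v.to_bytes(4, "big"),
                              *(labels[p] for p in parents[v]))
            return labels

        def spot_check(seed: bytes, labels, checks: int = 20) -> bool:
            # Recompute a few labels from their parents and compare.
            for v in random.sample(range(n), checks):
                expect = H(seed, v.to_bytes(4, "big"),
                           *(labels[p] for p in parents[v]))
                if labels[v] != expect:
                    return False
            return True

        labels = prover_fill(b"statement")
        print(spot_check(b"statement", labels))  # True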

    Practical Graphs for Optimal Side-Channel Resistant Memory-Hard Functions

    A memory-hard function (MHF) $f_n$ with parameter $n$ can be computed in sequential time and space $n$. Simultaneously, a high amortized parallel area-time complexity (aAT) is incurred per evaluation. In practice, MHFs are used to limit the rate at which an adversary (using a custom computational device) can evaluate a security-sensitive function that still occasionally needs to be evaluated by honest users (using an off-the-shelf general-purpose device). The most prevalent examples of such sensitive functions are Key Derivation Functions (KDFs) and password hashing algorithms, where rate limits help mitigate off-line dictionary attacks. As the honest users' inputs to these functions are often (low-entropy) passwords, special attention is given to a class of side-channel resistant MHFs called iMHFs. Essentially all iMHFs can be viewed as some mode of operation (making $n$ calls to some round function) given by a directed acyclic graph (DAG) with very low indegree. Recently, a combinatorial property of a DAG has been identified (called "depth-robustness") which results in good provable security for an iMHF based on that DAG. Depth-robust DAGs have also proven useful in other cryptographic applications. Unfortunately, until now, all known very depth-robust DAGs are impractically complicated and little is known about their exact (i.e., non-asymptotic) depth-robustness, both in theory and in practice. In this work we build and analyze (both formally and empirically) several exceedingly simple and efficient-to-navigate practical DAGs for use in iMHFs and other applications. For each DAG we:
    - Prove that its depth-robustness is asymptotically maximal.
    - Prove bounds on its exact depth-robustness at least 3 orders of magnitude better than known bounds for other practical iMHFs.
    - Implement and empirically evaluate its depth-robustness and aAT against a variety of state-of-the-art (and several new) depth-reduction and low-aAT attacks.
    We find that, against all attacks, the new DAGs perform significantly better in practice than Argon2i, the most widely deployed iMHF in practice. Along the way we also improve the best known empirical attacks on the aAT of Argon2i by implementing and testing several heuristic versions of a (hitherto purely theoretical) depth-reduction attack. Finally, we demonstrate the practicality of our constructions by modifying the Argon2i code base to use one of the new high-aAT DAGs. Experimental benchmarks on a standard off-the-shelf CPU show that the new modifications do not adversely affect the impressive throughput of Argon2i (despite seemingly enjoying significantly higher aAT).
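
    To make the objects of study concrete, the sketch below (ours) builds an Argon2i-style data-independent DAG, where node i's parents are i-1 and a pseudorandomly chosen earlier node, and measures the depth that survives deleting a node set, which is the quantity depth-reduction attacks try to drive down. The edge distribution and deletion pattern are illustrative, not the paper's constructions.

        import hashlib

        def argon2i_like_parents(n: int, salt: bytes = b"salt"):
            parents = {0: []}
            for i in range(1, n):
                h = hashlib.sha256(salt + i.to_bytes(4, "big")).digest()
                r = int.from_bytes(h[:4], "big") % i   # data-INdependent choice
                parents[i] = [i - 1, r]
            return parents

        def depth_after_removal(parents, removed):
            # Longest surviving path, counted in nodes.
            depth = {}
            for v in sorted(parents):
                if v in removed:
                    depth[v] = 0
                else:
                    depth[v] = 1 + max((depth[p] for p in parents[v]
                                        if p not in removed), default=0)
            return max(depth.values())

        n = 1 << 12
        parents = argon2i_like_parents(n)
        print(depth_after_removal(parents, set()))                 # n: the chain 0 -> 1 -> ...
        print(depth_after_removal(parents, set(range(0, n, 8))))   # noticeably shallower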