    A New Distribution-Sensitive Secure Sketch and Popularity-Proportional Hashing

    Motivated by typo correction in password authentication, we investigate cryptographic error-correction of secrets in settings where the distribution of secrets is a priori (approximately) known. We refer to this as the distribution-sensitive setting. We design a new secure sketch called the layer-hiding hash (LHH) that offers the best security to date. Roughly speaking, we show that LHH saves an additional log H_0(W) bits of entropy compared to the recent layered sketch construction due to Fuller, Reyzin, and Smith (FRS), where H_0(W) is the size of the support of the distribution W. When supports are large, as with passwords, our new construction offers a substantial security improvement. We provide two new constructions of typo-tolerant password-based authentication schemes. The first combines an LHH or FRS sketch with a standard slow-to-compute hash function; the second avoids secure sketches entirely, correcting typos instead by checking all nearby passwords. Unlike the previous such brute-force-checking construction, due to Chatterjee et al., our new construction uses a hash function whose run-time is proportional to the popularity of the password, forcing a longer hashing time on more popular, lower-entropy passwords. We refer to this as popularity-proportional hashing (PPH). We then introduce a framework for comparing different typo-tolerant authentication approaches. We show that PPH always offers a better time/security trade-off than the LHH and FRS constructions, and for certain distributions outperforms the Chatterjee et al. construction. Elsewhere, this latter construction offers the best trade-off. In aggregate, our results suggest that the best known secure sketches are still inferior to simpler brute-force-based approaches.
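
    As a concrete illustration of the PPH idea, the sketch below scales the iteration count of a standard slow hash with an estimated password probability. The probability model, scaling rule, and parameters are illustrative assumptions, not the paper's construction.

        import hashlib

        def pph_hash(password: str, salt: bytes, est_prob: float) -> bytes:
            # Popularity-proportional hashing (PPH) sketch: hashing time grows
            # with the a-priori probability of the password, so guesses at
            # popular (low-entropy) passwords cost the attacker more per guess.
            # est_prob is assumed to come from a model of the password
            # distribution, e.g. one trained on leaked password data.
            floor_prob = 2.0 ** -40  # passwords rarer than this get the base cost
            base_iters = 10_000
            ratio = max(est_prob, floor_prob) / floor_prob
            # Cost scales with popularity, capped for usability (the exact
            # scaling rule is an illustrative choice).
            iters = max(base_iters, min(int(base_iters * ratio), 10_000_000))
            return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iters)

    Verification recomputes the estimate from the submitted password itself, so honest logins with rare passwords stay fast while brute-force sweeps over popular passwords slow down.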

    On the Correspondence between Display Postulates and Deep Inference in Nested Sequent Calculi for Tense Logics

    We consider two styles of proof calculi for a family of tense logics, presented in a formalism based on nested sequents. A nested sequent can be seen as a tree of traditional single-sided sequents. Our first style of calculi is what we call "shallow calculi", where inference rules are applied only at the root node of a nested sequent. Our shallow calculi are extensions of Kashima's calculus for tense logic and share an essential characteristic with display calculi, namely the presence of structural rules called "display postulates". Shallow calculi enjoy a simple cut-elimination procedure, but are unsuitable for proof search due to the presence of display postulates and other structural rules. The second style of calculi uses deep inference, whereby inference rules can be applied at any node in a nested sequent. We show that, for a range of extensions of tense logic, the two styles of calculi are equivalent, and that there is a natural proof-theoretic correspondence between display postulates and deep inference. The deep-inference calculi enjoy the subformula property and have no display postulates or other structural rules, making them a better framework for proof search.
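
    To make the tree reading of nested sequents and the shallow/deep distinction concrete, here is a minimal sketch of the data structure and the two rule-application disciplines; the representation and rule interface are illustrative, not the paper's calculus.

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Nested:
            # A nested sequent: a multiset of formulas together with child
            # sequents, i.e. a tree whose every node is a one-sided sequent.
            formulas: List[str] = field(default_factory=list)
            children: List["Nested"] = field(default_factory=list)

        def apply_shallow(rule, root: Nested) -> Optional[Nested]:
            # Shallow calculi: a rule only ever fires at the root node, so
            # display postulates are needed to move material up to the root.
            return rule(root)

        def apply_deep(rule, node: Nested) -> Optional[Nested]:
            # Deep inference: a rule may fire at any node, so we search the
            # whole tree for a position where it applies; no display moves
            # are needed.
            out = rule(node)
            if out is not None:
                return out
            for i, child in enumerate(node.children):
                sub = apply_deep(rule, child)
                if sub is not None:
                    kids = list(node.children)
                    kids[i] = sub
                    return Nested(list(node.formulas), kids)
            return None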

    Data-Independent Memory Hard Functions: New Attacks and Stronger Constructions

    Memory-hard functions (MHFs) are a key cryptographic primitive underlying the design of moderately expensive password hashing algorithms and egalitarian proofs of work. Over the past few years, several increasingly stringent goals for an MHF have been proposed, including the requirements that the MHF have high sequential space-time (ST) complexity, parallel space-time complexity, amortized area-time (aAT) complexity, and sustained space complexity. Data-independent memory-hard functions (iMHFs) are of special interest in the context of password hashing as they naturally resist side-channel attacks. An iMHF can be specified using a directed acyclic graph (DAG) $G$ with $N = 2^n$ nodes and low indegree, and its complexity can be analyzed using a pebbling game. Recently, Alwen et al. [CCS 17] constructed a DAG called DRSample that has aAT complexity at least $\Omega(N^2/\log N)$. Asymptotically, DRSample outperformed all prior iMHF constructions, including Argon2i, winner of the password hashing competition (aAT cost $\mathcal{O}(N^{1.767})$), though the constants in these bounds are poorly understood. We show that the greedy pebbling strategy of Boneh et al. [ASIACRYPT 16] is particularly effective against DRSample, e.g., the aAT cost is $\mathcal{O}(N^2/\log N)$. In fact, our empirical analysis reverses the prior conclusion of Alwen et al. that DRSample provides stronger resistance to known pebbling attacks for practical values of $N \leq 2^{24}$. We construct a new iMHF candidate (DRSample+BRG) by using the bit-reversal graph to extend DRSample. We then prove that the construction is asymptotically optimal under every MHF criterion, and we empirically demonstrate that our iMHF provides the best resistance to known pebbling attacks. For example, we show that any parallel pebbling attack either has aAT cost $\omega(N^2)$ or requires at least $\Omega(N)$ steps with $\Omega(N/\log N)$ pebbles on the DAG. This makes our construction the first practical iMHF with a strong sustained space-complexity guarantee and immediately implies that any parallel pebbling has aAT complexity $\Omega(N^2/\log N)$. We also prove that any sequential pebbling (including the greedy pebbling attack) has aAT cost $\Omega(N^2)$ and, if a plausible conjecture holds, any parallel pebbling has aAT cost $\Omega(N^2 \log\log N/\log N)$, the best possible bound for an iMHF.
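
    For intuition about the DRSample+BRG construction, the sketch below builds the edge set of a bit-reversal graph, the gadget used here to extend DRSample; the indexing conventions are illustrative.

        def bit_reverse(i: int, n: int) -> int:
            # Reverse the n-bit binary representation of i.
            r = 0
            for _ in range(n):
                r = (r << 1) | (i & 1)
                i >>= 1
            return r

        def brg_edges(n: int):
            # Edge set of a bit-reversal graph on 2 * 2^n nodes: a path
            # through all nodes, plus an edge into each second-half node
            # N + i from the first-half node whose index is the n-bit
            # reversal of i. Appending such a layer to DRSample is the
            # DRSample+BRG idea sketched above.
            N = 2 ** n
            edges = [(v, v + 1) for v in range(2 * N - 1)]           # path edges
            edges += [(bit_reverse(i, n), N + i) for i in range(N)]  # reversal edges
            return edges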

    On the relationship between functional encryption, obfuscation, and fully homomorphic encryption

    We investigate the relationship between Functional Encryption (FE) and Fully Homomorphic Encryption (FHE), demonstrating that, under certain assumptions, a Functional Encryption scheme supporting evaluation on two ciphertexts implies Fully Homomorphic Encryption. We first introduce the notion of Randomized Functional Encryption (RFE), a generalization of Functional Encryption to randomized functionalities that is of interest in its own right, and show how to construct an RFE from a (standard) semantically secure FE. For this we define the notion of entropically secure FE and use it as an intermediate step in the construction. Finally, we show that RFEs constructed in this way can be used to construct FHE schemes, thereby establishing a relation between the FHE and FE primitives. We conclude the paper by recasting the construction of RFE schemes in the context of obfuscation.
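
    A minimal sketch of the RFE syntax described above, written as a Python interface; the method names and types are illustrative, and the security definitions (including entropic security) are not captured here.

        from typing import Any, Callable, Protocol, Tuple

        class RFE(Protocol):
            # Interface sketch for Randomized Functional Encryption: the
            # syntax of ordinary FE, except the functionality f takes an
            # explicit random tape r, and correctness asks that decrypting
            # Enc(m) under a key for f be distributed like f(m, r) for
            # fresh, hidden randomness r.
            def setup(self) -> Tuple[Any, Any]:         # returns (mpk, msk)
                ...
            def keygen(self, msk: Any,
                       f: Callable[[bytes, bytes], bytes]) -> Any:
                ...                                     # key sk_f for f(m, r)
            def encrypt(self, mpk: Any, m: bytes) -> bytes:
                ...
            def decrypt(self, sk_f: Any, ct: bytes) -> bytes:
                ...                                     # ~ f(m, r), r fresh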

    Static-Memory-Hard Functions, and Modeling the Cost of Space vs. Time

    A series of recent works, starting with (Alwen and Serbinenko, STOC 2015), has deepened our understanding of the notion of memory-hardness in cryptography — a useful property of hash functions for deterring large-scale password-cracking attacks — and has shown memory-hardness to have intricate connections with the theory of graph pebbling. Definitions of memory-hardness are not yet unified in this somewhat nascent field, however, and the guarantees proven to date are with respect to a range of proposed definitions. In this paper, we observe two significant and practical considerations that are not analyzed by existing models of memory-hardness, and propose new models to capture them, accompanied by constructions based on new hard-to-pebble graphs. Our contribution is twofold, as follows. First, existing measures of memory-hardness only account for dynamic memory usage (i.e., memory read or written at runtime) and do not consider static memory usage (e.g., memory on disk). Among other things, this means that memory requirements considered by prior models are inherently upper-bounded by a hash function's runtime; in contrast, counting static memory would potentially allow quantification of much larger memory requirements, decoupled from runtime. We propose a new definition of a static-memory-hard function (SHF) which takes static memory into account: we model static memory usage by oracle access to a large preprocessed string, which may be considered part of the hash function description. Static memory requirements are complementary to dynamic memory requirements: neither can replace the other, and to deter large-scale password-cracking attacks, a hash function will benefit from being both dynamic-memory-hard and static-memory-hard. We give two SHF constructions based on pebbling. To prove static-memory-hardness, we define a new pebble game ("black-magic pebble game") and new graph constructions with optimal complexity under our proposed measure. Moreover, we provide a prototype implementation of our first SHF construction (which is based on pebbling of a simple "cylinder" graph), giving an initial demonstration of practical feasibility for a limited range of parameter settings. Second, existing memory-hardness models implicitly assume that the costs of space and time are more or less on par: they consider only linear ratios between the costs of time and space. We propose a new model to capture nonlinear time-space trade-offs: e.g., how is the adversary impacted when space is quadratically more expensive than time? We prove that nonlinear trade-offs can in fact cause adversaries to employ different strategies from linear trade-offs. Finally, as an additional contribution of independent interest, we present an asymptotically tight graph construction that achieves the best possible space complexity up to log log n factors for an existing memory-hardness measure called cumulative complexity in the sequential pebbling model.
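
    The sketch below illustrates the oracle-access model of static memory described above: evaluation only reads from a large preprocessed table R. The table derivation and probe schedule are illustrative assumptions; the paper's constructions derive R from hard-to-pebble graphs.

        import hashlib

        def make_static_table(seed: bytes, num_blocks: int):
            # Preprocessing: a large static string R, stored on disk and
            # treated as part of the hash function's description.
            # (Illustrative derivation via a hash chain, not the paper's
            # graph-based constructions.)
            table, block = [], seed
            for i in range(num_blocks):
                block = hashlib.sha256(block + i.to_bytes(8, "big")).digest()
                table.append(block)
            return table

        def shf_hash(table, x: bytes, probes: int = 8) -> bytes:
            # Evaluation makes only oracle queries into R: each probe index
            # depends on the input and the previously read block, so
            # evaluating on many inputs (as in a cracking attack) requires
            # keeping the whole table available.
            h = hashlib.sha256(x).digest()
            for _ in range(probes):
                idx = int.from_bytes(h[:8], "big") % len(table)
                h = hashlib.sha256(h + table[idx]).digest()
            return h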

    Truancy and Implications in Guidance and Counseling

    Truancy is the act of staying away from school without permission. This study was conducted at SMPN 1 Nan Sabaris to examine students' truant behavior. The purpose of this study is to describe the school's conception of truancy, the behaviors that were exhibited, the factors that cause truant behavior, and the efforts of the guidance and counseling teacher in helping students who exhibit truant behavior. This research was conducted as a qualitative case study. Purposive and snowball sampling techniques were used to select the sample; thus the data were obtained from the student, parents, friends of the sample student, subject teachers, and the guidance and counseling teacher. Interviews, observation, and document analysis were used to collect the data, which were analyzed through data reduction, presentation, and conclusion or verification.

    Public Evidence from Secret Ballots

    Elections seem simple: aren't they just counting? But they have a unique, challenging combination of security and privacy requirements. The stakes are high; the context is adversarial; the electorate needs to be convinced that the results are correct; and the secrecy of the ballot must be ensured. They also have practical constraints: time is of the essence, and voting systems need to be affordable, maintainable, and usable by voters, election officials, and poll workers. It is thus not surprising that voting is a rich research area spanning theory, applied cryptography, practical systems analysis, usable security, and statistics. Election integrity involves two key concepts: convincing evidence that outcomes are correct, and privacy, which amounts to convincing assurance that there is no evidence of how any given person voted. These are obviously in tension. We examine how current systems walk this tightrope.

    Public-Key Encryption Schemes with Auxiliary Inputs

    From the proceedings of the 7th Theory of Cryptography Conference (TCC 2010), Zurich, Switzerland, February 9-11, 2010. We construct public-key cryptosystems that remain secure even when the adversary is given any computationally uninvertible function of the secret key as auxiliary input (even one that may reveal the secret key information-theoretically). Our schemes are based on the decisional Diffie-Hellman (DDH) and the Learning with Errors (LWE) problems. As an independent technical contribution, we extend the Goldreich-Levin theorem to provide a hard-core (pseudorandom) value over large fields.
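
    The large-field Goldreich-Levin extension mentioned above takes roughly the following form (a paraphrase of the standard shape of such statements, not the paper's exact theorem):

        Let $q$ be prime and let $s, r \in \mathrm{GF}(q)^n$ be uniform. If $s$
        cannot be efficiently recovered from $h(s)$, then given $(h(s), r)$ the
        inner product
        $$\langle r, s \rangle = \sum_{i=1}^{n} r_i \, s_i \bmod q$$
        is computationally indistinguishable from a uniform element of $\mathrm{GF}(q)$.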

    Symmetrically and Asymmetrically Hard Cryptography

    The main efficiency metrics for a cryptographic primitive are its speed, its code size, and its memory complexity. For a variety of reasons, many algorithms have been proposed that, instead of minimizing one of these costs, deliberately maximize it. We present for the first time a unified framework for describing the hardness of a primitive along any of these three axes: code-hardness, time-hardness, and memory-hardness. This unified view allows us to present modular block cipher and sponge constructions which can have any of the three forms of hardness and can be used to build any higher-level symmetric primitive: hash function, PRNG, etc. We also formalize a new concept: asymmetric hardness. It creates two classes of users: common users have to compute a function with a certain hardness, while users knowing a secret can compute the same function in a far cheaper way. Functions with such asymmetric hardness can be used directly in both our modular structures, thus yielding any symmetric primitive with asymmetric hardness. We also propose the first asymmetrically memory-hard function, Diodon. As illustrations of our framework, we introduce Whale and Skipper. Whale is a code-hard hash function which could be used as a key derivation function, and Skipper is the first asymmetrically time-hard block cipher.
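
    As one concrete mechanism for asymmetric hardness, the sketch below uses the classic RSA repeated-squaring trapdoor (the idea underlying time-lock puzzles, and in the same spirit as Diodon's use of an RSA modulus); it is a textbook construction, not the paper's exact design.

        def slow_eval(x: int, N: int, T: int) -> int:
            # Common users: T sequential modular squarings, i.e. x^(2^T) mod N.
            # Without the factorization of N there is no known shortcut, so
            # evaluation is time-hard.
            y = x % N
            for _ in range(T):
                y = (y * y) % N
            return y

        def fast_eval(x: int, N: int, phi: int, T: int) -> int:
            # Privileged users who know phi(N) first reduce the exponent,
            # then pay a single modular exponentiation for the same result
            # (for x coprime to N, by Euler's theorem).
            return pow(x, pow(2, T, phi), N)

    With N = p * q and phi = (p - 1) * (q - 1), both functions agree, but fast_eval costs one exponentiation where slow_eval costs T sequential squarings.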

    Saber: Module-LWR Based Key Exchange, CPA-Secure Encryption and CCA-Secure KEM

    In this paper, we introduce Saber, a package of cryptographic primitives whose security relies on the hardness of the Module Learning With Rounding problem (Mod-LWR). We first describe a secure Diffie-Hellman-type key exchange protocol, which is then transformed into an IND-CPA encryption scheme and finally into an IND-CCA secure key encapsulation mechanism using a post-quantum version of the Fujisaki-Okamoto transform. The design goals of this package were simplicity, efficiency, and flexibility, resulting in the following choices: all integer moduli are powers of 2, avoiding modular reduction and rejection sampling entirely; the use of LWR halves the amount of randomness required compared to LWE-based schemes and reduces bandwidth; and the module structure provides flexibility by reusing one core component for multiple security levels. A constant-time, AVX2-optimized software implementation of the KEM, with parameters providing more than 128 bits of post-quantum security, requires only 101K, 125K, and 129K cycles for key generation, encapsulation, and decapsulation respectively on a Dell laptop with an Intel i7 Haswell processor.
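
    The power-of-two choice of moduli pays off in the rounding step, sketched below with Saber's q = 2^13 and p = 2^10; the helper is an illustrative reading of the spec, not its reference code.

        EPS_Q, EPS_P = 13, 10        # Saber's moduli: q = 2^13, p = 2^10

        def lwr_round(v: int) -> int:
            # LWR "rounding" from Z_q down to Z_p. Because both moduli are
            # powers of two, this is a constant rounding term plus a bit
            # shift; no modular reduction or rejection sampling is needed,
            # and the discarded low bits play the role of LWE's sampled
            # error term, halving the randomness consumed.
            h = 1 << (EPS_Q - EPS_P - 1)            # rounding constant q/(2p)
            return ((v + h) >> (EPS_Q - EPS_P)) % (1 << EPS_P)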