
    Near-linear time, Leakage-resilient Key Evolution Schemes from Expander Graphs

    We develop new schemes for deterministically updating a stored cryptographic key that provide security against an internal adversary who can control the update computation and leak bounded amounts of information to the outside world. Our schemes are much more efficient than the previous schemes for this model, due to Dziembowski, Kazana and Wichs (CRYPTO 2011). Specifically, our update operation runs in time quasilinear in the key length, rather than quadratic, while offering a similar level of leakage resilience. In order to design our scheme, we strengthen the connections between the model of Dziembowski et al. and "pebbling games", showing that random-oracle-based key evolution schemes are secure as long as the graph of the update function's calls to the oracle has appropriate combinatorial properties. This builds on a connection between pebbling and the random oracle model first established by Dwork, Naor and Wee (CRYPTO 2005). Our scheme's efficiency relies on the existence (which we show) of families of "local" bipartite expander graphs of constant degree.
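
    As a rough illustration of the pebbling-based design (a sketch, not the paper's construction), the following Python updates a key block by block, with SHA-256 standing in for the random oracle; the hand-picked dependency graph below is not one of the constant-degree expanders the security argument requires, and all names and sizes are illustrative.

```python
import hashlib

def update_key(blocks, edges):
    """One key-evolution step: new block j hashes the old blocks listed
    in edges[j]. If the bipartite graph of these oracle calls is a
    suitable expander, the pebbling argument bounds what an adversarial,
    leaky updater can learn about future keys."""
    new_blocks = []
    for j, preds in enumerate(edges):
        h = hashlib.sha256()
        h.update(j.to_bytes(4, "big"))        # domain separation per block
        for i in preds:
            h.update(blocks[i])
        new_blocks.append(h.digest())
    return new_blocks

# Toy 8-block key and a hand-picked degree-3 graph (NOT an expander).
key = [bytes([i]) * 32 for i in range(8)]
edges = [[j, (j + 1) % 8, (j + 3) % 8] for j in range(8)]
key = update_key(key, edges)
```

    Because the graph has constant degree, one update makes O(n) constant-length hash calls over n blocks, which is where the quasilinear running time in the key length comes from.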

    Proof of Space from Stacked Expanders

    Recently, proof of space (PoS) has been suggested as a more egalitarian alternative to the traditional hash-based proof of work. In PoS, a prover proves to a verifier that it has dedicated some specified amount of space. A closely related notion is memory-hard functions (MHF), functions that require a lot of memory/space to compute. While they represent promising progress, existing PoS and MHF proposals have several problems. First, there are large gaps between the desired space-hardness and what can be proven. Second, it has been pointed out that PoS and MHF should require a lot of space not just at some point, but throughout the entire computation/protocol; few proposals considered this issue. Third, the two existing PoS constructions are both based on a class of graphs called superconcentrators, which are either hard to construct or add a logarithmic factor overhead to efficiency. In this paper, we construct PoS from stacked expander graphs. Our constructions are simpler, more efficient and have tighter provable space-hardness than prior works. Our results also apply to a recent MHF called Balloon hash. We show Balloon hash has tighter space-hardness than previously believed and consistent space-hardness throughout its computation.
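
    As a toy-sized sketch in the spirit of Balloon hash's expand-mix-extract structure (simplified, with illustrative parameters, not the paper's exact construction), each mixing round below acts as one layer of a stacked graph: every block absorbs its predecessor plus a few pseudo-randomly chosen blocks from the previous layer.

```python
import hashlib

def stacked_mix(seed, n_blocks=16, rounds=3, degree=3):
    """Simplified Balloon-style memory-hard mixing over a buffer of hash
    blocks. Each round is one layer of a stacked graph; the pseudo-random
    back-edges are what space-hardness arguments pebble over."""
    H = lambda *parts: hashlib.sha256(b"".join(parts)).digest()
    buf = [H(seed, i.to_bytes(4, "big")) for i in range(n_blocks)]   # expand
    ctr = 0
    for _ in range(rounds):                                          # mix
        for i in range(n_blocks):
            ctr += 1
            buf[i] = H(ctr.to_bytes(8, "big"), buf[i - 1], buf[i])
            for _k in range(degree):
                ctr += 1
                # pseudo-random edge into the previous layer
                j = int.from_bytes(H(ctr.to_bytes(8, "big"), seed), "big") % n_blocks
                buf[i] = H(ctr.to_bytes(8, "big"), buf[i], buf[j])
    return buf[-1]                                                   # extract

digest = stacked_mix(b"password|salt")
```

    Real deployments use buffers of megabytes to gigabytes; the toy sizes here only exhibit the dependency structure.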

    Sparse graph codes for compression, sensing, and secrecy

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from student PDF version of thesis. Includes bibliographical references (p. 201-212). Sparse graph codes were first introduced by Gallager over 40 years ago. Over the last two decades, such codes have been the subject of intense research, and capacity-approaching sparse graph codes with low-complexity encoding and decoding algorithms have been designed for many channels. Motivated by the success of sparse graph codes for channel coding, we explore the use of sparse graph codes for four other problems related to compression, sensing, and security. First, we construct locally encodable and decodable source codes for a simple class of sources. Local encodability refers to the property that when the original source data changes slightly, the compression produced by the source code can be updated easily. Local decodability refers to the property that a single source symbol can be recovered without having to decode the entire source block. Second, we analyze a simple message-passing algorithm for compressed sensing recovery, and show that our algorithm provides a nontrivial ℓ1/ℓ1 guarantee. We also show that very sparse matrices and matrices whose entries must be either 0 or 1 have poor performance with respect to the restricted isometry property for the ℓ2 norm. Third, we analyze the performance of a special class of sparse graph codes, LDPC codes, for the problem of quantizing a uniformly random bit string under Hamming distortion. We show that LDPC codes can come arbitrarily close to the rate-distortion bound using an optimal quantizer. This is a special case of a general result showing a duality between lossy source coding and channel coding: if we ignore computational complexity, then good channel codes are automatically good lossy source codes. We also prove a lower bound on the average degree of vertices in an LDPC code as a function of the gap to the rate-distortion bound. Finally, we construct efficient, capacity-achieving codes for the wiretap channel, a model of communication that allows one to provide information-theoretic, rather than computational, security guarantees. Our main results include the introduction of a new security criterion which is an information-theoretic analog of semantic security, the construction of capacity-achieving codes possessing strong security with nearly linear time encoding and decoding algorithms for any degraded wiretap channel, and the construction of capacity-achieving codes possessing semantic security with linear time encoding and decoding algorithms for erasure wiretap channels. Our analysis relies on a relatively small set of tools. One tool is density evolution, a powerful method for analyzing the behavior of message-passing algorithms on long, random sparse graph codes. Another concept we use extensively is the notion of an expander graph. Expander graphs have powerful properties that allow us to prove adversarial, rather than probabilistic, guarantees for message-passing algorithms. Expander graphs are also useful in the context of the wiretap channel because they provide a method for constructing randomness extractors. Finally, we use several well-known concentration and isoperimetric inequalities (Harper's inequality, Azuma's inequality, and the Gaussian isoperimetric inequality) in our analysis of the duality between lossy source coding and channel coding. by Venkat Bala Chandar. Ph.D.
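
    To make the message-passing theme concrete, here is a textbook peeling decoder for an LDPC-style code on the binary erasure channel: repeatedly find a parity check with exactly one erased bit and solve for it. The hand-picked Hamming-style checks below are only a sanity check, nothing like the long random codes that density evolution analyzes.

```python
def peel_bec(received, checks):
    """Peeling decoder on the binary erasure channel. `received` uses
    None for erased bits; `checks` lists the bit positions in each
    parity check (each check must XOR to 0 in a valid codeword)."""
    word = list(received)
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [i for i in check if word[i] is None]
            if len(erased) == 1:              # degree-one check: solvable
                word[erased[0]] = sum(word[i] for i in check
                                      if word[i] is not None) % 2
                progress = True
    return word

# [7,4] Hamming-style checks; codeword (0,1,1,0,0,0,1) with bits 1 and 5 erased.
checks = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 1, 3, 6]]
print(peel_bec([0, None, 1, 0, 0, None, 1], checks))  # -> [0, 1, 1, 0, 0, 0, 1]
```

    On codes whose Tanner graphs are good expanders, decoders of this style admit adversarial rather than merely probabilistic guarantees, which is the role expander graphs play in the thesis.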

    Proofs of Space: When Space Is of the Essence

    Proofs of computational effort were devised to control denial-of-service attacks. Dwork and Naor (CRYPTO ’92), for example, proposed to use such proofs to discourage spam. The idea is to couple each email message with a proof of work that demonstrates the sender performed some computational task. A proof of work can be either CPU-bound or memory-bound. In a CPU-bound proof, the prover must compute a CPU-intensive function that is easy to check by the verifier. A memory-bound proof, instead, forces the prover to access the main memory several times, effectively replacing CPU cycles with memory accesses. In this paper we put forward a new concept dubbed proof of space. To compute such a proof, the prover must use a specified amount of space, i.e., we are not interested in the number of accesses to the main memory (as in a memory-bound proof of work) but rather in the amount of actual memory the prover must employ to compute the proof. We give a complete and detailed algorithmic description of our model. We develop a comprehensive theoretical analysis which uses combinatorial tools from complexity theory (such as pebbling games), which are essential in studying space lower bounds.
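
    A minimal sketch of the graph-labeling idea behind pebbling-based proofs of space (an assumed, generic structure rather than this paper's exact protocol): the prover fills its space with hash labels of a DAG, and the verifier audits a random label against its parents. Real constructions use graphs that are provably hard to pebble and add a commitment (e.g., a Merkle tree) so audits work without the verifier storing the labels.

```python
import hashlib, secrets

def label_graph(identity, parents):
    """label(v) = H(identity, v, labels of v's parents). Storing all
    labels occupies the dedicated space; pebbling lower bounds make
    recomputing audit answers without that space expensive."""
    H = lambda *p: hashlib.sha256(b"".join(p)).digest()
    labels = []
    for v, ps in enumerate(parents):
        labels.append(H(identity, v.to_bytes(4, "big"),
                        *(labels[u] for u in ps)))
    return labels

# Toy DAG: each node depends on its two most recent predecessors.
parents = [list(range(max(0, v - 2), v)) for v in range(16)]
labels = label_graph(b"prover-id", parents)

# Verifier audit: challenge a random node, recheck its label from its parents.
H = lambda *p: hashlib.sha256(b"".join(p)).digest()
v = secrets.randbelow(14) + 2
assert labels[v] == H(b"prover-id", v.to_bytes(4, "big"),
                      labels[v - 2], labels[v - 1])
```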

    A Survey of Leakage-Resilient Cryptography

    In the past 15 years, cryptography has made considerable progress in expanding the adversarial attack model to cover side-channel attacks, and has built schemes to provably defend against some of them. This survey covers the main models and results in this so-called leakage-resilient cryptography.

    The paradigm of partial erasures

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 137-145). This thesis is a study of erasures in cryptographic protocols. Erasing old data and keys is an important capability of honest parties in cryptographic protocols. It is useful in many settings, including proactive security in the presence of a mobile adversary, adaptive security in the presence of an adaptive adversary, forward security, and intrusion resilience. Some of these settings, such as proactive security, are provably impossible to achieve without some form of erasures. Other settings, such as designing protocols that are secure against adaptive adversaries, are much simpler to achieve when erasures are allowed. Protocols for all these contexts typically assume the ability to perfectly erase information. Unfortunately, as amply demonstrated in the systems literature, perfect erasures are hard to implement in practice. We propose a model of imperfect or partial erasures where erasure instructions are only partially effective and leave almost all the data intact, thus giving the honest parties only a limited capability to dispose of old data. Nonetheless, we show how to design protocols for all of the above settings (including proactive security, adaptive security, forward security, and intrusion resilience) for which this weak form of erasures suffices. We do not have to invent entirely new protocols, but rather show how to automatically modify protocols relying on perfect erasures into ones for which partial erasures suffice. Stated most generally, we provide a compiler that transforms any protocol relying on perfect erasures for security into one with the same functionality that remains secure even if the erasures are only partial. The key idea is a new redundant representation of secret data which can still be computed on, and yet is rendered useless when partially erased. We prove that any such compiler must incur a cost in additional storage, and that our compiler is near optimal in terms of its storage overhead. We also give computationally more efficient compilers for a number of special cases: (1) when all the computations on secrets can be done in constant parallel time (NC⁰); (2) for a class of proactive secret sharing protocols where we leave the protocol intact except for changing the representation of the shares of the secret and the instructions that modify the shares (to correspondingly modify the new representation instead). by Dah-Yoh Lim. Ph.D.
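
    As a toy illustration of "rendered useless when partially erased" (only the destruction property, not the thesis's compiler, which additionally lets one compute on the representation), the sketch below masks a secret with the hash of a long random pad; changing any part of the pad changes the mask completely, so a partially effective erase instruction applied to the pad still disposes of the secret.

```python
import hashlib, secrets

def encode(secret, redundancy=64):
    """Store a long random pad R alongside secret XOR H(R). Destroying
    even part of R makes H(R), and hence the secret, unrecoverable.
    Toy limitation: secrets up to 32 bytes (one SHA-256 output)."""
    R = secrets.token_bytes(32 * redundancy)
    mask = hashlib.sha256(R).digest()[:len(secret)]
    return R, bytes(a ^ b for a, b in zip(secret, mask))

def decode(R, masked):
    mask = hashlib.sha256(R).digest()[:len(masked)]
    return bytes(a ^ b for a, b in zip(masked, mask))

def partial_erase(R, fraction=0.1):
    """Model an imperfect erasure: overwrite only a fraction of R."""
    cut = int(len(R) * fraction)
    return secrets.token_bytes(cut) + R[cut:]

R, c = encode(b"attack at dawn")
assert decode(R, c) == b"attack at dawn"
R = partial_erase(R)            # a 10%-effective erase already suffices
assert decode(R, c) != b"attack at dawn"
```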

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Overview study of Space Power Technologies for the advanced energetics program

    Space power technologies are reviewed to determine the state of the art and to identify advanced or novel concepts which promise large increases in performance. The potential for increased performance is judged relative to benchmarks based on technologies which have been flight tested. Space power technology concepts selected for their potentially high performance are prioritized in a list of R & D topical recommendations for the NASA program on Advanced Energetics. The technology categories studied are solar collection, nuclear power sources, energy conversion, energy storage, power transmission, and power processing. The emphasis is on electric power generation in space for satellite on-board electric power, for electric propulsion, or for beamed power to spacecraft. Generic mission categories such as low Earth orbit missions and geosynchronous orbit missions are used to distinguish general requirements placed on the performance of power conversion technology. Each space power technology is judged on its own merits without reference to specific missions or power systems. Recommendations include 31 space power concepts which span the entire collection of technology categories studied and represent the critical technologies needed for higher power, lighter weight, more efficient power conversion in space.

    Dagstuhl News January - December 2011

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic