67 research outputs found

    Tailored Key Encryption (TaKE) Tailoring a key for a given pair of plaintext/ciphertext

    Get PDF
    Abstract. Prevailing ciphers are attacked by exploiting the fact that only a single element of the key space will match a plausible plaintext with a given ciphertext. Any cryptosystem that violates this unique-key assumption achieves added security through deniability (akin to a one-time pad). Such a cryptosystem is described here. It is achieved by breaking away from the prevailing notion that the key is a binary string of fixed length. The described key is a random-size, non-linear array: a graph constructed from vertices and edges. The binary naming of the vertices and edges, together with the graph's configuration, are all part of the key. Such keys can take on most of the necessary complexity, which allows the algorithm itself to be exceedingly simple (à la a Turing machine).
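
    To make the idea concrete, here is a minimal Python sketch of what a graph-structured key of this kind might look like; the representation and parameters are illustrative assumptions, not the paper's actual TaKE construction:

```python
import secrets

def random_graph_key(num_vertices: int, num_edges: int, label_bits: int = 8):
    """Toy graph-structured key: the vertex labels, the edge labels, and the
    topology itself are all secret. (Illustrative only; not the paper's
    actual TaKE construction.)"""
    vertices = {v: secrets.randbits(label_bits) for v in range(num_vertices)}
    edges = {}
    while len(edges) < num_edges:
        u, v = secrets.randbelow(num_vertices), secrets.randbelow(num_vertices)
        if u != v and (u, v) not in edges:
            edges[(u, v)] = secrets.randbits(label_bits)
    return vertices, edges

key = random_graph_key(num_vertices=6, num_edges=9)
```

    Because the key is not a fixed-length bit string, many distinct keys can map a given ciphertext to different plausible plaintexts, which is the deniability property the abstract claims.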

    Reliable Hardware Architectures for Cryptographic Block Ciphers LED and HIGHT

    Get PDF
    Cryptographic architectures provide different security properties to sensitive usage models. However, unless the reliability of these architectures is guaranteed, such security properties can be undermined through natural or malicious faults. In this thesis, two underlying block ciphers which can be used in authenticated encryption algorithms are considered, i.e., the LED and HIGHT block ciphers. The former is of the Advanced Encryption Standard (AES) type and has been considered area-efficient, while the latter constitutes a Feistel network structure and is suitable for low-complexity and low-power embedded security applications. In this thesis, we propose efficient error detection architectures, including variants of recomputing with encoded operands and signature-based schemes, to detect both transient and permanent faults. Authenticated encryption is applied in cryptography to provide confidentiality, integrity, and authenticity simultaneously to the message sent in a communication channel. In this thesis, we show that the proposed schemes are applicable to the case study of Simple Lightweight CFB (SILC) for providing authenticated encryption with associated data (AEAD). The error simulations are performed using the Xilinx ISE tool, and the results are benchmarked on the Xilinx Virtex-7 FPGA family to assess the reliability and efficiency of the proposed architectures.
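
    The recomputing-with-encoded-operands idea generalizes beyond hardware datapaths. A minimal software sketch, assuming a function that commutes with a simple rotation encoding (the thesis targets hardware implementations of LED/HIGHT, so this only shows the shape of the technique):

```python
def recompute_with_encoded_operands(f, encode, decode, x):
    """Generic detection wrapper: compute once on the plain operand and once
    on an encoded operand, decode, and compare. A fault that hits only one
    of the two runs makes the comparison fail. Requires the encoding to
    commute with f: decode(f(encode(x))) == f(x)."""
    primary = f(x)
    redundant = decode(f(encode(x)))
    if primary != redundant:
        raise RuntimeError("fault detected")
    return primary

# Toy instance: bitwise NOT commutes with rotation on 8-bit words.
rotl = lambda x, r: ((x << r) | (x >> (8 - r))) & 0xFF
rotr = lambda x, r: ((x >> r) | (x << (8 - r))) & 0xFF
out = recompute_with_encoded_operands(
    f=lambda x: ~x & 0xFF,
    encode=lambda x: rotl(x, 2),
    decode=lambda x: rotr(x, 2),
    x=0b10110001,
)
```

    A transient fault that strikes only one of the two computations makes the comparison fail, which is exactly the detection event the proposed architectures flag.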

    Applications of Locality and Asymmetry to Quantum Fault-Tolerance

    Full text link
    Quantum computing sounds like something out of a science-fiction novel. If we can exert control over unimaginably small systems, then we can harness their quantum mechanical behavior as a computational resource. This resource allows for astounding computational feats and a new perspective on information theory as a whole. But there's a caveat. The events we have to control are so fast and so small that they can hardly be said to have occurred at all. For a long time after Feynman's proposal, and even still, some have believed that the barriers to controlling such events are fundamental. While we have yet to find anything insurmountable, the road is so pockmarked with experimental and theoretical challenges that it is often difficult to see the road at all. Only a marriage of engineering and theory in concert can hope to find the way forward. Quantum error-correction, and more broadly quantum fault-tolerance, is an unfinished answer to this question. It concerns the scaling of these microscopic systems into macroscopic regimes which we can fully control, straddling practical and theoretical considerations in its design. We explore and prove several results on the theory of quantum fault-tolerance, guided by the ultimate goal of realizing a physical quantum computer. In this thesis, we demonstrate applications of locality and asymmetry to quantum fault-tolerance. We introduce novel code families which we use to probe the behavior of thresholds in quantum subsystem codes. We also demonstrate codes in this family that are well-suited to efficiently correcting asymmetric noise models, and determine their parameters. Next, we show that quantum error-correcting encodings are incommensurate with transversal implementations of universal classical-reversible computation. Along the way, we resolve an open question concerning almost information-theoretically secure quantum fully homomorphic encryption, showing that it is impossible. Finally, we augment a framework for transversally mapping between stabilizer subspace codes, and discuss prospects for fault-tolerance. PhD thesis, Mathematics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145948/1/mgnewman_1.pd
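
    As a small illustration of the asymmetry such codes exploit: in many physical devices, dephasing (Z) errors dominate bit flips (X). A toy biased Pauli channel, with an illustrative parameterization rather than the thesis's exact noise model:

```python
import random

def biased_pauli_channel(p: float, eta: float) -> str:
    """Sample a single-qubit Pauli error: total error probability p, with Z
    errors eta times as likely as X or Y. Large eta is the asymmetric
    regime that tailored codes exploit. (Illustrative parameterization,
    not the thesis's exact model.)"""
    pz, px = p * eta / (eta + 2), p / (eta + 2)
    r = random.random()
    if r < pz:
        return "Z"
    if r < pz + px:
        return "X"
    if r < pz + 2 * px:
        return "Y"
    return "I"

errors = [biased_pauli_channel(p=0.01, eta=100) for _ in range(10)]
```

    Codes tailored to large eta can devote more of their error-correcting power to Z errors, which is the kind of efficiency gain the abstract refers to.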

    On the semantic security of functional encryption schemes

    Get PDF
    Functional encryption (FE) is a powerful cryptographic primitive that generalizes many asymmetric encryption systems proposed in recent years. Syntax and security definitions for FE were proposed by Boneh, Sahai, and Waters (BSW) (TCC 2011) and independently by O'Neill (ePrint 2010/556). In this paper we revisit these definitions, identify several shortcomings in them, and propose a new definitional approach that overcomes these limitations. Our definitions display good compositionality properties and allow us to obtain new feasibility and impossibility results for adaptive token-extraction attack scenarios that shed further light on the potential reach of general FE for practical applications. Funding: ENIAC Joint Undertaking; Fundação para a Ciência e a Tecnologia (FCT).
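
    For readers unfamiliar with the primitive, the standard FE syntax the paper builds on can be sketched as an interface; the class and method names are chosen for illustration:

```python
from typing import Any, Callable, Tuple

# Interface-level sketch of functional encryption syntax: decrypting a
# ciphertext for m with a token for f reveals f(m) and, ideally, nothing more.
class FunctionalEncryption:
    def setup(self) -> Tuple[Any, Any]:
        """Return (master public key mpk, master secret key msk)."""
        raise NotImplementedError

    def token(self, msk: Any, f: Callable) -> Any:
        """Derive a decryption token sk_f for the function f."""
        raise NotImplementedError

    def encrypt(self, mpk: Any, m: Any) -> Any:
        """Encrypt a message under the master public key."""
        raise NotImplementedError

    def decrypt(self, sk_f: Any, c: Any) -> Any:
        """Correctness: decrypt(token(msk, f), encrypt(mpk, m)) == f(m)."""
        raise NotImplementedError
```

    The adaptive token-extraction scenarios the abstract mentions correspond to an adversary who may request tokens sk_f for functions f of its choice even after seeing the challenge ciphertext.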

    Achievable secrecy enhancement through joint encryption and privacy amplification

    Get PDF
    In this dissertation we seek to achieve secrecy enhancement in communications by resorting to both cryptographic and information-theoretic secrecy tools and metrics. Our objective is to unify tools and measures from the cryptography community with techniques and metrics from the information theory community that are utilized to provide privacy and confidentiality in communication systems. For this purpose we adopt encryption techniques accompanied by privacy amplification tools in order to achieve secrecy goals that are determined based on information-theoretic and cryptographic metrics. Every secrecy scheme relies on a certain advantage for legitimate users over adversaries, viewed as an asymmetry in the system, to deliver the required security for data transmission. In all of the schemes proposed in this dissertation, we resort to either an inherently existing asymmetry in the system or a proactively created advantage for legitimate users over a passive eavesdropper to further enhance the secrecy of communications. This advantage is manipulated by means of privacy amplification and encryption tools to achieve secrecy goals for the system, evaluated based on information-theoretic and cryptographic metrics. In our first work, discussed in Chapter 2, and the third work, explained in Chapter 4, we rely on a proactively established advantage for legitimate users based on the eavesdropper's lack of knowledge about a shared source of data. Unlike these works, which assume an error-free physical channel, the second work, discussed in Chapter 3, considers a correlated erasure wiretap channel model. This work relies on a passive and internally existing advantage for legitimate users that is built upon the statistical and partial independence of the eavesdropper's channel errors from the errors in the main channel. We arrive at this secrecy advantage for legitimate users by exploiting an authenticated but insecure feedback channel. From the perspective of the utilized tools, the first work, discussed in Chapter 2, considers a specific scenario in which the secrecy enhancement of a particular block cipher, the Data Encryption Standard (DES) operating in cipher feedback (CFB) mode, is studied. This secrecy enhancement is achieved by means of deliberate noise injection and wiretap channel encoding as a privacy amplification technique against a resource-constrained eavesdropper. Compared to the first work, the third work considers a more general framework in terms of both metrics and secrecy tools. This work studies the secrecy enhancement of a general cipher based on universal hashing as a privacy amplification technique against an unbounded adversary. In this work, we achieve the goal of exponential secrecy, where information leakage to the adversary, assessed in terms of mutual information as an information-theoretic measure and Eve's distinguishability as a cryptographic metric, decays at an exponential rate. In the second work, encrypted data frames are transmitted through an Automatic Repeat reQuest (ARQ) protocol to generate a common random source between legitimate users that is later transformed into information-theoretically secure keys for encryption by means of privacy amplification based on universal hashing. Towards the end, future work extending the research accomplished in this dissertation is outlined. Proofs of the major theorems and lemmas are presented in the Appendix.
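
    The privacy-amplification step common to these works can be sketched with a 2-universal hash in the Carter–Wegman style; the hash family and parameters below are illustrative, not the dissertation's exact choices:

```python
import secrets

def privacy_amplify(raw_key: int, in_bits: int, out_bits: int) -> int:
    """Compress a partially secret raw key with a freshly sampled
    2-universal hash (Carter-Wegman style: x -> ((a*x + b) mod p) mod 2^k).
    The leftover hash lemma then guarantees the short output is nearly
    uniform even given Eve's partial information and the public hash seed.
    Parameters are illustrative."""
    p = (1 << 127) - 1                      # Mersenne prime > input domain
    assert 0 <= raw_key < (1 << in_bits) < p
    a = 1 + secrets.randbelow(p - 1)        # hash seed: public but fresh
    b = secrets.randbelow(p)
    return ((a * raw_key + b) % p) % (1 << out_bits)

# e.g. distill 64 output bits from a 96-bit raw key with bounded leakage
short_key = privacy_amplify(secrets.randbits(96), in_bits=96, out_bits=64)
```

    Compressing well below the adversary's min-entropy deficit is what drives the exponential decay of leakage described in the abstract.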

    Privacy-by-Design Regulatory Compliance Automation in Cloud Environment

    Get PDF
    This Master's thesis revolves around the development of a privacy-preserving Attribute Verifier for regulatory compliance, first designed cryptographically and then implemented in a cloud environment. The Attribute Verifier makes use of the Attribute Verification Protocol and its underlying encryption scheme, composed of Decentralized Attribute-Based Encryption (DABE) combined with a Zero-Knowledge Proof (ZKP) approach. The contribution of this work was integrating a ticketing system, concerning tickets of compliance, with the existing protocol, and automating the whole workflow, simulating all the actors involved, in the AWS cloud environment. The major goal was to improve the security and privacy of sensitive data kept in the cloud, as well as to comply with cloud regulations, standards, and various data protection regulations. In particular, the use case covered in this thesis refers to the General Data Protection Regulation (GDPR), specifically compliance with Article 32. The word "Automation" in the title refers to the achievement of having automated, through code, three main security objectives in the AWS cloud environment: privacy, identity and access management, and attribute-based access control. This goal was pursued because, in the majority of cases, adherence to a regulation still requires heavy manual effort, especially for pure data protection regulations, i.e., in a legal setting. And when manual effort is not required, confidentiality can still be heavily affected; that is where the need for a privacy-by-design solution comes from. The Attribute Verifier was developed to verify the attributes of a Prover (e.g. a company, an institution, a healthcare provider, etc.) without revealing the actual attributes or assets, and to grant access to encrypted data only if the verification is successful. The proposed example, among many applicable, is that of a national bank attempting to demonstrate to a Verifier, i.e. the European Central Bank, compliance with Article 32 of the GDPR.
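
    The shape of the workflow can be sketched with a commit-and-open stand-in for the verification step; a real deployment would use DABE plus a zero-knowledge proof, so everything below is an illustrative simplification with hypothetical names:

```python
import hashlib
import secrets

def commit(attribute: bytes):
    """Hiding commitment to an attribute: publish the digest, keep the
    nonce (the opening) secret."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + attribute).digest(), nonce

def verify_and_issue_ticket(commitment: bytes, nonce: bytes,
                            attribute: bytes, required: bytes):
    """Verifier checks the opening against the commitment and the policy,
    then issues a ticket of compliance gating access to encrypted data.
    (In the real protocol a ZKP replaces this opening, so the Verifier
    never learns the attribute itself.)"""
    if hashlib.sha256(nonce + attribute).digest() != commitment:
        return None                      # opening does not match
    if attribute != required:            # e.g. b"GDPR-Art32:compliant"
        return None                      # policy not satisfied
    return secrets.token_hex(16)         # ticket id granting access

com, opening = commit(b"GDPR-Art32:compliant")
ticket = verify_and_issue_ticket(com, opening, b"GDPR-Art32:compliant",
                                 b"GDPR-Art32:compliant")
```

    The essential difference in the thesis's protocol is that the opening step is replaced by a zero-knowledge verification, so compliance is demonstrated without revealing the underlying attributes or assets.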

    Spooky Interaction and its Discontents: Compilers for Succinct Two-Message Argument Systems

    Get PDF
    We are interested in constructing short two-message arguments for various languages, where the complexity of the verifier is small (e.g. linear in the input size, or even sublinear if the input is coded appropriately). Suppose that we have a low-communication public-coin interactive protocol for proving (or arguing) membership in the language. We consider a "compiler" from the literature that takes a protocol consisting of several rounds and produces a two-message argument system. The compiler is based on any Fully Homomorphic Encryption (FHE) scheme, or on PIR (under additional conditions on the protocol). This compiler has been used successfully in several proposed protocols. We investigate the question of whether this compiler can be proven to work under standard cryptographic assumptions. We prove: (i) If FHE or PIR systems exist, then there is a sound interactive proof protocol that, when run through the compiler, results in a protocol that is not sound. (ii) If the verifier in the original protocol runs in logarithmic space and has no "long-term" secret memory (a generalization of public coins), then the compiled protocol is sound. This yields a succinct two-message argument system for any language in NC, where the verifier's work is linear (or even polylogarithmic if the input is coded appropriately). This is the first (non-trivial) two-message succinct argument system that is based on a standard polynomial-time hardness assumption.
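
    The compiler in question can be sketched at the interface level: encrypt each public-coin challenge under an independent FHE key and let the prover answer homomorphically. The fhe object below is an abstract stand-in with no real encryption, and answers are simplified to depend on one challenge each:

```python
class ToyFHE:
    """Stand-in with the right interface; no actual encryption (do not use)."""
    def keygen(self):          return ("pk", "sk")
    def encrypt(self, pk, m):  return m
    def eval(self, pk, f, ct): return f(ct)
    def decrypt(self, sk, ct): return ct

def compile_to_two_messages(challenges, prover_next_message, fhe):
    # Verifier -> Prover: each round-i challenge encrypted under its own
    # key, so the prover (ideally) cannot correlate answers across rounds.
    keys = [fhe.keygen() for _ in challenges]          # (pk, sk) pairs
    msg1 = [fhe.encrypt(pk, c) for (pk, _), c in zip(keys, challenges)]

    # Prover -> Verifier: homomorphically apply the honest next-message
    # function to each encrypted challenge.
    msg2 = [fhe.eval(pk, prover_next_message, ct)
            for (pk, _), ct in zip(keys, msg1)]

    # Verifier: decrypt the answers and run the original decision predicate.
    return [fhe.decrypt(sk, ct) for (_, sk), ct in zip(keys, msg2)]

answers = compile_to_two_messages([3, 1, 4], lambda c: c * c, ToyFHE())
```

    The paper's negative result says that for some sound protocols a malicious prover can nonetheless correlate its encrypted answers across rounds ("spooky interactions"), breaking soundness of the compiled protocol even though each answer individually looks honest.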

    Shorter and Faster Post-Quantum Designated-Verifier zkSNARKs from Lattices

    Get PDF
    Zero-knowledge succinct arguments of knowledge (zkSNARKs) enable efficient privacy-preserving proofs of membership for general NP languages. Our focus in this work is on post-quantum zkSNARKs, with an emphasis on minimizing proof size. Currently, there is a 1000× gap in proof size between the best pre-quantum constructions and the best post-quantum ones. Here, we develop and implement new lattice-based zkSNARKs in the designated-verifier preprocessing model. With our construction, after an initial preprocessing step, a proof for an NP relation of size 2^20 is just over 16 KB. Our proofs are 10.3× shorter than previous post-quantum zkSNARKs for general NP languages. Compared to previous lattice-based zkSNARKs (also in the designated-verifier preprocessing model), we obtain a 42× reduction in proof size and a 60× reduction in the prover's running time, all while achieving a much higher level of soundness. Compared to the shortest pre-quantum zkSNARKs by Groth (Eurocrypt 2016), the proof size in our lattice-based construction is 131× longer, but both the prover and the verifier are faster (by 1.2× and 2.8×, respectively). Our construction follows the general blueprint of Bitansky et al. (TCC 2013) and Boneh et al. (Eurocrypt 2017) of combining a linear probabilistically checkable proof (linear PCP) with a linear-only vector encryption scheme. We develop a concretely efficient lattice-based instantiation of this compiler by considering quadratic extension fields of moderate characteristic and using linear-only vector encryption over rank-2 module lattices.
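
    The blueprint's data flow can be sketched as follows; encryption is replaced by the identity purely to show why a linear-only scheme restricts the prover to inner products, and all names and parameters are illustrative:

```python
import numpy as np

def prover_response(pi: np.ndarray, enc_queries: list) -> list:
    """Homomorphically compute Enc(<pi, q_j>) for each encrypted linear PCP
    query q_j. A linear-only scheme restricts the prover to exactly such
    linear maps, which is how the linear PCP's soundness survives
    compilation. (The identity stands in for encryption here.)"""
    return [int(pi @ q) % 97 for q in enc_queries]   # toy field GF(97)

rng = np.random.default_rng(0)
pi = rng.integers(0, 97, size=5)                          # proof vector
queries = [rng.integers(0, 97, size=5) for _ in range(3)] # verifier queries
answers = prover_response(pi, queries)
```

    In the designated-verifier model the decryption key stays with the verifier, which is what lets the encryption be "linear-only" rather than fully homomorphic.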