
    The Interpolating Random Spline Cryptosystem and the Chaotic-Map Public-Key Cryptosystem

    The feasibility of implementing the interpolating cubic spline function as an encryption and decryption transformation is presented. The encryption method can be viewed as computing a transposed polynomial. The main characteristic of the spline cryptosystem is that the domain and range of encryption are defined over the real numbers rather than the traditional integers. Moreover, the spline cryptosystem can be implemented with inexpensive multiplications and additions. Using spline functions, a series of discontiguous spline segments can execute the modular arithmetic of the RSA system, and the similarity of RSA and spline functions within the integer domain is demonstrated. We further observe that such a reformulation of the RSA cryptosystem can be characterized as polynomials with random offsets between ciphertext and plaintext values. Building on this contrast with the basic spline cryptosystem, a random spline cryptosystem has been developed as an advanced structure of the spline cryptosystem. Its mathematical indeterminacy when keys are computed from no more than four interpolants, together with its numerical sensitivity to the random offset t, increases its utility. This article also presents a chaotic public-key cryptosystem employing a one-dimensional difference equation as well as a quadratic difference equation. The system uses ElGamal's scheme to accomplish the encryption process. We note that breaking this system requires the same work factor as solving the discrete logarithm problem with moduli of the same size.
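    To make the spline idea concrete, here is a minimal sketch of using an interpolating cubic spline as a real-valued encryption/decryption map. The knot values (the "key") and the use of SciPy's CubicSpline are assumptions chosen for illustration; this is not the authors' exact construction.

```python
# Minimal sketch: an interpolating cubic spline as a real-valued
# encryption/decryption map.  The toy key below is an assumption for
# illustration, not the scheme from the paper.
import numpy as np
from scipy.interpolate import CubicSpline

# "Key": knot positions and their images, chosen strictly increasing so that
# (for this toy key) the resulting spline is monotone and hence invertible.
x_knots = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_knots = np.array([0.7, 2.9, 5.1, 8.4, 12.6])   # assumed random offsets

encrypt = CubicSpline(x_knots, y_knots)          # plaintext -> ciphertext

def decrypt(c):
    # Invert the spline by solving encrypt(x) = c on the key interval.
    roots = encrypt.solve(c, extrapolate=False)
    return float(roots[0])

m = 2.5                                          # real-valued plaintext
c = float(encrypt(m))
print(c, decrypt(c))                             # recovers 2.5 (up to rounding)
```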

    A Survey on Homomorphic Encryption Schemes: Theory and Implementation

    Legacy encryption systems depend on sharing a key (public or private) among the peers involved in exchanging an encrypted message. However, this approach poses privacy concerns. Especially with popular cloud services, control over the privacy of sensitive data is lost. Even when the keys are not shared, the encrypted material is shared with a third party that does not necessarily need to access the content. Moreover, untrusted servers, providers, and cloud operators can retain identifying elements of users long after users end their relationship with the services. Homomorphic Encryption (HE), a special kind of encryption scheme, can address these concerns because it allows any third party to operate on encrypted data without decrypting it in advance. Although this extremely useful feature of HE has been known for over 30 years, the first plausible and achievable Fully Homomorphic Encryption (FHE) scheme, which allows any computable function to be performed on encrypted data, was introduced by Craig Gentry in 2009. Even though this was a major achievement, implementations to date have demonstrated that FHE still needs significant improvement to be practical on every platform. First, we present the basics of HE and the details of the well-known Partially Homomorphic Encryption (PHE) and Somewhat Homomorphic Encryption (SWHE) schemes, which are important pillars on the way to FHE. Then, the main FHE families, which have become the basis for follow-up FHE schemes, are presented. Furthermore, the implementations and recent improvements in Gentry-type FHE schemes are surveyed. Finally, further research directions are discussed. This survey is intended to give researchers and practitioners a clear foundation for understanding, applying, and extending the state-of-the-art HE, PHE, SWHE, and FHE systems. (Comment: updated October 6, 2017; this paper is an early draft of the survey being submitted to ACM CSUR and has been uploaded to arXiv for feedback from stakeholders.)
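    As a concrete illustration of the partially homomorphic schemes the survey covers, here is a minimal textbook Paillier sketch, with tiny insecure parameters chosen purely for readability; multiplying two ciphertexts yields an encryption of the sum of the plaintexts.

```python
# Textbook Paillier sketch (additively homomorphic PHE).
# Toy, insecure parameters for illustration only.
import math, random

p, q = 293, 433                      # toy primes; real keys use large primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                            # standard simple generator choice
mu = pow(lam, -1, n)                 # valid because L(g^lam mod n^2) = lam mod n

def L(x):
    return (x - 1) // n

def encrypt(m):
    r = random.randrange(1, n)       # should be coprime to n; overwhelmingly likely here
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts mod n.
c1, c2 = encrypt(41), encrypt(58)
assert decrypt((c1 * c2) % n2) == (41 + 58) % n
```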

    How to securely replicate services (preliminary version)

    A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients being corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by n servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed, and a security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires that fewer than k servers are corrupt and, to ensure liveness, that k ≤ n - 2t, where t is the assumed maximum total number of both corruptions and benign failures suffered by servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service.
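    As a simplified illustration of the k-threshold acceptance rule described above (not the paper's actual threshold-cryptography protocol; the function and data shapes here are assumptions), a client could accept a response once at least k replicas return the same value:

```python
# Illustrative client-side quorum check in the spirit of the k-threshold
# acceptance rule; names and structure are assumptions, not the paper's scheme.
from collections import Counter

def accept_response(responses, k):
    """Accept a value once at least k of the n replicas returned it.

    responses -- list of (server_id, value) pairs collected so far
    k         -- prespecified threshold of correct servers
    """
    counts = Counter(value for _, value in responses)
    value, votes = counts.most_common(1)[0]
    return value if votes >= k else None

# Example: n = 4 servers, k = 3; one corrupt server answers differently.
replies = [("s1", "BALANCE=100"), ("s2", "BALANCE=100"),
           ("s3", "BALANCE=999"), ("s4", "BALANCE=100")]
print(accept_response(replies, k=3))   # -> "BALANCE=100"
```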