143 research outputs found
LIPIcs, Volume 251, ITCS 2023, Complete Volume
LIPIcs, Volume 261, ICALP 2023, Complete Volume
PQC Cloudization: Rapid Prototyping of Scalable NTT/INTT Architecture to Accelerate Kyber
The advent of quantum computers poses a serious challenge to the security of cloud infrastructures and services, as they can potentially break the existing public-key cryptosystems, such as Rivest–Shamir–Adleman (RSA) and Elliptic Curve Cryptography (ECC). Even though the gap between today’s quantum computers and the threats they pose to current public-key cryptography is large, the cloud landscape should act proactively and initiate the transition to the post-quantum era as early as possible. To comply with that, the U.S. government issued a National Security Memorandum in May 2022 that mandated federal agencies to migrate to post-quantum cryptosystems (PQC) by 2035. To ensure the long-term security of cloud computing, it is imperative to develop and deploy PQC resistant to quantum attacks. A promising class of post-quantum cryptosystems is based on lattice problems, which require polynomial arithmetic.
In this paper, we propose and implement a scalable number-theoretic transform (NTT) architecture that significantly enhances the performance of polynomial multiplication. Our proposed design exploits multi-levels of parallelism to accelerate the NTT computation on reconfigurable hardware. We use the high-level synthesis (HLS) method to implement our design, which allows us to describe the NTT algorithm in a high-level language and automatically generate optimized hardware code. HLS facilitates rapid prototyping and enables us to explore different design spaces and trade-offs on the hardware platforms.
Our experimental results show that our design achieves an 11× speedup over the state of the art, requiring only 14 clock cycles for an NTT computation over a polynomial of degree 256. To demonstrate the applicability of our design, we also present a coprocessor architecture for Kyber, a key encapsulation mechanism (KEM) selected by the NIST post-quantum standardization process, that utilizes our scalable NTT core.
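The NTT-based polynomial multiplication that the accelerator above implements in hardware can be sketched in software. The following is a minimal, illustrative Python sketch with toy parameters (p = 17, n = 8, root w = 2) chosen for readability — not Kyber's actual modulus or the paper's architecture:

```python
# Toy NTT-based polynomial multiplication: transform both operands,
# multiply pointwise, transform back. Parameters are illustrative only.
P, N, W = 17, 8, 2  # prime modulus, transform size (N | P-1), primitive N-th root of unity mod P

def ntt(a, root):
    """Naive O(N^2) number-theoretic transform of a length-N vector mod P."""
    return [sum(a[j] * pow(root, j * k, P) for j in range(N)) % P for k in range(N)]

def intt(A):
    """Inverse NTT: forward transform with root^-1, scaled by N^-1 mod P."""
    n_inv = pow(N, -1, P)
    w_inv = pow(W, -1, P)
    return [(x * n_inv) % P for x in ntt(A, w_inv)]

def polymul_mod(a, b):
    """Cyclic convolution of a and b modulo (x^N - 1, P) via pointwise product."""
    A, B = ntt(a, W), ntt(b, W)
    return intt([(x * y) % P for x, y in zip(A, B)])
```

The pointwise multiplication in the transformed domain is what turns an O(N²) convolution into N independent modular multiplications — the parallelism the hardware design exploits.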
Decryption Failure Attacks on Post-Quantum Cryptography
This dissertation mainly discusses new cryptanalytic results related to the secure implementation of the next generation of asymmetric cryptography, or Public-Key Cryptography (PKC). PKC, as deployed to date, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved by Peter Shor's algorithm for quantum computers, which obtains the answers in polynomial time. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards the standardization of Post-Quantum Cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software while being resistant to attacks from quantum computers, making PQC well suited to replace the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely on well-known mathematical problems with theoretical proofs of security under established models, such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, both concerning theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are called side-channels, currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates? And 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to the practice of software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytic results.
Efficient and Side-Channel Resistant Implementations of Next-Generation Cryptography
The rapid development of emerging information technologies, such as quantum computing and the Internet of Things (IoT), will have, or has already had, a huge impact on the world. These technologies can not only improve industrial productivity but also bring more convenience to people's daily lives. However, they have "side effects" in the world of cryptography: they pose new difficulties and challenges, from theory to practice. Specifically, once quantum computing capability (i.e., the number of logical qubits) reaches a certain level, Shor's algorithm will be able to break almost all public-key cryptosystems currently in use. On the other hand, a great number of devices deployed in IoT environments have very constrained computing and storage resources, so currently widely used cryptographic algorithms may not run efficiently on those devices. A new generation of cryptography has thus emerged, including Post-Quantum Cryptography (PQC), which remains secure under both classical and quantum attacks, and LightWeight Cryptography (LWC), which is tailored for resource-constrained devices. Research on next-generation cryptography is of the utmost importance and urgency, and the US National Institute of Standards and Technology in particular initiated standardization processes for PQC and LWC in 2016 and 2018, respectively.
Since next-generation cryptography is still at an early stage and has developed rapidly in recent years, its theoretical security and practical deployment are not yet well explored and are in significant need of evaluation. This thesis looks into the engineering aspects of next-generation cryptography, i.e., the problems concerning implementation efficiency (e.g., execution time and memory consumption) and security (e.g., countermeasures against timing attacks and power side-channel attacks). In more detail, we first explore efficient software implementation approaches for lattice-based PQC on constrained devices. Then, we study how to speed up isogeny-based PQC on modern high-performance processors, especially by using their powerful vector units. Moreover, we research how to design sophisticated yet low-area instruction-set extensions to further accelerate software implementations of LWC and long-integer-arithmetic-based PQC. Finally, to address the threats from potential power side-channel attacks, we present a concept of using special leakage-aware instructions to eliminate overwriting leakage in masked software implementations of next-generation cryptography.
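The masked software implementations mentioned above rely on secret sharing to decorrelate intermediate values from power traces. A toy Python sketch of first-order Boolean masking — the general countermeasure class, not the thesis's instruction-set proposal — looks like this:

```python
# First-order Boolean masking: a secret is split into two random shares whose
# XOR equals the secret, so no single intermediate value reveals it.
import secrets

def mask(x, bits=8):
    """Split a secret into two shares; each share alone is uniformly random."""
    r = secrets.randbits(bits)
    return (r, x ^ r)

def unmask(shares):
    """Recombine the shares to recover the secret."""
    r, m = shares
    return r ^ m

def masked_xor(s1, s2):
    """XOR two masked values share-wise, without ever recombining the secrets."""
    return (s1[0] ^ s2[0], s1[1] ^ s2[1])
```

The overwriting leakage the thesis targets arises when a register holding one share is overwritten by a value correlated with the other share — which is exactly what leakage-aware instructions are meant to prevent at the microarchitectural level.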
Verifiable Mix-Nets and Distributed Decryption for Voting from Lattice-Based Assumptions
Cryptographic voting protocols have recently seen much interest from practitioners due to their (planned) use in countries such as Estonia, Switzerland, France, and Australia. Practical protocols usually rely on tested designs such as the mixing-and-decryption paradigm. There, multiple servers verifiably shuffle encrypted ballots, which are then decrypted in a distributed manner. While several efficient protocols implementing this paradigm exist from discrete log-type assumptions, the situation is less clear for post-quantum alternatives such as lattices. This is because the design ideas of the discrete log-based voting protocols do not carry over easily to the lattice setting, due to specific problems such as noise growth and approximate relations.
In this work, we propose a new verifiable secret shuffle for BGV ciphertexts and a compatible verifiable distributed decryption protocol. The shuffle is based on an extension of a shuffle of commitments to known values which is combined with an amortized proof of correct re-randomization. The verifiable distributed decryption protocol uses noise drowning, proving the correctness of decryption steps in zero-knowledge. Both primitives are then used to instantiate the mixing-and-decryption electronic voting paradigm from lattice-based assumptions.
We give concrete parameters for our system, estimate the size of each component, and provide implementations of all important sub-protocols. Our experiments show that the shuffle and decryption protocols are suitable for use in real-world e-voting schemes.
Towards Improved Homomorphic Encryption for Privacy-Preserving Deep Learning
Deep Learning (DL) has brought about a remarkable transformation for many fields, heralded by some as a new technological revolution. The advent of large-scale models has increased the demand for data and computing platforms, for which cloud computing has become the go-to solution. However, the uptake of DL and cloud computing is reduced in privacy-enforcing areas that deal with sensitive data. These areas imperatively call for privacy-enhancing technologies that enable responsible, ethical, and privacy-compliant use of data in potentially hostile environments.
To this end, the cryptography community has addressed these concerns with what is known as Privacy-Preserving Computation Techniques (PPCTs), a set of tools that enable privacy-enhancing protocols where cleartext access to information is no longer tenable. Among these techniques, Homomorphic Encryption (HE) stands out for its ability to perform operations over encrypted data without compromising data confidentiality or privacy. However, despite its promise, HE is still a relatively nascent solution with efficiency and usability limitations. Improving the efficiency of HE has been a longstanding challenge in the field of cryptography, and with those improvements the complexity of the techniques has increased, especially for non-experts.
In this thesis, we address the problem of the complexity of HE when applied to DL. We begin by systematizing existing knowledge in the field through an in-depth analysis of the state of the art in privacy-preserving deep learning, identifying key trends, research gaps, and issues associated with current approaches. One such identified gap lies in the necessity of using vectorized algorithms with Packed Homomorphic Encryption (PaHE), a state-of-the-art technique for reducing the overhead of HE in complex areas. This thesis comprehensively analyzes existing algorithms and proposes new ones for using DL with PaHE, presenting a formal analysis and usage guidelines for their implementation.
Parameter selection for HE schemes is another recurring challenge in the literature, given that it plays a critical role in determining not only the security of the instantiation but also the precision, performance, and degree of security of the scheme. To address this challenge, this thesis proposes a novel system combining fuzzy logic with linear programming to produce secure parametrizations from high-level user input arguments, without requiring low-level knowledge of the underlying primitives.
Finally, this thesis describes HEFactory, a symbolic execution compiler designed to streamline the process of producing HE code and integrating it with Python. HEFactory implements the previous proposals presented in this thesis in an easy-to-use tool. It provides a unique architecture that layers the challenges associated with HE and produces simplified operations interpretable by low-level HE libraries. HEFactory significantly reduces the overall complexity of coding DL applications with HE, resulting in an 80% length reduction relative to expert-written code while maintaining equivalent accuracy and efficiency.
International Mention in the doctoral degree. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Committee — President: María Isabel González Vasco; Secretary: David Arroyo Guardeño; Member: Antonis Michala
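The vectorized, slot-wise style of computation that PaHE imposes in the thesis above can be illustrated with plain Python lists standing in for ciphertext slots: packed HE schemes offer only slot-wise arithmetic and cyclic rotations, so reductions such as an inner product must be expressed as slot-wise multiplication followed by log₂(n) rotate-and-add steps. Function names here are illustrative, not drawn from HEFactory:

```python
# Simulated packed-HE slots: only slot-wise ops and rotations are available,
# so summing across slots uses the classic rotate-and-add pattern.
def rotate(v, k):
    """Cyclic left rotation by k slots (stands in for an HE rotation key op)."""
    return v[k:] + v[:k]

def slotwise_mul(a, b):
    """Slot-wise product (stands in for ciphertext-ciphertext multiplication)."""
    return [x * y for x, y in zip(a, b)]

def sum_slots(v):
    """Rotate-and-add: after log2(n) steps every slot holds the total sum."""
    k = len(v) // 2  # len(v) must be a power of two
    while k:
        v = [x + y for x, y in zip(v, rotate(v, k))]
        k //= 2
    return v
```

An inner product of two packed vectors is then `sum_slots(slotwise_mul(a, b))`, with the result replicated in every slot — the replication is a characteristic artifact of this pattern in real packed schemes as well.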
Full Text PDF of The JBBA, 11th Issue, May 2023
Efficient Accelerator for NTT-based Polynomial Multiplication
The Number Theoretic Transform (NTT) is used to efficiently execute polynomial multiplication. It has become an important part of lattice-based post-quantum methods and the subsequent generation of standard cryptographic systems. However, implementing post-quantum schemes is challenging since they rely on intricate structures. This paper demonstrates how to develop a high-speed NTT multiplier highly optimized for FPGAs with few logical resources. We describe a novel architecture for NTT that leverages unique precomputation. Our method efficiently maps these precomputed values into the built-in Block RAMs (BRAMs), which greatly reduces the area and time required for implementation compared to previous works. We have chosen Kyber parameters to implement the proposed architectures. Compared to the most well-known approach for implementing Kyber's polynomial multiplication using NTT, the time is reduced by 31% and AT (area × time) is improved by 25% as a result of the precomputation we suggest in this study. It is worth mentioning that we obtain these improvements while our method does not require DSPs.
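The core precomputation idea — storing the twiddle factors (powers of the root of unity) in a table instead of computing them on the fly — has a direct software analogue. The sketch below is a toy iterative Cooley-Tukey NTT in Python with illustrative parameters (p = 17, n = 8, w = 2), not the paper's architecture; the `TWIDDLES` table plays the role the BRAM-resident constants play in hardware:

```python
# Iterative Cooley-Tukey NTT over Z_P with a precomputed twiddle table.
P, N, W = 17, 8, 2                      # toy prime, size, primitive N-th root of unity mod P

# Powers of W computed once up front -- the hardware analogue keeps these in BRAMs.
TWIDDLES = [pow(W, i, P) for i in range(N)]

def ntt_ct(a):
    """Forward NTT of a length-N vector, butterflies reading twiddles from the table."""
    a = list(a)
    # Bit-reversal permutation so butterflies can run in place.
    j = 0
    for i in range(1, N):
        bit = N >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # log2(N) butterfly stages.
    length = 2
    while length <= N:
        step = N // length
        half = length // 2
        for start in range(0, N, length):
            for k in range(half):
                w = TWIDDLES[k * step]          # table lookup, no modular exponentiation
                u = a[start + k]
                v = a[start + k + half] * w % P
                a[start + k] = (u + v) % P
                a[start + k + half] = (u - v) % P
        length *= 2
    return a
```

Each butterfly needs exactly one table entry, so the access pattern is regular and predictable — the property that makes mapping the constants into on-chip memories effective.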