TurboSHAKE
In a recent presentation, we promoted the use of 12-round instances of Keccak, collectively called “TurboSHAKE”, in post-quantum cryptographic schemes, but without defining them further. The goal of this note is to fill that gap: the definition of the TurboSHAKE family consists simply in exposing and generalizing the primitive already defined inside KangarooTwelve.
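TurboSHAKE itself is not in the Python standard library, but the standardized 24-round SHAKE XOFs that it is a reduced-round variant of are, and they illustrate the extendable-output interface the note builds on. This is a sketch for comparison only, not an implementation of TurboSHAKE:

```python
import hashlib

# TurboSHAKE is not in hashlib; the standardized 24-round SHAKE128 shown
# here only illustrates the extendable-output (XOF) interface that
# TurboSHAKE shares: absorb a message, then squeeze any number of bytes.
xof = hashlib.shake_128(b"message")
out16 = xof.hexdigest(16)    # squeeze 16 bytes (32 hex characters)
out32 = xof.hexdigest(32)    # a longer squeeze extends the shorter one
```

Because an XOF's shorter outputs are prefixes of its longer ones, `out16` is a prefix of `out32`; TurboSHAKE keeps this interface while halving the round count.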
ERINYES: A CONTINUOUS AUTHENTICATION PROTOCOL
The need for user authentication in the digital domain is paramount as the number of digital interactions involving sensitive data continues to increase. Advances in machine learning (ML) and biometric encryption have enabled technologies that provide fully remote, continuous user authentication. This thesis introduces the Erinyes protocol, which leverages state-of-the-art ML models, biometric encryption of asymmetric cryptographic keys, and a trusted third-party client-server architecture to continuously authenticate users through their behavioral biometrics. The goals in developing the protocol were to determine whether biometric encryption using keystroke timing and mouse cursor movement sequences is feasible, and to measure the performance of a continuous authentication system built on biometric encryption. Our research found that, with a combined keystroke and mouse cursor movement dataset, the biometric encryption system performs with a 0.93% False Acceptance Rate (FAR), a 0.00% False Reject Rate (FRR), and 99.07% accuracy. Using a similar dataset, the overall integrated system averaged 0% FAR, 2% FRR, and 98% accuracy across multiple users. These metrics demonstrate that the Erinyes protocol can achieve continuous user authentication with minimal user intrusion.
Lieutenant, United States Navy. Approved for public release. Distribution is unlimited.
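The FAR, FRR, and accuracy figures quoted above come from thresholding classifier scores; a minimal sketch of how such metrics are computed, using hypothetical scores rather than the thesis's dataset:

```python
# Hypothetical similarity scores, for illustration only (not the thesis data).
genuine  = [0.91, 0.88, 0.95, 0.97, 0.90]   # attempts by the true user
impostor = [0.12, 0.35, 0.48, 0.22, 0.78]   # attempts by other users
thresh = 0.75                               # acceptance threshold

far = sum(s >= thresh for s in impostor) / len(impostor)  # impostors accepted
frr = sum(s <  thresh for s in genuine)  / len(genuine)   # true user rejected
accuracy = (sum(s >= thresh for s in genuine) +
            sum(s <  thresh for s in impostor)) / (len(genuine) + len(impostor))
```

Sweeping the threshold trades FAR against FRR; a continuous-authentication deployment picks the operating point that matches its tolerance for lockouts versus intrusions.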
SALSA VERDE: a machine learning attack on Learning With Errors with sparse small secrets
Learning with Errors (LWE) is a hard math problem used in post-quantum
cryptography. Homomorphic Encryption (HE) schemes rely on the hardness of the
LWE problem for their security, and two LWE-based cryptosystems were recently
standardized by NIST for digital signatures and key exchange (KEM). Thus, it is
critical to continue assessing the security of LWE and specific parameter
choices. For example, HE uses secrets with small entries, and the HE community
has considered standardizing small sparse secrets to improve efficiency and
functionality. However, prior work, SALSA and PICANTE, showed that ML attacks
can recover sparse binary secrets. Building on these, we propose VERDE, an
improved ML attack that can recover sparse binary, ternary, and narrow Gaussian
secrets. Using improved preprocessing and secret recovery techniques, VERDE can
attack LWE with larger dimensions and smaller moduli, using less time and
power. We propose novel architectures for
scaling. Finally, we develop a theory that explains the success of ML LWE
attacks.
Comment: 18 pages, accepted to NeurIPS 202
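The LWE problem these attacks target is easy to state concretely; here is a toy instance with a sparse binary secret, using illustrative sizes far below cryptographic parameters:

```python
import numpy as np

# Toy LWE instance b = A·s + e (mod q) with a sparse binary secret.
# Parameters are illustrative only, far smaller than cryptographic sizes.
rng = np.random.default_rng(0)
n, m, q, h = 64, 128, 3329, 8                        # dim, samples, modulus, weight
s = np.zeros(n, dtype=np.int64)
s[rng.choice(n, size=h, replace=False)] = 1          # secret: h ones out of n
A = rng.integers(0, q, size=(m, n), dtype=np.int64)  # public random matrix
e = rng.integers(-2, 3, size=m, dtype=np.int64)      # small error vector
b = (A @ s + e) % q                                  # public samples
# An attacker sees only (A, b); recovering s is the LWE problem, and
# sparse or narrow secret distributions are what VERDE-style attacks exploit.
```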
SALSA: Attacking Lattice Cryptography with Transformers
Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers. Consequently, quantum resistant cryptosystems are in high demand, and lattice-based cryptosystems, based on a hard problem known as Learning With Errors (LWE), have emerged as strong contenders for standardization. In this work, we train transformers to perform modular arithmetic and combine half-trained models with statistical cryptanalysis techniques to propose SALSA: a machine learning attack on LWE-based cryptographic schemes. SALSA can fully recover secrets for small-to-mid size LWE instances with sparse binary secrets, and may scale to attack real-world LWE-based cryptosystems.
Physical Unclonability Framework for the Internet of Things
Ph.D. Thesis
The rise of the Internet of Things (IoT) creates a tendency to construct unified architectures
with a great number of edge nodes and inherent security risks due to centralisation.
At the same time, security and privacy defenders advocate for decentralised solutions
which divide the control and the responsibility among the entirety of the network nodes.
However, spreading secrets among several parties also expands the attack surface.
This conflict is in part due to the difficulty in differentiating between instances of the
same hardware, which leads to treating physically distinct devices as identical. Harnessing
the uniqueness of each connected device and injecting it into security protocols can provide
solutions to several common issues of the IoT. Secrets can be generated directly from this
uniqueness without the need to manually embed them into devices, reducing both the risk
of exposure and the cost of managing great numbers of devices.
Uniqueness can then lead to the primitive of unclonability. Unclonability refers to
ensuring the difficulty of producing an exact duplicate of an entity via observing and
measuring the entity’s features and behaviour. Unclonability has been realised on a physical
level via the use of Physical Unclonable Functions (PUFs). PUFs are constructions
that extract the inherent unclonable features of objects and compound them into a usable
form, often that of binary data. PUFs are also exceptionally useful in IoT applications
since they are low-cost, easy to integrate into existing designs, and have the potential to
replace expensive cryptographic operations. Thus, a great number of solutions have been
developed to integrate PUFs in various security scenarios. However, methods to expand
unclonability into a complete security framework have not been thoroughly studied.
In this work, the foundations are set for the development of such a framework through
the formulation of an unclonability stack, in the paradigm of the OSI reference model. The
stack comprises layers propagating the primitive from the unclonable PUF ICs, to devices,
network links and eventually unclonable systems. Those layers are introduced, and work
towards the design of protocols and methods for several of the layers is presented.
A collection of protocols based on one or more unclonable tokens or authority devices
is proposed, to enable the secure introduction of network nodes into groups or neighbourhoods.
The role of the authority devices is that of a consolidated, observable root of
ownership, whose physical state can be verified. After their introduction, nodes are able
to identify and interact with their peers, exchange keys and form relationships, without
the need of continued interaction with the authority device.
Building on this introduction scheme, methods for establishing and maintaining unclonable
links between pairs of nodes are introduced. These pairwise links are essential for
the construction of relationships among multiple network nodes, in a variety of topologies.
Those topologies and the resulting relationships are formulated and discussed.
While the framework does not depend on specific PUF hardware, SRAM PUFs are
chosen as a case study since they are commonly used and based on components that
are already present in the majority of IoT devices. In the context of SRAM PUFs and
with a view to the proposed framework, practical issues affecting the adoption of PUFs in
security protocols are discussed. Methods of improving the capabilities of SRAM PUFs
are also proposed, based on experimental data.
School of Engineering, Newcastle University
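As an illustration of why SRAM power-up values can serve as a device fingerprint, here is a sketch under a simple hypothetical noise model, with enrollment by majority voting over repeated reads (an illustrative technique, not a method taken from the thesis):

```python
import numpy as np

# Hypothetical SRAM-PUF model: each cell has a stable power-up bias, and
# each individual read flips a small fraction of cells at random.
rng = np.random.default_rng(1)
n_bits, n_reads, flip_p = 256, 15, 0.05
fingerprint = rng.integers(0, 2, n_bits)            # device's underlying bias
reads = np.array([np.where(rng.random(n_bits) < flip_p,
                           1 - fingerprint, fingerprint)
                  for _ in range(n_reads)])
# Enrollment by bitwise majority vote suppresses the per-read noise.
enrolled = (reads.sum(axis=0) > n_reads // 2).astype(int)
residual = int(np.count_nonzero(enrolled != fingerprint))  # leftover errors
```

In practice the residual noise is then removed with error correction (e.g. a fuzzy extractor) before the response can be used as key material.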
Algorithms for Solving Linear and Polynomial Systems of Equations over Finite Fields with Applications to Cryptanalysis
This dissertation contains algorithms for solving linear and polynomial systems
of equations over GF(2). The objective is to provide fast and exact tools for algebraic
cryptanalysis and other applications. Accordingly, it is divided into two parts.
The first part deals with polynomial systems. Chapter 2 contains a successful
cryptanalysis of Keeloq, the block cipher used in nearly all luxury automobiles.
The attack is more than 16,000 times faster than brute force, but queries 0.62 × 2^32
plaintexts. The polynomial systems of equations arising from that cryptanalysis
were solved via SAT-solvers. Therefore, Chapter 3 introduces a new method of
solving polynomial systems of equations by converting them into CNF-SAT problems
and using a SAT-solver. Finally, Chapter 4 contains a discussion on how SAT-solvers
work internally.
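The conversion in Chapter 3 rests on encoding GF(2) operations as CNF clauses; a minimal sketch of the standard Tseitin-style encoding of a single monomial t = x·y follows (the variable numbering is mine, not the dissertation's):

```python
from itertools import product

# CNF encoding of t <-> (x AND y): literals are signed variable numbers,
# the basic step in converting a GF(2) polynomial system to a SAT instance.
X, Y, T = 1, 2, 3
clauses = [[-X, -Y, T],   # x AND y implies t
           [X, -T],       # t implies x
           [Y, -T]]       # t implies y

def satisfied(assign, clause):
    """True if the assignment {var: bool} satisfies the clause."""
    return any(assign[abs(lit)] == (lit > 0) for lit in clause)

# Exhaustive check: the clauses accept exactly the rows where t == x AND y.
results = {}
for x, y, t in product([0, 1], repeat=3):
    assign = {X: bool(x), Y: bool(y), T: bool(t)}
    results[(x, y, t)] = all(satisfied(assign, c) for c in clauses)
```

XOR sums of such monomials are then encoded with further clauses, and the complete clause set is handed to an off-the-shelf SAT solver.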
The second part deals with linear systems over GF(2), and other small fields
(and rings). These occur in cryptanalysis when using the XL algorithm, which converts polynomial systems into larger linear systems. We introduce a new complexity
model and data structures for GF(2)-matrix operations. This is discussed in Appendix B but applies to all of Part II. Chapter 5 contains an analysis of "the Method
of Four Russians" for multiplication and a variant for matrix inversion, which is
a factor of log n faster than Gaussian Elimination and can be combined with Strassen-like algorithms. Chapter 6 contains an algorithm for accelerating matrix multiplication
over small finite fields. It is feasible but the memory cost is so high that it is mostly
of theoretical interest. Appendix A contains some discussion of GF(2)-linear algebra
and how it differs from linear algebra in R and C. Appendix C discusses algorithms
faster than Strassen's algorithm, and contains proofs that matrix multiplication,
matrix squaring, triangular matrix inversion, LUP-factorization, general matrix in-
version and the taking of determinants, are equicomplex. These proofs are already
known, but are gathered here into one place in the same notation.
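A stripped-down sketch of the Method of Four Russians for GF(2) matrix multiplication, with rows packed into Python ints for illustration (real implementations work on machine words and build the tables with Gray codes):

```python
import numpy as np

def gf2_pack(row):
    """Pack a 0/1 vector into an int, bit j holding entry j."""
    v = 0
    for j, bit in enumerate(row):
        v |= int(bit) << j
    return v

def m4rm(A, B, k=4):
    """Multiply 0/1 matrices A (n x p) and B (p x q) over GF(2).
    Columns of A are split into width-k strips; for each strip, a table of
    all 2^k subset-XORs of the matching rows of B is built once, so each
    row of A costs one table lookup per strip instead of up to k row XORs."""
    Bp = [gf2_pack(r) for r in B]           # bit-packed rows of B
    C = [0] * A.shape[0]
    for s in range(0, A.shape[1], k):
        w = min(k, A.shape[1] - s)
        table = [0] * (1 << w)
        for m in range(1, 1 << w):          # build subset XORs incrementally
            low = m & -m                    # lowest set bit of the mask
            table[m] = table[m ^ low] ^ Bp[s + low.bit_length() - 1]
        for i in range(A.shape[0]):
            C[i] ^= table[gf2_pack(A[i, s:s + w])]
    return C                                # row i of A*B, bit-packed

rng = np.random.default_rng(2)
A = rng.integers(0, 2, (12, 12))
B = rng.integers(0, 2, (12, 12))
C = m4rm(A, B)
```

Choosing k on the order of log n keeps the table-building cost negligible, which is where the log n speedup over row-by-row elimination comes from.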