### Certified lattice reduction

Quadratic form reduction and lattice reduction are fundamental tools in
computational number theory and in computer science, especially in
cryptography. The celebrated Lenstra-Lenstra-Lovász reduction algorithm
(known as LLL) has been improved in many ways over the past decades and
remains one of the central methods for reducing integral lattice bases. In
particular, its floating-point variants, in which the rational arithmetic
required by Gram-Schmidt orthogonalization is replaced by floating-point
arithmetic, are now the fastest known. However, the systematic study of the
reduction theory of
real quadratic forms or, more generally, of real lattices is not widely
represented in the literature. When the problem arises, the lattice is usually
replaced by an integral approximation of (a multiple of) the original lattice,
which is then reduced. While practically useful and provably correct in some
special cases, this method offers no guarantee of success in general. In this
work, we present an adaptive-precision version of a generalized LLL algorithm
that covers this case in full generality. In particular, we replace
floating-point arithmetic by Interval Arithmetic to certify the behavior of the
algorithm. We conclude by giving a typical application of the result in
algebraic number theory for the reduction of ideal lattices in number fields.

Comment: 23 pages.
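
The interval-arithmetic certification idea can be illustrated on the rank-2 Lovász condition. The sketch below is not the paper's algorithm: it is a minimal toy in which real lattice entries are given as rational enclosures, the comparison is carried out on intervals, and an indecisive answer signals that the enclosures must be refined (the adaptive-precision loop).

```python
from fractions import Fraction

class Interval:
    """Toy interval with exact rational endpoints; real inputs are given
    as enclosures, so no directed rounding is needed in this sketch."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = Fraction(lo), Fraction(hi if hi is not None else lo)
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

def dot(u, v):
    acc = Interval(0)
    for a, b in zip(u, v):
        acc = acc + a * b
    return acc

def lovasz_certified(b1, b2, delta=Fraction(3, 4)):
    """In rank 2 the Lovasz condition delta*|b1*|^2 <= |b2*|^2 + mu^2*|b1*|^2
    collapses (by Pythagoras) to delta*|b1|^2 <= |b2|^2, so no division is
    needed.  Returns True/False when the intervals decide it, None otherwise."""
    lhs = Interval(delta) * dot(b1, b1)
    rhs = dot(b2, b2)
    if lhs.hi <= rhs.lo:
        return True          # condition certainly holds
    if lhs.lo > rhs.hi:
        return False         # condition certainly fails: swap b1 and b2
    return None              # enclosures too coarse: refine precision, retry

# A real lattice entry known only through an enclosure of sqrt(2):
sqrt2 = Interval(Fraction(14142135, 10**7), Fraction(14142136, 10**7))
b1 = [Interval(1), sqrt2]
b2 = [Interval(2), Interval(0)]
print(lovasz_certified(b1, b2))  # True: ~2.25 <= 4 is certified
```

When the intervals overlap, neither inequality is certain and the function returns None, which is exactly the point where an adaptive-precision implementation would tighten the enclosures and try again.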

### Synthesizing Probabilistic Invariants via Doob's Decomposition

When analyzing probabilistic computations, a powerful approach is to first
find a martingale, an expression on the program variables whose expectation
remains invariant, and then apply the optional stopping theorem to infer
properties at termination time. One of the main challenges, then, is to
systematically find martingales.
We propose a novel procedure to synthesize martingale expressions from an
arbitrary initial expression. Contrary to state-of-the-art approaches, we do
not rely on constraint solving. Instead, we use a symbolic construction based
on Doob's decomposition. This procedure can produce very complex martingales,
expressed in terms of conditional expectations.
We show how to automatically generate and simplify these martingales, as well
as how to apply the optional stopping theorem to infer properties at
termination time. This last step typically involves some simplification steps,
and is usually done manually in current approaches. We implement our techniques
in a prototype tool and demonstrate our process on several classical examples.
Some of them go beyond the capabilities of current semi-automatic approaches.
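
The pipeline the abstract describes (decompose, obtain a martingale, apply optional stopping) can be illustrated numerically on a textbook example; this is a hand-worked instance, not the paper's symbolic procedure.

```python
import random

def doob_martingale_walk(a=3, trials=20000, seed=1):
    """For a fair +/-1 walk S_n, Doob's decomposition of X_n = S_n^2 gives
    compensator A_n = n (since E[X_{n+1} - X_n | F_n] = 1), so
    M_n = S_n^2 - n is a martingale.  Optional stopping at the first time T
    with |S_T| = a then yields E[T] = a^2.  We check this by simulation."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s = t = 0
        while abs(s) < a:
            s += rng.choice((-1, 1))
            t += 1
        total += t
    return total / trials

print(doob_martingale_walk())  # close to a^2 = 9
```

The simplification step the paper automates corresponds here to recognizing that the conditional expectation E[X_{n+1} - X_n | F_n] is the constant 1, which makes the compensator explicit.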

### The nearest-colattice algorithm

In this work, we exhibit a hierarchy of polynomial time algorithms solving
approximate variants of the Closest Vector Problem (CVP). Our first
contribution is a heuristic algorithm achieving the same distance tradeoff as
HSVP algorithms, namely $\approx
\beta^{\frac{n}{2\beta}}\textrm{covol}(\Lambda)^{\frac{1}{n}}$ for a random
lattice $\Lambda$ of rank $n$. Compared to Kannan's embedding technique, our
algorithm allows precomputation and can be used to solve batches of CVP
instances efficiently. This implies that some attacks on lattice-based
signatures lead to very cheap forgeries after a precomputation. Our second
contribution is a proven reduction from approximating the closest vector with a
factor $\approx n^{\frac32}\beta^{\frac{3n}{2\beta}}$ to the Shortest Vector
Problem (SVP) in dimension $\beta$.

Comment: 19 pages, presented at the Algorithmic Number Theory Symposium (ANTS 2020).
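
For contrast with the embedding technique, a textbook approximate-CVP baseline is Babai's round-off algorithm; this rank-2 sketch is a standard method, not the nearest-colattice algorithm itself.

```python
def babai_round_off(basis, target):
    """Textbook Babai round-off for approximate CVP in rank 2: write the
    target in basis coordinates, round each coordinate to the nearest
    integer, and map back.  Quality depends on how reduced the basis is."""
    (a, b), (c, d) = basis
    det = a * d - b * c
    # coordinates of the target in the given basis (Cramer's rule)
    x = (target[0] * d - target[1] * c) / det
    y = (a * target[1] - b * target[0]) / det
    k, l = round(x), round(y)
    return (k * a + l * c, k * b + l * d)

# lattice Z*(2,0) + Z*(0,3); the closest point to (1.2, 4.4) is (2, 3)
print(babai_round_off([(2, 0), (0, 3)], (1.2, 4.4)))  # (2, 3)
```

Unlike embedding, the rounding step reuses the same (pre-reduced) basis for every target, which is the sense in which precomputation pays off across a batch of CVP instances.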

### Proving uniformity and independence by self-composition and coupling

Proof by coupling is a classical proof technique for establishing
probabilistic properties of two probabilistic processes, like stochastic
dominance and rapid mixing of Markov chains. More recently, couplings have been
investigated as a useful abstraction for formal reasoning about relational
properties of probabilistic programs, in particular for modeling
reduction-based cryptographic proofs and for verifying differential privacy. In
this paper, we demonstrate that probabilistic couplings can be used for
verifying non-relational probabilistic properties. Specifically, we show that
the program logic pRHL, whose proofs are formal versions of proofs by
coupling, can be used for formalizing uniformity and probabilistic
independence. We formally verify our main examples using the EasyCrypt proof
assistant.
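
A classic uniformity fact of the kind pRHL formalizes, that masking with a uniform value yields a uniform output, can be checked exhaustively; the coupling proof observes that u maps to x XOR u bijectively for every fixed x.

```python
from itertools import product
from collections import Counter

def xor_mask_is_uniform(bits=3):
    """If u is uniform on {0,1}^n and independent of x, then x XOR u is
    uniform whatever the distribution of x.  We check this exhaustively:
    for every fixed x, the map u -> x ^ u is a bijection, so each output
    value appears exactly once."""
    n = 1 << bits
    for x in range(n):
        counts = Counter(x ^ u for u in range(n))
        if any(c != 1 for c in counts.values()):
            return False
    return True

print(xor_mask_is_uniform())  # True
```

In a pRHL-style proof the same argument is phrased relationally: the bijection is the coupling between the masked output and a genuinely uniform sample.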

### Recursive lattice reduction -- A framework for finding short lattice vectors

We propose a new framework called recursive lattice reduction for finding
short non-zero vectors in a lattice or for finding dense sublattices of a
lattice. At a high level, the framework works by recursively searching for
dense sublattices of dense sublattices (or their duals). Eventually, the
procedure encounters a recursive call on a lattice $\mathcal{L}$ with
relatively low rank $k$, at which point we simply use a known algorithm to find
a short non-zero vector in $\mathcal{L}$. We view our framework as
complementary to basis reduction algorithms, which similarly work to reduce an
$n$-dimensional lattice problem with some approximation factor $\gamma$ to an
exact lattice problem in dimension $k < n$, with a tradeoff between $\gamma$,
$n$, and $k$. Our framework provides an alternative and arguably simpler
perspective, which in particular can be described without explicitly
referencing any specific basis of the lattice, Gram-Schmidt vectors, or even
projection (though implementations of algorithms in this framework will likely
make use of such things). We present a number of specific instantiations of our
framework. Our main concrete result is a reduction that matches the tradeoff
between $\gamma$, $n$, and $k$ achieved by the best-known basis reduction
algorithms (in terms of the Hermite factor, up to low-order terms) across all
parameter regimes. In fact, this reduction also can be used to find dense
sublattices with any rank $\ell$ satisfying $\min\{\ell,n-\ell\} \leq n-k+1$,
using only an oracle for SVP (or even just Hermite SVP) in $k$ dimensions,
which is itself a novel result (as far as the authors know). We also show a
very simple reduction that achieves the same tradeoff in quasipolynomial time.
Finally, we present an automated approach for searching for algorithms in this
framework that (provably) achieve better approximations with fewer oracle
calls.
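
The base case the recursion bottoms out in, using "a known algorithm to find a short non-zero vector" in low rank, can be sketched with a brute-force enumeration; this is a toy stand-in for a real SVP solver, and the dense-sublattice recursion itself is not shown.

```python
from itertools import product

def brute_force_svp(basis, bound=3):
    """Toy SVP oracle for the low-rank base case: enumerate all integer
    combinations with coefficients in [-bound, bound] and keep the shortest
    non-zero vector.  Real solvers enumerate far more cleverly, but any
    rank-k oracle slots into the recursive framework the same way."""
    k = len(basis)
    best, best_norm = None, None
    for coeffs in product(range(-bound, bound + 1), repeat=k):
        if all(c == 0 for c in coeffs):
            continue
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs))
             for j in range(len(basis[0]))]
        norm = sum(x * x for x in v)
        if best_norm is None or norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm

v, n2 = brute_force_svp([(3, 1), (1, 3)])
print(v, n2)  # a shortest vector, e.g. (2, -2) up to sign, squared norm 8
```
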

### Flood and Submerse: Distributed Key Generation and Robust Threshold Signature from Lattices

We propose a new framework based on random submersions (that is, projections onto a random subspace blinded by a small Gaussian noise) for constructing verifiable short secret sharing, and we showcase it by constructing efficient threshold lattice-based signatures in the hash-and-sign paradigm, based on noise flooding. This is, to our knowledge, the first hash-and-sign lattice-based threshold signature. Our threshold signature enjoys the very desirable property of robustness, including at key generation. In practice, we are able to construct a robust hash-and-sign threshold signature and provide a typical parameter set for threshold T = 16 with a signature size of 13 kB. Our constructions are provably secure under the standard MLWE assumption in the ROM and only require basic primitives as building blocks. In particular, we do not rely on FHE-type schemes.
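
The bare "random submersion" operation can be sketched in a few lines; all names and parameters below are illustrative rather than taken from the paper, and none of the verifiability or threshold machinery is shown.

```python
import random

def random_submersion(secret, out_dim, sigma=0.5, seed=7):
    """Toy sketch of a random submersion as described in the abstract:
    project the secret vector through a random wide matrix A (a surjection
    onto a lower-dimensional space with high probability) and blind the
    image with small Gaussian noise e, releasing y = A*s + e.  The matrix
    entries and noise width here are arbitrary illustrative choices."""
    rng = random.Random(seed)
    n = len(secret)
    A = [[rng.randrange(-2, 3) for _ in range(n)] for _ in range(out_dim)]
    e = [rng.gauss(0, sigma) for _ in range(out_dim)]
    y = [sum(A[i][j] * secret[j] for j in range(n)) + e[i]
         for i in range(out_dim)]
    return A, y

A, y = random_submersion([1, -2, 0, 3, 1], out_dim=2)
```

The noise plays the flooding role: it hides the exact image of the secret while keeping y close enough to A*s for verification-style checks.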

### *-Liftings for Differential Privacy

Recent developments in formal verification have identified approximate liftings (also known as approximate couplings) as a clean, compositional abstraction for proving differential privacy. There are two styles of definitions for this construction. Earlier definitions require the existence of one or more witness distributions, while a recent definition by Sato uses universal quantification over all sets of samples. These notions have different strengths and weaknesses: the universal version is more general than the existential ones, but the existential versions enjoy more precise composition principles.
We propose a novel, existential version of approximate lifting, called *-lifting, and show that it is equivalent to Sato's construction for discrete probability measures. Our work unifies all known notions of approximate lifting, giving cleaner properties, more general constructions, and more precise composition theorems for both styles of lifting, enabling richer proofs of differential privacy. We also clarify the relation between existing definitions of approximate lifting, and generalize our constructions to approximate liftings based on f-divergences.

### Two-Round Threshold Signature from Algebraic One-More Learning with Errors

Threshold signatures have recently seen renewed interest due to applications in cryptocurrency, while NIST has released a call for multi-party threshold schemes, with submissions expected in the first half of 2025. So far, all lattice-based threshold signatures requiring fewer than two rounds are based on heavy tools such as (fully) homomorphic encryption (FHE) and homomorphic trapdoor commitments (HTDC). This is not unexpected, considering that most efficient two-round signatures from classical assumptions rely either on idealized models such as the algebraic group model or on one-more-type assumptions, for none of which we have a nice analogue in the lattice world.
In this work, we construct the first efficient two-round lattice-based threshold signature without relying on FHE or HTDC. It has an offline-online feature: the first round can be preprocessed without knowing the message or the signer set, effectively making the signing phase non-interactive. The signature size is small and scales well; for example, even for a threshold as large as 1024 signers, we achieve a signature size of roughly 11 KB. At the heart of our construction is a new lattice-based assumption called the algebraic one-more learning with errors (AOMMLWE) assumption. We believe it to be a valuable addition to the lattice toolkit and of independent interest. We establish the selective security of AOMMLWE based on the standard MLWE and MSIS assumptions, and provide an in-depth analysis of its adaptive security, on which our threshold signature is based.

### Quantum binary quadratic form reduction

Quadratic form reduction enjoys broad use in both classical and quantum algorithms,
such as the celebrated LLL algorithm for lattice reduction. In this paper, we propose the first quantum
circuit for definite binary quadratic form reduction, achieving $O(n \log n)$ depth, $O(n^2)$
width and $O(n^2 \log n)$ quantum gates. The construction is based on a
binary variant of the classical reduction algorithm for definite quadratic forms. As
side results, we present a quantum circuit performing bit rotation
with $O(\log n)$ depth, $O(n)$ width, and $O(n \log n)$ gates, as well as a circuit
computing the integer logarithm with $O(\log n)$ depth, $O(n)$ width, and $O(n)$ gates.
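
The classical loop that the quantum circuit binarizes can be stated in a few lines; this is the standard Gauss/Lagrange reduction of a positive definite form, with the usual reducedness condition |b| <= a <= c.

```python
def reduce_definite_form(a, b, c):
    """Classical (Gauss/Lagrange) reduction of a positive definite binary
    quadratic form a*x^2 + b*x*y + c*y^2.  A form is reduced when
    |b| <= a <= c; swaps and translations preserve the equivalence class
    and the discriminant b^2 - 4ac."""
    assert a > 0 and 4 * a * c - b * b > 0, "form must be positive definite"
    while not (abs(b) <= a <= c):
        if c < a:                       # swap the two variables
            a, b, c = c, -b, a
        else:                           # translate x -> x - k*y to shrink b
            k = round(b / (2 * a))
            a, b, c = a, b - 2 * k * a, c - k * b + k * k * a
    return a, b, c

print(reduce_definite_form(10, 34, 29))  # (1, 0, 1), discriminant -4
```

Each iteration either swaps or translates, mirroring the two gate layers a reduction circuit must alternate between.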

### Algebraic and Euclidean Lattices: Optimal Lattice Reduction and Beyond

We introduce a framework generalizing lattice reduction algorithms to module
lattices in order to practically and efficiently solve the $\gamma$-Hermite
Module-SVP problem over arbitrary cyclotomic fields. The core idea is to
exploit the structure of the subfields for designing a doubly-recursive
strategy of reduction: both recursive in the rank of the module and in the
field we are working in. Besides, we demonstrate how to leverage the inherent
symplectic geometry existing in the tower of fields to provide a significant
speed-up of the reduction for rank two modules. The recursive strategy over the
rank can also be applied to the reduction of Euclidean lattices, and we can
perform a reduction in asymptotically almost the same time as matrix
multiplication. As a byproduct of the design of these fast reductions, we also
generalize to all cyclotomic fields and provide speedups for many previous
number theoretical algorithms. Quantitatively, we show that a module of rank 2
over a cyclotomic field of degree $n$ can be heuristically reduced within
approximation factor $2^{\tilde{O}(n)}$ in time $\tilde{O}(n^2B)$, where $B$ is
the bitlength of the entries. For $B$ large enough, this complexity shrinks to
$\tilde{O}(n^{\log_2 3}B)$. This last result is particularly striking as it
goes below the estimate of $n^2B$ swaps given by the classical analysis of the
LLL algorithm using the so-called potential.
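
The rank-2 step at the heart of the recursive strategy has a classical Euclidean analogue, Lagrange-Gauss reduction, sketched here over the integers; the module version over towers of cyclotomic fields is far more involved.

```python
def lagrange_reduce(u, v):
    """Lagrange-Gauss reduction of a rank-2 integer lattice: repeatedly
    subtract the nearest integer multiple of the shorter vector from the
    longer one, like a vector analogue of the Euclidean GCD, until no
    further improvement is possible."""
    norm = lambda w: w[0] * w[0] + w[1] * w[1]
    if norm(u) > norm(v):
        u, v = v, u
    while True:
        # project v on u and subtract the nearest integer multiple
        k = round((u[0] * v[0] + u[1] * v[1]) / norm(u))
        v = (v[0] - k * u[0], v[1] - k * u[1])
        if norm(v) >= norm(u):
            return u, v
        u, v = v, u

print(lagrange_reduce((1, 0), (1000, 1)))  # ((1, 0), (0, 1))
```
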
