Simple, compact and robust approximate string dictionary
This paper is concerned with practical implementations of approximate string
dictionaries that allow edit errors. In this problem, we have as input a
dictionary of strings of total length n over an alphabet of size σ.
Given a bound k and a pattern P of length m, a query has to
return all the strings of the dictionary that are at edit distance at most k
from P, where the edit distance between two strings u and v is defined as
the minimum cost of a sequence of edit operations that transforms u into v. The
cost of a sequence of operations is defined as the sum of the costs of the
operations involved in the sequence. In this paper, we assume that each of
these operations has unit cost and consider only three operations: deletion of
one character, insertion of one character, and substitution of one character by
another. We present a practical implementation of the data structure we
recently proposed, which works only for one error, and we extend the scheme to
k > 1. Our implementation has many desirable properties: it has a very
fast and space-efficient building algorithm. The dictionary data structure is
compact and has fast and robust query time. Finally, our data structure is
simple to implement, as it only uses basic techniques from the literature,
mainly hashing (linear probing and hash signatures) and succinct data
structures (bitvectors supporting rank queries).
Comment: Accepted to a journal (19 pages, 2 figures)
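The queries above are defined over the standard unit-cost edit (Levenshtein) distance with the three listed operations; a minimal reference implementation of just that distance (not the paper's dictionary structure) might look like:

```python
def edit_distance(u: str, v: str) -> int:
    """Unit-cost edit distance: one deletion, insertion, or substitution per step."""
    prev = list(range(len(v) + 1))  # row for the empty prefix of u
    for i, cu in enumerate(u, 1):
        curr = [i]
        for j, cv in enumerate(v, 1):
            curr.append(min(
                prev[j] + 1,               # delete cu
                curr[j - 1] + 1,           # insert cv
                prev[j - 1] + (cu != cv),  # substitute (free when equal)
            ))
        prev = curr
    return prev[-1]

# A query with bound k reports every dictionary string w with
# edit_distance(P, w) <= k; the paper's structure avoids the full scan.
```

A naive dictionary would evaluate this distance against every stored string; the point of the paper's structure is to answer the same query without that scan.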
Deterministic Computations on a PRAM with Static Processor and Memory Faults
We consider a Parallel Random Access Machine (PRAM) in which some processors
and memory cells are faulty. The faults considered are static, i.e., once the
machine starts to operate, the operational/faulty status of PRAM components
does not change. We develop a deterministic simulation of a fully operational
PRAM on a similar faulty machine that has constant fractions of faults among
its processors and memory cells. The simulating faulty PRAM simulates a
fault-free PRAM with as many processors and a constant fraction of as many
memory cells. The simulation is in two phases: it starts with preprocessing,
which is followed by the simulation proper, performed in a step-by-step
fashion. Preprocessing is performed once up front; the step-by-step part of
the simulation then incurs a bounded slowdown per simulated step.
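One common way to realize this kind of preprocessing is to build, from the fault pattern, a table redirecting each logical component to an operational physical one; the sketch below is a hypothetical illustration of that idea, not the construction from the paper:

```python
def remap_operational(faulty: list) -> list:
    """Map logical indices 0..k-1 to the k operational physical indices.

    faulty[i] is True when physical component i (a processor or a memory
    cell) is broken; with a constant fraction of faults, a constant
    fraction of components remains usable for the simulation.
    """
    return [i for i, bad in enumerate(faulty) if not bad]

# Hypothetical fault pattern over 8 memory cells:
table = remap_operational([False, True, False, False, True, False, False, True])
# During the step-by-step simulation, logical cell j is accessed at
# physical address table[j].
```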
Improving post-quantum cryptography through cryptanalysis
Large quantum computers pose a threat to our public-key cryptographic infrastructure. The possible responses are:
1. Do nothing; accept the fact that quantum computers might be used to break widely deployed protocols.
2. Mitigate the threat by switching entirely to symmetric-key protocols.
3. Mitigate the threat by switching to different public-key protocols.
Each user of public-key cryptography will make one of these choices, and we should not expect consensus. Some users will do nothing---perhaps because they view the threat as being too remote. And some users will find that they never needed public-key cryptography in the first place.
The work that I present here is for people who need public-key cryptography and want to switch to new protocols. Each of the three articles raises the security estimate of a cryptosystem by showing that some attack is less effective than was previously believed. Each article thereby reduces the cost of using a protocol by letting the user choose smaller (or more efficient) parameters at a fixed level of security.
In Part 1, I present joint work with Samuel Jaques in which we revise security estimates for the Supersingular Isogeny Key Exchange (SIKE) protocol. We show that known quantum claw-finding algorithms do not outperform classical claw-finding algorithms. This allows us to recommend 434-bit primes for use in SIKE at the security level for which 503-bit primes had previously been recommended.
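Classical claw finding, the baseline in this comparison, is a meet-in-the-middle search: given functions f : A → C and g : B → C, find a pair (a, b) with f(a) = g(b). A toy version with hypothetical stand-in functions (a real SIKE attack would use isogeny walks):

```python
def find_claw(f, g, domain_a, domain_b):
    """Return some (a, b) with f(a) == g(b), or None.

    Stores all of f's outputs in a hash table, then streams g's outputs
    through it -- O(|A| + |B|) evaluations, at the price of |A| memory.
    """
    seen = {}
    for a in domain_a:
        seen.setdefault(f(a), a)
    for b in domain_b:
        fa = seen.get(g(b))
        if fa is not None:
            return fa, b
    return None

# Hypothetical toy instance standing in for the two isogeny walk functions:
claw = find_claw(lambda a: (7 * a) % 101, lambda b: (b * b) % 101,
                 range(50), range(50))
```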
In Part 2, I present joint work with Martin Albrecht, Vlad Gheorghiu, and Eamonn Postlethwaite that examines the impact of quantum search on sieving algorithms for the shortest vector problem. Cryptographers commonly assume that the cost of solving the shortest vector problem in dimension d is 2^(0.265d) quantumly and 2^(0.292d) classically. These are upper bounds based on a near neighbor search algorithm due to Becker--Ducas--Gama--Laarhoven. Naively, one might think that d must be at least 128/0.265 ≈ 483 to avoid quantum attacks that cost fewer than 2^128 operations. Our analysis accounts for terms in the o(d) that were previously ignored. In a realistic model of quantum computation, we find that applying the Becker--Ducas--Gama--Laarhoven algorithm costs significantly more than the naive estimate suggests. We also find reason to believe that the classical algorithm will outperform the quantum algorithm in cryptographically relevant dimensions.
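The naive dimension thresholds in this paragraph are simple arithmetic on the assumed sieving exponents; a quick check, using a 128-bit security target as an example:

```python
import math

QUANTUM_EXP = 0.265    # assumed cost 2^(0.265 d) for the quantum sieve
CLASSICAL_EXP = 0.292  # assumed cost 2^(0.292 d) for the classical sieve

def naive_min_dimension(security_bits: float, exponent: float) -> int:
    """Smallest d with exponent * d >= security_bits, ignoring o(d) terms."""
    return math.ceil(security_bits / exponent)

d_quantum = naive_min_dimension(128, QUANTUM_EXP)      # 484 = ceil(128/0.265)
d_classical = naive_min_dimension(128, CLASSICAL_EXP)  # 439 = ceil(128/0.292)
```

Accounting for the o(d) terms moves these thresholds, which is exactly the point of the analysis.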
In Part 3, I present solo work on a variant of post-quantum RSA. The original pqRSA proposal by Bernstein--Heninger--Lou--Valenta uses terabyte keys of the form N = p_1 * p_2 * ... * p_m where each p_i is a b-bit prime. My variant uses terabyte keys of the form N = p_1^(e_1) * p_2^(e_2) * ... * p_m^(e_m) where each p_i is a b-bit prime and e_i is the i-th prime. Prime generation is the most expensive part of post-quantum RSA in practice, so the smaller number of prime factors in my proposal gives a large speedup in key generation. However, the repeated factors help an attacker identify an element of small order, and thereby allow the attacker to use a small-order variant of Shor's algorithm. I analyze small-order attacks and discuss the cost of the classical pre-computation that they require.
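To see why the variant generates far fewer primes, note that with exponents e_i equal to the i-th prime, each new prime factor contributes e_i * b bits to the key, so the running sum of the exponents, rather than the count of primes, has to reach the key size. A small sketch with hypothetical sizes (the real proposal uses terabyte keys):

```python
def distinct_primes_needed(total_bits: int, prime_bits: int) -> int:
    """Count distinct primes p_1..p_m such that sum(e_i) * prime_bits
    reaches total_bits, where e_i is the i-th prime (2, 3, 5, 7, ...)."""
    def primes():
        n, found = 2, []
        while True:
            if all(n % p for p in found):  # trial division by smaller primes
                found.append(n)
                yield n
            n += 1

    bits, count = 0, 0
    for e in primes():
        bits += e * prime_bits
        count += 1
        if bits >= total_bits:
            return count

# Hypothetical example: a 2^20-bit key built from 1024-bit primes.
flat = (2 ** 20) // 1024                        # 1024 primes in a flat product
repeated = distinct_primes_needed(2 ** 20, 1024)  # 25 distinct primes
```

The key is the same size in both cases, but the variant performs far fewer prime generations, which the abstract identifies as the dominant cost.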
Numerical estimation of densities
[Abridged] We present a novel technique, dubbed FiEstAS, to estimate the
underlying density field from a discrete set of sample points in an arbitrary
multidimensional space. FiEstAS assigns a volume to each point by means of a
binary tree. Density is then computed by integrating over an adaptive kernel.
As a first test, we construct several Monte Carlo realizations of a Hernquist
profile and recover the particle density in both real and phase space. At a
given point, Poisson noise causes the unsmoothed estimates to fluctuate by a
factor of ~2, regardless of the number of particles. This spread can be reduced
to about 0.1 dex (~26 per cent) by our smoothing procedure. [...] We conclude
that our algorithm accurately measures the phase-space density up to the limit
where discreteness effects render the simulation itself unreliable.
Computationally, FiEstAS is orders of magnitude faster than the method based on
Delaunay tessellation employed by Arad et al., making it practicable to recover
smoothed density estimates for sets of 10^9 points in 6 dimensions.
Comment: 12 pages, 18 figures, submitted to MNRAS. The code is available upon request.
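The core idea of assigning each sample point a volume via a binary tree can be sketched in a few lines; this is a toy illustration of tree-based volume splitting (without FiEstAS's adaptive-kernel smoothing), not the authors' code:

```python
import numpy as np

def tree_volumes(points, lo, hi, idx=None):
    """Split the box [lo, hi] along its widest axis, between the two median
    points, until each leaf holds one point; the leaf volume is that
    point's cell volume."""
    if idx is None:
        idx = np.arange(len(points))
    if len(idx) == 1:
        return {int(idx[0]): float(np.prod(hi - lo))}
    axis = int(np.argmax(hi - lo))                 # widest dimension
    order = idx[np.argsort(points[idx, axis])]
    half = len(order) // 2
    cut = 0.5 * (points[order[half - 1], axis] + points[order[half], axis])
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[axis] = cut
    right_lo[axis] = cut
    cells = tree_volumes(points, lo, left_hi, order[:half])
    cells.update(tree_volumes(points, right_lo, hi, order[half:]))
    return cells

# One point per cell, so the raw (unsmoothed) estimate is rho_i = 1 / volume_i.
rng = np.random.default_rng(0)
pts = rng.random((128, 2))
cells = tree_volumes(pts, np.zeros(2), np.ones(2))
rho = {i: 1.0 / v for i, v in cells.items()}
```

The cells partition the bounding box exactly, so the volumes sum to the total volume; the abstract's smoothing step would then integrate these raw estimates over an adaptive kernel.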
Quantum Cost Models for Cryptanalysis of Isogenies
Isogeny-based cryptography uses keys large enough to resist a far-future attack from
Tani’s algorithm, a quantum random walk on Johnson graphs. The key size is based on an
analysis in the query model. Queries do not reflect the full cost of an algorithm, and this
thesis considers other cost models. These models fit in a memory peripheral framework,
which focuses on the classical control costs of a quantum computer. Rather than queries,
we use the costs of individual gates, error correction, and latency. Primarily, these costs
make quantum memory access expensive, and thus Tani's memory-intensive algorithm is
no longer the best attack against isogeny-based cryptography. A classical algorithm due to
van Oorschot and Wiener can be faster and cheaper, depending on the model used and the
availability of time and hardware. This means that isogeny-based cryptography is more
secure than previously thought.
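The van Oorschot--Wiener method is a parallel collision search built on "distinguished points": many walkers iterate a random-looking function and report only points matching a fixed pattern, so merging walks are detected with little memory. A serial toy sketch of the distinguished-point idea (hypothetical function, not an actual isogeny attack):

```python
import hashlib

def step(x: int) -> int:
    """Toy random-looking function on a 20-bit space."""
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:3], "big") & 0xFFFFF

def collision_search(dist_bits: int = 6, max_trails: int = 100000):
    """Walk from successive starts until a 'distinguished' point is hit
    (low dist_bits all zero); when two trails end at the same distinguished
    point, they merged, which implies a collision along the way."""
    seen = {}  # distinguished endpoint -> (start, trail length)
    mask = (1 << dist_bits) - 1
    for start in range(max_trails):
        x, steps = start, 0
        while x & mask != 0:
            x = step(x)
            steps += 1
            if steps > 1 << (dist_bits + 4):   # abort rare overlong trails
                break
        else:
            if x in seen and seen[x][0] != start:
                return seen[x], (start, steps)  # two merged trails
            seen[x] = (start, steps)
    return None
```

Replaying the two stored trails from their starts locates the colliding pair; only distinguished points are stored, which is what makes massively parallel, hardware-bound runs of the kind discussed above affordable.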