Polynomial-Time Key Recovery Attack on the Faure-Loidreau Scheme based on Gabidulin Codes
Encryption schemes based on the rank metric lead to small public key sizes, on the order of a few thousand bytes, which is a very attractive feature compared to Hamming metric-based encryption schemes, whose public key sizes are on the order of hundreds of thousands of bytes even with additional structure such as cyclicity. The main tool for building public key encryption schemes in the rank metric is the McEliece encryption setting used with the family of Gabidulin codes. Since the original scheme proposed in 1991 by Gabidulin, Paramonov and Tretjakov, many systems have been proposed based on different masking techniques for Gabidulin codes. Nevertheless, over the years all of these systems were broken, essentially through an attack proposed by Overbeck.
In 2005 Faure and Loidreau designed a rank-metric encryption scheme which was not in the McEliece setting. The scheme is very efficient, with small public keys of a few kilobytes, and its security is closely related to the linearized polynomial reconstruction problem, which corresponds to the decoding problem of Gabidulin codes. The structure of the scheme differs considerably from the classical McEliece setting and, until our work, the scheme had never been attacked. We show in this article that this scheme, like other schemes based on Gabidulin codes, is vulnerable to a polynomial-time attack that recovers the private key by applying Overbeck's attack to an appropriate public code. As an example, we break concrete proposed security parameters in a few seconds.
Comment: To appear in Designs, Codes and Cryptography Journal
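The rank metric underlying these schemes measures the weight of a vector over $\mathbb{F}_{2^m}$ as the rank of the matrix obtained by expanding each coordinate into a column over $\mathbb{F}_2$. A minimal sketch of this computation, assuming coordinates are already given as packed m-bit integers (illustrative only, not code from the paper):

```python
def gf2_rank(rows):
    # Gaussian elimination over GF(2); each row is an integer bitmask
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        lsb = pivot & -pivot            # lowest set bit of the pivot row
        rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def rank_weight(coords):
    # coords: a vector over GF(2^m), each coordinate expanded on a fixed
    # GF(2)-basis and packed into an m-bit integer; the rank weight is
    # the GF(2)-rank of the resulting matrix
    return gf2_rank(coords)
```

Note that repeated coordinates add to the Hamming weight but not to the rank weight: `[0b101, 0b101, 0b011]` has Hamming weight 3 but rank weight 2.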
Conditionals in Homomorphic Encryption and Machine Learning Applications
Homomorphic encryption aims at allowing computations on encrypted data
without decryption other than that of the final result. This could provide an
elegant solution to the issue of privacy preservation in data-based
applications, such as those using machine learning, but several open issues
hamper this plan. In this work we assess the possibility for homomorphic encryption to fully implement its program without relying on other techniques, such as secure multiparty computation (SMPC), which may be impossible in many use cases (for instance due to the high level of communication required). We
proceed in two steps: i) on the basis of the structured program theorem
(Bohm-Jacopini theorem) we identify the relevant minimal set of operations
homomorphic encryption must be able to perform to implement any algorithm; and
ii) we analyse the possibility to solve -- and propose an implementation for --
the most fundamentally relevant issue as it emerges from our analysis, that is,
the implementation of conditionals (requiring comparison and selection/jump
operations). We show how this issue clashes with the fundamental requirements
of homomorphic encryption and could represent a drawback for its use as a
complete solution for privacy preservation in data-based applications, in
particular machine learning ones. Our approach for comparisons is novel and entirely embedded in homomorphic encryption, while previous studies relied on other techniques, such as SMPC, which demand a high level of communication among parties and decryption of intermediate results by data owners. Our protocol is also provably safe (sharing the same safety as the homomorphic encryption schemes), unlike other techniques such as Order-Preserving/Order-Revealing Encryption (OPE/ORE).
Comment: 14 pages, 1 figure; corrected typos, added introductory pedagogical section on polynomial approximation
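A standard way to realize comparison using only the additions and multiplications a homomorphic scheme offers is to approximate the sign function by an iterated low-degree polynomial. The sketch below runs on plaintext floats to show the idea and is an illustrative stand-in, not the protocol of the paper:

```python
def approx_sign(x, iters=20):
    # iterate f(t) = (3t - t^3) / 2, which drives t in (-1, 1) toward
    # +/-1; only additions and multiplications (by constants) appear,
    # exactly the operations a homomorphic scheme can evaluate
    for _ in range(iters):
        x = (3 * x - x ** 3) / 2
    return x

def approx_compare(a, b, scale=1.0):
    # ~1.0 if a > b, ~0.0 if a < b; (a - b) / scale must lie in (-1, 1)
    return (approx_sign((a - b) / scale) + 1) / 2
```

Under homomorphic encryption the same polynomial would be evaluated on ciphertexts; the number of iterations then trades off accuracy near `a == b` against multiplicative depth.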
On the Complexity of the Generalized MinRank Problem
We study the complexity of solving the \emph{generalized MinRank problem}, i.e., computing the set of points where the evaluation of a polynomial matrix has rank at most $r$. A natural algebraic representation of this problem gives rise to a \emph{determinantal ideal}: the ideal generated by all minors of size $r+1$ of the matrix. We give new complexity bounds for solving this problem
using Gr\"obner bases algorithms under genericity assumptions on the input
matrix. In particular, these complexity bounds allow us to identify families of
generalized MinRank problems for which the arithmetic complexity of the solving
process is polynomial in the number of solutions. We also provide an algorithm to compute a rational parametrization of the variety of a 0-dimensional and radical system of given bi-degree. We show that its complexity can be bounded using the complexity bounds for the generalized MinRank problem.
Comment: 29 pages
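For intuition, the generalized MinRank problem can be stated concretely: given a matrix whose entries are polynomials, find the points where its evaluation has rank at most r. The toy sketch below solves tiny instances over a small prime field by exhaustive search, standing in for the Gröbner-basis machinery of the paper (all parameters and names are illustrative):

```python
from itertools import product

P = 5  # toy prime field GF(5); not a parameter from the paper

def rank_mod_p(M, p=P):
    # row-reduce a matrix with entries in GF(p) and return its rank
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(rank, len(M)) if M[i][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], -1, p)        # modular inverse (Python 3.8+)
        M[rank] = [x * inv % p for x in M[rank]]
        for i in range(len(M)):
            if i != rank and M[i][c] % p:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank

def minrank_points(poly_matrix, nvars, r, p=P):
    # poly_matrix[i][j] is a callable GF(p)^nvars -> GF(p); exhaustive
    # search over all points stands in for the Groebner-basis approach
    return [pt for pt in product(range(p), repeat=nvars)
            if rank_mod_p([[f(*pt) % p for f in row]
                           for row in poly_matrix], p) <= r]
```

For example, the matrix [[x, 1], [0, x]] drops to rank 1 exactly at x = 0, the unique root of its determinant x^2.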
Point compression for the trace zero subgroup over a small degree extension field
Using Semaev's summation polynomials, we derive a new equation for the rational points of the trace zero variety of an elliptic curve defined over a finite field. Using this equation, we produce an optimal-size
representation for such points. Our representation is compatible with scalar
multiplication. We give a point compression algorithm to compute the
representation and a decompression algorithm to recover the original point (up
to some small ambiguity). The algorithms are efficient for trace zero varieties
coming from small degree extension fields. We give explicit equations and
discuss in detail the practically relevant cases of cubic and quintic field
extensions.
Comment: 23 pages, to appear in Designs, Codes and Cryptography
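As a simpler analogue of compression "up to some small ambiguity", classic elliptic curve point compression stores only the x-coordinate and the parity of y, recovering y by a modular square root. The sketch below uses a toy curve over GF(p) with p ≡ 3 (mod 4); the paper compresses trace zero points instead, so this is purely illustrative, with made-up parameters:

```python
# Toy curve E: y^2 = x^3 + A*x + B over GF(P); P % 4 == 3 so square
# roots are a single exponentiation. Parameters are illustrative only.
P, A, B = 10007, 3, 7

def compress(x, y):
    return (x, y & 1)                      # keep x and the parity of y

def decompress(x, parity):
    rhs = (x * x * x + A * x + B) % P
    y = pow(rhs, (P + 1) // 4, P)          # sqrt mod p for p = 3 mod 4
    if y * y % P != rhs:
        raise ValueError("x is not the x-coordinate of a curve point")
    return (x, y if (y & 1) == parity else P - y)
```

The "small ambiguity" here is the sign of y, resolved by a single stored bit; the paper's representation plays the analogous role for trace zero points over extension fields.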
Interactive certificate for the verification of Wiedemann's Krylov sequence: application to the certification of the determinant, the minimal and the characteristic polynomials of sparse matrices
Certificates for a linear algebra computation are additional data structures for each output, which can be used by a (possibly randomized) verification algorithm that proves the correctness of each output. Wiedemann's algorithm projects the Krylov sequence obtained by repeatedly multiplying a vector by a matrix to obtain a linearly recurrent sequence. The minimal polynomial of this sequence divides the minimal polynomial of the matrix. For instance, if the input matrix is sparse with n^{1+o(1)} non-zero entries, the computation of the sequence is quadratic in the dimension of the matrix, while the computation of the minimal polynomial is n^{1+o(1)} once the projected Krylov sequence is obtained. In this paper we give algorithms that compute certificates for the Krylov sequence of sparse or structured matrices over an abstract field, whose Monte Carlo verification complexity can be made essentially linear. As an application, this gives certificates for the determinant, the minimal and characteristic polynomials of sparse or structured matrices at the same cost.
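The projected sequence and its minimal polynomial can be sketched in a few lines: compute s_i = u^T A^i v by repeated matrix-vector products, then recover the shortest linear recurrence with the Berlekamp-Massey algorithm. A toy dense version over a small prime field, assuming none of the paper's certification machinery:

```python
P = 101  # small prime field; stand-in for the abstract field

def krylov_sequence(A, u, v, length, p=P):
    # s_i = u^T A^i v mod p; for a sparse A each step is one cheap
    # matrix-vector product (dense products here for simplicity)
    seq, w = [], v[:]
    for _ in range(length):
        seq.append(sum(ui * wi for ui, wi in zip(u, w)) % p)
        w = [sum(A[i][j] * w[j] for j in range(len(w))) % p
             for i in range(len(A))]
    return seq

def berlekamp_massey(s, p=P):
    # shortest linear recurrence of s over GF(p); returns [c0..cL] with
    # c0 = 1 and sum_i c_i * s[n-i] = 0 for n >= L; reversing the list
    # gives the monic minimal polynomial of the sequence
    C, B = [1], [1]
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        d = s[n] % p                      # discrepancy at position n
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d * pow(b, -1, p) % p
        C += [0] * (len(B) + m - len(C))
        for i, bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]
```

With A the Fibonacci companion matrix [[1, 1], [1, 0]], the projected sequence is 0, 1, 1, 2, 3, 5, ... and Berlekamp-Massey recovers the recurrence s[n] = s[n-1] + s[n-2], i.e. the minimal polynomial x^2 - x - 1, which here equals the minimal polynomial of A itself.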
Evaluation of Cryptographic Algorithms
This article presents a synthesis of evaluation methods for cryptographic algorithms and of their efficiency within practical applications. It also covers the main operations carried out in cryptanalysis and the main categories and methods of attack, in order to clarify the difference between evaluating and cracking a cryptographic algorithm. Keywords: cryptology, cryptanalysis, evaluation and cracking of cryptographic algorithms
Foundations, Properties, and Security Applications of Puzzles: A Survey
Cryptographic algorithms have been used not only to create robust ciphertexts
but also to generate cryptograms that, contrary to the classic goal of
cryptography, are meant to be broken. These cryptograms, generally called
puzzles, require the use of a certain amount of resources to be solved, hence
introducing a cost that is often regarded as a time delay---though it could
involve other metrics as well, such as bandwidth. These powerful features have
made puzzles the core of many security protocols, acquiring increasing
importance in the IT security landscape. The concept of a puzzle has
subsequently been extended to other types of schemes that do not use
cryptographic functions, such as CAPTCHAs, which are used to discriminate
humans from machines. Overall, puzzles have experienced a renewed interest with
the advent of Bitcoin, which uses a CPU-intensive puzzle as proof of work. In
this paper, we provide a comprehensive study of the most important puzzle
construction schemes available in the literature, categorizing them according
to several attributes, such as resource type, verification type, and
applications. We have redefined the term puzzle by collecting and integrating
the scattered notions used in different works, to cover all the existing
applications. Moreover, we provide an overview of the possible applications,
identifying key requirements and different design approaches. Finally, we
highlight the features and limitations of each approach, providing a useful
guide for the future development of new puzzle schemes.
Comment: This article has been accepted for publication in ACM Computing Surveys
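A CPU-intensive puzzle of the kind Bitcoin uses as proof of work can be sketched as a partial hash-preimage search: solving costs about 2^k hash evaluations for k difficulty bits, while verification costs a single hash. A minimal illustration (function names are ours, not from the survey):

```python
import hashlib

def solve_puzzle(data: bytes, difficulty_bits: int) -> int:
    # find a nonce such that sha256(data || nonce) starts with
    # `difficulty_bits` zero bits; expected cost ~ 2**difficulty_bits
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify_puzzle(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    # verification is a single hash: cheap for anyone to check
    h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry between the solving loop and the one-hash verification is exactly the property that makes such puzzles useful as a cost-imposing primitive.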