708 research outputs found
On the Complexity of Solving Quadratic Boolean Systems
A fundamental problem in computer science is to find all the common zeroes of
$m$ quadratic polynomials in $n$ unknowns over $\mathbb{F}_2$. The
cryptanalysis of several modern ciphers reduces to this problem. Up to now, the
best complexity bound was reached by an exhaustive search in $4\log_2 n \cdot 2^n$
operations. We give an algorithm that reduces the problem to a combination of
exhaustive search and sparse linear algebra. This algorithm has several
variants depending on the method used for the linear algebra step. Under
precise algebraic assumptions on the input system, we show that the
deterministic variant of our algorithm has complexity bounded by
$O(2^{0.841n})$ when $m = n$, while a probabilistic variant of the Las Vegas type
has expected complexity $O(2^{0.792n})$. Experiments on random systems show
that the algebraic assumptions are satisfied with probability very close to~1.
We also give a rough estimate for the actual threshold between our method and
exhaustive search, which is as low as~200, and thus very relevant for
cryptographic applications.
Comment: 25 pages
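The exhaustive-search baseline that this bound improves on is easy to sketch. The snippet below enumerates all $2^n$ assignments against a system of quadratic polynomials over $\mathbb{F}_2$; the sparse monomial representation is illustrative, not the paper's, and this is the plain enumeration, not the hybrid search/sparse-linear-algebra algorithm.

```python
from itertools import product

def solve_quadratic_f2(polys, n):
    """Exhaustively find all common zeroes of quadratic polynomials over F_2.

    Each polynomial is a triple (quad, lin, const): a set of index pairs
    (i, j) for terms x_i * x_j, a set of indices for linear terms, and a
    constant bit.  (Illustrative representation, not the paper's.)
    """
    solutions = []
    for x in product((0, 1), repeat=n):          # all 2^n assignments
        if all(
            (sum(x[i] * x[j] for i, j in quad)
             + sum(x[i] for i in lin) + c) % 2 == 0
            for quad, lin, c in polys
        ):
            solutions.append(x)
    return solutions

# Example: x0*x1 = 0 and x0 + 1 = 0 has the unique solution (1, 0).
polys = [({(0, 1)}, set(), 0), (set(), {0}, 1)]
```

The paper's point is precisely that this $2^n$ loop can be beaten asymptotically by trading part of the enumeration for sparse linear algebra.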
Fast Quantum Algorithm for Solving Multivariate Quadratic Equations
In August 2015 the cryptographic world was shaken by a sudden and surprising
announcement by the US National Security Agency (NSA) concerning plans to
transition to post-quantum algorithms. Since this announcement, post-quantum
cryptography has become a topic of primary interest for several standardization
bodies. The transition from the currently deployed public-key algorithms to
post-quantum algorithms has been found to be challenging in many aspects. In
particular the problem of evaluating the quantum-bit security of such
post-quantum cryptosystems remains vastly open. This question is, of course, of
primary concern in the process of standardizing post-quantum
cryptosystems. In this paper we consider the quantum security of the problem of
solving a system of {\it $m$ Boolean multivariate quadratic equations in $n$
variables} (\MQb); a central problem in post-quantum cryptography. When $m = n$,
under a natural algebraic assumption, we present a Las Vegas quantum algorithm
solving \MQb{} that requires the evaluation of, on average, $O(2^{0.462n})$
quantum gates. To our knowledge this is the fastest algorithm for solving
\MQb{}.
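The abstract does not spell out the construction, but quantum algorithms of this kind are built on Grover-style amplitude amplification, with an oracle circuit that checks the quadratic equations. The sketch below is a plain-Python statevector simulation of generic Grover search (the marked set stands in for the equation-checking oracle); it illustrates the quadratic amplification, not the paper's algorithm.

```python
import math

def grover_iterations(num_items, num_solutions=1):
    # standard iteration count, roughly (pi/4) * sqrt(N / M)
    return math.floor((math.pi / 4) * math.sqrt(num_items / num_solutions))

def grover_success_probability(n_qubits, marked, iterations):
    """Simulate Grover's algorithm on a list-based statevector and return
    the probability of measuring a marked item."""
    N = 2 ** n_qubits
    amp = [1 / math.sqrt(N)] * N                 # uniform superposition
    for _ in range(iterations):
        for m in marked:                         # oracle: flip marked signs
            amp[m] = -amp[m]
        mean = sum(amp) / N                      # diffusion: reflect about mean
        amp = [2 * mean - a for a in amp]
    return sum(amp[m] ** 2 for m in marked)
```

After the recommended number of iterations the success probability is close to 1, using only about $\sqrt{N}$ oracle calls instead of $N$.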
Linear Time Interactive Certificates for the Minimal Polynomial and the Determinant of a Sparse Matrix
Computational problem certificates are additional data structures for each output, which can be used by a (possibly randomized) verification algorithm that proves the correctness of each output. In this paper, we give an algorithm that computes a certificate for the minimal polynomial of sparse or structured n×n matrices over an abstract field, of sufficiently large cardinality, whose Monte Carlo verification complexity requires a single matrix-vector multiplication and a linear number of extra field operations. We also propose a novel preconditioner that ensures irreducibility of the characteristic polynomial of the generically preconditioned matrix. This preconditioner takes linear time to be applied and uses only two random entries. We then combine these two techniques to give algorithms that compute certificates for the determinant, and thus for the characteristic polynomial, whose Monte Carlo verification complexity is therefore also linear.
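For contrast with the single-matvec certificate above, the naive Monte Carlo check that a claimed polynomial annihilates a matrix costs deg-many matrix-vector products per trial. The sketch below implements that baseline over a prime field (Horner evaluation of $p(A)v$ for random $v$); it is the reference point the paper improves on, not the paper's certificate.

```python
import random

def matvec(A, v, p):
    # one matrix-vector product over Z/pZ
    return [sum(a * x for a, x in zip(row, v)) % p for row in A]

def probably_annihilates(A, coeffs, p, trials=5):
    """Monte Carlo test that sum_i coeffs[i] * A^i is the zero matrix,
    by checking poly(A) v = 0 for random vectors v via Horner's scheme.
    Costs deg(poly) matvecs per trial -- the naive baseline that the
    paper's certificates reduce to a single matvec plus O(n) field ops."""
    n = len(A)
    for _ in range(trials):
        v = [random.randrange(p) for _ in range(n)]
        w = [coeffs[-1] * x % p for x in v]
        for c in reversed(coeffs[:-1]):          # Horner step: w = A w + c v
            w = matvec(A, w, p)
            w = [(wi + c * vi) % p for wi, vi in zip(w, v)]
        if any(w):
            return False
    return True
```

For example, diag(2, 3) over GF(7) is annihilated by $(x-2)(x-3) = x^2 + 2x + 6 \bmod 7$, i.e. coefficients `[6, 2, 1]` in low-degree-first order.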
Computational linear algebra over finite fields
We present algorithms for the efficient computation of linear algebra
problems over finite fields.
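As a concrete instance of such computations, here is textbook Gaussian elimination over a prime field GF(p) in plain Python. Production finite-field libraries use far more sophisticated techniques (delayed modular reductions, cache-friendly blocking); this sketch only shows the underlying algorithm.

```python
def row_reduce_gfp(M, p):
    """In-place reduced row echelon form of matrix M over GF(p), p prime.
    Returns the rank.  Field inverses come from Fermat's little theorem:
    a^(p-2) = a^(-1) mod p."""
    rows, cols = len(M), len(M[0])
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r][col] % p), None)
        if pivot is None:
            continue                              # no pivot in this column
        M[rank], M[pivot] = M[pivot], M[rank]     # swap pivot row up
        inv = pow(M[rank][col], p - 2, p)         # pivot inverse mod p
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(rows):                     # eliminate the column
            if r != rank and M[r][col] % p:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank
```

Rank computation, system solving, and determinants over finite fields all reduce to variants of this elimination.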
A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints
We propose a new algorithm to solve optimization problems of the form
$\min_X f(X)$ for a smooth function $f$ under the constraints that $X$ is positive
semidefinite and the diagonal blocks of $X$ are small identity matrices. Such
problems often arise as the result of relaxing a rank constraint (lifting). In
particular, many estimation tasks involving phases, rotations, orthonormal
bases or permutations fit in this framework, and so do certain relaxations of
combinatorial problems such as Max-Cut. The proposed algorithm exploits the
facts that (1) such formulations admit low-rank solutions, and (2) their
rank-restricted versions are smooth optimization problems on a Riemannian
manifold. Combining insights from both the Riemannian and the convex geometries
of the problem, we characterize when second-order critical points of the smooth
problem reveal KKT points of the semidefinite problem. We compare against state
of the art, mature software and find that, on certain interesting problem
instances, what we call the staircase method is orders of magnitude faster, is
more accurate and scales better. Code is available.
Comment: 37 pages, 3 figures
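The simplest instance of this framework is the Max-Cut relaxation with blocks of size 1: the constraint becomes diag(X) = 1, and the rank-restricted problem lives on a product of unit spheres. The toy below runs Riemannian gradient descent on that manifold for the Burer-Monteiro-style factorization $X = YY^\top$; it is a minimal sketch of the geometry, not the paper's staircase implementation.

```python
import math, random

def maxcut_staircase(edges, n, rank=2, steps=500, lr=0.1, seed=0):
    """Minimize sum_{(i,j) in edges} <y_i, y_j> over Y in R^{n x rank}
    with unit-norm rows (the rank-restricted Max-Cut SDP relaxation),
    by Riemannian gradient descent on a product of spheres."""
    rng = random.Random(seed)
    normalize = lambda v: [x / math.sqrt(sum(t * t for t in v)) for x in v]
    Y = [normalize([rng.gauss(0, 1) for _ in range(rank)]) for _ in range(n)]
    for _ in range(steps):
        # Euclidean gradient: g_i = sum of y_j over neighbours j of i
        G = [[0.0] * rank for _ in range(n)]
        for i, j in edges:
            for k in range(rank):
                G[i][k] += Y[j][k]
                G[j][k] += Y[i][k]
        for i in range(n):
            dot = sum(G[i][k] * Y[i][k] for k in range(rank))
            # Riemannian gradient: project onto the sphere's tangent space,
            # step, then retract back to the sphere by renormalizing
            tangent = [G[i][k] - dot * Y[i][k] for k in range(rank)]
            Y[i] = normalize([Y[i][k] - lr * tangent[k] for k in range(rank)])
    obj = sum(sum(Y[i][k] * Y[j][k] for k in range(rank)) for i, j in edges)
    return Y, obj
```

On a triangle graph the minimizer spreads the three unit vectors 120° apart, giving an objective value of -1.5, which matches the SDP optimum since a rank-2 solution exists.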
Operationalizing Individual Fairness with Pairwise Fair Representations
We revisit the notion of individual fairness proposed by Dwork et al. A
central challenge in operationalizing their approach is the difficulty in
eliciting a human specification of a similarity metric. In this paper, we
propose an operationalization of individual fairness that does not rely on a
human specification of a distance metric. Instead, we propose novel approaches
to elicit and leverage side-information on equally deserving individuals to
counter subordination between social groups. We model this knowledge as a
fairness graph, and learn a unified Pairwise Fair Representation (PFR) of the
data that captures both data-driven similarity between individuals and the
pairwise side-information in the fairness graph. We elicit fairness judgments from
a variety of sources, including human judgments for two real-world datasets on
recidivism prediction (COMPAS) and violent neighborhood prediction (Crime &
Communities). Our experiments show that the PFR model for operationalizing
individual fairness is practically viable.
Comment: To be published in the proceedings of the VLDB Endowment, Vol. 13,
Issue.
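The published PFR objective is more involved, but the pairwise ingredient can be sketched as a graph-smoothness penalty: representations of individuals that the fairness graph marks as equally deserving are pulled together. The function below is a hedged illustration of that idea, not the paper's loss.

```python
def pairwise_fairness_penalty(Z, fairness_edges):
    """Graph-smoothness penalty: sum of squared Euclidean distances between
    the learned representations of individuals connected in the fairness
    graph.  A sketch of the pairwise term's role in an objective like
    PFR's, not the published formulation."""
    return sum(
        sum((zi - zj) ** 2 for zi, zj in zip(Z[i], Z[j]))
        for i, j in fairness_edges
    )
```

Adding such a term to a reconstruction loss trades data-driven similarity against the side-information encoded in the fairness graph.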
Logic-Based Explainability in Machine Learning
The last decade witnessed an ever-increasing stream of successes in Machine
Learning (ML). These successes offer clear evidence that ML is bound to become
pervasive in a wide range of practical uses, including many that directly
affect humans. Unfortunately, the operation of the most successful ML models is
incomprehensible for human decision makers. As a result, the use of ML models,
especially in high-risk and safety-critical settings, is not without concern. In
recent years, there have been efforts on devising approaches for explaining ML
models. Most of these efforts have focused on so-called model-agnostic
approaches. However, all model-agnostic and related approaches offer no
guarantees of rigor, hence being referred to as non-formal. For example, such
non-formal explanations can be consistent with different predictions, which
renders them useless in practice. This paper overviews the ongoing research
efforts on computing rigorous model-based explanations of ML models; these
being referred to as formal explanations. These efforts encompass a variety of
topics, that include the actual definitions of explanations, the
characterization of the complexity of computing explanations, the currently
best logical encodings for reasoning about different ML models, and also how to
make explanations interpretable for human decision makers, among others.
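A standard object in this literature is the abductive explanation: a subset of feature values that by itself forces the model's prediction, regardless of the remaining features. The brute-force sketch below finds a cardinality-minimal one for a black-box boolean classifier; real formal-explanation tools encode this question into SAT/SMT/MILP solvers rather than enumerating, so this is illustrative only.

```python
from itertools import combinations, product

def abductive_explanation(predict, instance):
    """Cardinality-minimal abductive explanation by brute force: the
    smallest set of feature indices whose values in `instance` force the
    classifier's prediction on every completion of the remaining boolean
    features.  Exponential time; for illustration, not for real models."""
    n = len(instance)
    target = predict(instance)

    def sufficient(subset):
        # check that every completion of the free features keeps the prediction
        free = [i for i in range(n) if i not in subset]
        for bits in product((0, 1), repeat=len(free)):
            x = list(instance)
            for i, b in zip(free, bits):
                x[i] = b
            if predict(tuple(x)) != target:
                return False
        return True

    for size in range(n + 1):                    # smallest subsets first
        for subset in combinations(range(n), size):
            if sufficient(set(subset)):
                return set(subset)

# Example: for the classifier x0 AND x1, the instance (1, 1, 0) is
# explained by features {0, 1}; feature 2 is irrelevant.
```

Unlike non-formal (model-agnostic) explanations, the returned set carries a guarantee: no assignment of the remaining features can change the prediction.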