Cohomology in Grothendieck Topologies and Lower Bounds in Boolean Complexity
This paper is motivated by questions such as P vs. NP and other questions in
Boolean complexity theory. We describe an approach to attacking such questions
with cohomology, and we show that using Grothendieck topologies and other ideas
from the Grothendieck school gives new hope for such an attack.
We focus on circuit depth complexity, and consider only finite topological
spaces or Grothendieck topologies based on finite categories; as such, we do
not use algebraic geometry or manifolds.
Given two sheaves on a Grothendieck topology, their "cohomological
complexity" is the sum of the dimensions of their Ext groups. We seek to model
the depth complexity of Boolean functions by the cohomological complexity of
sheaves on a Grothendieck topology. We propose that the logical AND of two
Boolean functions will have its corresponding cohomological complexity bounded
in terms of those of the two functions using ``virtual zero extensions.'' We
propose that the logical negation of a function will have its corresponding
cohomological complexity equal to that of the original function using duality
theory. We explain these approaches and show that they are stable under
pullbacks and base change. It is the subject of ongoing work to achieve AND and
negation bounds simultaneously in a way that yields an interesting depth lower
bound.
Comment: 70 pages, abstract corrected and modified.
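The "cohomological complexity" above admits a one-line definition. Writing F and G for the two sheaves, and assuming (as the abstract implies) that the Ext groups are finite-dimensional vector spaces over a field k, it is the sum of their dimensions; the notation c(-,-) below is ours, not the paper's:

```latex
c(\mathcal{F}, \mathcal{G}) \;=\; \sum_{i \ge 0} \dim_k \operatorname{Ext}^i(\mathcal{F}, \mathcal{G})
```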
New algorithms for decoding in the rank metric and an attack on the LRPC cryptosystem
We consider the decoding problem, or the problem of finding low-weight
codewords, for rank metric codes. We show how additional information about the
codeword we want to find, in the form of certain linear combinations of the
entries of the codeword, leads to algorithms with better complexity. This is
then used together with a folding technique for attacking a McEliece scheme
based on LRPC codes. It leads to a feasible attack on one of the parameters
suggested in \cite{GMRZ13}.
Comment: A shortened version of this paper will be published in the
proceedings of the IEEE International Symposium on Information Theory 2015
(ISIT 2015).
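To fix the metric in question: the rank weight of a word over GF(2^m) is the rank, over GF(2), of the m x n matrix whose columns are the bit-expansions of its entries. The sketch below (our illustration, not the paper's algorithm) computes it for entries given directly as m-bit integers, leaving the choice of basis of GF(2^m) over GF(2) implicit:

```python
def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are Python ints (bitmasks)."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = max(rows)
        rows.remove(pivot)
        if pivot == 0:
            continue
        rank += 1
        top = pivot.bit_length() - 1
        # Eliminate the pivot's leading bit from every remaining row.
        rows = [r ^ pivot if (r >> top) & 1 else r for r in rows]
    return rank

def rank_weight(word_bits):
    """Rank weight of a word over GF(2^m); each entry is an m-bit integer."""
    # The entries are the columns of the m x n expansion matrix, and row
    # rank equals column rank, so we can feed them to gf2_rank directly.
    return gf2_rank(word_bits)

# Example over GF(2^2): the word (0b01, 0b10, 0b11) has Hamming weight 3,
# but 0b11 = 0b01 ^ 0b10, so its rank weight is only 2.
print(rank_weight([0b01, 0b10, 0b11]))  # -> 2
```

Low rank weight is exactly what makes LRPC codes efficient to decode, and it is this structure that the folding attack above exploits.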
Learning with Errors is easy with quantum samples
Learning with Errors is one of the fundamental problems in computational
learning theory and has in recent years become the cornerstone of
post-quantum cryptography. In this work, we study the quantum sample complexity
of Learning with Errors and show that there exists an efficient quantum
learning algorithm (with polynomial sample and time complexity) for the
Learning with Errors problem where the error distribution is the one used in
cryptography. While our quantum learning algorithm does not break the LWE-based
encryption schemes proposed in the cryptography literature, it does have some
interesting implications for cryptography: first, when building an LWE-based
scheme, one needs to be careful about the access to the public-key generation
algorithm that is given to the adversary; second, our algorithm shows a
possible way of attacking LWE-based encryption: use classical samples to
approximate the quantum sample state, at which point our quantum learning
algorithm would solve LWE.
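For context, a classical LWE sample is a pair (a, b) with b = &lt;a, s&gt; + e mod q, where e is drawn from a narrow error distribution; the quantum samples studied above are superpositions over such pairs. The sketch below generates classical samples with rounded-Gaussian noise; the parameter values are illustrative choices, not taken from the paper:

```python
import random

def lwe_samples(s, q, num, sigma):
    """Yield classical LWE samples (a, b) with b = <a, s> + e mod q."""
    n = len(s)
    out = []
    for _ in range(num):
        a = [random.randrange(q) for _ in range(n)]
        e = round(random.gauss(0, sigma))      # rounded-Gaussian error
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        out.append((a, b))
    return out

random.seed(0)
q, n = 97, 8                                   # toy parameters
s = [random.randrange(q) for _ in range(n)]    # the secret
samples = lwe_samples(s, q, num=5, sigma=1.0)
```

Without the error term e, a few samples and Gaussian elimination recover s; the noise is what makes the classical problem hard, and the point of the paper is that quantum access to the sample state removes that barrier.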
Constraint Complexity of Realizations of Linear Codes on Arbitrary Graphs
A graphical realization of a linear code C consists of an assignment of the
coordinates of C to the vertices of a graph, along with a specification of
linear state spaces and linear ``local constraint'' codes to be associated with
the edges and vertices, respectively, of the graph. The κ-complexity of a
graphical realization is defined to be the largest dimension of any of its
local constraint codes. κ-complexity is a reasonable measure of the
computational complexity of a sum-product decoding algorithm specified by a
graphical realization. The main focus of this paper is on the following
problem: given a linear code C and a graph G, how small can the κ-complexity
of a realization of C on G be? As useful tools for attacking this problem, we
introduce the Vertex-Cut Bound, and the notion of ``vc-treewidth'' for a graph,
which is closely related to the well-known graph-theoretic notion of treewidth.
Using these tools, we derive tight lower bounds on the κ-complexity of any
realization of C on G. Our bounds enable us to conclude that good
error-correcting codes can have low-complexity realizations only on graphs with
large vc-treewidth. Along the way, we also prove the interesting result that
the ratio of the κ-complexity of the best conventional trellis realization
of a length-n code C to the κ-complexity of the best cycle-free realization
of C grows at most logarithmically with codelength n. Such a logarithmic growth
rate is, in fact, achievable.
Comment: Submitted to IEEE Transactions on Information Theory.
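One concrete quantity behind the trellis comparison above is the state complexity of the minimal conventional trellis for a fixed coordinate order. A standard consequence of the past/future subcode decomposition is that the state-space dimension at cut i equals rank(G restricted to columns 1..i) + rank(G restricted to columns i+1..n) - k. The sketch below (our illustration of this classical formula, not the paper's Vertex-Cut Bound) computes the maximum over all cuts for a binary code:

```python
def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are Python ints (bitmasks)."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = max(rows)
        rows.remove(pivot)
        if pivot == 0:
            continue
        rank += 1
        top = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> top) & 1 else r for r in rows]
    return rank

def state_complexity(G):
    """Max state-space dimension over all cuts; G is a list of 0/1 rows."""
    to_int = lambda bits: int("".join(map(str, bits)), 2)
    k = gf2_rank([to_int(row) for row in G])
    n = len(G[0])
    best = 0
    for i in range(1, n):
        past = gf2_rank([to_int(row[:i]) for row in G])     # columns 1..i
        future = gf2_rank([to_int(row[i:]) for row in G])   # columns i+1..n
        best = max(best, past + future - k)
    return best

# [7,4] Hamming code in a standard systematic form.
G_hamming = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
print(state_complexity(G_hamming))   # -> 3
print(state_complexity([[1, 1, 1]])) # -> 1 (repetition code: 2-state trellis)
```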
Complexity of Chess Domination Problems
We study different domination problems of attacking and non-attacking rooks
and queens on polyominoes and polycubes of all dimensions. Our main result
proves that maximal domination is NP-complete for non-attacking queens and for
non-attacking rooks on polycubes of dimension three and higher. We also analyse
these problems for polyominoes and convex polyominoes, conjecture the
complexity classes and provide a computer tool for investigation. We have also
computed new values for classical queen domination problems on chessboards
(square polyominoes). For our computations, we have translated the problem into
an integer linear programming instance. Finally, using this computational
implementation and the game engine Godot, we have developed a video game of
minimal domination of queens and rooks on randomly generated polyominoes.
Comment: 19 pages, 20 figures, 4 tables. Theorem 1 now for d>2, added results
on approximation, fixed typos, reorganised some proofs.
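The classical problem on square boards mentioned above can be checked exhaustively for tiny cases. The brute force below is our illustration of the problem statement only; the paper's integer-linear-programming formulation is what scales to the new values reported:

```python
from itertools import combinations

def attacks(q, cell):
    """Queen attack on an unobstructed board: same row, column, or diagonal."""
    (r1, c1), (r2, c2) = q, cell
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def dominates(queens, n):
    """Every cell is occupied or attacked by some queen."""
    return all(any(q == cell or attacks(q, cell) for q in queens)
               for cell in ((r, c) for r in range(n) for c in range(n)))

def queen_domination_number(n):
    """Smallest dominating set of queens on the n x n board (brute force)."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    for size in range(1, n * n + 1):
        if any(dominates(combo, n) for combo in combinations(cells, size)):
            return size

print(queen_domination_number(4))  # -> 2 (known value for the 4x4 board)
```

A counting argument shows why one queen cannot suffice on 4x4: a single queen covers at most 4 + 3 + 3 + 3 = 13 of the 16 cells.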
A simple hybrid algorithm for improving team sport AI
In the very popular genre of team sports games, defeating the opposing AI is the main focus of the gameplay experience. However, the overall quality of these games is significantly damaged because, in many cases, the opposition is prone to mistakes or vulnerable to exploitation. This paper introduces an AI system which overcomes this failing through the addition of simple adaptive learning and prediction algorithms to a basic ice hockey defence. The paper shows that improvements can be made to the gameplay experience without overly increasing the implementation complexity of the system or negatively affecting its performance. The created defensive system detects patterns in the offensive tactics used against it and changes elements of its reaction accordingly, effectively adapting to attempted exploitation of repeated tactics. This is achieved using a fuzzy inference system that tracks player movement, which greatly improves the variation of defender positioning, alongside an N-gram pattern-recognition-based algorithm that predicts the next action of the attacking player. Analysis of implementation complexity and execution overhead shows that these techniques are not prohibitively expensive in either respect, and are therefore appropriate for use in games.
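The N-gram prediction component above can be sketched in a few lines: count how often each (n-1)-length context of attacker actions is followed by each next action, and predict the most frequent continuation. The action names and window length below are illustrative, not taken from the paper:

```python
from collections import Counter, defaultdict

class NGramPredictor:
    def __init__(self, n=3):
        self.n = n                       # n-gram length (context is n-1 actions)
        self.history = []
        self.counts = defaultdict(Counter)

    def observe(self, action):
        """Record an attacker action and update n-gram counts."""
        self.history.append(action)
        if len(self.history) >= self.n:
            context = tuple(self.history[-self.n:-1])
            self.counts[context][action] += 1

    def predict(self):
        """Most frequent continuation of the current context, else None."""
        context = tuple(self.history[-(self.n - 1):])
        if context in self.counts:
            return self.counts[context].most_common(1)[0][0]
        return None

p = NGramPredictor(n=3)
for a in ["deke", "shoot", "deke", "shoot", "deke"]:
    p.observe(a)
print(p.predict())  # -> shoot
```

A defender driven by such a predictor can pre-position against the anticipated action while falling back to its default behaviour when the context has never been seen.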
A New Algorithm for Solving Ring-LPN with a Reducible Polynomial
The LPN (Learning Parity with Noise) problem has recently proved to be of
great importance in cryptology. A special and very useful case is the Ring-LPN
problem, which typically provides improved efficiency in the constructed
cryptographic primitive. We present a new algorithm for solving the Ring-LPN
problem in the case when the polynomial used is reducible. It greatly
outperforms previous algorithms for solving this problem. Using the algorithm,
we can break the Lapin authentication protocol for the proposed instance using
a reducible polynomial, in about 2^70 bit operations.
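To fix notation: a plain binary LPN sample is a pair (a, b) with b = &lt;a, s&gt; + e mod 2 and e ~ Ber(tau); in Ring-LPN, a and s live in F_2[x]/(f), and a reducible f lets samples be projected onto a smaller quotient ring, which is the structural weakness exploited above. The sketch below generates plain LPN samples with illustrative parameters:

```python
import random

def lpn_sample(s, tau):
    """One binary LPN sample (a, b) with b = <a, s> + e mod 2, e ~ Ber(tau)."""
    a = [random.randrange(2) for _ in s]
    e = 1 if random.random() < tau else 0
    b = (sum(ai & si for ai, si in zip(a, s)) + e) % 2
    return a, b

random.seed(1)
s = [random.randrange(2) for _ in range(16)]   # toy secret
samples = [lpn_sample(s, tau=0.125) for _ in range(10)]
```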