8,089 research outputs found
Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials
We consider the problem of representing Boolean functions exactly by "sparse"
linear combinations (over $\mathbb{R}$) of functions from some "simple" class
$\mathcal{C}$. In particular, given $\mathcal{C}$ we are interested in finding
low-complexity functions lacking sparse representations. When $\mathcal{C}$ is the
set of PARITY functions or the set of conjunctions, this sort of problem has a
well-understood answer; the problem becomes interesting when $\mathcal{C}$ is
"overcomplete" and the set of functions is not linearly independent. We focus
on the cases where $\mathcal{C}$ is the set of linear threshold functions, the set
of rectified linear units (ReLUs), and the set of low-degree polynomials over a
finite field, all of which are well-studied in different contexts.
We provide generic tools for proving lower bounds on representations of this
kind. Applying these, we give several new lower bounds for "semi-explicit"
Boolean functions. For example, we show there are functions in nondeterministic
quasi-polynomial time that require super-polynomial size:
- depth-two neural networks with sign activation function, a special case of depth-two threshold circuit lower bounds;
- depth-two neural networks with ReLU activation function;
- $\mathbb{R}$-linear combinations of low-degree $\mathbb{F}_p$-polynomials, for every prime $p$ (related to problems regarding Higher-Order "Uncertainty Principles"); we also obtain a function in $E^{NP}$ requiring exponentially many linear combinations;
- $\mathbb{R}$-linear combinations of $ACC \circ THR$ circuits of polynomial size (further generalizing the recent lower bounds of Murray and the author).
(The above is a shortened abstract. For the full abstract, see the paper.)
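As a tiny illustration of the kind of representation in question (not taken from the paper), XOR on two Boolean inputs can be written exactly as a sparse $\mathbb{R}$-linear combination of just two ReLU units:

```python
# Illustrative sketch: XOR as an R-linear combination of two ReLUs on {0,1}^2.
# The identity XOR(x1, x2) = ReLU(x1 + x2) - 2*ReLU(x1 + x2 - 1) holds only
# for Boolean inputs; this is exactly an exact "sparse" representation by
# simple functions, with sparsity 2.
def relu(z):
    return max(z, 0.0)

def xor_via_relus(x1, x2):
    return relu(x1 + x2) - 2.0 * relu(x1 + x2 - 1)

# verify exactness on all Boolean inputs
for x1 in (0, 1):
    for x2 in (0, 1):
        assert xor_via_relus(x1, x2) == (x1 ^ x2)
```

The lower bounds in the paper concern functions for which no such representation with few terms exists.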
Eliminating Variables in Boolean Equation Systems
Systems of Boolean equations of low degree arise in a natural way when
analyzing block ciphers. The cipher's round functions relate the secret key to
auxiliary variables that are introduced by each successive round. In algebraic
cryptanalysis, the attacker attempts to solve the resulting equation system in
order to extract the secret key. In this paper we study algorithms for
eliminating the auxiliary variables from these systems of Boolean equations. It
is known that elimination of variables in general increases the degree of the
equations involved. In order to contain computational complexity and storage
complexity, we present two new algorithms for performing elimination while
bounding the degree at the lowest level possible for elimination. Further, we
show that the new algorithms are related to the well-known \emph{XL}
algorithm. We apply the algorithms to a downscaled version of the LowMC cipher
and to a toy cipher based on the Prince cipher, and report on experimental
results pertaining to these examples. Comment: 21 pages, 3 figures, journal paper
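Concretely, from $f_1 = x g_1 + h_1$ and $f_2 = x g_2 + h_2$, the combination $g_2 f_1 + g_1 f_2 = g_2 h_1 + g_1 h_2$ is free of $x$. This resultant-style step can be sketched over GF(2) as follows; the set-of-monomials representation is an illustrative assumption, not the paper's data structure or algorithm:

```python
# Boolean polynomials as sets of monomials over GF(2); a monomial is a
# frozenset of variable names, and x^2 = x holds implicitly because
# monomials are sets. Addition is XOR of monomial sets.
def add(p, q):
    return p ^ q

def mul(p, q):
    out = set()
    for m1 in p:
        for m2 in q:
            out ^= {m1 | m2}   # XOR handles cancellation over GF(2)
    return out

def split(p, x):
    # write p = x*g + h with g and h free of x
    g = {m - {x} for m in p if x in m}
    h = {m for m in p if x not in m}
    return g, h

def eliminate(f1, f2, x):
    # g2*f1 + g1*f2 = g2*h1 + g1*h2 contains no occurrence of x
    g1, h1 = split(f1, x)
    g2, h2 = split(f2, x)
    return add(mul(g2, h1), mul(g1, h2))

X, A, B, C = "x", "a", "b", "c"
f1 = {frozenset({X, A}), frozenset({B})}   # x*a + b
f2 = {frozenset({X}), frozenset({C})}      # x + c
r = eliminate(f1, f2, X)                   # a*c + b, free of x
assert r == {frozenset({A, C}), frozenset({B})}
```

Note how the products in `eliminate` are where the degree growth mentioned in the abstract comes from: multiplying by the cofactors $g_1, g_2$ can raise the degree of the resulting equations.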
An iterative approach for counting reduced ordered binary decision diagrams
For three decades, binary decision diagrams, a data structure that efficiently
represents Boolean functions, have been widely used in many distinct contexts
such as model verification, machine learning, cryptography, and the resolution
of combinatorial problems. The most famous variant, called reduced ordered binary
decision diagram (ROBDD for short), can be viewed as the result of a compaction
procedure on the full decision tree. A useful property is that once an order
over the Boolean variables is fixed, each Boolean function is represented by
exactly one ROBDD. In this paper we aim at computing the exact distribution of
the Boolean functions in $k$ variables according to the ROBDD size, where the
ROBDD size is equal to the number of decision nodes of the underlying directed
acyclic graph (DAG for short) structure. Recall that the number of Boolean
functions in $k$ variables is equal to $2^{2^k}$, which is of double exponential
growth with respect to the number of variables. The maximal size of an ROBDD with
$k$ variables is $\Theta(2^k / k)$. Apart from the natural combinatorial
explosion observed, another difficulty for computing the distribution according
to size is to take into account dependencies within the DAG structure of
ROBDDs. In this paper, we develop the first polynomial algorithm to derive the
distribution of Boolean functions over $k$ variables with respect to ROBDD size
$n$. The algorithm computes the (enumerative) generating function of
ROBDDs with $k$ variables up to size $n$. It performs a polynomial number of
arithmetical operations on large integers, and necessitates storing integers of
polynomial bit length. Our new approach relies on a decomposition of ROBDDs
layer by layer and on an inclusion-exclusion argument.
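A minimal sketch of the compaction procedure (hash-consing plus removal of redundant tests) under a fixed variable order; the class and method names are invented for illustration, and the decision-node count is the "ROBDD size" the distribution is taken over:

```python
# Build an ROBDD from a truth table by Shannon expansion, sharing equal
# subgraphs via a unique table and skipping redundant tests (lo == hi).
# Terminals use ids 0 and 1; decision nodes get ids from 2 upward.
class ROBDD:
    def __init__(self):
        self.table = {}    # unique table: (var, lo, hi) -> node id
        self.nodes = []    # decision nodes, as (var, lo, hi) triples

    def mk(self, var, lo, hi):
        if lo == hi:                 # redundant test: no node needed
            return lo
        key = (var, lo, hi)
        if key not in self.table:    # hash-consing: share equal subgraphs
            self.table[key] = len(self.nodes) + 2
            self.nodes.append(key)
        return self.table[key]

    def build(self, truth, var=0):
        # truth: tuple of 2^k booleans, the first variable being most significant
        if len(truth) == 1:
            return int(truth[0])
        half = len(truth) // 2
        lo = self.build(truth[:half], var + 1)   # branch var = 0
        hi = self.build(truth[half:], var + 1)   # branch var = 1
        return self.mk(var, lo, hi)

bdd = ROBDD()
xor3 = tuple(bool(a ^ b ^ c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
root = bdd.build(xor3)
assert len(bdd.nodes) == 5   # parity of k variables needs 2k - 1 decision nodes
```

Enumerating ROBDDs by size is much harder than building one: the sharing enforced by the unique table creates exactly the dependencies within the DAG structure that the abstract mentions.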
A variation on bisecting the binomial coefficients
In this paper, we present an algorithm which allows us to search for all the
bisections of the binomial coefficients $\binom{n}{0}, \binom{n}{1}, \ldots,
\binom{n}{n}$, and include a table with the results for all $n$ up to a fixed
bound. Connections with previous work on this topic are included. We conjecture
that, asymptotically, almost all $n$ admit only trivial bisections. Comment: 14 pages, four tables, two figures
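For small $n$, bisections can be found by brute force. This sketch (not the paper's algorithm) encodes a bisection as a pattern of signs $e_i \in \{+1, -1\}$ with $\sum_i e_i \binom{n}{i} = 0$:

```python
# Brute-force search for bisections of the binomial coefficients
# C(n,0), ..., C(n,n): sign patterns whose signed sum is zero.
from itertools import product
from math import comb

def bisections(n):
    coeffs = [comb(n, i) for i in range(n + 1)]
    found = []
    for signs in product((1, -1), repeat=n + 1):
        # fix signs[0] = +1 to quotient out the global sign flip
        if signs[0] == 1 and sum(s * c for s, c in zip(signs, coeffs)) == 0:
            found.append(signs)
    return found

# For n = 4 the only bisection is the alternating one, restating the
# classical identity (1 - 1)^4 = sum_i (-1)^i C(4, i) = 0.
assert bisections(4) == [(1, -1, 1, -1, 1)]
```

The alternating pattern exists for every $n \ge 1$ by the binomial theorem; nontrivial bisections are the interesting objects the search is after.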
On a connection between the switching separability of a graph and that of its subgraphs
A graph of order $n$ is called switching separable if its modulo-2 sum
with some complete bipartite graph on the same set of vertices is divided into
two mutually independent subgraphs, each having at least two vertices. We prove
the following: if removing any one or two vertices of a graph always results in
a switching separable subgraph, then the graph itself is switching separable.
On the other hand, for every odd order greater than 4, there is a graph that is
not switching separable, but removing any vertex always results in a switching
separable subgraph. We show a connection with similar facts on the separability
of Boolean functions and the reducibility of $n$-ary quasigroups. Keywords:
two-graph, reducibility, separability, graph switching, Seidel switching, graph
connectivity, $n$-ary quasigroup. Comment: English: 9 pages; Russian: 9 pages
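On small graphs, both the switching operation and the separability test are easy to brute-force. The following sketch is not taken from the paper; edges are stored as frozensets and vertices are numbered 0 to n-1:

```python
# Switching at a vertex set S: complement all edges between S and its
# complement, i.e. take the modulo-2 sum with the complete bipartite
# graph K_{S, V\S}. A graph "splits" if some vertex bipartition with
# both sides of size >= 2 has no crossing edge.
from itertools import combinations

def switch(edges, n, S):
    comp = set(range(n)) - S
    toggled = {frozenset((u, v)) for u in S for v in comp}
    return edges ^ toggled   # symmetric difference = modulo-2 sum

def splits(edges, n):
    for k in range(2, n - 1):
        for A in combinations(range(n), k):
            A = set(A)
            if all(set(e) <= A or set(e).isdisjoint(A) for e in edges):
                return True
    return False

def switching_separable(edges, n):
    return any(splits(switch(edges, n, set(S)), n)
               for r in range(n + 1) for S in combinations(range(n), r))

# The 4-cycle 0-1-2-3-0 is switching separable: switching at S = {0, 2}
# removes all four edges, leaving two independent pairs of vertices.
cycle4 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
assert switching_separable(cycle4, 4)
```

This exhaustive check is exponential in the number of vertices, which is why the paper's structural results (deducing separability of a graph from that of its vertex-deleted subgraphs) are useful.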
On the Complexity of Solving Quadratic Boolean Systems
A fundamental problem in computer science is to find all the common zeroes of
$m$ quadratic polynomials in $n$ unknowns over $\mathbb{F}_2$. The
cryptanalysis of several modern ciphers reduces to this problem. Up to now, the
best complexity bound was reached by an exhaustive search in $4\log_2(n)\,2^n$
operations. We give an algorithm that reduces the problem to a combination of
exhaustive search and sparse linear algebra. This algorithm has several
variants depending on the method used for the linear algebra step. Under
precise algebraic assumptions on the input system, we show that the
deterministic variant of our algorithm has complexity bounded by
$O(2^{0.841n})$ when $m = n$, while a probabilistic variant of the Las Vegas
type has expected complexity $O(2^{0.792n})$. Experiments on random systems show
that the algebraic assumptions are satisfied with probability very close to 1.
We also give a rough estimate for the actual threshold between our method and
exhaustive search, which is as low as $n \approx 200$, and thus very relevant for
cryptographic applications. Comment: 25 pages
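The exhaustive-search baseline that the algorithm improves upon can be sketched directly; the toy system below and the set-of-monomials encoding are hypothetical, chosen only for illustration:

```python
# Exhaustive search for the common zeroes of quadratic polynomials over
# GF(2). A polynomial is a set of monomials; each monomial is a frozenset
# of variable indices, with the empty frozenset denoting the constant 1.
from itertools import product

def evaluate(poly, assignment):
    # sum over GF(2) of the monomials that evaluate to 1
    return sum(all(assignment[v] for v in mono) for mono in poly) % 2

def common_zeroes(polys, n):
    return [x for x in product((0, 1), repeat=n)
            if all(evaluate(p, x) == 0 for p in polys)]

# toy system: x0*x1 + x2 = 0  and  x0 + x1 + 1 = 0
f1 = {frozenset({0, 1}), frozenset({2})}
f2 = {frozenset({0}), frozenset({1}), frozenset()}
sols = common_zeroes([f1, f2], 3)
assert sols == [(0, 1, 0), (1, 0, 0)]
```

The hybrid approach of the paper replaces part of this $2^n$ loop: a subset of the variables is guessed exhaustively, and each resulting smaller system is attacked with sparse linear algebra instead of further enumeration.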