Tools for Quantum Algorithms
We present efficient implementations of a number of operations for quantum
computers. These include controlled phase adjustments of the amplitudes in a
superposition, permutations, approximations of transformations and
generalizations of the phase adjustments to block matrix transformations. These
operations generalize those used in proposed quantum search algorithms.
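To make "controlled phase adjustments of the amplitudes in a superposition" concrete, here is a minimal numpy sketch, not the paper's construction: a two-qubit controlled-phase gate multiplies the amplitude of one basis state by a unit-modulus phase and leaves the rest untouched. The gate, state, and angle are illustrative choices.

```python
import numpy as np

def controlled_phase(theta):
    """Two-qubit controlled-phase gate: multiplies the |11> amplitude
    by exp(i*theta) and leaves the other basis states unchanged."""
    return np.diag([1.0, 1.0, 1.0, np.exp(1j * theta)])

# Uniform superposition over the four two-qubit basis states.
state = np.full(4, 0.5, dtype=complex)

# Adjust the phase of the |11> amplitude by pi/2.
state = controlled_phase(np.pi / 2) @ state
print(state)  # [0.5+0j, 0.5+0j, 0.5+0j, 0+0.5j]
```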
A linear time algorithm for the orbit problem over cyclic groups
The orbit problem is at the heart of symmetry reduction methods for model
checking concurrent systems. It asks whether two given configurations in a
concurrent system (represented as finite strings over some finite alphabet) are
in the same orbit with respect to a given finite permutation group (represented
by its generators) acting on this set of configurations by permuting indices.
It is known that the problem is in general as hard as the graph isomorphism
problem, whose precise complexity (whether it is solvable in polynomial time)
is a long-standing open problem. In this paper, we consider the restriction of
the orbit problem when the permutation group is cyclic (i.e. generated by a
single permutation), an important restriction of the problem. It is known that
this subproblem is solvable in polynomial time. Our main result is a
linear-time algorithm for this subproblem.
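For concreteness, the decision problem can be stated as code. The sketch below is a brute-force reference check, not the paper's linear-time algorithm: it walks the orbit of s under the cyclic group generated by a single permutation pi and reports whether t is encountered, so its running time is the orbit length, which can be superpolynomial in n. The index-action convention is an assumption made for illustration.

```python
def apply_perm(pi, s):
    """Act on the string s by the index permutation pi: the character
    at position i moves to position pi[i]."""
    out = [None] * len(s)
    for i, c in enumerate(s):
        out[pi[i]] = c
    return ''.join(out)

def same_orbit(pi, s, t):
    """Decide whether t = pi^k(s) for some k >= 0 by walking the
    (finite) orbit of s under the cyclic group generated by pi."""
    u = s
    while True:
        if u == t:
            return True
        u = apply_perm(pi, u)
        if u == s:  # wrapped around the whole orbit without seeing t
            return False

# pi is the cyclic shift 0->1->2->3->0, so configurations are rotations.
pi = [1, 2, 3, 0]
print(same_orbit(pi, "abca", "caab"))  # True: caab = pi^2(abca)
```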
Finding Significant Fourier Coefficients: Clarifications, Simplifications, Applications and Limitations
Ideas from Fourier analysis have been used in cryptography for the last three
decades. Akavia, Goldwasser and Safra unified some of these ideas to give a
complete algorithm that finds significant Fourier coefficients of functions on
any finite abelian group. Their algorithm stimulated a lot of interest in the
cryptography community, especially in the context of 'bit security'. This
manuscript attempts to be a friendly and comprehensive guide to the tools and
results in this field. The intended readership is cryptographers who have heard
about these tools and seek an understanding of their mechanics and their
usefulness and limitations. A compact overview of the algorithm is presented
with emphasis on the ideas behind it. We show how these ideas can be extended
to a 'modulus-switching' variant of the algorithm. We survey some applications
of this algorithm, and explain that several results should be taken in the
right context. In particular, we point out that some of the most important bit
security problems are still open. Our original contributions include: a
discussion of the limitations on the usefulness of these tools; an answer to an
open question about the modular inversion hidden number problem.
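As a notational aid only (this is not the Akavia-Goldwasser-Safra algorithm), the sketch below computes every Fourier coefficient of a function on Z_N by a full DFT and keeps those whose squared magnitude clears a threshold tau. The whole point of the AGS algorithm is to find this significant set from oracle access to f in time polynomial in log N and 1/tau, which the exhaustive computation shown here does not achieve.

```python
import numpy as np

def significant_coefficients(f_values, tau):
    """Return {alpha: f_hat(alpha)} for all alpha in Z_N with
    |f_hat(alpha)|^2 >= tau, where
    f_hat(alpha) = (1/N) * sum_x f(x) * exp(-2*pi*1j*alpha*x/N)
    (numpy's forward DFT scaled by 1/N)."""
    N = len(f_values)
    f_hat = np.fft.fft(f_values) / N
    return {alpha: f_hat[alpha] for alpha in range(N)
            if abs(f_hat[alpha]) ** 2 >= tau}

# A character of Z_8 at frequency 3, plus a little real-valued noise.
N = 8
x = np.arange(N)
rng = np.random.default_rng(0)
f = np.exp(2j * np.pi * 3 * x / N) + 0.05 * rng.standard_normal(N)
print(significant_coefficients(f, tau=0.5))  # only frequency 3 survives
```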
Optimal Sparsification for Some Binary CSPs Using Low-degree Polynomials
This paper analyzes to what extent it is possible to efficiently reduce the
number of clauses in NP-hard satisfiability problems, without changing the
answer. Upper and lower bounds are established using the concept of
kernelization. Existing results show that if NP is not contained in coNP/poly,
no efficient preprocessing algorithm can reduce n-variable instances of CNF-SAT
with d literals per clause, to equivalent instances with O(n^{d-ε}) bits for
any ε > 0. For the Not-All-Equal SAT problem, a compression to size O(n^{d-1})
exists. We put these results in a common framework by analyzing
the compressibility of binary CSPs. We characterize constraint types based on
the minimum degree of multivariate polynomials whose roots correspond to the
satisfying assignments, obtaining (nearly) matching upper and lower bounds in
several settings. Our lower bounds show that not just the number of
constraints, but also the encoding size of individual constraints plays an
important role. For example, for Exact Satisfiability with unbounded clause
length it is possible to efficiently reduce the number of constraints to n+1,
yet no polynomial-time algorithm can reduce to an equivalent instance with
O(n^{2-ε}) bits for any ε > 0, unless NP is a subset of coNP/poly.
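The degree-1 case of this framework is easy to demonstrate. An Exact Satisfiability clause asserts that its literals sum to exactly 1, so every clause is a linear equation over the rationals; since linear polynomials in n variables span a space of dimension n+1, at most n+1 of the equations are linearly independent, and every discarded equation is automatically satisfied whenever the kept ones are. The sketch below (an illustrative encoding, not code from the paper) retains such a maximal independent subset by Gaussian elimination over exact rationals.

```python
from fractions import Fraction

def independent_subset(rows):
    """Each row (a_1, ..., a_n, b) encodes the linear equation
    a . x = b. Return indices of a maximal linearly independent
    subset; every discarded row is a rational combination of kept
    rows, so assignments satisfying the kept equations satisfy the
    discarded ones for free."""
    basis, kept = [], []
    for idx, row in enumerate(rows):
        v = [Fraction(c) for c in row]
        for b in basis:  # eliminate v against each kept row's pivot
            p = next(j for j, c in enumerate(b) if c != 0)
            if v[p] != 0:
                f = v[p] / b[p]
                v = [x - f * y for x, y in zip(v, b)]
        if any(c != 0 for c in v):  # v adds a new direction: keep it
            basis.append(v)
            kept.append(idx)
    return kept

# Exact-SAT clauses as linear equations, writing a negated literal
# NOT x as (1 - x), e.g. {x1, NOT x2} becomes x1 - x2 = 0.
eqs = [(1, -1, 0, 0),   # {x1, NOT x2}
       (0, 1, -1, 0),   # {x2, NOT x3}
       (1, 0, -1, 0)]   # {x1, NOT x3}: sum of the first two, redundant
print(independent_subset(eqs))  # [0, 1]
```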
On the complexity of solving linear congruences and computing nullspaces modulo a constant
We consider the problems of determining the feasibility of a linear
congruence, producing a solution to a linear congruence, and finding a spanning
set for the nullspace of an integer matrix, where each problem is considered
modulo an arbitrary constant k>1. These problems are known to be complete for
the logspace modular counting classes {Mod_k L} = {coMod_k L} in the special case
that k is prime (Buntrock et al, 1992). By considering variants of standard
logspace function classes --- related to #L and functions computable by UL
machines, but which only characterize the number of accepting paths modulo k
--- we show that these problems of linear algebra are also complete for
{coMod_k L} for any constant k>1.
Our results are obtained by defining a class of functions FUL_k which are low
for {Mod_k L} and {coMod_k L} for k>1, using ideas similar to those used in the
case of k prime in (Buntrock et al, 1992) to show closure of Mod_k L under NC^1
reductions (including {Mod_k L} oracle reductions). In addition to the results
above, we briefly consider the relationship of the class FUL_k for arbitrary
moduli k to the class {F.coMod_k L} of functions whose output symbols are
verifiable by {coMod_k L} algorithms; and consider what consequences such a
comparison may have for oracle closure results of the form {Mod_k L}^{Mod_k L}
= {Mod_k L} for composite k.
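While the paper's contribution is the logspace classification, the single-congruence case rests on elementary number theory that a short sketch can record: a*x ≡ b (mod k) is feasible iff gcd(a, k) divides b, and dividing through by the gcd leaves an invertible congruence. The function below is a reference implementation of that fact, not the paper's Mod_k L machinery.

```python
from math import gcd

def solve_congruence(a, b, k):
    """Solve a*x = b (mod k) for k > 1. Returns (x0, m) describing
    the full solution set {x0 + t*m : t integer} modulo k, or None
    when infeasible. Feasible iff g = gcd(a, k) divides b; then
    a/g is invertible modulo m = k/g."""
    g = gcd(a, k)
    if b % g != 0:
        return None
    m = k // g
    x0 = (b // g) * pow(a // g, -1, m) % m  # modular inverse (Python 3.8+)
    return x0, m

print(solve_congruence(6, 4, 10))  # (4, 5): x = 4 or 9 (mod 10)
print(solve_congruence(4, 3, 8))   # None: gcd(4, 8) = 4 does not divide 3
```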
PPP-Completeness with Connections to Cryptography
The Polynomial Pigeonhole Principle (PPP) is an important subclass of TFNP with
profound connections to the complexity of the fundamental cryptographic
primitives: collision-resistant hash functions and one-way permutations. In
contrast to most of the other subclasses of TFNP, no complete problem is known
for PPP. Our work identifies the first PPP-complete problem without any circuit
or Turing Machine given explicitly in the input, and thus we answer a
longstanding open question from [Papadimitriou1994]. Specifically, we show that
constrained-SIS (cSIS), a generalized version of the well-known Short Integer
Solution problem (SIS) from lattice-based cryptography, is PPP-complete.
In order to give intuition behind our reduction for constrained-SIS, we
identify another PPP-complete problem with a circuit in the input but closely
related to lattice problems. We call this problem BLICHFELDT and it is the
computational problem associated with Blichfeldt's fundamental theorem in the
theory of lattices.
Building on the inherent connection of PPP with collision-resistant hash
functions, we use our completeness result to construct the first natural hash
function family that captures the hardness of all collision-resistant hash
functions in a worst-case sense, i.e. it is natural and universal in the
worst case. The close resemblance of our hash function family to SIS leads
us to the first candidate collision-resistant hash function that is both
natural and universal in an average-case sense.
Finally, our results enrich our understanding of the connections between PPP,
lattice problems and other concrete cryptographic assumptions, such as the
discrete logarithm problem over general groups.
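The plain SIS hash family that cSIS generalizes is simple to state in code. The sketch below is the standard construction from the lattice literature with toy parameters chosen for readability, not security, and it is not the paper's constrained-SIS hash: the key is a uniformly random matrix A over Z_q, the hash of a short vector x is A x mod q, and any collision x != x' yields a short nonzero z = x - x' with A z = 0 (mod q), i.e. a solution to SIS.

```python
import numpy as np

def keygen(n, m, q, rng):
    """Hash key: a uniformly random n-by-m matrix over Z_q.
    The map compresses whenever m > n * log2(q)."""
    return rng.integers(0, q, size=(n, m))

def sis_hash(A, x, q):
    """Hash a binary vector x in {0,1}^m to A x mod q in Z_q^n.
    A collision (x, x') gives z = x - x' with entries in {-1, 0, 1}
    and A z = 0 (mod q): a short SIS solution for A."""
    return tuple(A @ x % q)

rng = np.random.default_rng(0)
n, m, q = 4, 32, 257  # toy parameters, far too small for real security
A = keygen(n, m, q, rng)
x = rng.integers(0, 2, size=m)
print(sis_hash(A, x, q))
```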
Integer-Forcing Source Coding
Integer-Forcing (IF) is a new framework, based on compute-and-forward, for
decoding multiple integer linear combinations from the output of a Gaussian
multiple-input multiple-output channel. This work applies the IF approach to
arrive at a new low-complexity scheme, IF source coding, for distributed lossy
compression of correlated Gaussian sources under a minimum mean squared error
distortion measure. All encoders use the same nested lattice codebook. Each
encoder quantizes its observation using the fine lattice as a quantizer and
reduces the result modulo the coarse lattice, which plays the role of binning.
Rather than directly recovering the individual quantized signals, the decoder
first recovers a full-rank set of judiciously chosen integer linear
combinations of the quantized signals, and then inverts it. In general, the
linear combinations have smaller average powers than the original signals. This
allows increasing the density of the coarse lattice, which in turn translates
to smaller compression rates. We also propose and analyze a one-shot version of
IF source coding that is simple enough to potentially lead to a new design
principle for analog-to-digital converters that can exploit spatial
correlations between the sampled signals.
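A one-dimensional caricature may help fix ideas, with the caveat that actual IF source coding uses high-dimensional nested lattices and recovers a full-rank set of integer combinations before inverting, which this toy omits. Here the fine lattice is delta*Z, the coarse lattice is L*delta*Z, each encoder sends only its bin index, and the decoder exactly recovers the small integer combination q1 - q2 from the bin indices alone; all parameter values are illustrative.

```python
import numpy as np

delta, L = 0.1, 16  # fine lattice delta*Z nested in coarse lattice L*delta*Z
rng = np.random.default_rng(1)

# Two strongly correlated Gaussian sources observed at two encoders.
x1 = rng.normal(0.0, 3.0)
x2 = x1 + rng.normal(0.0, 0.05)

# Each encoder quantizes with the fine lattice and reduces modulo the
# coarse lattice (binning); only log2(L) bits per sample are sent.
q1, q2 = round(x1 / delta), round(x2 / delta)
s1, s2 = q1 % L, q2 % L

# The combination q1 - q2 has much smaller power than q1 or q2 because
# the sources are correlated, so |q1 - q2| < L/2 with high probability
# and centering (s1 - s2) mod L recovers it exactly.
d = (s1 - s2) % L
if d >= L // 2:
    d -= L
assert d == q1 - q2
print(f"recovered q1 - q2 = {d}, so x1 - x2 ~= {d * delta:.2f}")
```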