Successive Integer-Forcing and its Sum-Rate Optimality
Integer-forcing receivers generalize traditional linear receivers for the
multiple-input multiple-output channel by decoding integer-linear combinations
of the transmitted streams, rather than the streams themselves. Previous works
have shown that the additional degree of freedom in choosing the integer
coefficients enables this receiver to approach the performance of
maximum-likelihood decoding in various scenarios. Nonetheless, even for the
optimal choice of integer coefficients, the additive noise at the equalizer's
output is still correlated. In this work we study a variant of integer-forcing,
termed successive integer-forcing, that exploits these noise correlations to
improve performance. This scheme is the integer-forcing counterpart of
successive interference cancellation for traditional linear receivers.
Similarly to the latter, we show that successive integer-forcing is capacity
achieving when it is possible to optimize the rate allocation to the different
streams. In comparison to standard successive interference cancellation
receivers, the successive integer-forcing receiver offers more possibilities
for capacity achieving rate tuples, and in particular, ones that are more
balanced.
Comment: A shorter version was submitted to the 51st Allerton Conference
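The coefficient-selection step the abstract refers to can be illustrated with a small numerical sketch. Under MMSE equalization, the effective noise variance for an integer coefficient vector a is a^T (I + snr H^T H)^{-1} a; the channel matrix, SNR, and search radius below are hypothetical choices, not taken from the paper.

```python
import itertools
import numpy as np

# Hypothetical ill-conditioned 2x2 real channel (illustrative values).
H = np.array([[1.0, 0.9],
              [0.9, 1.0]])
snr = 100.0

# Effective noise variance of the integer combination a under MMSE
# equalization: a^T (I + snr * H^T H)^{-1} a.
M = np.linalg.inv(np.eye(2) + snr * H.T @ H)

def noise_var(a):
    a = np.asarray(a, dtype=float)
    return a @ M @ a

# Exhaustive search over small nonzero integer coefficient vectors.
candidates = [a for a in itertools.product(range(-3, 4), repeat=2)
              if any(a)]
best = min(candidates, key=noise_var)

# For this channel, the combination (1, 1) (up to sign) has far lower
# effective noise than decoding a single stream directly with (1, 0).
print(best, noise_var(best), noise_var((1, 0)))
```

The gap between `noise_var(best)` and `noise_var((1, 0))` is what the extra degree of freedom in the integer coefficients buys over a traditional linear receiver.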
The Computational Complexity of Generating Random Fractals
In this paper we examine a number of models that generate random fractals.
The models are studied using the tools of computational complexity theory from
the perspective of parallel computation. Diffusion limited aggregation and
several widely used algorithms for equilibrating the Ising model are shown to
be highly sequential; it is unlikely they can be simulated efficiently in
parallel. This is in contrast to Mandelbrot percolation that can be simulated
in constant parallel time. Our research helps shed light on the intrinsic
complexity of these models relative to each other and to different growth
processes that have been recently studied using complexity theory. In addition,
the results may serve as a guide to simulation physics.
Comment: 28 pages, LaTeX, 8 Postscript figures available from
[email protected]
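Mandelbrot percolation, the model the abstract singles out as constant-parallel-time, is simple to state: at each level every surviving square is split into b x b subsquares, each kept independently with probability p. A minimal sketch (the subdivision factor and retention probability below are illustrative choices, not parameters from the paper):

```python
import numpy as np

def mandelbrot_percolation(levels, b=3, p=0.7, seed=None):
    """Boolean grid of side b**levels marking retained cells.

    Each level splits every surviving cell into a b x b block and keeps
    each subcell independently with probability p.  Because all coin
    flips are independent, each level is a single batch of parallel
    decisions, which is why the process parallelizes so well.
    """
    rng = np.random.default_rng(seed)
    keep = np.ones((1, 1), dtype=bool)
    for _ in range(levels):
        # Expand each cell into a b x b block, then flip one coin per subcell.
        keep = keep.repeat(b, axis=0).repeat(b, axis=1)
        keep &= rng.random(keep.shape) < p
    return keep

grid = mandelbrot_percolation(levels=3, b=3, p=0.7, seed=0)
print(grid.shape, grid.mean())  # side 27, fraction of retained cells
```

The sequential loop here runs over levels only; within a level every cell is decided independently, in contrast to diffusion limited aggregation, where each particle's position depends on the whole aggregate built so far.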
PPP-Completeness with Connections to Cryptography
Polynomial Pigeonhole Principle (PPP) is an important subclass of TFNP with
profound connections to the complexity of the fundamental cryptographic
primitives: collision-resistant hash functions and one-way permutations. In
contrast to most of the other subclasses of TFNP, no complete problem is known
for PPP. Our work identifies the first PPP-complete problem without any circuit
or Turing Machine given explicitly in the input, and thus we answer a
longstanding open question from [Papadimitriou1994]. Specifically, we show that
constrained-SIS (cSIS), a generalized version of the well-known Short Integer
Solution problem (SIS) from lattice-based cryptography, is PPP-complete.
In order to give intuition behind our reduction for constrained-SIS, we
identify another PPP-complete problem with a circuit in the input but closely
related to lattice problems. We call this problem BLICHFELDT and it is the
computational problem associated with Blichfeldt's fundamental theorem in the
theory of lattices.
Building on the inherent connection of PPP with collision-resistant hash
functions, we use our completeness result to construct the first natural hash
function family that captures the hardness of all collision-resistant hash
functions in a worst-case sense, i.e. it is natural and universal in the
worst-case. The close resemblance of our hash function family to SIS leads
us to the first candidate collision-resistant hash function that is both
natural and universal in an average-case sense.
Finally, our results enrich our understanding of the connections between PPP,
lattice problems and other concrete cryptographic assumptions, such as the
discrete logarithm problem over general groups.
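The SIS-to-hashing connection underlying the abstract can be sketched with the plain Ajtai-style SIS hash (not the constrained cSIS variant the paper introduces): h_A(x) = A x mod q on binary inputs. The parameters below are toy values for illustration only; secure instantiations require much larger dimensions.

```python
import numpy as np

# Toy parameters: m > n * log2(q), so the map compresses its input.
n, m, q = 8, 64, 97

rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, m))  # public random matrix

def sis_hash(x):
    """Ajtai-style hash: h_A(x) = A x mod q for a binary vector x."""
    x = np.asarray(x)
    assert x.shape == (m,) and set(np.unique(x)) <= {0, 1}
    return tuple((A @ x) % q)

x = rng.integers(0, 2, size=m)
digest = sis_hash(x)

# A collision x != x' would yield a short nonzero z = x - x' with
# A z = 0 (mod q), i.e. a solution to the Short Integer Solution
# problem -- this is why collision resistance reduces to SIS hardness.
print(digest)
```

Since the input has 64 bits and the output only about n * log2(q) ≈ 53 bits, collisions must exist; the point is that finding one is as hard as solving SIS for the matrix A.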