Successive Integer-Forcing and its Sum-Rate Optimality
Integer-forcing receivers generalize traditional linear receivers for the
multiple-input multiple-output channel by decoding integer-linear combinations
of the transmitted streams, rather than the streams themselves. Previous works
have shown that the additional degree of freedom in choosing the integer
coefficients enables this receiver to approach the performance of
maximum-likelihood decoding in various scenarios. Nonetheless, even for the
optimal choice of integer coefficients, the additive noise at the equalizer's
output is still correlated. In this work we study a variant of integer-forcing,
termed successive integer-forcing, that exploits these noise correlations to
improve performance. This scheme is the integer-forcing counterpart of
successive interference cancellation for traditional linear receivers.
Similarly to the latter, we show that successive integer-forcing is capacity
achieving when it is possible to optimize the rate allocation to the different
streams. In comparison to standard successive interference cancellation
receivers, the successive integer-forcing receiver offers more possibilities
for capacity achieving rate tuples, and in particular, ones that are more
balanced.
Comment: A shorter version was submitted to the 51st Allerton Conference
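The noise-correlation gain described above can be sketched numerically. In the sketch below, the channel H, the integer matrix A, and the SNR are illustrative choices of our own, and the effective-noise covariance expression is the standard MMSE integer-forcing form; none of the numbers come from the paper itself.

```python
import numpy as np

# Illustrative 2x2 channel, integer matrix, and SNR (our own choices,
# not values from the paper).
snr = 100.0
H = np.array([[1.0, 0.9], [0.9, 1.0]])   # near-singular real channel
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # full-rank integer coefficient matrix

# With an MMSE-style equalizer targeting the combinations A @ s, the
# effective noise covariance is K = A (I + snr * H^T H)^{-1} A^T.
K = A @ np.linalg.inv(np.eye(2) + snr * H.T @ H) @ A.T
print("noise cross-correlation:", K[0, 1])   # nonzero: the noises are correlated

# LDL-style factorization via Cholesky: K = L D L^T with unit-triangular L.
# Successive integer-forcing decodes combination 1 first, subtracts it, and
# leaves stream 2 with effective noise variance D[1] instead of K[1, 1].
C = np.linalg.cholesky(K)
D = np.diag(C) ** 2
print("parallel IF noise variances:  ", np.diag(K))
print("successive IF noise variances:", D)
```

In this toy example D[1] < K[1, 1]: exploiting the off-diagonal correlation strictly reduces the noise seen by the second stream, which is the mechanism the abstract refers to.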
Multiscale Representations for Manifold-Valued Data
We describe multiscale representations for data observed on equispaced grids and taking values in manifolds such as the sphere, the special orthogonal group, the positive definite matrices, and the Grassmann manifolds. The representations are based on the deployment of Deslauriers--Dubuc and average-interpolating pyramids "in the tangent plane" of such manifolds, using the Exp and Log maps of those manifolds. The representations provide "wavelet coefficients" which can be thresholded, quantized, and scaled in much the same way as traditional wavelet coefficients. Tasks such as compression, noise removal, contrast enhancement, and stochastic simulation are facilitated by this representation. The approach applies to general manifolds but is particularly suited to the manifolds we consider, i.e., Riemannian symmetric spaces, where the Exp and Log maps are effectively computable. Applications to manifold-valued data sources of a geometric nature (motion, orientation, diffusion) seem particularly immediate. A software toolbox, SymmLab, can reproduce the results discussed in this paper.
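The tangent-plane construction can be illustrated with a toy one-level pyramid on the unit sphere. The three sample points below are arbitrary; the Exp and Log maps are the standard ones for the sphere, and the "predict the odd sample at the geodesic midpoint, store the residual as a tangent vector" step is only a rough sketch of the interpolating-pyramid idea, not SymmLab's actual algorithm.

```python
import numpy as np

def sphere_exp(p, v):
    # Exp map on the unit sphere: move from p along tangent vector v.
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p
    return np.cos(t) * p + np.sin(t) * v / t

def sphere_log(p, q):
    # Log map: tangent vector at p pointing toward q, length = geodesic distance.
    c = np.clip(p @ q, -1.0, 1.0)
    w = q - c * p
    n = np.linalg.norm(w)
    if n < 1e-12:
        return np.zeros_like(p)
    return np.arccos(c) * w / n

# One level of a midpoint-interpolating pyramid for sphere-valued samples:
# predict the middle sample as the geodesic midpoint of its neighbours and
# store the prediction error as a tangent-plane "wavelet coefficient".
x0, x1, x2 = (np.array(v) / np.linalg.norm(v) for v in
              ([1.0, 0.1, 0.0], [0.9, 0.3, 0.1], [0.8, 0.5, 0.2]))
midpoint = sphere_exp(x0, 0.5 * sphere_log(x0, x2))   # geodesic midpoint of x0, x2
detail = sphere_log(midpoint, x1)                     # "wavelet coefficient"
x1_rec = sphere_exp(midpoint, detail)                 # exact reconstruction
print("reconstruction exact:", np.allclose(x1_rec, x1))
```

Because Exp and Log are mutual inverses along a geodesic, the detail coefficient reconstructs the fine sample exactly, and thresholding or quantizing it degrades the data gracefully, just as with classical wavelet coefficients.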
Neural Distributed Autoassociative Memories: A Survey
Introduction. Neural network models of autoassociative, distributed memory
allow storage and retrieval of many items (vectors) where the number of stored
items can exceed the vector dimension (the number of neurons in the network).
This opens the possibility of a sublinear time search (in the number of stored
items) for approximate nearest neighbors among vectors of high dimension. The
purpose of this paper is to review models of autoassociative, distributed
memory that can be naturally implemented by neural networks (mainly with local
learning rules and iterative dynamics based on information locally available to
neurons). Scope. The survey is focused mainly on the networks of Hopfield,
Willshaw and Potts, that have connections between pairs of neurons and operate
on sparse binary vectors. We discuss not only autoassociative memory, but also
the generalization properties of these networks. We also consider neural
networks with higher-order connections and networks with a bipartite graph
structure for non-binary data with linear constraints. Conclusions. In
conclusion we discuss the relations to similarity search, advantages and
drawbacks of these techniques, and topics for further research. An interesting
and still not completely resolved question is whether neural autoassociative
memories can search for approximate nearest neighbors faster than other index
structures for similarity search, in particular for the case of very high
dimensional vectors.
Comment: 31 pages
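A minimal sketch of the Hopfield-style setup the survey covers, with a local Hebbian learning rule and iterative retrieval dynamics; the network size, number of patterns, and corruption level below are arbitrary illustrative choices.

```python
import numpy as np

# Toy Hopfield network: local Hebbian learning, iterative sign dynamics.
rng = np.random.default_rng(1)
n, m = 200, 5                          # 200 neurons, 5 stored bipolar patterns
patterns = rng.choice([-1, 1], size=(m, n))

# Hebbian rule: W_ij accumulates local co-activity; no self-connections.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

# Retrieval: start from a corrupted cue and iterate the sign dynamics.
state = patterns[0].copy()
state[rng.choice(n, size=20, replace=False)] *= -1   # flip 10% of the bits
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1
print("fraction of bits recovered:", (state == patterns[0]).mean())
```

At this light loading (5 patterns over 200 neurons) the corrupted cue falls well inside the attractor basin and the dynamics restore the stored pattern, which is the associative-retrieval behaviour the survey discusses; at higher loads retrieval degrades, which is where the capacity questions arise.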
Integer-Forcing Linear Receivers
Linear receivers are often used to reduce the implementation complexity of
multiple-antenna systems. In a traditional linear receiver architecture, the
receive antennas are used to separate out the codewords sent by each transmit
antenna, which can then be decoded individually. Although easy to implement,
this approach can be highly suboptimal when the channel matrix is near
singular. This paper develops a new linear receiver architecture that uses the
receive antennas to create an effective channel matrix with integer-valued
entries. Rather than attempting to recover transmitted codewords directly, the
decoder recovers integer combinations of the codewords according to the entries
of the effective channel matrix. The codewords are all generated using the same
linear code which guarantees that these integer combinations are themselves
codewords. Provided that the effective channel is full rank, these integer
combinations can then be digitally solved for the original codewords. This
paper focuses on the special case where there is no coding across transmit
antennas and no channel state information at the transmitter(s), which
corresponds either to a multi-user uplink scenario or to single-user V-BLAST
encoding. In this setting, the proposed integer-forcing linear receiver
significantly outperforms conventional linear architectures such as the
zero-forcing and linear MMSE receivers. In the high-SNR regime, the proposed
receiver attains the optimal diversity-multiplexing tradeoff for the standard
MIMO channel with no coding across transmit antennas. It is further shown that
in an extended MIMO model with interference, the integer-forcing linear
receiver achieves the optimal generalized degrees-of-freedom.
Comment: 40 pages, 16 figures, to appear in the IEEE Transactions on Information Theory
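The near-singular-channel point can be made concrete with a toy 2x2 example. The channel below is our own illustrative choice, the exhaustive search over small integer vectors stands in for the lattice-reduction methods a practical receiver would use, and the rate expression is only a crude per-stream proxy, not the paper's analysis.

```python
import numpy as np
from itertools import product

# Our own toy near-singular 2x2 channel, not a value from the paper.
H = np.array([[1.0, 0.99], [0.99, 1.0]])
Hinv = np.linalg.inv(H)

# Zero-forcing: stream m is decoded with row m of H^{-1}; its squared norm
# is the noise amplification suffered by that stream.
zf_amp = np.sum(Hinv ** 2, axis=1)

# Integer-forcing: decode integer combinations a^T s with equalizer a^T H^{-1}.
# Pick the two shortest linearly independent small-integer rows.
cands = sorted((a for a in product(range(-3, 4), repeat=2) if any(a)),
               key=lambda a: float(np.sum((np.array(a) @ Hinv) ** 2)))
a1 = np.array(cands[0])
a2 = next(np.array(a) for a in cands[1:] if a1[0] * a[1] - a1[1] * a[0] != 0)
if_amp = np.sum((np.stack([a1, a2]) @ Hinv) ** 2, axis=1)

snr = 100.0
rate = lambda amp: 0.5 * np.log2(1.0 + snr / amp)   # crude per-stream rate proxy
print("ZF noise amplification:", zf_amp, "sum-rate proxy:", rate(zf_amp).sum())
print("IF noise amplification:", if_amp, "sum-rate proxy:", rate(if_amp).sum())
```

For this nearly singular H both zero-forcing rows amplify the noise by a factor of several thousand, while the integer combination (1, 1) aligns with the channel's strong direction and sees almost no amplification, so the integer-forcing sum rate is far higher.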
Imaging via Compressive Sampling [Introduction to compressive sampling and recovery via convex programming]
There is an extensive body of literature on image compression, but the central concept is straightforward: we transform the image into an appropriate basis and then code only the important expansion coefficients. The crux is finding a good transform, a problem that has been studied extensively from both a theoretical [14] and practical [25] standpoint. The most notable product of this research is the wavelet transform [9], [16]; switching from sinusoid-based representations to wavelets marked a watershed in image compression and is the essential difference between the classical JPEG [18] and modern JPEG-2000 [22] standards.
Image compression algorithms convert high-resolution images into relatively small bit streams (while keeping the essential features intact), in effect turning a large digital data set into a substantially smaller one. But is there a way to avoid the large digital data set to begin with? Is there a way we can build the data compression directly into the acquisition? The answer is yes, and that is what compressive sampling (CS) is all about.
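A minimal sketch of the recovery side mentioned in the title: basis pursuit, i.e. minimizing the l1 norm subject to the measurement constraints, posed as a linear program. The problem sizes and the random measurement matrix below are illustrative choices of ours, not values from the article.

```python
import numpy as np
from scipy.optimize import linprog

# Recover a sparse signal from fewer random measurements than samples via
# basis pursuit: min ||x||_1 subject to A x = b. Sizes are illustrative.
rng = np.random.default_rng(0)
n, m, k = 50, 25, 3                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
b = A @ x_true                             # only m = 25 measurements of n = 50 values

# Split x = u - v with u, v >= 0, so that ||x||_1 = 1^T u + 1^T v and the
# problem becomes a standard linear program.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.abs(x_hat - x_true).max())
```

With the signal this sparse, the l1 program recovers it essentially exactly from half as many measurements as samples; the compression has, in effect, happened at acquisition time.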