Large System Analysis of Power Normalization Techniques in Massive MIMO
Linear precoding has been widely studied in the context of Massive
multiple-input-multiple-output (MIMO) together with two common power
normalization techniques, namely, matrix normalization (MN) and vector
normalization (VN). Despite this, their effect on the performance of Massive
MIMO systems has not been thoroughly studied yet. The aim of this paper is to
fill this gap by means of large system analysis. Considering a system model
that accounts for channel estimation, pilot contamination, arbitrary pathloss,
and per-user channel correlation, we compute tight approximations for the
signal-to-interference-plus-noise ratio and the rate of each user equipment in
the system while employing maximum ratio transmission (MRT), zero forcing (ZF),
and regularized ZF precoding under both MN and VN techniques. Such
approximations are used to analytically reveal how the choice of power
normalization affects the performance of MRT and ZF under uncorrelated fading
channels. It turns out that ZF with VN acts as a sum-rate maximizer, whereas
under MN it provides a notion of fairness. Numerical results are used to validate
the accuracy of the asymptotic analysis and to show that in Massive MIMO,
non-coherent interference and noise, rather than pilot contamination, are often
the major limiting factors of the considered precoding schemes.
Comment: 12 pages, 3 figures. Accepted for publication in the IEEE Transactions on Vehicular Technology
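The two power normalizations contrasted in this abstract can be sketched for the ZF precoder. Below is a minimal NumPy illustration, assuming a K-user, M-antenna downlink with total transmit power P; all variable names are illustrative, not the paper's notation.

```python
import numpy as np

# Illustrative sketch (not the paper's analysis): zero-forcing (ZF)
# precoding under matrix normalization (MN) vs. vector normalization (VN).
# Assumptions: H is a K x M channel matrix, P is the total transmit power.
rng = np.random.default_rng(0)
K, M, P = 4, 16, 1.0
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Unnormalized ZF precoder: F = H^H (H H^H)^{-1}
F = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# MN: a single scalar scales the whole precoder so that
# the total transmit power trace(F^H F) equals P.
F_mn = F * np.sqrt(P / np.trace(F.conj().T @ F).real)

# VN: each user's column is scaled individually to carry power P/K.
col_norms = np.linalg.norm(F, axis=0)
F_vn = (F / col_norms) * np.sqrt(P / K)

# Both satisfy the same total power constraint; they differ in how
# power is split across users, which drives the sum-rate vs. fairness
# behavior discussed in the abstract.
print(np.trace(F_mn.conj().T @ F_mn).real)  # ≈ P
print(np.trace(F_vn.conj().T @ F_vn).real)  # ≈ P
```

Under VN every user gets an equal power share regardless of channel quality, while MN lets strong channels absorb more power, which is the mechanism behind the sum-rate/fairness contrast noted above.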
Noise suppressing sensor encoding and neural signal orthonormalization
In this paper we first consider the situation where parallel channels are disturbed by noise. With the goal of maximal information conservation, we derive the conditions for a transform that "immunizes" the channels against noise before the signals are used in later operations. It turns out that the signals have to be decorrelated and normalized by the filter, which in the one-channel case corresponds to the classical result of Shannon. Additional simulations for image encoding and decoding show that this constitutes an efficient approach to noise suppression. Furthermore, from a corresponding objective function we derive the stochastic and deterministic learning rules for a neural network that implements the data orthonormalization. Compared with existing normalization networks, our network performs approximately the same in the stochastic case but, owing to its generic derivation, ensures convergence and can be used as an independent building block in other contexts, e.g. whitening for independent component analysis.
Keywords: information conservation, whitening filter, data orthonormalization network, image encoding, noise suppression
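The decorrelate-and-normalize transform described above is, in the linear setting, a whitening filter. A minimal sketch, assuming sample-covariance (ZCA-style) whitening with illustrative variable names:

```python
import numpy as np

# Minimal sketch of the decorrelate-and-normalize (whitening) transform:
# build a filter W such that the filtered channels are uncorrelated with
# unit variance. Names and dimensions are illustrative assumptions.
rng = np.random.default_rng(1)

# 1000 samples of 3 correlated "channels"
A = rng.standard_normal((3, 3))
X = A @ rng.standard_normal((3, 1000))

C = np.cov(X)                              # channel covariance
w, V = np.linalg.eigh(C)                   # C = V diag(w) V^T
W = V @ np.diag(1.0 / np.sqrt(w)) @ V.T    # symmetric whitening filter
Y = W @ (X - X.mean(axis=1, keepdims=True))

# After filtering, the channels are decorrelated and normalized:
print(np.round(np.cov(Y), 2))  # ≈ identity matrix
```

This is only the linear batch version; the paper's contribution is a neural network with learning rules that converge to such an orthonormalizing transform online.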
Quantum Channels and Representation Theory
In the study of d-dimensional quantum channels, an assumption
which is not very restrictive, and which has a natural physical interpretation,
is that the corresponding Kraus operators form a representation of a Lie
algebra. Physically, this is a symmetry algebra for the interaction
Hamiltonian. This paper begins a systematic study of channels defined by
representations; the famous Werner-Holevo channel is one element of this
infinite class. We show that the channel derived from the defining
representation of SU(n) is a depolarizing channel for all n, but for most
other representations this is not the case. Since the Bloch sphere is not
appropriate here, we develop technology which is a generalization of Bloch's
technique. Our method works by representing the density matrix as a polynomial
in symmetrized products of Lie algebra generators, with coefficients that are
symmetric tensors. Using these tensor methods we prove eleven theorems, derive
many explicit formulas and show other interesting properties of quantum
channels in various dimensions, with various Lie symmetry algebras. We also
derive numerical estimates on the size of a generalized "Bloch sphere" for
certain channels. There remain many open questions which are indicated at
various points throughout the paper.
Comment: 28 pages, 1 figure
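The depolarizing channel mentioned above has a simple action on a density matrix, rho -> (1 - p) rho + p I/d. A minimal numerical sketch (not the paper's representation-theoretic machinery), with illustrative values of d and p:

```python
import numpy as np

# Sketch of the d-dimensional depolarizing channel: mix the input state
# with the maximally mixed state I/d. d and p are illustrative choices.
d, p = 3, 0.4
rng = np.random.default_rng(3)

# Random density matrix: positive semidefinite with unit trace.
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Channel action: rho -> (1-p)*rho + p*I/d
rho_out = (1 - p) * rho + p * np.eye(d) / d

# The map is trace-preserving and keeps the state positive.
print(np.isclose(np.trace(rho_out).real, 1.0))  # True
```

For d = 2 this is the familiar Bloch-sphere contraction; the paper's point is that for d > 2 the set of states is no longer a sphere, which is why generalized Bloch techniques are needed.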
Linking Image and Text with 2-Way Nets
Linking two data sources is a basic building block in numerous computer
vision problems. Canonical Correlation Analysis (CCA) achieves this by
utilizing a linear optimizer in order to maximize the correlation between the
two views. Recent work makes use of non-linear models, including deep learning
techniques, that optimize the CCA loss in some feature space. In this paper, we
introduce a novel, bi-directional neural network architecture for the task of
matching vectors from two data sources. Our approach employs two tied neural
network channels that project the two views into a common, maximally correlated
space using the Euclidean loss. We show a direct link between the
correlation-based loss and Euclidean loss, enabling the use of Euclidean loss
for correlation maximization. To overcome common Euclidean regression
optimization problems, we modify well-known techniques to our problem,
including batch normalization and dropout. We show state-of-the-art results on
a number of computer vision matching tasks including MNIST image matching and
sentence-image matching on the Flickr8k, Flickr30k and COCO datasets.
Comment: 14 pages, 2 figures, 6 tables
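Classical linear CCA, the baseline this paper builds on, can be sketched as the SVD of the whitened cross-covariance between the two views. A minimal illustration on synthetic two-view data; all names are assumptions, not the paper's code:

```python
import numpy as np

# Sketch of linear CCA: the singular values of Cxx^{-1/2} Cxy Cyy^{-1/2}
# are the canonical correlations between views X and Y.
rng = np.random.default_rng(2)
n = 2000
z = rng.standard_normal(n)  # shared latent signal linking the two views
X = np.stack([z + 0.1 * rng.standard_normal(n), rng.standard_normal(n)])
Y = np.stack([rng.standard_normal(n), z + 0.1 * rng.standard_normal(n)])

Cxx, Cyy = np.cov(X), np.cov(Y)
Xc = X - X.mean(axis=1, keepdims=True)
Yc = Y - Y.mean(axis=1, keepdims=True)
Cxy = Xc @ Yc.T / (n - 1)

def inv_sqrt(C):
    """Inverse matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

s = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy), compute_uv=False)
print(round(s[0], 2))  # top canonical correlation, close to 1
```

The paper's observation is that this correlation objective can be replaced by a Euclidean loss between tied nonlinear projections, which is what makes techniques like batch normalization and dropout applicable.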