Rank Minimization over Finite Fields: Fundamental Limits and Coding-Theoretic Interpretations
This paper establishes information-theoretic limits in estimating a finite
field low-rank matrix given random linear measurements of it. These linear
measurements are obtained by taking inner products of the low-rank matrix with
random sensing matrices. Necessary and sufficient conditions on the number of
measurements required are provided. It is shown that these conditions are sharp
and the minimum-rank decoder is asymptotically optimal. The reliability
function of this decoder is also derived by appealing to de Caen's lower bound
on the probability of a union. The sufficient condition also holds when the
sensing matrices are sparse, a scenario that may be amenable to efficient
decoding. More precisely, it is shown that if the n\times n sensing matrices
contain, on average, \Omega(n \log n) nonzero entries, the number of measurements
required is the same as that when the sensing matrices are dense and contain
entries drawn uniformly at random from the field. Analogies are drawn between
the above results and rank-metric codes in the coding theory literature. In
fact, we are also strongly motivated by understanding when minimum rank
distance decoding of random rank-metric codes succeeds. To this end, we derive
distance properties of equiprobable and sparse rank-metric codes. These
distance properties provide a precise geometric interpretation of the fact that
the sparse ensemble requires as few measurements as the dense one. Finally, we
provide a non-exhaustive procedure to search for the unknown low-rank matrix.

Comment: Accepted to the IEEE Transactions on Information Theory; presented at the
IEEE International Symposium on Information Theory (ISIT) 201
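The measurement model and the minimum-rank decoder described above can be sketched for the simplest field, GF(2). This is an illustrative toy, not the paper's construction: the field, the problem sizes, and the exhaustive search are choices made here.

```python
import itertools
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2   # row addition mod 2
        rank += 1
    return rank

rng = np.random.default_rng(0)
n, m = 3, 16                       # 3x3 unknown matrix, 16 measurements
# Unknown low-rank matrix over GF(2): an outer product, so rank <= 1.
u, v = rng.integers(0, 2, n), rng.integers(0, 2, n)
X = np.outer(u, v) % 2
# Dense sensing matrices with i.i.d. uniform GF(2) entries; measurement i
# is the GF(2) inner product <A_i, X> = sum_{j,k} A_i[j,k] X[j,k] (mod 2).
A = rng.integers(0, 2, (m, n, n))
y = np.einsum('ijk,jk->i', A, X) % 2

# Minimum-rank decoder: among all matrices consistent with the
# measurements, return one of lowest GF(2) rank (exhaustive search).
best = None
for bits in itertools.product([0, 1], repeat=n * n):
    Z = np.array(bits).reshape(n, n)
    if np.all(np.einsum('ijk,jk->i', A, Z) % 2 == y):
        if best is None or gf2_rank(Z) < gf2_rank(best):
            best = Z
print("recovered exactly:", np.array_equal(best, X))
```

With enough measurements the decoded matrix coincides with the truth, matching the sharp thresholds the abstract describes; the exhaustive search is of course only feasible at toy sizes.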
Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting
In this paper we establish links between, and new results for, three problems
that are not usually considered together. The first is a matrix decomposition
problem that arises in areas such as statistical modeling and signal
processing: given a matrix formed as the sum of an unknown diagonal matrix
and an unknown low-rank positive semidefinite matrix, decompose it into these
constituents. The second problem we consider is to determine the facial
structure of the set of correlation matrices, a convex set also known as the
elliptope. This convex body, and particularly its facial structure, plays a
role in applications from combinatorial optimization to mathematical finance.
The third problem is a basic geometric question: given a set of points,
determine whether there is a centered ellipsoid passing \emph{exactly}
through all of the points.
We show that in a precise sense these three problems are equivalent.
Furthermore we establish a simple sufficient condition on a subspace that
ensures any positive semidefinite matrix with that column space can be
recovered from its sum with any diagonal matrix using a convex
optimization-based heuristic known as minimum trace factor analysis. This
result leads to a new understanding of the structure of rank-deficient
correlation matrices and a simple condition on a set of points that ensures
there is a centered ellipsoid passing through them.

Comment: 20 pages
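The exact ellipsoid-fitting question reduces to linear algebra plus a semidefiniteness check: the equations v_i^T M v_i = 1 are linear in the entries of the symmetric matrix M. A minimal sketch follows; the function name and tolerance handling are ours, and a genuine (non-degenerate) ellipsoid additionally needs M positive definite rather than merely semidefinite.

```python
import numpy as np

def ellipsoid_fit(points, tol=1e-8):
    """Decide whether a centered ellipsoid x^T M x = 1 (M symmetric PSD)
    passes exactly through every given point, by solving the linear
    system v_i^T M v_i = 1 in the free entries of M."""
    V = np.asarray(points, dtype=float)
    k, d = V.shape
    idx = np.triu_indices(d)
    # Row i holds the coefficients of the upper-triangular entries of M
    # in the equation v_i^T M v_i = 1 (off-diagonal entries count twice).
    rows = []
    for v in V:
        P = np.outer(v, v)
        rows.append(P[idx] * np.where(idx[0] == idx[1], 1.0, 2.0))
    A = np.vstack(rows)
    m, *_ = np.linalg.lstsq(A, np.ones(k), rcond=None)
    M = np.zeros((d, d))
    M[idx] = m
    M = M + M.T - np.diag(np.diag(M))       # symmetrize
    exact = np.allclose(A @ m, 1.0, atol=tol)
    psd = np.min(np.linalg.eigvalsh(M)) >= -tol
    return exact and psd, M

# Points on the unit circle admit the fit M = I.
ok, M = ellipsoid_fit([[1.0, 0.0], [0.0, 1.0],
                       [np.sqrt(0.5), np.sqrt(0.5)]])
print(ok)  # → True
```

The least-squares solve finds a candidate M; the residual check certifies the fit is exact rather than approximate, which is the distinction the abstract emphasizes.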
Flexible Memory Networks
Networks of neurons in some brain areas are flexible enough to encode new
memories quickly. Using a standard firing rate model of recurrent networks, we
develop a theory of flexible memory networks. Our main results characterize
networks having the maximal number of flexible memory patterns, given a
constraint graph on the network's connectivity matrix. Modulo a mild
topological condition, we find a close connection between maximally flexible
networks and rank 1 matrices. The topological condition is H_1(X;Z)=0, where X
is the clique complex associated to the network's constraint graph; this
condition is generically satisfied for large random networks that are not
overly sparse. In order to prove our main results, we develop some
matrix-theoretic tools and present them in a self-contained section independent
of the neuroscience context.

Comment: Accepted to Bulletin of Mathematical Biology, 11 July 201
Almost Lossless Analog Compression without Phase Information
We propose an information-theoretic framework for phase retrieval.
Specifically, we consider the problem of recovering an unknown n-dimensional
vector x up to an overall sign factor from m=Rn phaseless measurements with
compression rate R and derive a general achievability bound for R.
Surprisingly, it turns out that this bound on the compression rate is the same
as the one for almost lossless analog compression obtained by Wu and Verd\'u
(2010): Phaseless linear measurements are as good as linear measurements with
full phase information in the sense that ignoring the sign of m measurements
only leaves us with an ambiguity with respect to an overall sign factor of x.
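The sign ambiguity, and what it means for phaseless measurements to be as informative as linear ones, can be illustrated in the real case, where losing phase means losing signs. The brute-force decoder below is purely illustrative (it is exponential in m and not the paper's method): it enumerates sign patterns and keeps every vector consistent with the phaseless data.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 6
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x)                  # phaseless (sign-less) measurements

# The sign information is lost: x and -x are indistinguishable.
assert np.allclose(np.abs(A @ -x), y)

# Brute-force decoder: for every sign pattern s in {-1,+1}^m, solve
# A z = s*y by least squares and keep z if it reproduces the data.
solutions = []
for s in itertools.product([-1.0, 1.0], repeat=m):
    z, *_ = np.linalg.lstsq(A, np.array(s) * y, rcond=None)
    if np.allclose(np.abs(A @ z), y, atol=1e-8):
        solutions.append(z)

# Generically, only x and -x survive: the ambiguity is exactly one
# overall sign factor, as the achievability result predicts.
print("consistent solutions:", len(solutions))
```

Any consistent z must satisfy A z = s'*y exactly for some sign pattern s', and for generic A only the patterns ±sign(Ax) are exactly solvable, which is why the ambiguity collapses to a global sign.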
Improving compressed sensing with the diamond norm
In low-rank matrix recovery, one aims to reconstruct a low-rank matrix from a
minimal number of linear measurements. Within the paradigm of compressed
sensing, this is made computationally efficient by minimizing the nuclear norm
as a convex surrogate for rank.
In this work, we identify an improved regularizer based on the so-called
diamond norm, a concept imported from quantum information theory. We show that,
for a class of matrices saturating a certain norm inequality, the descent cone
of the diamond norm is contained in that of the nuclear norm. This suggests
superior reconstruction properties for these matrices. We explicitly
characterize this set of matrices. Moreover, we demonstrate numerically that
the diamond norm indeed outperforms the nuclear norm in a number of relevant
applications: These include signal analysis tasks such as blind matrix
deconvolution or the retrieval of certain unitary basis changes, as well as the
quantum information problem of process tomography with random measurements.
The diamond norm is defined for matrices that can be interpreted as order-4
tensors and it turns out that the above condition depends crucially on that
tensorial structure. In this sense, this work touches on an aspect of the
notoriously difficult tensor completion problem.

Comment: 25 pages + Appendix, 7 Figures, published version
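Evaluating the diamond norm requires a semidefinite program, so the sketch below shows only the standard nuclear-norm side of the story: the nuclear norm as the sum of singular values, and singular value soft-thresholding, its proximal operator and the workhorse step in many nuclear-norm minimization solvers. The sizes, noise level, and threshold are arbitrary choices made for illustration.

```python
import numpy as np

def nuclear_norm(M):
    """Sum of singular values: the convex surrogate for rank."""
    return np.linalg.svd(M, compute_uv=False).sum()

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * nuclear_norm, used as the basic step in low-rank solvers."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
# Rank-2 ground truth plus small dense noise.
L = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))
noisy = L + 0.01 * rng.standard_normal((6, 6))
# Soft-thresholding suppresses the small noise singular values while
# keeping the two large ones, recovering a low-rank estimate.
denoised = svt(noisy, tau=0.1)
print("rank after thresholding:", np.linalg.matrix_rank(denoised, tol=1e-6))
```

Shrinking every singular value by tau (and clipping at zero) can only decrease the nuclear norm, which is what makes this map the proximal operator of the regularizer that the abstract's diamond-norm approach refines.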