Minimum ranks of sign patterns via sign vectors and duality
A {\it sign pattern matrix} is a matrix whose entries are from the set
$\{+,-,0\}$. The minimum rank of a sign pattern matrix $\cal A$ is the minimum of
the ranks of the real matrices whose entries have signs equal to the
corresponding entries of $\cal A$. It is shown in this paper that for any $m\times n$ sign pattern $\cal A$ with minimum rank $n-2$, rational realization of the
minimum rank is possible. This is done using a new approach involving sign
vectors and duality. It is shown that for each integer $n\ge 9$, there exists
a nonnegative integer $m$ such that there exists an $m\times n$ sign pattern
matrix with minimum rank $n-3$ for which rational realization is not possible.
A characterization of $m\times n$ sign patterns $\cal A$ with minimum rank $n-1$ is
given (which solves an open problem in Brualdi et al. \cite{Bru10}), along with
a more general description of sign patterns with minimum rank $n-2$, in terms of
sign vectors of certain subspaces. A number of results on the maximum and
minimum numbers of sign vectors of $k$-dimensional subspaces of $\mathbb{R}^n$
are obtained. In particular, it is shown that the maximum number of sign
vectors of $2$-dimensional subspaces of $\mathbb{R}^n$ is $4n+1$. Several
related open problems are stated along the way.
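The two basic objects in this abstract, the sign pattern of a real matrix and the sign vectors of a subspace, are easy to experiment with numerically. The sketch below is our own illustration (the example matrix and the sampling approach are illustrative choices, not the paper's method): it reads off a sign pattern and estimates the set of sign vectors of a column space by random sampling. Sampling can only exhibit sign vectors; it never certifies that all of them have been found.

```python
import numpy as np

def sign_pattern(A, tol=1e-12):
    """Entrywise sign pattern of a real array: +1, -1 or 0."""
    S = np.zeros(A.shape, dtype=int)
    S[A > tol] = 1
    S[A < -tol] = -1
    return S

def sampled_sign_vectors(B, trials=20000, seed=0):
    """Sign vectors of the column space of B found by random sampling.

    This gives a lower estimate of the set of sign vectors of the subspace:
    zero-containing sign vectors lie on measure-zero sets and are missed.
    """
    rng = np.random.default_rng(seed)
    found = set()
    for _ in range(trials):
        x = B @ rng.standard_normal(B.shape[1])
        found.add(tuple(sign_pattern(x)))
    return found

# A rank-2 rational realization of the all-plus 3x3 sign pattern.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0],
              [1.0, 3.0, 5.0]])
S = sign_pattern(A)
assert np.linalg.matrix_rank(A) == 2   # the realization has rank 2
assert (S == 1).all()                  # its signs match the all-+ pattern

# Sign vectors of the 2-dimensional column space of the first two columns.
vecs = sampled_sign_vectors(A[:, :2])
```

Here the three rows of `A[:, :2]` define three distinct lines through the origin in the plane, so generic sampling finds the six full-sign vectors of the subspace.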
Sign rank versus VC dimension
This work studies the maximum possible sign rank of $N \times N$ sign
matrices with a given VC dimension $d$. For $d=1$, this maximum is three. For
$d=2$, this maximum is $\tilde{\Theta}(N^{1/2})$. For $d>2$, similar but
slightly less accurate statements hold. The lower bounds improve over previous
ones by Ben-David et al., and the upper bounds are novel.
The lower bounds are obtained by probabilistic constructions, using a theorem
of Warren in real algebraic topology. The upper bounds are obtained using a
result of Welzl about spanning trees with low stabbing number, and using the
moment curve.
The upper bound technique is also used to: (i) provide estimates on the
number of classes of a given VC dimension, and the number of maximum classes of
a given VC dimension -- answering a question of Frankl from '89, and (ii)
design an efficient algorithm that provides an $O(N/\log N)$ multiplicative
approximation for the sign rank.
We also observe a general connection between sign rank and spectral gaps
which is based on Forster's argument. Consider the adjacency
matrix of an $N$-vertex $\Delta$-regular graph with a second eigenvalue of absolute value
$\lambda$ and $\Delta \le N/2$. We show that the sign rank of the signed
version of this matrix is at least $\Delta/\lambda$. We use this connection to
prove the existence of a maximum class with VC
dimension $2$ and sign rank $\tilde{\Theta}(N^{1/2})$. This answers a question
of Ben-David et al.~regarding the sign rank of large VC classes. We also
describe limitations of this approach, in the spirit of the Alon-Boppana
theorem.
We further describe connections to communication complexity, geometry,
learning theory, and combinatorics.
Comment: 33 pages. This is a revised version of the paper "Sign rank versus VC
dimension". Additional results in this version: (i) Estimates on the number
of maximum VC classes (answering a question of Frankl from '89). (ii)
Estimates on the sign rank of large VC classes (answering a question of
Ben-David et al. from '03). (iii) A discussion on the computational
complexity of computing the sign-rank.
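The spectral-gap bound described above is easy to evaluate on a concrete regular graph. Below is a minimal numeric illustration (the choice of the Petersen graph is ours, not an example from the paper): build a 3-regular graph, read off the degree and the second largest absolute eigenvalue, and form the Forster-style lower bound on the sign rank of the signed matrix (edges mapped to $-1$, non-edges to $+1$).

```python
import numpy as np
from itertools import combinations

# Petersen graph: vertices are the 2-subsets of {0,...,4}; edges join disjoint pairs.
verts = list(combinations(range(5), 2))
A = np.array([[1 if not set(u) & set(v) else 0 for v in verts] for u in verts])

deg = A.sum(axis=1)
assert (deg == deg[0]).all()               # the graph is regular
Delta = int(deg[0])                        # degree (= 3 for Petersen)

eigs = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
lam = eigs[1]                              # second largest absolute eigenvalue (= 2)

# Spectral-gap lower bound on the sign rank of the signed version of A,
# in the spirit of the Forster-based connection stated in the abstract.
bound = Delta / lam
```

For the Petersen graph this gives the (weak, since the graph is small) bound $\Delta/\lambda = 3/2$; the bound becomes interesting for good expanders, where $\lambda$ is much smaller than $\Delta$.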
Recovery of Coherent Data via Low-Rank Dictionary Pursuit
The recently established RPCA method provides a convenient way to restore
low-rank matrices from grossly corrupted observations. While elegant in theory
and powerful in reality, RPCA may not be an ultimate solution to the low-rank
matrix recovery problem. Indeed, its performance may not be perfect even when
data are strictly low-rank. This is because conventional RPCA ignores the
clustering structures of the data, which are ubiquitous in modern applications.
As the number of clusters grows, the coherence of the data keeps increasing, and
accordingly, the recovery performance of RPCA degrades. We show that the
challenges raised by coherent data (i.e., the data with high coherence) could
be alleviated by Low-Rank Representation (LRR), provided that the dictionary in
LRR is configured appropriately. More precisely, we mathematically prove that
if the dictionary itself is low-rank then LRR is immune to the coherence
parameter which increases with the underlying cluster number. This provides an
elementary principle for dealing with coherent data. Subsequently, we devise a
practical algorithm to obtain proper dictionaries in unsupervised environments.
Our extensive experiments on randomly generated matrices verify our claims.
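The notion of coherence invoked above can be made concrete. The sketch below uses one standard definition from the low-rank recovery literature (an assumption on our part; the paper's coherence parameters may differ in detail): $\mu(L) = (n/r)\max_i \|U_{i,:}\|^2$, which equals 1 for perfectly spread-out singular vectors and $n/r$ for maximally spiky ones.

```python
import numpy as np

def coherence(L, tol=1e-10):
    """mu(L) = (n / r) * max_i ||U_{i,:}||^2, where U holds the top-r left
    singular vectors of L; ranges from 1 (incoherent) to n/r (fully coherent)."""
    n = L.shape[0]
    U, s, _ = np.linalg.svd(L, full_matrices=False)
    r = int((s > tol * s[0]).sum())
    U = U[:, :r]
    return (n / r) * (U ** 2).sum(axis=1).max()

rng = np.random.default_rng(0)
n = 200

# Incoherent rank-1 matrix: a random, spread-out left singular vector.
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
L_incoherent = np.outer(u, np.ones(n))

# Coherent rank-1 matrix: column space aligned with one standard basis vector.
L_coherent = np.outer(np.eye(n)[0], np.ones(n))

mu_inc, mu_coh = coherence(L_incoherent), coherence(L_coherent)
assert mu_coh > mu_inc    # spiky column space => larger coherence
```

Recovery guarantees for RPCA-type methods degrade as this parameter grows, which is the regime the abstract's dictionary-based LRR approach targets.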
Minimum (maximum) rank of tensors and the sign nonsingular tensors
In this paper, we define the minimum (maximum) rank, the term rank, and sign
nonsingularity for tensors. A necessary and sufficient condition for the
minimum rank of a real tensor to be $1$ is given, and we show that the maximum
rank of a tensor is not less than its term rank. We also prove that the
minimum rank of a sign nonsingular tensor is not less than its dimension, and
we obtain some characterizations of tensors having sign left or sign right inverses.
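For matrices (the order-2 case of the tensors discussed above), the relationship between rank and term rank is easy to see computationally. The sketch below is our own illustration: it brute-forces the term rank of a small zero pattern, exhibits one realization of the pattern whose rank falls below the term rank, and another realization attaining it.

```python
import numpy as np
from itertools import permutations

def term_rank(A):
    """Term rank: the largest number of nonzero entries, no two of which share
    a row or a column (brute force over column assignments; fine for tiny
    square matrices)."""
    m, n = A.shape
    k = min(m, n)
    return max(sum(A[i, p[i]] != 0 for i in range(k))
               for p in permutations(range(n), k))

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0],
              [4.0, 0.0, 8.0]])
B = A.copy()
B[2, 2] = 9.0                          # same zero pattern, proportionality broken

assert term_rank(A) == 3
assert np.linalg.matrix_rank(A) == 2   # this realization stays below the term rank
assert np.linalg.matrix_rank(B) == 3   # another realization attains it
```

This mirrors the matrix fact that the maximum rank over all realizations of a zero pattern equals its term rank; the abstract extends the inequality direction to tensors.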
Sign patterns with minimum rank 3 and point-line configurations
A \emph{sign pattern (matrix)} is a matrix whose entries are from the set
$\{+,-,0\}$. The \emph{minimum rank} (respectively, \emph{rational minimum
rank}) of a sign pattern matrix $\cal A$ is the minimum of the ranks of the
real (respectively, rational) matrices whose entries have signs equal to the
corresponding entries of $\cal A$. A sign pattern $\cal A$ is said to be
\emph{condensed} if $\cal A$ has no zero row or column and no two rows or
columns are identical or negatives of each other. In this paper, a new direct
connection between condensed $m\times n$ sign patterns with minimum rank $r$
and point--hyperplane configurations in $\mathbb{R}^{r-1}$ is
established. In particular, condensed sign patterns with minimum rank 3 are
closely related to point--line configurations in the plane. It is proved that
for any $m\times n$ sign pattern $\cal A$ with minimum rank 3, if the number of
zero entries on each column of $\cal A$ is at most 2, then the rational
minimum rank of $\cal A$ is also 3. Furthermore, we construct the smallest
known sign pattern whose minimum rank is 3 but whose rational minimum rank is
greater than 3.
Comment: 13 pages; presented at the 2013 ILAS conference.
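One direction of the point--line correspondence is simple to set up in code: homogenize the points, take inner products with line coefficient vectors, and read off signs. The resulting sign pattern automatically admits a realization of rank at most 3. The points and lines below are a toy configuration of our own, not one from the paper.

```python
import numpy as np

def side_pattern(points, lines):
    """Sign pattern whose (i, j) entry records which side of line j point i
    lies on: sign(a*x + b*y + c) for the line with coefficients (a, b, c)."""
    P = np.column_stack([points, np.ones(len(points))])  # homogeneous coordinates
    M = P @ np.asarray(lines).T                          # rank <= 3 by construction
    return np.sign(M).astype(int), M

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
lines = np.array([[1.0, 0.0, -0.5],    # vertical line x = 0.5
                  [0.0, 1.0, -0.5],    # horizontal line y = 0.5
                  [1.0, 1.0, -3.0]])   # line x + y = 3

S, M = side_pattern(points, lines)
assert np.linalg.matrix_rank(M) <= 3   # a realization of rank at most 3
```

Since $M$ factors as a product of a matrix with 3 columns and a matrix with 3 rows, every sign pattern produced this way has minimum rank at most 3; the abstract's contribution runs the correspondence in the converse direction as well.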
The complexity of computing the minimum rank of a sign pattern matrix
We show that computing the minimum rank of a sign pattern matrix is NP-hard.
Our proof is based on a simple but useful connection between minimum ranks of
sign pattern matrices and the stretchability problem for pseudoline
arrangements. In fact, our hardness result shows that it is already hard to
determine if the minimum rank of a sign pattern matrix is $3$. We
complement this by giving a polynomial time algorithm for determining if a
given sign pattern matrix has minimum rank at most $2$.
Our result answers one of the open problems from Linial et al.
[Combinatorica, 27(4):439--463, 2007].
Comment: 16 pages.
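Neither the hardness reduction nor the polynomial-time small-rank test is reproduced here, but the underlying search problem is easy to state in code. The sketch below is a naive Monte-Carlo heuristic of our own devising: sample random rank-$r$ factorizations and check the sign constraint. A failure to find a realization proves nothing, which is consistent with the problem being NP-hard in general.

```python
import numpy as np

def random_low_rank_realization(S, r, trials=5000, seed=0):
    """Heuristic (not the paper's algorithm): sample random rank-r
    factorizations and return one whose sign pattern matches S, if any is
    found. Returning None does NOT certify that the minimum rank exceeds r."""
    rng = np.random.default_rng(seed)
    m, n = S.shape
    for _ in range(trials):
        A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
        if (np.sign(A).astype(int) == S).all():
            return A
    return None

# The all-plus sign pattern has minimum rank 1, and random rank-1 products
# hit it with constant probability, so the search succeeds quickly.
S = np.ones((3, 4), dtype=int)
A = random_low_rank_realization(S, r=1)
assert A is not None
```

A found matrix is a certificate that the minimum rank is at most $r$; certifying the reverse inequality is exactly where the hardness lies.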
Low-Rank Matrix Approximation in the Infinity Norm
The low-rank matrix approximation problem with respect to the entry-wise
$\ell_\infty$-norm is the following: given a matrix $M$ and a factorization
rank $r$, find a matrix $X$ whose rank is at most $r$ and that minimizes
$\max_{i,j} |M_{ij} - X_{ij}|$. In this paper, we prove that the decision
variant of this problem for $r=1$ is NP-complete using a reduction from the
problem `not all equal 3SAT'. We also analyze several cases when the problem
can be solved in polynomial time, and propose a simple practical heuristic
algorithm which we apply on the problem of the recovery of a quantized low-rank
matrix.
Comment: 12 pages, 3 tables
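The quantized-recovery setting can be illustrated quickly (the quantizer, the sizes, and the truncated-SVD baseline below are our assumptions; the paper's heuristic is more refined). Uniform quantization of a rank-$r$ matrix perturbs every entry by at most half the step size, so the optimal rank-$r$ Chebyshev error of the observed matrix is at most that bound; a Frobenius-optimal truncated SVD is only a starting point for Chebyshev-oriented heuristics.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 30, 20, 2
L = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r ground truth

step = 0.5
M = step * np.round(L / step)          # quantized observation: |M - L| <= step/2
assert np.abs(M - L).max() <= step / 2 + 1e-9

# Simple baseline for min_{rank(X)<=r} ||M - X||_inf: truncated SVD of M.
# The SVD minimizes the Frobenius norm, not the Chebyshev norm, so this is
# only an initialization, not an l_inf-optimal solution.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
X = (U[:, :r] * s[:r]) @ Vt[:r]
cheb = np.abs(M - X).max()             # Chebyshev error of the baseline
```

Since the rank-$r$ truth $L$ itself satisfies $\|M - L\|_\infty \le \mathrm{step}/2$, any heuristic improving on `X` in the $\ell_\infty$ sense can be benchmarked against that bound.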
Robust Matrix Decomposition with Outliers
Suppose a given observation matrix can be decomposed as the sum of a low-rank
matrix and a sparse matrix (outliers), and the goal is to recover these
individual components from the observed sum. Such additive decompositions have
applications in a variety of numerical problems including system
identification, latent variable graphical modeling, and principal components
analysis. We study conditions under which recovering such a decomposition is
possible via a combination of $\ell_1$ norm and trace norm minimization. We are
specifically interested in the question of how many outliers are allowed so
that convex programming can still achieve accurate recovery, and we obtain
stronger recovery guarantees than previous studies. Moreover, we do not assume
that the spatial pattern of outliers is random, which stands in contrast to
related analyses under such assumptions via matrix completion.
Comment: Corrected comparisons to previous work of Candes et al. (2009).
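The $\ell_1$-plus-trace-norm objective can be sketched with standard proximal operators. The alternating scheme below is a minimal caricature of our own (fixed step, no convergence tuning; it is not the authors' exact algorithm): singular value thresholding handles the trace norm and entrywise soft thresholding handles the $\ell_1$ term.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the prox of tau * (trace norm)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def soft(Y, tau):
    """Entrywise soft thresholding: the prox of tau * (l1 norm)."""
    return np.sign(Y) * np.maximum(np.abs(Y) - tau, 0)

def decompose(M, lam, mu=1.0, iters=200):
    """Alternating proximal sketch for M ~ L + S, L low rank, S sparse outliers."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, mu)           # update the low-rank part
        S = soft(M - L, lam * mu)    # update the sparse outlier part
    return L, S

rng = np.random.default_rng(0)
n = 40
L0 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
S0 = np.zeros((n, n))
S0.flat[rng.choice(n * n, size=40, replace=False)] = 10.0  # a few large outliers
M = L0 + S0

L, S = decompose(M, lam=1 / np.sqrt(n))
# By the soft-threshold property, the final residual is small entrywise:
assert np.abs(M - L - S).max() <= 1 / np.sqrt(n) + 1e-9
```

The entrywise residual bound is a property of the last soft-thresholding step, not a recovery guarantee; the abstract's contribution is precisely the conditions under which such convex programs recover $L_0$ and $S_0$ themselves.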
Link Prediction in Graphs with Autoregressive Features
In the paper, we consider the problem of link prediction in time-evolving
graphs. We assume that certain graph features, such as the node degree, follow
a vector autoregressive (VAR) model and we propose to use this information to
improve the accuracy of prediction. Our strategy involves a joint optimization
procedure over the space of adjacency matrices and VAR matrices which takes
into account both sparsity and low rank properties of the matrices. Oracle
inequalities are derived and illustrate the trade-offs in the choice of
smoothing parameters when modeling the joint effect of sparsity and low rank
property. The estimate is computed efficiently using proximal methods through a
generalized forward-backward agorithm.Comment: NIPS 201
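The VAR component of the model is easy to sketch in isolation (the dimensions, noise level, and the plain least-squares estimator are our illustrative choices; the paper couples this with sparsity and low-rank penalties on the adjacency side): simulate a feature trajectory such as node degrees under $x_{t+1} = W x_t + \varepsilon_t$ and recover $W$ by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 400
W_true = 0.5 * rng.standard_normal((d, d)) / np.sqrt(d)  # stable VAR(1) matrix

# Simulate the VAR(1) feature process x_{t+1} = W x_t + noise.
X = np.zeros((T, d))
X[0] = rng.standard_normal(d)
for t in range(T - 1):
    X[t + 1] = W_true @ X[t] + 0.1 * rng.standard_normal(d)

# Least-squares VAR(1) estimate from the trajectory:
# W_hat = argmin_W sum_t ||x_{t+1} - W x_t||^2
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
W_hat = B.T
```

With enough observations relative to the noise, `W_hat` approaches `W_true`; the regularized joint estimator in the abstract addresses the high-dimensional regime where plain least squares is unreliable.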
A large covariance matrix estimator under intermediate spikiness regimes
The present paper concerns large covariance matrix estimation via composite
minimization under the assumption of low rank plus sparse structure. In this
approach, the low rank plus sparse decomposition of the covariance matrix is
recovered by least squares minimization under nuclear norm plus $\ell_1$ norm
penalization. This paper proposes a new estimator of that family based on an
additional least-squares re-optimization step aimed at un-shrinking the
eigenvalues of the low rank component estimated at the first step. We prove
that such un-shrinkage causes the final estimate to approach the target as
closely as possible in Frobenius norm while recovering exactly the underlying
low rank and sparsity pattern. Consistency is guaranteed under explicit
conditions on the dimension, the sample size, and the maximum number of
non-zeros per row in the sparse component. Consistent recovery is ensured
under a condition on the scaling of the latent eigenvalues, while rank
consistency is ensured under a further eigenvalue scaling condition.
The resulting estimator is called UNALCE (UNshrunk ALgebraic Covariance
Estimator) and is shown to outperform state-of-the-art estimators, especially
in terms of fitting properties and sparsity pattern detection. The
effectiveness of UNALCE is highlighted on a real example regarding ECB banking
supervisory data.
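The effect of the un-shrinkage step can be caricatured at the eigenvalue level (a simplification of ours; UNALCE's actual step is a least-squares re-optimization): nuclear-norm penalization soft-thresholds eigenvalues, biasing the retained ones downward, and adding the threshold back removes that bias while preserving the recovered rank.

```python
import numpy as np

def shrink_eigs(C, tau):
    """Soft-threshold the eigenvalues of a symmetric matrix: the kind of
    spectral shrinkage induced by nuclear-norm penalization."""
    w, V = np.linalg.eigh(C)
    return (V * np.maximum(w - tau, 0)) @ V.T

def unshrink_eigs(L, tau):
    """Add tau back to the retained eigenvalues: a one-line caricature of the
    un-shrinkage step (the retained rank is unchanged)."""
    w, V = np.linalg.eigh(L)
    w = np.where(w > 1e-8 * w.max(), w + tau, 0.0)  # skip numerical-noise eigenvalues
    return (V * w) @ V.T

rng = np.random.default_rng(0)
n = 30
B = rng.standard_normal((n, 3))
C = B @ B.T + np.eye(n)          # "true" covariance: low rank plus identity

tau = 5.0
L = shrink_eigs(C, tau)          # shrunk low-rank estimate (rank 3 here)
L_un = unshrink_eigs(L, tau)     # un-shrunk estimate, same rank

err_shrunk = np.linalg.norm(C - L)
err_unshrunk = np.linalg.norm(C - L_un)
assert err_unshrunk < err_shrunk  # un-shrinkage moves closer in Frobenius norm
```

In this toy case the top eigenvalues are restored exactly, so the Frobenius error after un-shrinkage reduces to the discarded bulk spectrum, mirroring the abstract's claim that un-shrinkage brings the estimate as close as possible to the target while keeping the recovered rank.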