32,564 research outputs found

    Minimum ranks of sign patterns via sign vectors and duality

    Full text link
    A {\it sign pattern matrix} is a matrix whose entries are from the set $\{+, -, 0\}$. The minimum rank of a sign pattern matrix $A$ is the minimum of the ranks of the real matrices whose entries have signs equal to the corresponding entries of $A$. It is shown in this paper that for any $m \times n$ sign pattern $A$ with minimum rank $n-2$, rational realization of the minimum rank is possible. This is done using a new approach involving sign vectors and duality. It is shown that for each integer $n \geq 9$, there exists a nonnegative integer $m$ such that there exists an $n \times m$ sign pattern matrix with minimum rank $n-3$ for which rational realization is not possible. A characterization of $m \times n$ sign patterns $A$ with minimum rank $n-1$ is given (which solves an open problem in Brualdi et al. \cite{Bru10}), along with a more general description of sign patterns with minimum rank $r$, in terms of sign vectors of certain subspaces. A number of results on the maximum and minimum numbers of sign vectors of $k$-dimensional subspaces of $\mathbb{R}^n$ are obtained. In particular, it is shown that the maximum number of sign vectors of $2$-dimensional subspaces of $\mathbb{R}^n$ is $4n+1$. Several related open problems are stated along the way.
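
    To make the definitions concrete, here is a minimal sketch (ours, not the paper's) that checks whether a rational matrix realizes a small sign pattern and reports its rank; the pattern, the matrix, and the helper name realizes are all illustrative.

        import numpy as np

        # Hypothetical helper (not from the paper): B realizes the sign pattern A,
        # encoded with entries in {+1, -1, 0}, when sign(B) == A entrywise.
        def realizes(A, B):
            return np.array_equal(np.sign(B).astype(int), A)

        A = np.array([[1,  1],
                      [1, -1]])

        # Every realization has determinant a*d - b*c < 0, hence rank 2, so the
        # minimum rank of A is 2; the rational matrix B attains it.
        B = np.array([[1.0,  1.0],
                      [1.0, -1.0]])
        assert realizes(A, B)
        print(np.linalg.matrix_rank(B))  # 2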

    Sign rank versus VC dimension

    Full text link
    This work studies the maximum possible sign rank of $N \times N$ sign matrices with a given VC dimension $d$. For $d=1$, this maximum is three. For $d=2$, this maximum is $\tilde{\Theta}(N^{1/2})$. For $d>2$, similar but slightly less accurate statements hold. The lower bounds improve over previous ones by Ben-David et al., and the upper bounds are novel. The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve. The upper bound technique is also used to: (i) provide estimates on the number of classes of a given VC dimension, and the number of maximum classes of a given VC dimension -- answering a question of Frankl from '89, and (ii) design an efficient algorithm that provides an $O(N/\log(N))$ multiplicative approximation for the sign rank. We also observe a general connection between sign rank and spectral gaps which is based on Forster's argument. Consider the $N \times N$ adjacency matrix of a $\Delta$-regular graph with a second eigenvalue of absolute value $\lambda$ and $\Delta \leq N/2$. We show that the sign rank of the signed version of this matrix is at least $\Delta/\lambda$. We use this connection to prove the existence of a maximum class $C \subseteq \{\pm 1\}^N$ with VC dimension $2$ and sign rank $\tilde{\Theta}(N^{1/2})$. This answers a question of Ben-David et al. regarding the sign rank of large VC classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem. We further describe connections to communication complexity, geometry, learning theory, and combinatorics. Comment: 33 pages. This is a revised version of the paper "Sign rank versus VC dimension". Additional results in this version: (i) Estimates on the number of maximum VC classes (answering a question of Frankl from '89). (ii) Estimates on the sign rank of large VC classes (answering a question of Ben-David et al. from '03). (iii) A discussion on the computational complexity of computing the sign-rank.
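
    To illustrate the central definition (our example, not one of the paper's constructions): the sketch below certifies that the $N \times N$ upper-triangular sign matrix has sign rank at most 2 by exhibiting a rank-2 real matrix with matching signs.

        import numpy as np

        N = 8
        # Target: the "upper-triangular" sign matrix S[i, j] = +1 if i <= j, else -1.
        S = np.where(np.arange(N)[:, None] <= np.arange(N)[None, :], 1, -1)

        # Rank-2 realization: B[i, j] = (j - i) + 0.5 is the sum of two rank-1
        # matrices, outer(ones, j + 0.5) - outer(i, ones), and its signs match S.
        i = np.arange(N, dtype=float)[:, None]
        j = np.arange(N, dtype=float)[None, :]
        B = (j - i) + 0.5

        assert np.array_equal(np.sign(B).astype(int), S)
        print(np.linalg.matrix_rank(B))  # 2, so the sign rank of S is at most 2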

    Recovery of Coherent Data via Low-Rank Dictionary Pursuit

    Full text link
    The recently established RPCA method provides a convenient way to restore low-rank matrices from grossly corrupted observations. While elegant in theory and powerful in practice, RPCA may not be an ultimate solution to the low-rank matrix recovery problem. Indeed, its performance may not be perfect even when data are strictly low-rank. This is because conventional RPCA ignores the clustering structures of the data, which are ubiquitous in modern applications. As the number of clusters grows, the coherence of the data keeps increasing, and accordingly, the recovery performance of RPCA degrades. We show that the challenges raised by coherent data (i.e., data with high coherence) can be alleviated by Low-Rank Representation (LRR), provided that the dictionary in LRR is configured appropriately. More precisely, we mathematically prove that if the dictionary itself is low-rank, then LRR is immune to the coherence parameter, which increases with the underlying cluster number. This provides an elementary principle for dealing with coherent data. Subsequently, we devise a practical algorithm to obtain proper dictionaries in unsupervised environments. Our extensive experiments on randomly generated matrices verify our claims.
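
    For concreteness, here is a minimal sketch of the LRR convex program referred to above, written with cvxpy under our own assumptions: the dictionary choice D = X, the trade-off weight, and the column-wise $\ell_{2,1}$ residual term are common illustrative defaults from the LRR literature, not the paper's dictionary-learning algorithm.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        X = rng.standard_normal((20, 4)) @ rng.standard_normal((4, 30))  # low-rank data

        # Dictionary: the paper argues a low-rank dictionary makes LRR immune to
        # coherence; we simply take D = X (the usual default) for illustration.
        D = X

        Z = cp.Variable((30, 30))  # representation coefficients
        E = cp.Variable((20, 30))  # residual, penalized column-wise (l_{2,1})
        lam = 0.1                  # trade-off weight, chosen arbitrarily

        # LRR program: minimize ||Z||_* + lam * ||E||_{2,1}  s.t.  X = D Z + E
        objective = cp.normNuc(Z) + lam * cp.sum(cp.norm(E, axis=0))
        cp.Problem(cp.Minimize(objective), [X == D @ Z + E]).solve()
        print("rank of D @ Z:", np.linalg.matrix_rank(D @ Z.value, tol=1e-6))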

    Minimum (maximum) rank of tensors and the sign nonsingular tensors

    Full text link
    In this paper, we define the minimum (maximum) rank, the term rank, and sign nonsingularity for tensors. A necessary and sufficient condition for the minimum rank of a real tensor to be $1$ is given. We show that the maximum rank of a tensor is not less than its term rank, and we prove that the minimum rank of a sign nonsingular tensor is not less than its dimension. We also obtain some characterizations of tensors having sign left or sign right inverses.
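
    For intuition about the rank-1 case (our example, not the paper's characterization): a nonzero tensor has rank 1 exactly when it is an outer product of vectors, in which case its sign pattern is the outer product of the factors' sign patterns.

        import numpy as np

        rng = np.random.default_rng(1)
        u, v, w = rng.standard_normal(3), rng.standard_normal(4), rng.standard_normal(5)

        # A rank-1 third-order tensor T[i, j, k] = u[i] * v[j] * w[k] ...
        T = np.einsum('i,j,k->ijk', u, v, w)

        # ... whose sign pattern factors the same way.
        assert np.array_equal(np.sign(T),
                              np.einsum('i,j,k->ijk', np.sign(u), np.sign(v), np.sign(w)))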

    Sign patterns with minimum rank 3 and point-line configurations

    Full text link
    A \emph{sign pattern (matrix)} is a matrix whose entries are from the set $\{+, -, 0\}$. The \emph{minimum rank} (respectively, \emph{rational minimum rank}) of a sign pattern matrix $\cal A$ is the minimum of the ranks of the real (respectively, rational) matrices whose entries have signs equal to the corresponding entries of $\cal A$. A sign pattern $\cal A$ is said to be \emph{condensed} if $\cal A$ has no zero row or column and no two rows or columns are identical or negatives of each other. In this paper, a new direct connection between condensed $m \times n$ sign patterns with minimum rank $r$ and configurations of $m$ points and $n$ hyperplanes in ${\mathbb R}^{r-1}$ is established. In particular, condensed sign patterns with minimum rank 3 are closely related to point--line configurations in the plane. It is proved that for any sign pattern $\cal A$ with minimum rank $r \geq 3$, if the number of zero entries in each column of $\cal A$ is at most $r-1$, then the rational minimum rank of $\cal A$ is also $r$. Furthermore, we construct the smallest known sign pattern whose minimum rank is 3 but whose rational minimum rank is greater than 3. Comment: 13 pages; presented at the 2013 ILAS conference.
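
    The stated correspondence can be illustrated directly (our sketch, not the paper's construction): signed incidences of $m$ points against $n$ lines in the plane form the sign pattern of a rank-3 real matrix, so that pattern's minimum rank is at most 3.

        import numpy as np

        # m points (x, y) and n lines a*x + b*y + c = 0 in the plane, chosen arbitrarily.
        points = np.array([[0.0, 1.0], [2.0, -1.0], [1.0, 1.0], [-1.0, 0.5]])    # m = 4
        lines  = np.array([[1.0, -1.0, 0.5], [0.0, 1.0, -1.0], [2.0, 1.0, 0.0]]) # n = 3

        # B[i, j] = a_j * x_i + b_j * y_i + c_j factors through R^3 (append a
        # constant-1 coordinate to each point), so rank(B) <= 3.
        P = np.hstack([points, np.ones((len(points), 1))])  # m x 3
        B = P @ lines.T                                     # m x n

        A = np.sign(B).astype(int)  # the associated sign pattern; min rank <= 3
        print(A)
        print(np.linalg.matrix_rank(B))  # at most 3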

    The complexity of computing the minimum rank of a sign pattern matrix

    Full text link
    We show that computing the minimum rank of a sign pattern matrix is NP-hard. Our proof is based on a simple but useful connection between minimum ranks of sign pattern matrices and the stretchability problem for pseudoline arrangements. In fact, our hardness result shows that it is already hard to determine if the minimum rank of a sign pattern matrix is $\leq 3$. We complement this by giving a polynomial time algorithm for determining if a given sign pattern matrix has minimum rank $\leq 2$. Our result answers one of the open problems from Linial et al. [Combinatorica, 27(4):439--463, 2007]. Comment: 16 pages.
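
    By way of contrast with the hard cases, the rank-one analogue of the paper's polynomial-time question is easy; the sketch below is our own code, not the paper's minimum-rank-$\leq 2$ algorithm, and tests whether a sign pattern has minimum rank at most 1.

        import numpy as np

        # Our illustration: a sign pattern A, with entries in {+1, -1, 0}, has
        # minimum rank <= 1 iff A = sign(u) sign(v)^T for real vectors u, v.
        # Equivalently: every zero entry lies in an all-zero row or column, and
        # the surviving rows agree up to a global sign flip.
        def min_rank_at_most_one(A):
            if not np.any(A):
                return True  # the all-zero pattern has minimum rank 0
            core = A[np.any(A != 0, axis=1)][:, np.any(A != 0, axis=0)]
            if np.any(core == 0):
                return False  # a zero survives, but sign(u_i * v_j) cannot vanish here
            return all(np.array_equal(r, core[0]) or np.array_equal(r, -core[0])
                       for r in core)

        print(min_rank_at_most_one(np.array([[1, -1], [-1, 1]])))  # True
        print(min_rank_at_most_one(np.array([[1, 1], [1, -1]])))   # False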

    Low-Rank Matrix Approximation in the Infinity Norm

    Full text link
    The low-rank matrix approximation problem with respect to the entry-wise $\ell_{\infty}$-norm is the following: given a matrix $M$ and a factorization rank $r$, find a matrix $X$ whose rank is at most $r$ and that minimizes $\max_{i,j} |M_{ij} - X_{ij}|$. In this paper, we prove that the decision variant of this problem for $r=1$ is NP-complete using a reduction from the problem `not all equal 3SAT'. We also analyze several cases when the problem can be solved in polynomial time, and propose a simple practical heuristic algorithm which we apply to the problem of recovering a quantized low-rank matrix. Comment: 12 pages, 3 tables.
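
    As a toy illustration, here is a naive alternating heuristic of our own (not the paper's proposed algorithm) for the rank-one case: with $X = xy^T$, each coordinate update $\min_{y_j} \max_i |M_{ij} - x_i y_j|$ is a one-dimensional convex piecewise-linear problem.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Alternately update the coordinates of x and y; each scalar subproblem
        # is convex, so scipy's scalar minimizer finds its global minimum.
        def rank1_chebyshev(M, iters=20):
            U, s, Vt = np.linalg.svd(M)
            x, y = U[:, 0] * s[0], Vt[0, :].copy()  # warm start: best rank-1 SVD term
            for _ in range(iters):
                for j in range(M.shape[1]):
                    y[j] = minimize_scalar(lambda t: np.max(np.abs(M[:, j] - x * t))).x
                for i in range(M.shape[0]):
                    x[i] = minimize_scalar(lambda t: np.max(np.abs(M[i, :] - t * y))).x
            return x, y

        rng = np.random.default_rng(2)
        M = np.round(np.outer(rng.standard_normal(6), rng.standard_normal(7)), 1)  # quantized
        x, y = rank1_chebyshev(M)
        print(np.max(np.abs(M - np.outer(x, y))))  # entrywise l_inf error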

    Robust Matrix Decomposition with Outliers

    Full text link
    Suppose a given observation matrix can be decomposed as the sum of a low-rank matrix and a sparse matrix (outliers), and the goal is to recover these individual components from the observed sum. Such additive decompositions have applications in a variety of numerical problems including system identification, latent variable graphical modeling, and principal components analysis. We study conditions under which recovering such a decomposition is possible via a combination of $\ell_1$ norm and trace norm minimization. We are specifically interested in the question of how many outliers are allowed so that convex programming can still achieve accurate recovery, and we obtain stronger recovery guarantees than previous studies. Moreover, we do not assume that the spatial pattern of outliers is random, which stands in contrast to related analyses under such assumptions via matrix completion. Comment: Corrected comparisons to previous work of Candes et al. (2009).
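
    The convex program described, trace (nuclear) norm plus $\ell_1$ norm minimization, can be transcribed directly; in this cvxpy sketch the weight $\lambda = 1/\sqrt{\max(m,n)}$ is the standard choice from the RPCA literature, not necessarily this paper's tuning.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(3)
        m, n = 15, 20
        L_true = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # low rank
        S_true = np.where(rng.random((m, n)) < 0.05, 10.0, 0.0)             # sparse outliers
        M = L_true + S_true

        L = cp.Variable((m, n))
        S = cp.Variable((m, n))
        lam = 1.0 / np.sqrt(max(m, n))  # common RPCA-style weight
        cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                   [L + S == M]).solve()
        print(np.linalg.norm(L.value - L_true) / np.linalg.norm(L_true))  # relative error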

    Link Prediction in Graphs with Autoregressive Features

    Full text link
    In this paper, we consider the problem of link prediction in time-evolving graphs. We assume that certain graph features, such as the node degree, follow a vector autoregressive (VAR) model, and we propose to use this information to improve the accuracy of prediction. Our strategy involves a joint optimization procedure over the space of adjacency matrices and VAR matrices which takes into account both the sparsity and the low-rank property of the matrices. Oracle inequalities are derived and illustrate the trade-offs in the choice of smoothing parameters when modeling the joint effect of sparsity and low rank. The estimate is computed efficiently using proximal methods through a generalized forward-backward algorithm. Comment: NIPS 2012.
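
    The proximal building blocks such a forward-backward scheme needs are the standard ones below (a sketch with our own naming; the paper's generalized forward-backward algorithm composes them with the data-fit gradient).

        import numpy as np

        # Proximal operator of tau * ||.||_1: entrywise soft-thresholding.
        def prox_l1(X, tau):
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        # Proximal operator of tau * ||.||_* : soft-threshold the singular values
        # (singular value thresholding), which promotes low rank.
        def prox_nuclear(X, tau):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        # One forward-backward step on f(X) + tau * ||X||_* with step size eta:
        #     X = prox_nuclear(X - eta * grad_f(X), eta * tau)
        A = np.random.default_rng(4).standard_normal((5, 5))
        print(np.linalg.matrix_rank(prox_nuclear(A, 1.0)))  # rank after thresholding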

    A large covariance matrix estimator under intermediate spikiness regimes

    Full text link
    The present paper concerns large covariance matrix estimation via composite minimization under the assumption of a low rank plus sparse structure. In this approach, the low rank plus sparse decomposition of the covariance matrix is recovered by least squares minimization under nuclear norm plus $l_1$ norm penalization. This paper proposes a new estimator of that family based on an additional least-squares re-optimization step aimed at un-shrinking the eigenvalues of the low rank component estimated at the first step. We prove that such un-shrinkage causes the final estimate to approach the target as closely as possible in Frobenius norm while recovering exactly the underlying low rank and sparsity pattern. Consistency is guaranteed when $n$ is at least $O(p^{\frac{3}{2}\delta})$, provided that the maximum number of non-zeros per row in the sparse component is $O(p^{\delta})$ with $\delta \leq \frac{1}{2}$. Consistent recovery is ensured if the latent eigenvalues scale as $p^{\alpha}$, $\alpha \in [0,1]$, while rank consistency is ensured if $\delta \leq \alpha$. The resulting estimator is called UNALCE (UNshrunk ALgebraic Covariance Estimator) and is shown to outperform state-of-the-art estimators, especially in terms of fitting properties and sparsity pattern detection. The effectiveness of UNALCE is highlighted on a real example involving ECB banking supervisory data.
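
    A schematic of the un-shrinkage step as we read it (our reconstruction, not the exact UNALCE procedure): keep the first-step eigenvectors of the low-rank component and re-fit its eigenvalues by least squares against the residual sample covariance; in Frobenius norm this has the closed form used below.

        import numpy as np

        # Given first-step estimates L1 (low rank, eigenvalues shrunk by the
        # nuclear-norm penalty) and S1 (sparse), keep the top-r eigenvectors of
        # L1 and re-fit the eigenvalues against R = Sigma_sample - S1. Since the
        # projectors v_i v_i' are orthonormal in the Frobenius inner product,
        # the least-squares coefficients are the quadratic forms v_i' R v_i.
        def unshrink(Sigma_sample, L1, S1, r):
            w, V = np.linalg.eigh(L1)
            V = V[:, np.argsort(w)[::-1][:r]]  # top-r eigenvectors of L1
            R = Sigma_sample - S1
            d = np.array([V[:, i] @ R @ V[:, i] for i in range(r)])
            return V @ np.diag(d) @ V.T        # un-shrunk low-rank component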