Covariance Estimation from Compressive Data Partitions using a Projected Gradient-based Algorithm
Covariance matrix estimation techniques entail high acquisition costs that
challenge the storage and transmission capabilities of sampling systems. For this
reason, various acquisition approaches have been developed to simultaneously
sense and compress the relevant information of the signal using random
projections. However, estimating the covariance matrix from the random
projections is an ill-posed problem that requires further information about the
data, such as sparsity, low rank, or stationary behavior. Furthermore, this
approach fails at high compression ratios. Therefore, this paper proposes an
algorithm based on the projected gradient method to recover a low-rank or
Toeplitz approximation of the covariance matrix. The proposed algorithm divides
the data into subsets projected onto different subspaces, assuming that each
subset preserves an approximation of the signal statistics, which improves the
conditioning of the inverse problem. The error induced by this assumption is
analytically derived along with the convergence guarantees of the proposed
method. Extensive simulations show that the proposed algorithm can effectively
recover the covariance matrix of hyperspectral images with high compression
ratios (approximately 8-15%) in noisy scenarios. Additionally, simulations and
theoretical results show that filtering the gradient reduces the estimator's
error, recovering up to twice the number of eigenvectors.
Comment: submitted to IEEE Transactions on Image Processing
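As a rough illustration of the recovery step described above, the following NumPy sketch runs a projected-gradient iteration on a data-fit objective of the form sum_i ||Phi_i S Phi_i^T - Y_i||_F^2, projecting each iterate onto the positive semidefinite matrices of bounded rank. The function name, step-size rule, and synthetic setup are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def projected_gradient_cov(Phis, Ys, n, rank, iters=300, step=None):
    """Minimize sum_i ||Phi_i S Phi_i^T - Y_i||_F^2 by gradient descent,
    projecting each iterate onto PSD matrices of rank <= `rank`.
    A sketch of the general technique, not the paper's algorithm."""
    if step is None:
        # Conservative step size from a crude Lipschitz estimate.
        step = 1.0 / sum(np.linalg.norm(Phi, 2) ** 4 for Phi in Phis)
    S = np.zeros((n, n))
    for _ in range(iters):
        G = np.zeros((n, n))
        for Phi, Y in zip(Phis, Ys):
            G += Phi.T @ (Phi @ S @ Phi.T - Y) @ Phi   # data-fit gradient
        S -= step * G
        # Projection: keep only the top-`rank` nonnegative eigenpairs.
        w, V = np.linalg.eigh(S)
        top = np.argsort(w)[::-1][:rank]
        S = (V[:, top] * np.clip(w[top], 0.0, None)) @ V[:, top].T
    return S

# Illustrative usage: k compressed partitions of a rank-r covariance.
rng = np.random.default_rng(0)
n, m, r, k = 32, 8, 3, 50
A = rng.standard_normal((n, r))
Sigma = A @ A.T                                    # ground-truth covariance
Phis = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(k)]
Ys = [Phi @ Sigma @ Phi.T for Phi in Phis]         # noiseless projections
S_hat = projected_gradient_cov(Phis, Ys, n, rank=r)
```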
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
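Since the tensor train format is central to the monograph, a minimal sketch of the classical TT-SVD procedure (sequential truncated SVDs of successive matricizations) may help fix ideas. The function names and the simple truncation rule below are illustrative, not taken from the text.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Factor a dense tensor T into 3-way TT cores by sequential
    truncated SVDs of its matricizations (classical TT-SVD)."""
    dims, d = T.shape, T.ndim
    cores, r_prev = [], 1
    M = T.reshape(dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))    # simple rank truncation
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor (for checking)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# Illustrative usage: a random dense tensor is recovered exactly.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6, 3))
cores = tt_svd(T)
assert np.allclose(tt_full(cores), T)
```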
Covariance Estimation in High Dimensions via Kronecker Product Expansions
This paper presents a new method for estimating high dimensional covariance
matrices. The method, permuted rank-penalized least-squares (PRLS), is based on
a Kronecker product series expansion of the true covariance matrix. Assuming an
i.i.d. Gaussian random sample, we establish high dimensional rates of
convergence to the true covariance as both the number of samples and the number
of variables go to infinity. For covariance matrices of low separation rank,
our results establish that PRLS has significantly faster convergence than the
standard sample covariance matrix (SCM) estimator. The convergence rate
captures a fundamental tradeoff between estimation error and approximation
error, thus providing a scalable covariance estimation framework in terms of
separation rank, similar to low rank approximation of covariance matrices. The
MSE convergence rates generalize the high dimensional rates recently obtained
for the ML Flip-flop algorithm for Kronecker product covariance estimation. We
show that a class of block Toeplitz covariance matrices can be approximated with
low separation rank, and we give bounds on the minimal separation rank that
ensures a given level of bias. Simulations are presented to validate the
theoretical bounds. As a real world application, we illustrate the utility of
the proposed Kronecker covariance estimator for spatio-temporal linear least
squares prediction of multivariate wind speed measurements.
Comment: 47 pages, accepted to IEEE Transactions on Signal Processing
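The unpenalized core of such a Kronecker product series expansion can be sketched with the classical Van Loan-Pitsianis rearrangement: permute the entries of the covariance so that each Kronecker term becomes a rank-one matrix, then truncate the SVD at the desired separation rank. PRLS itself shrinks the singular values via a rank penalty; the hard truncation and all names below are illustrative assumptions.

```python
import numpy as np

def kron_expansion(Sigma, p, q, sep_rank):
    """Approximate Sigma (pq x pq) as sum_k A_k kron B_k with A_k (p x p)
    and B_k (q x q), via rearrangement + truncated SVD (Van Loan-Pitsianis).
    The rank-penalized PRLS estimator would shrink the singular values
    instead of hard-truncating; this shows only the unpenalized core idea."""
    assert Sigma.shape == (p * q, p * q)
    # Rearrange: the row-major vec of block (i, j) becomes row i*p + j of R,
    # so that R(A kron B) = vec(A) vec(B)^T is rank one.
    R = np.empty((p * p, q * q))
    for i in range(p):
        for j in range(p):
            R[i * p + j] = Sigma[i * q:(i + 1) * q, j * q:(j + 1) * q].reshape(-1)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return [((np.sqrt(s[k]) * U[:, k]).reshape(p, p),
             (np.sqrt(s[k]) * Vt[k]).reshape(q, q))
            for k in range(sep_rank)]

# Illustrative check: an exact Kronecker product has separation rank one.
rng = np.random.default_rng(0)
p, q = 4, 3
A0, B0 = rng.standard_normal((p, p)), rng.standard_normal((q, q))
Sigma = np.kron(A0, B0)
(A, B), = kron_expansion(Sigma, p, q, sep_rank=1)
assert np.allclose(np.kron(A, B), Sigma)
```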