45 research outputs found
Structured low-rank matrix learning: algorithms and applications
We consider the problem of learning a low-rank matrix, constrained to lie in
a linear subspace, and introduce a novel factorization for modeling such
matrices. A salient feature of the proposed factorization scheme is that it
decouples the low-rank and structural constraints onto separate factors. We
formulate the optimization problem on the Riemannian spectrahedron manifold,
where the Riemannian framework allows us to develop computationally efficient
conjugate gradient and trust-region algorithms. Experiments on problems such as
standard/robust/non-negative matrix completion, Hankel matrix learning and
multi-task learning demonstrate the efficacy of our approach. A shorter version
of this work has been published in ICML'18.
Comment: Accepted in ICML'18
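
The abstract does not spell out the factorization, so the sketch below is only
a rough, hedged illustration of decoupling a low-rank constraint from a linear
structural constraint: a classical Cadzow-style alternating projection between
a rank-r set and the Hankel subspace, not the paper's Riemannian spectrahedron
algorithm. All sizes are hypothetical.

```python
import numpy as np

def project_rank(X, r):
    # Project onto matrices of rank at most r via truncated SVD.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def project_hankel(X):
    # Project onto the linear subspace of Hankel matrices by
    # averaging each anti-diagonal.
    m, n = X.shape
    H = np.empty_like(X)
    for k in range(m + n - 1):
        rows = range(max(0, k - n + 1), min(m, k + 1))
        vals = [X[i, k - i] for i in rows]
        avg = sum(vals) / len(vals)
        for i in rows:
            H[i, k - i] = avg
    return H

# Alternate between the two constraint sets (Cadzow-style iteration).
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 20))
for _ in range(50):
    X = project_hankel(project_rank(X, 3))
```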
Scaled stochastic gradient descent for low-rank matrix completion
This paper studies a scaled variant of the stochastic gradient descent
algorithm for the matrix completion problem. Specifically, we propose a novel
matrix-scaling of the partial derivatives that acts as an efficient
preconditioning for the standard stochastic gradient descent algorithm. This
proposed matrix-scaling provides a trade-off between local and global
second-order information. It also resolves the scale-invariance issue inherent
in matrix factorization models. The overall computational complexity is linear
in the number of known entries, so the method scales to large problems.
Numerical comparisons show that the proposed algorithm competes favorably with
state-of-the-art algorithms on various benchmarks.
Comment: Accepted to IEEE CDC 201
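
A minimal sketch of the scaling idea, under the assumption that the
preconditioners are the inverse r x r Gram matrices of the opposite factor
(the paper's exact matrix-scaling may differ). The preconditioners cost only
O(r^2) per entry update, consistent with the claimed linear complexity.

```python
import random
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 100, 80, 5
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # ground truth
obs = [(i, j) for i in range(m) for j in range(n) if rng.random() < 0.3]

L = rng.standard_normal((m, r)) * 0.1
R = rng.standard_normal((n, r)) * 0.1
eta, eps = 0.05, 1e-8

for epoch in range(20):
    # Refresh the cheap r x r Gram-matrix preconditioners once per epoch.
    PL = np.linalg.inv(R.T @ R + eps * np.eye(r))  # scales updates of L's rows
    PR = np.linalg.inv(L.T @ L + eps * np.eye(r))  # scales updates of R's rows
    random.shuffle(obs)
    for i, j in obs:
        res = L[i] @ R[j] - M[i, j]     # residual on the observed entry (i, j)
        gL, gR = res * R[j], res * L[i]
        L[i] -= eta * gL @ PL           # scaled (preconditioned) SGD step
        R[j] -= eta * gR @ PR
```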
Riemannian preconditioning for tensor completion
We propose a novel Riemannian preconditioning approach for the tensor
completion problem with rank constraint. A Riemannian metric or inner product
is proposed that exploits the least-squares structure of the cost function and
takes into account the structured symmetry in Tucker decomposition. The
specific metric allows us to use the versatile framework of Riemannian
optimization on quotient manifolds to develop a preconditioned nonlinear
conjugate gradient algorithm for the problem. To this end, concrete matrix
representations of various optimization-related ingredients are listed.
Numerical comparisons suggest that our proposed algorithm robustly outperforms
state-of-the-art algorithms across different problem instances encompassing
various synthetic and real-world datasets.
Comment: Supplementary material included in the paper. An extension of the paper is in arXiv:1605.0825
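
As a hedged, Euclidean-only illustration of the metric idea (the quotient
geometry, retraction, and sampling mask are omitted, and the exact metric in
the paper may differ), one can scale the gradient of a Tucker factor by an
r x r Gram matrix arising from the least-squares structure of the cost:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization (mode index moved first, then reshaped, C order).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_to_full(G, U):
    # Multiply the core G by the factor matrices along every mode.
    T = G
    for d in range(3):
        T = np.moveaxis(np.tensordot(U[d], T, axes=(1, d)), 0, d)
    return T

rng = np.random.default_rng(0)
shape, ranks = (15, 12, 10), (3, 3, 3)
# A rank-(3,3,3) target tensor, fully observed here for brevity;
# completion would simply mask the residual below.
T_star = tucker_to_full(rng.standard_normal(ranks),
                        [np.linalg.qr(rng.standard_normal((shape[d], 3)))[0]
                         for d in range(3)])

G = rng.standard_normal(ranks)
U = [np.linalg.qr(rng.standard_normal((shape[d], 3)))[0] for d in range(3)]

E = tucker_to_full(G, U) - T_star            # least-squares residual
W = unfold(G, 0) @ np.kron(U[1], U[2]).T     # so that X_(0) = U[0] @ W
egrad = unfold(E, 0) @ W.T                   # Euclidean gradient w.r.t. U[0]
# Metric-induced scaling: a small Gram matrix from the least-squares
# structure, used as a right preconditioner on the factor gradient.
P = np.linalg.inv(W @ W.T + 1e-8 * np.eye(ranks[0]))
U[0] = U[0] - 0.5 * egrad @ P                # preconditioned gradient step
```

A full Riemannian method would additionally retract, i.e. re-orthonormalize
the updated factor.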
Topological Interference Management with User Admission Control via Riemannian Optimization
Topological interference management (TIM) provides a promising way to manage
interference only based on the network connectivity information. Previous works
on the TIM problem mainly focus on using the index coding approach and graph
theory to establish conditions on network topologies under which topological
interference management is feasible. In this paper, we propose a novel user
admission control approach via sparse and low-rank optimization to maximize the
number of admitted users for achieving the feasibility of topological
interference management. To enable efficient algorithm design for the
formulated rank-constrained (i.e., degrees-of-freedom (DoF) allocation) l0-norm
maximization (i.e., user capacity maximization) problem, we propose a
regularized smoothed l1-norm minimization approach to induce the sparsity
pattern, thereby guiding the user selection. We further develop a Riemannian
trust-region algorithm to solve the resulting rank-constrained smooth
optimization problem via exploiting the quotient manifold of fixed-rank
matrices. Simulation results demonstrate the effectiveness and near-optimal
performance of the proposed Riemannian algorithm in maximizing the number of
admitted users for topological interference management.
Comment: arXiv admin note: text overlap with arXiv:1604.0432
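
A small sketch of the smoothed l1-norm surrogate over a fixed-rank
parameterization, with hypothetical sizes; the Riemannian trust-region
machinery and the TIM feasibility conditions of the paper are omitted:

```python
import numpy as np

def smoothed_l1(X, eps):
    # Smooth surrogate for the l1-norm: sum of sqrt(x^2 + eps^2).
    return np.sum(np.sqrt(X**2 + eps**2))

def smoothed_l1_grad(X, eps):
    return X / np.sqrt(X**2 + eps**2)

# The fixed-rank parameterization X = U V^T keeps the DoF (rank) constraint
# exact, while the smoothed l1 term induces the sparsity pattern that guides
# user selection.
rng = np.random.default_rng(0)
n, r, eps, eta = 30, 4, 1e-3, 0.01
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
for _ in range(200):
    G = smoothed_l1_grad(U @ V.T, eps)         # gradient w.r.t. the matrix X
    U, V = U - eta * G @ V, V - eta * G.T @ U  # chain rule through X = U V^T
```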
Riemannian joint dimensionality reduction and dictionary learning on symmetric positive definite manifold
Dictionary learning (DL) and dimensionality reduction (DR) are powerful tools
for analyzing high-dimensional noisy signals. This paper proposes a novel
Riemannian joint dimensionality reduction and dictionary learning
(R-JDRDL) on symmetric positive definite (SPD) manifolds for classification
tasks. The joint learning considers the interaction between dimensionality
reduction and dictionary learning procedures by connecting them into a unified
framework. We exploit a Riemannian optimization framework for solving DL and DR
problems jointly. Finally, we demonstrate that the proposed R-JDRDL outperforms
existing state-of-the-art algorithms on image classification tasks.
Comment: European Signal Processing Conference (EUSIPCO 2018)
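
The paper's joint R-JDRDL objective is not reproduced here; as a hedged
stand-in, the sketch below illustrates dimensionality reduction of SPD
matrices via the standard log-Euclidean mapping followed by PCA, which
respects the SPD geometry in the same spirit:

```python
import numpy as np
from scipy.linalg import logm

def spd_log_vector(S):
    # Map an SPD matrix into the log-Euclidean space and vectorize the upper
    # triangle (off-diagonals weighted to preserve the Frobenius norm).
    L = np.real(logm(S))
    iu = np.triu_indices_from(L)
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return L[iu] * w

rng = np.random.default_rng(0)
mats = []
for _ in range(50):
    A = rng.standard_normal((5, 5))
    mats.append(A @ A.T + 5.0 * np.eye(5))          # random SPD samples

X = np.array([spd_log_vector(S) for S in mats])
X -= X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)    # PCA as a stand-in DR map
Z = X @ Vt[:3].T   # 3-d features for a downstream dictionary / classifier
```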
Low-rank tensor completion: a Riemannian manifold preconditioning approach
We propose a novel Riemannian manifold preconditioning approach for the
tensor completion problem with rank constraint. A novel Riemannian metric or
inner product is proposed that exploits the least-squares structure of the cost
function and takes into account the structured symmetry that exists in Tucker
decomposition. The specific metric allows us to use the versatile framework of
Riemannian optimization on quotient manifolds to develop preconditioned
nonlinear conjugate gradient and stochastic gradient descent algorithms for
batch and online setups, respectively. Concrete matrix representations of
various optimization-related ingredients are listed. Numerical comparisons
suggest that our proposed algorithms robustly outperform state-of-the-art
algorithms across different synthetic and real-world datasets.
Comment: The 33rd International Conference on Machine Learning (ICML 2016). arXiv admin note: substantial text overlap with arXiv:1506.0215
A Sparse and Low-Rank Optimization Framework for Index Coding via Riemannian Optimization
Side information plays a pivotal role in message delivery in many
communication scenarios that must accommodate increasingly large data sets, e.g.,
caching networks. Although index coding provides a fundamental modeling
framework to exploit the benefits of side information, the index coding problem
itself still remains open and only a few instances have been solved. In this
paper, we propose a novel sparse and low-rank optimization modeling framework
for the index coding problem to characterize the tradeoff between the amount of
side information and the achievable data rate. Specifically, sparsity of the
model measures the amount of side information, while low-rankness represents
the achievable data rate. The resulting sparse and low-rank optimization
problem has a non-convex sparsity-inducing objective and a non-convex rank
constraint. To address the coupled challenges in the objective and constraint, we
propose a novel Riemannian optimization framework by exploiting the quotient
manifold geometry of fixed-rank matrices, accompanied by a smooth
sparsity-inducing surrogate. Simulation results demonstrate the appealing sparsity and
low-rankness tradeoff in the proposed model, thereby revealing the tradeoff
between the amount of side information and the achievable data rate in the
index coding problem.
Comment: Simulation code is available at https://bamdevmishra.com/indexcoding
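
As a hedged illustration of the modeling idea (hypothetical sizes, a simple
quadratic penalty in place of the paper's exact constraints), the sketch below
seeks a fixed-rank matrix X = U V^T with unit diagonal and a sparse
off-diagonal pattern, where sparsity stands in for the amount of side
information and the rank r for the achievable data rate:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, eps, eta, lam = 20, 3, 1e-3, 0.02, 10.0
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
off = 1.0 - np.eye(n)      # smoothed l1 acts on off-diagonal entries only

for _ in range(500):
    X = U @ V.T
    G = off * (X / np.sqrt(X**2 + eps**2))   # smoothed-l1 gradient (off-diag)
    G += lam * np.diag(np.diag(X) - 1.0)     # quadratic penalty: diag(X) = 1
    U, V = U - eta * G @ V, V - eta * G.T @ U
```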
A two-dimensional decomposition approach for matrix completion through gossip
Factoring a matrix into two low-rank matrices is at the heart of many
problems. Matrix completion, in particular, uses it to decompose a sparsely
observed matrix into two dense, low-rank factors, which can then be used to
predict the unknown entries of the original matrix. We present a scalable and
decentralized approach in which, instead of learning two factors for the
original input matrix, we decompose the original matrix into a grid of blocks,
each of whose factors can be individually learned just by communicating
(gossiping) with neighboring blocks. This eliminates any need for a central
server. We show that our algorithm performs well on both synthetic and real
datasets.
Comment: Appeared in the Emergent Communication Workshop at NIPS 201
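
A minimal synchronous sketch of the grid idea, with hypothetical sizes and
fully observed blocks for brevity (the actual algorithm is asynchronous,
decentralized, and masks unobserved entries): each block keeps local copies of
its row and column factors and reconciles them with its neighbors by
averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, p = 120, 120, 4, 3          # a p x p grid of blocks
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
bm, bn = m // p, n // p

# Block (a, b) keeps its own copies of the row factor (shared along block-row
# a) and the column factor (shared along block-column b).
L = [[rng.standard_normal((bm, r)) * 0.1 for _ in range(p)] for _ in range(p)]
R = [[rng.standard_normal((bn, r)) * 0.1 for _ in range(p)] for _ in range(p)]

eta = 0.01
for _ in range(300):
    for a in range(p):
        for b in range(p):
            Mb = M[a*bm:(a+1)*bm, b*bn:(b+1)*bn]
            E = L[a][b] @ R[a][b].T - Mb     # local residual of block (a, b)
            L[a][b] = L[a][b] - eta * E @ R[a][b]
            R[a][b] = R[a][b] - eta * E.T @ L[a][b]
    # Gossip: reconcile neighboring copies by simple averaging.
    for a in range(p):
        for b in range(p - 1):
            avg = 0.5 * (L[a][b] + L[a][b + 1])
            L[a][b], L[a][b + 1] = avg, avg.copy()
    for b in range(p):
        for a in range(p - 1):
            avg = 0.5 * (R[a][b] + R[a + 1][b])
            R[a][b], R[a + 1][b] = avg, avg.copy()
```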
Riemannian stochastic variance reduced gradient on Grassmann manifold
Stochastic variance reduction algorithms have recently become popular for
minimizing the average of a large, but finite, number of loss functions. In
this paper, we propose R-SVRG, a novel Riemannian extension of the Euclidean
stochastic variance reduced gradient algorithm, for compact manifold search
spaces. To this end, we develop the method concretely on the Grassmann manifold. The key
challenges of averaging, addition, and subtraction of multiple gradients are
addressed with notions like logarithm mapping and parallel translation of
vectors on the Grassmann manifold. We present a global convergence analysis of
the proposed algorithm with decaying step-sizes and a local convergence rate
analysis under a fixed step-size with some natural assumptions. The proposed
algorithm is applied to a number of problems on the Grassmann manifold, such as
principal component analysis, low-rank matrix completion, and Karcher mean
computation. In all these cases, the proposed algorithm outperforms the
standard Riemannian stochastic gradient descent algorithm.
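
A compact sketch of the R-SVRG recipe for the PCA cost, assuming a
projection-based vector transport and QR retraction on the Grassmann manifold
(valid choices, though the paper works with logarithm mapping and parallel
translation):

```python
import numpy as np

def rgrad(U, A):
    # Riemannian gradient on the Grassmann manifold for f(U) = -0.5 tr(U^T A U).
    egrad = -A @ U
    return egrad - U @ (U.T @ egrad)   # project onto the tangent space at U

def transport(U, xi):
    # Projection-based vector transport into the tangent space at U.
    return xi - U @ (U.T @ xi)

def retract(U, xi):
    # QR-based retraction back onto the manifold.
    Q, _ = np.linalg.qr(U + xi)
    return Q

rng = np.random.default_rng(0)
N, d, r, eta = 500, 20, 3, 1e-3
A_i = [np.outer(x, x) for x in rng.standard_normal((N, d))]  # per-sample terms

U = np.linalg.qr(rng.standard_normal((d, r)))[0]
for epoch in range(10):
    U0 = U.copy()
    full = sum(rgrad(U0, A) for A in A_i) / N   # full gradient at the anchor
    for _ in range(N):
        A = A_i[rng.integers(N)]
        # Variance-reduced direction: stochastic gradient at U, corrected by
        # the transported anchor gradient and the transported full gradient.
        v = rgrad(U, A) - transport(U, rgrad(U0, A)) + transport(U, full)
        U = retract(U, -eta * v)
```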
A Riemannian gossip approach to decentralized matrix completion
In this paper, we propose novel gossip algorithms for the low-rank
decentralized matrix completion problem. The proposed approach operates on the
Grassmann manifold, which allows local matrix completion by different
agents while achieving asymptotic consensus on the global low-rank factors. The
resulting approach is scalable and parallelizable. Our numerical experiments
show the good performance of the proposed algorithms on various benchmarks.
Comment: Under review
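
A hedged sketch of only the consensus half of such a scheme (the interleaved
local completion updates are omitted): two agents align their orthonormal
bases by orthogonal Procrustes, average, and retract via QR, with principal
angles available to monitor consensus.

```python
import numpy as np

def align(Ui, Uj):
    # Rotate Uj's basis to best match Ui (orthogonal Procrustes), removing
    # the rotation ambiguity of subspace representatives.
    W, _, Zt = np.linalg.svd(Uj.T @ Ui)
    return Uj @ (W @ Zt)

def gossip_step(Ui, Uj):
    # Move both agents to an approximate midpoint subspace; QR retracts the
    # average back to an orthonormal basis.
    Q, _ = np.linalg.qr(0.5 * (Ui + align(Ui, Uj)))
    return Q

def subspace_dist(Ui, Uj):
    # Geodesic-style distance via principal angles, to monitor consensus.
    s = np.clip(np.linalg.svd(Ui.T @ Uj, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(s))

rng = np.random.default_rng(0)
d, r, agents = 30, 4, 6
Us = [np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(agents)]

for _ in range(200):
    i = int(rng.integers(agents - 1))   # gossip along a chain of neighbors
    Us[i] = Us[i + 1] = gossip_step(Us[i], Us[i + 1])
```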