Regression and Singular Value Decomposition in Dynamic Graphs
Most real-world graphs are {\em dynamic}, i.e., they change over time.
However, while problems such as regression and Singular Value Decomposition
(SVD) have been studied for {\em static} graphs, they have not yet been
investigated for {\em dynamic} graphs. In this paper, we introduce,
motivate and study regression and SVD over dynamic graphs. First, we present
the notion of {\em update-efficient matrix embedding} that defines the
conditions sufficient for a matrix embedding to be used for the dynamic graph
regression problem (under the $\ell_2$ norm). We prove that, given an
update-efficient matrix embedding (e.g., adjacency matrix), after an update
operation in the graph, the optimal solution of the graph regression problem
for the revised graph can be computed efficiently. We also study dynamic
graph regression under least absolute deviation. Then, we characterize a class
of matrix embeddings that can be used to efficiently update SVD of a dynamic
graph. For the adjacency matrix and the Laplacian matrix, we study the graph
update operations for which SVD (and low-rank approximation) can be updated
efficiently.
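As a rough illustration of the update-efficiency idea described above (a hedged sketch, not the paper's actual algorithm): when a graph update appends one row to a design matrix, the least-squares regression solution can be refreshed with a Sherman-Morrison rank-one update of the inverse Gram matrix instead of a full re-solve. The matrix `M` here is a generic stand-in for a matrix embedding of a graph:

```python
import numpy as np

def sherman_morrison_row_update(G_inv, Mty, x, y_new):
    """After appending one row x (target y_new) to the design matrix,
    refresh the inverse Gram matrix (M^T M)^-1 and M^T y in O(d^2)
    instead of re-solving from scratch."""
    Gx = G_inv @ x
    G_inv = G_inv - np.outer(Gx, Gx) / (1.0 + x @ Gx)
    return G_inv, Mty + y_new * x

rng = np.random.default_rng(0)
n, d = 50, 5
M = rng.standard_normal((n, d))   # stand-in for some matrix embedding of a graph
y = rng.standard_normal(n)

G_inv = np.linalg.inv(M.T @ M)    # state maintained across updates
Mty = M.T @ y

x_new, y_new = rng.standard_normal(d), 0.7
G_inv, Mty = sherman_morrison_row_update(G_inv, Mty, x_new, y_new)
w_fast = G_inv @ Mty              # updated least-squares solution

# Agrees with a full re-solve on the enlarged problem
w_full = np.linalg.lstsq(np.vstack([M, x_new]), np.append(y, y_new), rcond=None)[0]
print(np.allclose(w_fast, w_full))   # True
```

The update costs O(d^2) per row, which is the kind of saving an "update-efficient" embedding is meant to enable.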
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
We present a natural generalization of the recent low rank + sparse matrix
decomposition and consider the decomposition of matrices into components of
multiple scales. Such decomposition is well motivated in practice as data
matrices often exhibit local correlations in multiple scales. Concretely, we
propose a multi-scale low rank modeling that represents a data matrix as a sum
of block-wise low rank matrices with increasing scales of block sizes. We then
consider the inverse problem of decomposing the data matrix into its
multi-scale low rank components and approach the problem via a convex
formulation. Theoretically, we show that under various incoherence conditions,
the convex program recovers the multi-scale low rank components either
exactly or approximately. Practically, we provide guidance on selecting the
regularization parameters and incorporate cycle spinning to reduce blocking
artifacts. Experimentally, we show that the multi-scale low rank decomposition
provides a more intuitive decomposition than conventional low rank methods and
demonstrate its effectiveness in four applications, including illumination
normalization for face images, motion separation for surveillance videos,
multi-scale modeling of dynamic contrast-enhanced magnetic resonance
imaging, and collaborative filtering exploiting age information.
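The forward model described above can be sketched as follows (a hypothetical construction for illustration; the block partition, scales, and ranks are assumptions, not the paper's experimental setup). A data matrix is built as a sum of components that are low rank at different block scales:

```python
import numpy as np

def blockwise_low_rank(n, block, rank, rng):
    """An n x n matrix in which every (block x block) sub-block on the
    regular grid has the given rank."""
    X = np.zeros((n, n))
    for i in range(0, n, block):
        for j in range(0, n, block):
            X[i:i+block, j:j+block] = (rng.standard_normal((block, rank))
                                       @ rng.standard_normal((rank, block)))
    return X

rng = np.random.default_rng(1)
n = 64

# Three scales: a globally rank-1 component, rank-1 blocks of size 8, and
# scattered spikes standing in for the finest (sparse) scale.
X_global = blockwise_low_rank(n, n, 1, rng)
X_block = blockwise_low_rank(n, 8, 1, rng)
X_sparse = np.zeros((n, n))
idx = rng.integers(0, n, size=(20, 2))
X_sparse[idx[:, 0], idx[:, 1]] = 5.0 * rng.standard_normal(20)

X = X_global + X_block + X_sparse   # data with correlations at multiple scales
print(np.linalg.matrix_rank(X_block[:8, :8]))   # 1: each block is low rank
```

The inverse problem the paper studies is recovering `X_global`, `X_block`, and `X_sparse` from `X` alone, which its convex formulation does under incoherence conditions.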
Deterministic Polynomial Time Algorithms for Matrix Completion Problems
We present new deterministic algorithms for several cases of the maximum rank
matrix completion problem (matrix completion, for short), i.e., the problem of
assigning values to the variables in a given symbolic matrix so as to maximize the
resulting matrix rank. Matrix completion belongs to the fundamental problems in
computational complexity with numerous important algorithmic applications,
among others in computing dynamic transitive closures or multicast network
coding (Harvey et al., SODA 2005; Harvey et al., SODA 2006).
We design efficient deterministic algorithms for common generalizations of
the results of Lovasz and Geelen on this problem by allowing linear functions
in the entries of the input matrix such that the submatrices corresponding to
each variable have rank one. We also present a deterministic polynomial-time
algorithm for finding the minimal number of generators of a given module
structure given by matrices. We establish further several hardness results
related to matrix algebras and modules. As a result we connect the classical
problem of polynomial identity testing with checking surjectivity (or
injectivity) between two given modules. One of the elements of our algorithm is
a construction of a greedy algorithm for finding a maximum rank element in the
more general setting of the problem. The proof methods used in this paper may
also be of independent interest.
Comment: 14 pages, preliminary version.
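For context, the classical randomized approach that work of this kind derandomizes is simple to sketch (a hedged illustration, not the paper's deterministic algorithm): substitute random values for the variables of the symbolic matrix and compute the rank; by the Schwartz-Zippel lemma the result equals the maximum completable rank with high probability.

```python
import numpy as np

def randomized_max_rank(constants, var_mask, rng, trials=5):
    """Randomized baseline for maximum-rank matrix completion: fill the
    variable entries with random values and keep the best rank seen.
    Correct with high probability by the Schwartz-Zippel lemma."""
    best = 0
    for _ in range(trials):
        M = constants.copy()
        M[var_mask] = rng.standard_normal(int(var_mask.sum()))
        best = max(best, np.linalg.matrix_rank(M))
    return best

rng = np.random.default_rng(2)
# A 3x3 symbolic matrix: fixed zeros everywhere except three free
# variables on the diagonal.
constants = np.zeros((3, 3))
var_mask = np.eye(3, dtype=bool)
max_rank = randomized_max_rank(constants, var_mask, rng)
print(max_rank)   # 3: any nonzero diagonal assignment gives full rank
```

The deterministic algorithms in the paper achieve the same guarantee without randomness, which is the hard part.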
Randomized Dynamic Mode Decomposition
This paper presents a randomized algorithm for computing the near-optimal
low-rank dynamic mode decomposition (DMD). Randomized algorithms are emerging
techniques to compute low-rank matrix approximations at a fraction of the cost
of deterministic algorithms, easing the computational challenges arising in the
area of 'big data'. The idea is to derive a small matrix from the
high-dimensional data, which is then used to efficiently compute the dynamic
modes and eigenvalues. The algorithm is presented in a modular probabilistic
framework, and the approximation quality can be controlled via oversampling and
power iterations. The effectiveness of the resulting randomized DMD algorithm
is demonstrated on several benchmark examples of increasing complexity,
providing an accurate and efficient approach to extract spatiotemporal coherent
structures from big data in a framework that scales with the intrinsic rank of
the data, rather than the ambient measurement dimension. For this work we
assume that the dynamics of the problem under consideration evolve on a
low-dimensional subspace that is well characterized by a fast-decaying singular
value spectrum.
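The pipeline described above can be sketched in a few lines (a minimal sketch under the stated low-rank assumption; the oversampling and power-iteration parameters, and the toy data, are illustrative choices, not the paper's benchmarks): compress the snapshot matrix with a randomized range finder, then run exact DMD on the small projected matrices.

```python
import numpy as np

def randomized_dmd(X, rank, oversample=10, n_power=2, rng=None):
    """Sketch of randomized DMD: compress the snapshot matrix with a
    randomized range finder (oversampling + power iterations), then run
    exact DMD on the small projected matrices."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_space, n_time = X.shape
    k = rank + oversample

    # Randomized range finder: Q approximates the column space of X
    Y = X @ rng.standard_normal((n_time, k))
    for _ in range(n_power):
        Y = X @ (X.conj().T @ Y)
    Q, _ = np.linalg.qr(Y)

    # Exact DMD on the compressed snapshots
    B = Q.conj().T @ X
    B1, B2 = B[:, :-1], B[:, 1:]
    U, s, Vt = np.linalg.svd(B1, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    A_tilde = U.conj().T @ B2 @ Vt.conj().T / s
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Q @ (B2 @ Vt.conj().T / s) @ W   # DMD modes lifted to full space
    return eigvals, modes

# Toy data: two spatial modes, each oscillating at a fixed frequency
t = np.linspace(0, 4 * np.pi, 100)
x = np.linspace(-5, 5, 200)
X = (np.outer(1 / np.cosh(x + 2), np.exp(2.3j * t)) +
     np.outer(np.tanh(x) / np.cosh(x), np.exp(2.8j * t)))
eigvals, modes = randomized_dmd(X, rank=2)
print(np.abs(eigvals))   # both close to 1: purely oscillatory dynamics
```

All expensive operations act on the k-column sketch rather than the full state dimension, which is where the scaling with intrinsic rank comes from.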
Weighted Schatten $p$-Norm Minimization for Image Denoising and Background Subtraction
Low rank matrix approximation (LRMA), which aims to recover the underlying
low rank matrix from its degraded observation, has a wide range of applications
in computer vision. The latest LRMA methods resort to using the nuclear norm
minimization (NNM) as a convex relaxation of the nonconvex rank minimization.
However, NNM tends to over-shrink the rank components and treats the different
rank components equally, limiting its flexibility in practical applications. We
propose a more flexible model, namely Weighted Schatten $p$-Norm
Minimization (WSNM), to generalize the NNM to Schatten $p$-norm
minimization with weights assigned to different singular values. The proposed
WSNM not only gives better approximation to the original low-rank assumption,
but also considers the importance of different rank components. We analyze the
solution of WSNM and prove that, under certain weights permutation, WSNM can be
equivalently transformed into independent non-convex $\ell_p$-norm subproblems,
whose global optima can be efficiently solved by the generalized iterated
shrinkage algorithm. We apply WSNM to typical low-level vision problems, e.g.,
image denoising and background subtraction. Extensive experimental results
show, both qualitatively and quantitatively, that the proposed WSNM can more
effectively remove noise and model complex, dynamic scenes than
state-of-the-art methods.
Comment: 13 pages, 11 figures.
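The weighted-shrinkage idea can be sketched for the $p = 1$ special case, where WSNM reduces to weighted nuclear-norm minimization and the proximal step has a closed form for non-descending weights (a hedged sketch; the general Schatten $p$-norm subproblems require the iterated shrinkage solver mentioned above, and the weights and data here are illustrative):

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular value thresholding: each singular value is
    shrunk by its own weight. For p = 1 and non-descending weights this
    is the closed-form proximal step of weighted nuclear-norm
    minimization (a special case of the WSNM model)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - weights, 0.0)) @ Vt

rng = np.random.default_rng(3)
L = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))  # rank-2 signal
Y = L + 0.1 * rng.standard_normal((40, 40))                      # noisy observation

# Small weights on the leading singular values preserve the dominant
# structure; large weights on the tail suppress the noise components.
weights = np.full(40, 2.0)
weights[:2] = 0.05
X_hat = weighted_svt(Y, weights)
print(np.linalg.matrix_rank(X_hat))   # 2: the dominant structure survives
```

Plain nuclear-norm minimization corresponds to a single constant weight, which shrinks the informative leading components just as hard as the noise; the weighting is what removes that bias.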