Low-Rank Modeling and Its Applications in Image Analysis
Low-rank modeling generally refers to a class of methods that solve problems
by representing variables of interest as low-rank matrices. It has achieved
great success in various fields including computer vision, data mining, signal
processing and bioinformatics. Recently, much progress has been made in
theories, algorithms and applications of low-rank modeling, such as exact
low-rank matrix recovery via convex programming and matrix completion applied
to collaborative filtering. These advances have brought more and more
attention to this topic. In this paper, we review the recent advances in
low-rank modeling, the state-of-the-art algorithms, and related applications in
image analysis. We first give an overview of the concept of low-rank modeling
and challenging problems in this area. Then, we summarize the models and
algorithms for low-rank matrix recovery and illustrate their advantages and
limitations with numerical experiments. Next, we introduce a few applications
of low-rank modeling in the context of image analysis. Finally, we conclude
this paper with some discussions.
Comment: To appear in ACM Computing Surveys
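The surveys in this listing repeatedly refer to exact low-rank matrix recovery and matrix completion via convex programming. As a concrete anchor, here is a minimal sketch (not taken from the paper) of singular value thresholding (SVT) applied to matrix completion; the matrix sizes, `tau`, and step size are illustrative choices, not settings from the survey.

```python
import numpy as np

def svt_complete(M, mask, tau, step=1.2, iters=300):
    """Minimal singular-value-thresholding (SVT) sketch for matrix completion.
    M    : observed matrix (entries outside `mask` are ignored)
    mask : boolean array, True where entries of M are observed
    """
    Y = np.zeros_like(M)                                   # dual/auxiliary variable
    X = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt     # shrink singular values
        Y = Y + step * mask * (M - X)                      # step on observed residuals
    return X

# toy example: recover a rank-2 50x50 matrix from ~50% of its entries
rng = np.random.default_rng(0)
L_true = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
mask = rng.random(L_true.shape) < 0.5
X_hat = svt_complete(L_true * mask, mask, tau=5 * np.sqrt(L_true.size))
print("relative error:", np.linalg.norm(X_hat - L_true) / np.linalg.norm(L_true))
```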
A Survey on Matrix Completion: Perspective of Signal Processing
Matrix completion (MC) is a promising technique that can recover an intact
low-rank matrix from sub-sampled/incomplete data. Its applications range from
computer vision and signal processing to wireless networks, and it has
therefore received much attention in the past several years. There are plenty
of works addressing the behaviors and applications of MC methodologies. This
work provides a comprehensive review for MC approaches from the perspective of
signal processing. In particular, the MC problem is first grouped into six
optimization problems to help readers understand MC algorithms. Next, four
representative types of optimization algorithms solving the MC problem are
reviewed. Ultimately, three different application fields of MC are described
and evaluated.
Comment: 12 pages, 9 figures
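Besides the convex formulations, one of the problem groupings such a survey typically covers is completion through an explicit low-rank factorization. The following is a hedged sketch of alternating least squares on M ≈ U Vᵀ; the rank, regularization `lam`, and iteration count are illustrative, not values from the paper.

```python
import numpy as np

def als_complete(M, mask, rank=2, lam=1e-2, iters=50):
    """Matrix completion by alternating (ridge-regularized) least squares
    on a factorization M ≈ U @ V.T; `mask` is True on observed entries."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = lam * np.eye(rank)
    for _ in range(iters):
        for i in range(m):                                  # update each row of U
            idx = mask[i]
            U[i] = np.linalg.solve(V[idx].T @ V[idx] + I, V[idx].T @ M[i, idx])
        for j in range(n):                                  # update each row of V
            idx = mask[:, j]
            V[j] = np.linalg.solve(U[idx].T @ U[idx] + I, U[idx].T @ M[idx, j])
    return U @ V.T
```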
Fast Proximal Linearized Alternating Direction Method of Multiplier with Parallel Splitting
The Augmented Lagrangian Method (ALM) and Alternating Direction Method of
Multipliers (ADMM) have been powerful optimization methods for general convex
programming subject to linear constraints. We consider convex problems whose
objective consists of a smooth part and a nonsmooth but simple part. We propose
the Fast Proximal Augmented Lagrangian Method (Fast PALM), which achieves the
convergence rate O(1/K^2), compared with O(1/K) for the traditional PALM. In
order to further reduce the per-iteration complexity and handle
multi-block problems, we propose the Fast Proximal ADMM with Parallel Splitting
(Fast PL-ADMM-PS) method. It also partially improves the rate related to the
smooth part of the objective function. Experimental results on both synthetic
and real-world data demonstrate that our fast methods significantly improve
upon the previous PALM and ADMM.
Comment: AAAI 201
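The objective structure described here (a smooth part plus a nonsmooth but proximal-friendly part) is the same one behind accelerated proximal methods. The sketch below is plain FISTA on an unconstrained toy LASSO problem, not the authors' Fast PALM (which additionally handles a linear constraint through an augmented Lagrangian); it only illustrates how the extrapolation step yields the O(1/K^2) rate on the smooth part.

```python
import numpy as np

def fista(grad_f, prox_g, x0, L, iters=100):
    """Accelerated proximal gradient (FISTA) for min_x f(x) + g(x),
    with f smooth (gradient grad_f, Lipschitz constant L) and g prox-friendly."""
    x = y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)          # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)         # momentum/extrapolation
        x, t = x_new, t_new
    return x

# toy LASSO: f(x) = 0.5||Ax - b||^2 (smooth), g(x) = lam*||x||_1 (simple prox)
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
x_hat = fista(grad_f, prox_g, np.zeros(100), L=np.linalg.norm(A, 2) ** 2)
```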
Ensemble Joint Sparse Low Rank Matrix Decomposition for Thermography Diagnosis System
Composite materials are widely used in the aircraft industry, and it is essential for manufacturers to monitor their health and quality. The most common defects in composites are debonds and delamination. Inner defects with complex, irregular shapes are difficult to diagnose using conventional thermal imaging methods. In this paper, an ensemble joint sparse low-rank matrix decomposition (EJSLRMD) algorithm is proposed for the optical pulse thermography (OPT) diagnosis system. The proposed algorithm jointly models the low-rank and sparse patterns using a concatenated feature space. In particular, weak defect information can be separated from strong noise, and the resolution contrast of the defects is significantly improved. Ensemble iterative sparse modelling is conducted to further enhance the weak information as well as to reduce the computational cost. To show the robustness and efficacy of the model, experiments are conducted to detect inner debonds on multiple carbon fiber reinforced polymer (CFRP) composites. A comparative analysis is presented against general OPT algorithms. In addition, the proposed model has been evaluated on synthetic data and compared with other low-rank and sparse matrix decomposition algorithms.
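The core operation behind such thermography methods is separating a low-rank background from a sparse defect/noise component. Below is a hedged, minimal low-rank-plus-sparse split by block-coordinate descent, not the ensemble joint model (EJSLRMD) itself; `tau` and `lam` are illustrative.

```python
import numpy as np

def lowrank_sparse_split(D, tau=1.0, lam=0.1, iters=50):
    """Block-coordinate descent on 0.5*||D - L - S||_F^2 + tau*||L||_* + lam*||S||_1:
    the L-step is singular-value shrinkage, the S-step is entrywise soft-thresholding."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt                 # low-rank part
        S = np.sign(D - L) * np.maximum(np.abs(D - L) - lam, 0.0)      # sparse part
    return L, S    # L: smooth background/structure, S: sparse defect/outlier signal
```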
Panoramic Robust PCA for Foreground-Background Separation on Noisy, Free-Motion Camera Video
This work presents a new robust PCA method for foreground-background
separation on freely moving camera video with possible dense and sparse
corruptions. Our proposed method registers the frames of the corrupted video
and then encodes the varying perspective arising from camera motion as missing
data in a global model. This formulation allows our algorithm to produce a
panoramic background component that automatically stitches together corrupted
data from partially overlapping frames to reconstruct the full field of view.
We model the registered video as the sum of a low-rank component that captures
the background, a smooth component that captures the dynamic foreground of the
scene, and a sparse component that isolates possible outliers and other sparse
corruptions in the video. The low-rank portion of our model is based on a
recent low-rank matrix estimator (OptShrink) that has been shown to yield
superior low-rank subspace estimates in practice. To estimate the smooth
foreground component of our model, we use a weighted total variation framework
that enables our method to reliably decouple the true foreground of the video
from sparse corruptions. We perform extensive numerical experiments on both
static and moving camera video subject to a variety of dense and sparse
corruptions. Our experiments demonstrate the state-of-the-art performance of
our proposed method compared to existing methods both in terms of foreground
and background estimation accuracy.
Comment: IEEE TCI. Project webpage: https://gaochen315.github.io/pRPCA/ Code:
https://github.com/gaochen315/panoramicRPC
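A hedged sketch of the data model described above: registered frames are placed on a shared panoramic canvas, and pixels the camera never observed are marked as missing, so the decomposition can stitch the background across partially overlapping frames. The integer `offsets` stand in for a real registration step and are an assumption of this illustration.

```python
import numpy as np

def build_panoramic_matrix(frames, offsets, pano_width):
    """Stack registered frames as columns of a panoramic matrix.
    frames  : list of (h, w) arrays
    offsets : per-frame horizontal shift (placeholder for real registration)
    Returns the zero-filled panoramic matrix and the observed-entry mask."""
    h, w = frames[0].shape
    pano = np.full((h * pano_width, len(frames)), np.nan)
    for j, (frame, dx) in enumerate(zip(frames, offsets)):
        canvas = np.full((h, pano_width), np.nan)
        canvas[:, dx:dx + w] = frame        # each frame covers only part of the panorama
        pano[:, j] = canvas.ravel()
    mask = ~np.isnan(pano)                  # missing where the camera never looked
    return np.nan_to_num(pano), mask
```

The low-rank (background), smooth (foreground), and sparse (corruption) components are then fit only on the mask-observed entries, which is what produces the stitched panoramic background.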
Online Robust Subspace Tracking from Partial Information
This paper presents GRASTA (Grassmannian Robust Adaptive Subspace Tracking
Algorithm), an efficient and robust online algorithm for tracking subspaces
from highly incomplete information. The algorithm uses a robust ℓ1-norm cost
function in order to estimate and track non-stationary subspaces when the
streaming data vectors are corrupted with outliers. We apply GRASTA to the
problems of robust matrix completion and real-time separation of background
from foreground in video. In this second application, we show that GRASTA
performs high-quality separation of moving objects from background at
exceptional speeds: In one popular benchmark video example, GRASTA achieves a
rate of 57 frames per second, even when run in MATLAB on a personal laptop.
Comment: 28 pages, 12 figures
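To make the streaming, partial-information setting concrete, here is a simplified online subspace update on partially observed vectors. It is a plain least-squares, rank-one update with re-orthonormalization, not GRASTA's robust ℓ1/ADMM step on the Grassmannian; the step size and shapes are illustrative.

```python
import numpy as np

def track_step(U, idx, v_obs, step=0.1):
    """One streaming update of an orthonormal subspace basis U (n x d)
    from a vector observed only on the index set `idx`."""
    w = np.linalg.lstsq(U[idx], v_obs, rcond=None)[0]    # best fit on observed entries
    r = np.zeros(U.shape[0])
    r[idx] = v_obs - U[idx] @ w                          # residual, zero where unobserved
    U = U + step * np.outer(r, w)                        # rank-one update toward the data
    Q, _ = np.linalg.qr(U)                               # restore an orthonormal basis
    return Q
```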
Efficient Low-Rank Semidefinite Programming with Robust Loss Functions
In real-world applications, it is important for machine learning algorithms
to be robust against data outliers or corruptions. In this paper, we focus on
improving the robustness of a large class of learning algorithms that are
formulated as low-rank semi-definite programming (SDP) problems. Traditional
formulations use square loss, which is notorious for being sensitive to
outliers. We propose to replace this with more robust noise models, including
the ℓ1-loss and other nonconvex losses. However, the resultant
optimization problem becomes difficult as the objective is no longer convex or
smooth. To alleviate this problem, we design an efficient algorithm based on
majorization-minimization. The crux is on constructing a good optimization
surrogate, and we show that this surrogate can be efficiently obtained by the
alternating direction method of multipliers (ADMM). By properly monitoring
ADMM's convergence, the proposed algorithm is empirically efficient and also
theoretically guaranteed to converge to a critical point. Extensive experiments
are performed on four machine learning applications using both synthetic and
real-world data sets. Results show that the proposed algorithm is not only fast
but also has better performance than the state-of-the-art.
Comment: Preprint version. Final version is accepted to "IEEE Transactions on
Pattern Analysis and Machine Intelligence"
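The majorization-minimization idea is easiest to see on a small problem. The sketch below majorizes an ℓ1 loss by a quadratic at the current residuals and minimizes the resulting weighted least-squares surrogate in closed form (an IRLS-style step); in the paper the surrogate is instead a low-rank SDP solved by ADMM, so this regression example is only a hedged stand-in.

```python
import numpy as np

def mm_l1_regression(A, b, iters=30, eps=1e-6):
    """Majorization-minimization for min_x sum_i |a_i^T x - b_i|:
    |r| <= r^2 / (2|r_k|) + |r_k|/2, so each MM step solves a weighted
    least-squares surrogate built from the current residuals."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)     # weights from current residuals
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)          # minimize the quadratic surrogate
    return x
```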
In-network Sparsity-regularized Rank Minimization: Algorithms and Applications
Given a limited number of entries from the superposition of a low-rank matrix
plus the product of a known fat compression matrix times a sparse matrix,
recovery of the low-rank and sparse components is a fundamental task subsuming
compressed sensing, matrix completion, and principal components pursuit. This
paper develops algorithms for distributed sparsity-regularized rank
minimization over networks, when the nuclear- and ℓ1-norms are used as
surrogates to the rank and nonzero entry counts of the sought matrices,
respectively. While nuclear-norm minimization has well-documented merits when
centralized processing is viable, non-separability of the singular-value sum
challenges its distributed minimization. To overcome this limitation, an
alternative characterization of the nuclear norm is adopted which leads to a
separable, yet non-convex cost minimized via the alternating-direction method
of multipliers. The novel distributed iterations entail reduced-complexity
per-node tasks, and affordable message passing among single-hop neighbors.
Interestingly, upon convergence the distributed (non-convex) estimator provably
attains the global optimum of its centralized counterpart, regardless of
initialization. Several application domains are outlined to highlight the
generality and impact of the proposed framework. These include unveiling
traffic anomalies in backbone networks, predicting networkwide path latencies,
and mapping the RF ambiance using wireless cognitive radios. Simulations with
synthetic and real network data corroborate the convergence of the novel
distributed algorithm, and its centralized performance guarantees.
Comment: 30 pages, submitted for publication in the IEEE Trans. Signal Process.
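The "alternative characterization of the nuclear norm" the abstract relies on is the separable variational form ||X||_* = min over factorizations X = P Qᵀ of (||P||_F² + ||Q||_F²)/2, attained at the SVD-based factorization. A small numerical check of this identity (with sizes chosen arbitrarily) is given below.

```python
import numpy as np

# Verify ||X||_* = 0.5 * (||P||_F^2 + ||Q||_F^2) at P = U sqrt(S), Q = V sqrt(S),
# where X = U S V^T is the SVD; this separable form is what enables the
# distributed ADMM iterations described in the abstract.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 20))    # rank-5 matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = U @ np.diag(np.sqrt(s))
Q = Vt.T @ np.diag(np.sqrt(s))
print(np.sum(s))                                                   # nuclear norm of X
print(0.5 * (np.linalg.norm(P) ** 2 + np.linalg.norm(Q) ** 2))     # matches
print(np.linalg.norm(X - P @ Q.T))                                 # factorization is exact
```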
A Survey on Learning to Hash
Nearest neighbor search is the problem of finding the data points in a
database whose distances to the query point are the smallest.
Learning to hash is one of the major solutions to this problem and has been
widely studied recently. In this paper, we present a comprehensive survey of
learning-to-hash algorithms, categorize them according to how they preserve
similarities into pairwise similarity preserving, multiwise similarity
preserving, implicit similarity preserving, and quantization, and discuss
their relations. We separate quantization from pairwise similarity
preserving because the objective function is very different, though
quantization, as we show, can be derived from preserving the pairwise
similarities. In addition, we present the evaluation protocols and a general
performance analysis, and
point out that the quantization algorithms perform superiorly in terms of
search accuracy, search time cost, and space cost. Finally, we introduce a few
emerging topics.
Comment: To appear in IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI)
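For readers new to the area, the simplest unlearned baseline is random-projection hashing, where sign patterns of random projections serve as binary codes and search is done in Hamming space; the learned pairwise/multiwise similarity-preserving methods the survey categorizes refine this idea. A minimal sketch with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))            # database of 1000 points in 64-d
q = rng.standard_normal(64)                    # query point
W = rng.standard_normal((64, 32))              # 32 random hyperplanes -> 32-bit codes

codes = X @ W > 0                              # database hash codes (boolean bits)
q_code = q @ W > 0                             # query hash code
hamming = np.count_nonzero(codes != q_code, axis=1)
candidates = np.argsort(hamming)[:10]          # 10 nearest neighbors in Hamming space
```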
Learning Random Fourier Features by Hybrid Constrained Optimization
The kernel embedding algorithm is an important component for adapting kernel
methods to large datasets. Since the algorithm incurs a major computational
cost in the testing phase, we propose a novel teacher-learner framework for
learning computation-efficient kernel embeddings from specific data. In the
framework, the high-precision embeddings (teacher) transfer the data
information to the computation-efficient kernel embeddings (learner). We
jointly select informative embedding functions and pursue an orthogonal
transformation between two embeddings. We propose a novel approach of
constrained variational expectation maximization (CVEM), where the alternating
direction method of multipliers (ADMM) is applied over a nonconvex domain in the
maximization step. We also propose two specific formulations based on the
prevalent Random Fourier Feature (RFF), the masked and blocked versions of
Computation-Efficient RFF (CERF), by imposing a random binary mask or a block
structure on the transformation matrix. By empirical studies of several
applications on different real-world datasets, we demonstrate that the CERF
significantly improves the performance of kernel methods over the RFF under
certain arithmetic operation requirements, and is suitable for structured
matrix multiplication in Fastfood-type algorithms.
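For reference, the baseline Random Fourier Feature map the abstract builds on approximates the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2) by the inner product of random cosine features. The sketch below is this standard construction, not the learned CERF; dimensions and gamma are illustrative.

```python
import numpy as np

def rff(X, n_features=500, gamma=0.5, seed=0):
    """Random Fourier features z(X) such that z(x) @ z(y) approximates
    the RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))   # kernel's spectral density
    b = rng.uniform(0, 2 * np.pi, size=n_features)                   # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 10))
Z = rff(X)
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
print(np.abs(Z @ Z.T - K_exact).max())        # small approximation error
```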