Background Subtraction via Generalized Fused Lasso Foreground Modeling
Background Subtraction (BS) is one of the key steps in video analysis. Many
background models have been proposed and achieved promising performance on
public data sets. However, due to challenges such as illumination changes and
dynamic backgrounds, the resulting foreground segmentation often contains
holes as well as background noise. In this regard, we consider generalized
fused lasso regularization to recover intact, structured foregrounds. Together
with certain assumptions about the background, such as the low-rank assumption
or the sparse-composition assumption (depending on whether pure background
frames are provided), we formulate BS as a matrix decomposition problem using
regularization terms for both the foreground and background matrices. Moreover,
under the proposed formulation, the two generally distinctive background
assumptions can be solved in a unified manner. The optimization is carried out
with the augmented Lagrange multiplier (ALM) method, in which a fast
parametric-flow algorithm updates the foreground matrix. Experimental results
on several popular BS data sets demonstrate the advantage of the proposed
model compared to state-of-the-art methods.
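As a rough illustration of the decomposition idea, the sketch below runs an inexact ALM loop that splits a frame matrix into a low-rank background and a regularized foreground. It substitutes plain elementwise soft-thresholding (an ℓ1 prox) for the paper's generalized fused lasso foreground update solved by parametric flow, and all parameter choices are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: low-rank background + regularized foreground via inexact ALM.
# NOTE: the foreground prox here is plain l1 soft-thresholding, a stand-in for
# the paper's generalized fused lasso / parametric max-flow update.
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage operator (prox of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def decompose(D, lam=None, mu=None, n_iter=100, tol=1e-6):
    """Split the frame matrix D (pixels x frames) into a low-rank
    background B and a sparse foreground F so that D ~ B + F."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))   # illustrative default
    mu = mu if mu is not None else 1.25 / np.linalg.norm(D, 2)   # illustrative default
    Y = np.zeros_like(D)                                         # Lagrange multipliers
    B = np.zeros_like(D)
    F = np.zeros_like(D)
    for _ in range(n_iter):
        B = svt(D - F + Y / mu, 1.0 / mu)                        # background update
        F = soft_threshold(D - B + Y / mu, lam / mu)             # foreground update
        R = D - B - F                                            # residual
        Y = Y + mu * R                                           # dual ascent
        if np.linalg.norm(R) <= tol * np.linalg.norm(D):
            break
    return B, F
```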
Multi-task learning for subspace segmentation
Subspace segmentation is the process of clustering a set of data points that are assumed to lie on the union of multiple linear or affine subspaces, and is increasingly being recognized as a fundamental tool for data analysis in high-dimensional settings. Arguably one of the most successful approaches is based on the observation that the sparsest representation of a given point with respect to a dictionary formed by the others involves nonzero coefficients associated with points originating in the same subspace. Such sparse representations are computed independently for each data point via ℓ1-norm minimization and then combined into an affinity matrix for use by a final spectral clustering step. The downside of this procedure is two-fold. First, unlike canonical compressive sensing scenarios with ideally randomized dictionaries, the data-dependent dictionaries here are unavoidably highly structured, disrupting many of the favorable properties of the ℓ1 norm. Second, by treating each data point independently, we ignore useful relationships between points that can be leveraged for jointly computing such sparse representations. Consequently, we motivate a multi-task learning-based framework for learning coupled sparse representations, leading to a segmentation pipeline that is both robust against correlation structure and tailored to generate an optimal affinity matrix. Theoretical analysis and empirical tests are provided to support these claims.

Y. Wang is sponsored by the University of Cambridge Overseas Trust. Y. Wang and Q. Ling are partially supported by sponsorship from Microsoft Research Asia. Q. Ling is also supported in part by NSFC grant 61004137. W. Chen is supported by EPSRC Research Grant EP/K033700/1 and the Natural Science Foundation of China 61401018.

This is the final version of the article. It first appeared from JMLR via http://jmlr.org/proceedings/papers/v37/wangc15.htm
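The baseline pipeline the abstract builds on (per-point ℓ1 self-expression, affinity construction, spectral clustering) can be sketched as below. This is the independent per-point version, not the paper's coupled multi-task formulation; alpha and n_clusters are illustrative assumptions.

```python
# Sketch of the baseline sparse-subspace-clustering pipeline: each point is
# expressed as a sparse combination of the others, the coefficients are
# symmetrized into an affinity matrix, and spectral clustering segments it.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def subspace_segmentation(X, n_clusters, alpha=0.01):
    """X: (n_samples, n_features). Returns cluster labels."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i                        # dictionary = all other points
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[mask].T, X[i])                      # sparse self-expression of x_i
        C[i, mask] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                         # symmetric, nonnegative affinity
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(W)
    return labels
```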
Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering
State-of-the-art subspace clustering methods are based on expressing each
data point as a linear combination of other data points while regularizing the
matrix of coefficients with ℓ1, ℓ2 or nuclear norms. ℓ1
regularization is guaranteed to give a subspace-preserving affinity (i.e.,
there are no connections between points from different subspaces) under broad
theoretical conditions, but the clusters may not be connected. ℓ2 and
nuclear norm regularization often improve connectivity, but give a
subspace-preserving affinity only for independent subspaces. Mixed ℓ1, ℓ2
and nuclear norm regularizations offer a balance between the
subspace-preserving and connectedness properties, but this comes at the cost of
increased computational complexity. This paper studies the geometry of the
elastic net regularizer (a mixture of the ℓ1 and ℓ2 norms) and uses
it to derive a provably correct and scalable active set method for finding the
optimal coefficients. Our geometric analysis also provides a theoretical
justification and a geometric interpretation for the balance between the
connectedness (due to ℓ2 regularization) and subspace-preserving (due to ℓ1
regularization) properties for elastic net subspace clustering. Our
experiments show that the proposed active set method not only achieves
state-of-the-art clustering performance, but also efficiently handles
large-scale datasets.

Comment: 15 pages, 6 figures, accepted to CVPR 2016 for oral presentation
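A rough sketch of the elastic net self-expression step is given below; the resulting coefficient matrix can be symmetrized and passed to spectral clustering exactly as in the previous sketch. It relies on a generic coordinate-descent solver (scikit-learn's ElasticNet) rather than the paper's oracle-based active set method, and alpha / l1_ratio are illustrative assumptions.

```python
# Sketch of elastic-net-regularized self-expression: code each point over the
# others with a mixed l1/l2 penalty. Not the paper's active set solver.
import numpy as np
from sklearn.linear_model import ElasticNet

def elastic_net_coefficients(X, alpha=0.01, l1_ratio=0.9):
    """X: (n_samples, n_features). Returns the (n x n) coefficient matrix."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i                        # exclude the point itself
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                           fit_intercept=False, max_iter=5000)
        model.fit(X[mask].T, X[i])                      # x_i ~ sum_j c_ij x_j
        C[i, mask] = model.coef_
    return C
```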
- …