Geometry-Aware Neighborhood Search for Learning Local Models for Image Reconstruction
Local learning of sparse image models has proven highly effective for
solving inverse problems in many computer vision applications. To learn such
models, the data samples are often clustered using the K-means algorithm with
the Euclidean distance as a dissimilarity metric. However, the Euclidean
distance may not always be a good dissimilarity measure for comparing data
samples lying on a manifold. In this paper, we propose two algorithms for
determining a local subset of training samples from which a good local model
can be computed for reconstructing a given input test sample, where we take
into account the underlying geometry of the data. The first algorithm, called
Adaptive Geometry-driven Nearest Neighbor search (AGNN), is an adaptive scheme
which can be seen as an out-of-sample extension of the replicator graph
clustering method for local model learning. The second method, called
Geometry-driven Overlapping Clusters (GOC), is a less complex nonadaptive
alternative for training subset selection. The proposed AGNN and GOC methods
are evaluated in image super-resolution, deblurring and denoising applications
and shown to outperform spectral clustering, soft clustering, and geodesic
distance-based subset selection in most settings.
Comment: 15 pages, 10 figures and 5 tables
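As a rough illustration of the geometry-aware idea, the sketch below implements geodesic-distance-based subset selection (one of the baselines mentioned above, not the authors' AGNN or GOC algorithms): a k-NN graph is built over the training samples, and the local training subset for a test sample is chosen by shortest-path distance on that graph rather than plain Euclidean distance. All function names and parameters are illustrative.

```python
# Sketch: select a local training subset by geodesic rather than Euclidean
# distance. Illustrates the abstract's premise; AGNN/GOC themselves differ.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import dijkstra

def geodesic_subset(X_train, x_test, k_graph=10, subset_size=50):
    """Pick the subset_size training samples closest to x_test along the
    k-NN graph, approximating distances on the data manifold."""
    X = np.vstack([X_train, x_test[None, :]])          # append test sample
    G = kneighbors_graph(X, k_graph, mode='distance')  # sparse k-NN graph
    # Shortest-path (geodesic) distances from the test node to all others.
    d = dijkstra(G, directed=False, indices=X.shape[0] - 1)
    order = np.argsort(d[:-1])                         # exclude test node
    return order[:subset_size]                         # indices into X_train

# Usage: a local model is then learned from X_train[geodesic_subset(...)].
```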
A Graph-Based Semi-Supervised k Nearest-Neighbor Method for Nonlinear Manifold Distributed Data Classification
k Nearest Neighbors (kNN) is one of the most widely used supervised
learning algorithms for classifying Gaussian distributed data, but it does not
achieve good results when applied to nonlinear manifold distributed data,
especially when only a very limited number of labeled samples is available. In this
paper, we propose a new graph-based kNN algorithm which can effectively
handle both Gaussian distributed data and nonlinear manifold distributed data.
To achieve this goal, we first propose a constrained Tired Random Walk (TRW) by
constructing an -level nearest-neighbor strengthened tree over the graph,
and then compute a TRW matrix for similarity measurement purposes. After this,
the k nearest neighbors are identified according to the TRW matrix and the class
label of a query point is determined by the sum of all the TRW weights of its
nearest neighbors. To deal with online situations, we also propose a new
algorithm to handle sequential samples based on local neighborhood
reconstruction. Comparison experiments are conducted on both synthetic data
sets and real-world data sets to demonstrate the validity of the proposed new
kNN algorithm and its improvements over other versions of the kNN algorithm.
Given the widespread appearance of manifold structures in real-world problems
and the popularity of the traditional kNN algorithm, the proposed manifold
version of kNN shows promising potential for classifying manifold-distributed
data.
Comment: 32 pages, 12 figures, 7 tables
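The abstract does not define the tired random walk itself, so the sketch below uses a generic damped random-walk (diffusion) similarity, the accumulated matrix (I - aP)^-1 over a Gaussian affinity graph, as a stand-in to show the classification rule: a query's label is the class with the largest summed walk weights among its k most similar labeled nodes. The kernel width, damping factor, and function name are assumptions.

```python
# Sketch of a graph-based kNN classifier using a damped random-walk
# similarity matrix. The paper's constrained Tired Random Walk is not
# specified in the abstract; (I - a*P)^-1 is a generic stand-in.
import numpy as np

def random_walk_knn(X, y_labeled, labeled_idx, query_idx, k=5,
                    sigma=1.0, alpha=0.9):
    # Gaussian affinity graph over all (labeled + unlabeled) samples.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)      # row-stochastic transitions
    # Accumulated (damped) random-walk similarity between all node pairs.
    S = np.linalg.inv(np.eye(len(X)) - alpha * P)
    sims = S[query_idx, labeled_idx]          # query vs labeled nodes
    nn = np.argsort(sims)[-k:]                # k most similar labeled nodes
    classes = np.unique(y_labeled)
    # Class label = argmax of summed walk weights over the k neighbors.
    scores = [sims[nn][y_labeled[nn] == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]
```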
Sparse Coding on Symmetric Positive Definite Manifolds using Bregman Divergences
This paper introduces sparse coding and dictionary learning for Symmetric
Positive Definite (SPD) matrices, which are often used in machine learning,
computer vision and related areas. Unlike traditional sparse coding schemes
that work in vector spaces, in this paper we discuss how SPD matrices can be
described by sparse combinations of dictionary atoms, where the atoms are also
SPD matrices. We propose to seek sparse coding by embedding the space of SPD
matrices into Hilbert spaces through two types of Bregman matrix divergences.
This not only leads to an efficient way of performing sparse coding, but also
to an online and iterative scheme for dictionary learning. We apply the proposed
methods to several computer vision tasks where images are represented by region
covariance matrices. Our proposed algorithms outperform state-of-the-art
methods on a wide range of classification tasks, including face recognition,
action recognition, material classification and texture categorization.
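A minimal sketch of the embedding idea follows, using the Stein (S) divergence, one Bregman matrix divergence of the kind the paper considers, with a Gaussian kernel on top and a lasso solver via a Cholesky change of variables. The kernel choice, solver, and the beta/lam parameters are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: sparse-code an SPD matrix over SPD dictionary atoms by embedding
# through the Stein (S) divergence. Kernel and solver choices are assumed.
import numpy as np
from sklearn.linear_model import Lasso

def stein_divergence(X, Y):
    """S(X, Y) = logdet((X+Y)/2) - 0.5*logdet(X) - 0.5*logdet(Y)."""
    ld = lambda A: np.linalg.slogdet(A)[1]
    return ld((X + Y) / 2) - 0.5 * ld(X) - 0.5 * ld(Y)

def kernel_sparse_code(query, atoms, beta=1.0, lam=0.01):
    """Solve min_c c'Kc - 2k'c + lam*|c|_1 in the divergence-induced
    feature space, via a Cholesky change of variables and plain Lasso."""
    k_fn = lambda A, B: np.exp(-beta * stein_divergence(A, B))
    K = np.array([[k_fn(a, b) for b in atoms] for a in atoms])
    kx = np.array([k_fn(a, query) for a in atoms])
    L = np.linalg.cholesky(K + 1e-6 * np.eye(len(atoms)))  # jitter for safety
    target = np.linalg.solve(L, kx)       # so the objective is ||L'c - target||^2
    model = Lasso(alpha=lam, fit_intercept=False)
    model.fit(L.T, target)                # recovers the sparse codes c
    return model.coef_
```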
DIMAL: Deep Isometric Manifold Learning Using Sparse Geodesic Sampling
This paper explores a fully unsupervised deep learning approach for computing
distance-preserving maps that generate low-dimensional embeddings for a certain
class of manifolds. We use the Siamese configuration to train a neural network
to solve the problem of least squares multidimensional scaling for generating
maps that approximately preserve geodesic distances. By training with only a
few landmarks, we show a significantly improved local and nonlocal
generalization of the isometric mapping as compared to analogous non-parametric
counterparts. Importantly, the combination of a deep-learning framework with a
multidimensional scaling objective enables a numerical analysis of network
architectures to aid in understanding their representation power. This provides
a geometric perspective on the generalizability of deep learning.
Comment: 10 pages, 11 figures
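The core training objective is easy to sketch: a weight-sharing (Siamese) network is fit so that distances between embedded landmark pairs match their precomputed geodesic distances, i.e. a least-squares multidimensional scaling stress. The network size, optimizer, and input/output dimensions below are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of DIMAL's core idea: a Siamese (weight-sharing) network fit so
# that embedding distances match geodesic distances between landmark pairs.
import torch
import torch.nn as nn

net = nn.Sequential(                 # maps R^3 -> R^2 in this toy setup
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def train_step(xi, xj, d_geo):
    """xi, xj: (B, 3) landmark pairs; d_geo: (B,) geodesic distances,
    e.g. precomputed by Dijkstra or fast marching on a k-NN graph."""
    opt.zero_grad()
    d_emb = (net(xi) - net(xj)).norm(dim=1)   # shared weights: Siamese
    loss = ((d_emb - d_geo) ** 2).mean()      # least-squares MDS stress
    loss.backward()
    opt.step()
    return loss.item()
```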
Improving Sparse Representation-Based Classification Using Local Principal Component Analysis
Sparse representation-based classification (SRC), proposed by Wright et al.,
seeks the sparsest decomposition of a test sample over the dictionary of
training samples, with classification to the most-contributing class. Because
it assumes test samples can be written as linear combinations of their
same-class training samples, the success of SRC depends on the size and
representativeness of the training set. Our proposed classification algorithm
enlarges the training set by using local principal component analysis to
approximate the basis vectors of the tangent hyperplane of the class manifold
at each training sample. The dictionary in SRC is replaced by a local
dictionary that adapts to the test sample and includes training samples and
their corresponding tangent basis vectors. We use a synthetic data set and
three face databases to demonstrate that this method can achieve higher
classification accuracy than SRC in cases of sparse sampling, nonlinear class
manifolds, and stringent dimension reduction.
Comment: Published in "Computational Intelligence for Pattern Recognition,"
editors Shyi-Ming Chen and Witold Pedrycz. The original publication is
available at http://www.springerlink.com
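A minimal sketch of the dictionary-augmentation idea described above: estimate a tangent basis at each training sample by local PCA over its nearest same-class neighbors, append those basis vectors to the SRC dictionary, and classify by minimum class residual of a sparse code. The neighborhood size, number of tangent directions, and the OMP solver are assumptions.

```python
# Sketch: augment the SRC dictionary with tangent basis vectors estimated
# by local PCA, then classify by per-class reconstruction residual.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def tangent_dictionary(X, y, n_neighbors=5, n_dirs=2):
    """Columns of X are training samples; returns [samples | tangents]."""
    cols, labels = [], []
    for i in range(X.shape[1]):
        same = np.where(y == y[i])[0]
        d = np.linalg.norm(X[:, same] - X[:, [i]], axis=0)
        nb = X[:, same[np.argsort(d)[:n_neighbors]]]
        # Local PCA: top directions of the centered neighborhood span an
        # approximate tangent hyperplane of the class manifold at x_i.
        U, _, _ = np.linalg.svd(nb - nb.mean(1, keepdims=True),
                                full_matrices=False)
        cols += [X[:, i]] + list(U[:, :n_dirs].T)
        labels += [y[i]] * (1 + n_dirs)
    D = np.stack(cols, axis=1)
    return D / np.linalg.norm(D, axis=0), np.array(labels)

def src_classify(D, labels, x, n_nonzero=10):
    c = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
    residual = lambda k: np.linalg.norm(x - D[:, labels == k] @ c[labels == k])
    return min(np.unique(labels), key=residual)
```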