Low-rank matrix recovery with structural incoherence for robust face recognition
We address the problem of robust face recognition, in which both training and test image data might be corrupted due to occlusion and disguise. From standard face recognition algorithms such as Eigenfaces to recently proposed sparse representation-based classification (SRC) methods, most prior work does not consider possible contamination of the training data, and thus the associated performance might be degraded. Building on the recent success of low-rank matrix recovery, we propose a novel low-rank matrix approximation algorithm with structural incoherence for robust face recognition. Our method not only decomposes raw training data into a set of representative bases with corresponding sparse errors to better model the face images; we further advocate structural incoherence between the bases learned from different classes. The regularization on structural incoherence encourages these bases to be as independent as possible. We show that this provides additional discriminating ability over the original low-rank models for improved performance. Experimental results on public face databases verify the effectiveness and robustness of our method, which is also shown to outperform state-of-the-art SRC-based approaches.
Locality and Structure Regularized Low Rank Representation for Hyperspectral Image Classification
Hyperspectral image (HSI) classification, which aims to assign an accurate
label for hyperspectral pixels, has drawn great interest in recent years.
Although low rank representation (LRR) has been used to classify HSI, its
ability to segment each class from the whole HSI data has not been exploited
fully yet. LRR has a good capacity to capture the underlying low-dimensional
subspaces embedded in the original data. However, LRR still has two drawbacks.
First, LRR does not consider the local geometric structure within data,
so the local correlation among neighboring data is easily ignored.
Second, the representation obtained by solving LRR is not discriminative enough
to separate different data. In this paper, a novel locality and structure
regularized low rank representation (LSLRR) model is proposed for HSI
classification. To overcome the above limitations, we present a locality
constraint criterion (LCC) and a structure preserving strategy (SPS) to improve
the classical LRR. Specifically, we introduce a new distance metric, which
combines both spatial and spectral features, to explore the local similarity of
pixels. Thus, the global and local structures of HSI data can be exploited
sufficiently. In addition, we propose a structure constraint that encourages the
representation to have a near block-diagonal structure, which helps determine
the final classification labels directly. Extensive experiments conducted on
three popular HSI datasets demonstrate that the proposed LSLRR outperforms
other state-of-the-art methods.
Comment: 14 pages, 7 figures, TGRS201
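For intuition on the block-diagonal structure the abstract refers to, the noiseless special case of LRR has a well-known closed-form solution, the shape interaction matrix. This is a minimal sketch of that classical result, not of the paper's LSLRR model; the locality and structure regularizers are omitted.

```python
import numpy as np

def lrr_noiseless(X, tol=1e-10):
    """Closed-form LRR in the noiseless case: the minimizer of ||Z||_*
    subject to X = X @ Z is Z = Vr @ Vr.T, where X = Ur Sr Vr^T is the
    skinny SVD of X (the classical shape interaction matrix)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))   # numerical rank
    Vr = Vt[:r].T
    return Vr @ Vr.T
```

For data drawn from independent subspaces, this Z is exactly block diagonal, with one block per subspace; LSLRR's structure constraint pushes the representation toward the same pattern in the noisy, real-data setting, where the blocks directly suggest class labels.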
Fast Low-rank Representation based Spatial Pyramid Matching for Image Classification
Spatial Pyramid Matching (SPM) and its variants have achieved a lot of
success in image classification. The main difference among them is their
encoding schemes. For example, ScSPM incorporates Sparse Code (SC) instead of
Vector Quantization (VQ) into the framework of SPM. Although the methods
achieve a higher recognition rate than the traditional SPM, they consume more
time to encode the local descriptors extracted from the image. In this paper,
we propose using Low Rank Representation (LRR) to encode the descriptors under
the framework of SPM. Different from SC, LRR considers the group effect among
data points instead of sparsity. Benefiting from this property, the proposed
method (i.e., LrrSPM) can offer a better performance. To further improve the
generalizability and robustness, we reformulate the rank-minimization problem
as a truncated projection problem. Extensive experimental studies show that
LrrSPM is more efficient than its counterparts (e.g., ScSPM) while achieving
competitive recognition rates on nine image data sets.
Comment: accepted into Knowledge-Based Systems, 201
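The SPM framework that LrrSPM plugs its codes into can be sketched as pyramid pooling over coded local descriptors. This is a generic sketch with the encoder left abstract (any of VQ, SC, or LRR codes can be fed in); the function name, the max-pooling choice, and the 1x1/2x2/4x4 grid are illustrative assumptions, not details from the paper.

```python
import numpy as np

def spatial_pyramid_pool(codes, xy, levels=(1, 2, 4)):
    """Max-pool per-descriptor codes over a spatial pyramid.

    codes : (n, k) array of coded local descriptors (any encoder).
    xy    : (n, 2) descriptor positions, normalized to [0, 1).
    Returns concatenated pooled features of length k * sum(g*g for g in levels).
    """
    n, k = codes.shape
    feats = []
    for g in levels:
        cell = np.clip((xy * g).astype(int), 0, g - 1)   # grid cell per descriptor
        idx = cell[:, 1] * g + cell[:, 0]
        pooled = np.zeros((g * g, k))
        for c in range(g * g):
            in_cell = idx == c
            if in_cell.any():
                pooled[c] = codes[in_cell].max(axis=0)   # pool within the cell
        feats.append(pooled.ravel())
    return np.concatenate(feats)
```

The encoding step is where the variants differ: the pyramid pooling stays fixed while VQ, SC, or LRR supplies `codes`, which is why a faster encoder (the truncated-projection LRR above) speeds up the whole pipeline.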
ForestHash: Semantic Hashing With Shallow Random Forests and Tiny Convolutional Networks
Hash codes are efficient data representations for coping with the ever
growing amounts of data. In this paper, we introduce a random forest semantic
hashing scheme that embeds tiny convolutional neural networks (CNN) into
shallow random forests, with near-optimal information-theoretic code
aggregation among trees. We start with a simple hashing scheme, where random
trees in a forest act as hashing functions by setting '1' for the visited tree
leaf and '0' for the rest. We show that traditional random forests fail to
generate hashes that preserve the underlying similarity between the trees,
rendering the random forests approach to hashing challenging. To address this,
we propose to first randomly group arriving classes at each tree split node
into two groups, obtaining a significantly simplified two-class classification
problem, which can be handled using a light-weight CNN weak learner. Such
random class grouping scheme enables code uniqueness by enforcing each class to
share its code with different classes in different trees. A non-conventional
low-rank loss is further adopted for the CNN weak learners to encourage code
consistency by minimizing intra-class variations and maximizing inter-class
distance for the two random class groups. Finally, we introduce an
information-theoretic approach for aggregating codes of individual trees into a
single hash code, producing a near-optimal unique hash for each class. The
proposed approach significantly outperforms state-of-the-art hashing methods
for image retrieval tasks on large-scale public datasets, and performs on par
with state-of-the-art image classification techniques while using a more
compact, efficient, and scalable representation. This work proposes a
principled and robust procedure to train and deploy in parallel an ensemble of
light-weight CNNs, instead of simply going deeper.
Comment: Accepted to ECCV 201
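The "simple hashing scheme" the abstract starts from, where each random tree emits a one-hot code over its leaves, can be sketched with depth-1 stumps standing in for real trees. This is an illustrative simplification: the paper's trees use CNN weak learners at split nodes, and its information-theoretic code aggregation is replaced here by plain concatenation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stump(d):
    """A depth-1 'random tree': split one random feature at a random threshold."""
    return rng.integers(d), rng.normal()

def tree_hash(x, stump):
    """One-hot code over the tree's leaves: '1' for the visited leaf, '0' elsewhere."""
    feature, threshold = stump
    code = np.zeros(2, dtype=int)
    code[int(x[feature] > threshold)] = 1
    return code

def forest_hash(x, stumps):
    """Concatenate the per-tree codes into one hash (plain concatenation here;
    the paper's near-optimal information-theoretic aggregation is omitted)."""
    return np.concatenate([tree_hash(x, s) for s in stumps])
```

Each tree contributes exactly one set bit, so the code is a concatenation of one-hot blocks; the paper's contribution is making those per-tree codes similarity-preserving (via the low-rank loss on CNN weak learners) and aggregating them near-optimally, neither of which this toy scheme attempts.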