76,577 research outputs found
Structured Analysis Dictionary Learning for Image Classification
We propose a computationally efficient and high-performance classification
algorithm by incorporating class structural information in analysis dictionary
learning. To achieve more consistent classification, we associate a
characteristic structure of independent subspaces with each class and impose it
on the classification-error-constrained analysis dictionary learning.
Experiments demonstrate that our method achieves comparable or better
performance than state-of-the-art algorithms in a variety of visual
classification tasks. In addition, our method greatly reduces the training and
testing computational complexity.
Comment: This is the final version accepted by ICASSP 201
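The low test-time complexity claimed above comes from the analysis setting: codes are obtained by a single linear projection rather than an iterative sparse-coding solve. A minimal sketch of that decision rule, where all matrices are random stand-ins for the learned analysis dictionary and classifier, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: d-dim samples, k analysis atoms, c classes, n test samples.
d, k, c, n = 20, 30, 3, 5
Omega = rng.standard_normal((k, d))    # stand-in for the learned analysis dictionary
W = rng.standard_normal((c, k))        # stand-in for the learned linear classifier
X = rng.standard_normal((d, n))        # test samples, one per column

# Testing is just two matrix products: no per-sample sparse coding,
# which is where the reduced testing complexity comes from.
codes = Omega @ X                      # analysis codes
labels = np.argmax(W @ codes, axis=0)  # predicted class per sample
print(labels)
```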
Jointly Learning Structured Analysis Discriminative Dictionary and Analysis Multiclass Classifier
In this paper, we propose an analysis mechanism based structured Analysis
Discriminative Dictionary Learning (ADDL) framework. ADDL seamlessly integrates
the analysis discriminative dictionary learning, analysis representation and
analysis classifier training into a unified model. The applied analysis
mechanism ensures that the learnt dictionaries, representations and linear
classifiers over different classes are as independent and discriminative as
possible. The dictionary is obtained by minimizing a reconstruction
error and an analytical incoherence promoting term that encourages the
sub-dictionaries associated with different classes to be independent. To obtain
the representation coefficients, ADDL imposes a sparse l2,1-norm constraint on
the coding coefficients instead of using l0 or l1-norm, since the l0 or l1-norm
constraint applied in most existing DL criteria makes the training phase
time-consuming. The codes-extraction projection that bridges data with the sparse
codes by extracting special features from the given samples is calculated via
minimizing a sparse codes approximation term. Then we compute a linear
classifier based on the approximated sparse codes by an analysis mechanism to
simultaneously consider the classification and representation powers. Thus, the
classification approach of our model is very efficient, because it avoids the
extra time-consuming sparse reconstruction process with the trained dictionary
for each new test sample that most existing DL algorithms require. Simulations
on real image databases demonstrate that our ADDL model can obtain superior
performance over other state-of-the-art methods.
Comment: Accepted by IEEE TNNL
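The l2,1-norm used above is the sum of the l2 norms of the matrix rows; it promotes row-wise sparsity and admits cheap closed-form updates, which is why ADDL prefers it to l0/l1 constraints. A small self-contained illustration:

```python
import numpy as np

def l21_norm(A):
    """||A||_{2,1}: the sum of the l2 norms of the rows of A.

    Minimizing it drives whole rows of the coefficient matrix to zero,
    and it is cheaper to optimize than per-column l0/l1 constraints."""
    return float(np.sum(np.linalg.norm(A, axis=1)))

A = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [0.0, 5.0]])
print(l21_norm(A))  # row norms 5 + 0 + 5 = 10.0
```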
Learning Discriminative Multilevel Structured Dictionaries for Supervised Image Classification
Sparse representations using overcomplete dictionaries have proved to be a
powerful tool in many signal processing applications such as denoising,
super-resolution, inpainting, compression or classification. The sparsity of
the representation very much depends on how well the dictionary is adapted to
the data at hand. In this paper, we propose a method for learning structured
multilevel dictionaries with discriminative constraints to make them well
suited for the supervised pixelwise classification of images. A multilevel
tree-structured discriminative dictionary is learnt for each class, with a
learning objective concerning the reconstruction errors of the image patches
around the pixels over each class-representative dictionary. After the initial
assignment of the class labels to image pixels based on their sparse
representations over the learnt dictionaries, the final classification is
achieved by smoothing the label image with a graph cut method and an erosion
method. Applied to a common set of texture images, our supervised
classification method shows results competitive with the state of the art.
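The initial label-assignment step above picks, for each patch, the class whose dictionary reconstructs it best. A toy sketch of that decision rule, using a plain least-squares fit in place of the paper's sparse coding over tree-structured dictionaries:

```python
import numpy as np

def classify_by_residual(x, dictionaries):
    """Assign x to the class whose dictionary reconstructs it best.
    A least-squares fit stands in for the sparse coding used in the paper."""
    residuals = []
    for D in dictionaries:
        a, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals.append(np.linalg.norm(x - D @ a))
    return int(np.argmin(residuals))

# Two toy class dictionaries spanning disjoint coordinate subspaces.
D0 = np.eye(6)[:, :2]    # "class 0" atoms live in coordinates 0-1
D1 = np.eye(6)[:, 2:4]   # "class 1" atoms live in coordinates 2-3
x = np.array([0.0, 0.0, 2.0, 1.0, 0.0, 0.0])
print(classify_by_residual(x, [D0, D1]))  # 1: D1 reconstructs x exactly
```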
A multi-class structured dictionary learning method using discriminant atom selection
In the last decade, traditional dictionary learning methods have been
successfully applied to various pattern classification tasks. Although these
methods produce sparse representations of signals which are robust against
distortions and missing data, such representations quite often turn out to be
unsuitable if the final objective is signal classification. In order to
overcome or at least to attenuate such a weakness, several new methods which
incorporate discriminative information into sparse-inducing models have emerged
in recent years. In particular, methods for discriminative dictionary learning
have been shown to be more accurate (in terms of signal classification) than the
traditional ones, which are only focused on minimizing the total representation
error. In this work, we present both a novel multi-class discriminative measure
and an innovative dictionary learning method. For a given dictionary, this new
measure, which takes into account not only whether a particular atom is used
for representing signals coming from a certain class and the magnitude of its
corresponding representation coefficient, but also the effect that such an atom
has on the total representation error, is capable of efficiently quantifying
the degree of discriminability of each one of the atoms. On the other hand, the
new dictionary construction method yields dictionaries which are highly
suitable for multi-class classification tasks. Our method was tested with a
widely used database for handwritten digit recognition and compared with three
state-of-the-art classification methods. The results show that our method
significantly outperforms the other three, achieving good recognition rates
while additionally reducing the computational cost of the classifier.
Comment: 18 pages, 8 figures and 3 tables
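A rough illustration of the kind of per-atom discriminability measure discussed above, simplified to the fraction of an atom's coding energy that concentrates on its dominant class (the paper's actual measure additionally accounts for the atom's effect on the total representation error):

```python
import numpy as np

def atom_discriminability(codes, labels, n_classes):
    """Score each atom (row of the code matrix) by how much of its coding
    energy concentrates on a single class; 1.0 means one class only.
    A simplified stand-in for the paper's discriminant measure."""
    k = codes.shape[0]
    energy = np.zeros((k, n_classes))
    for c in range(n_classes):
        energy[:, c] = np.sum(codes[:, labels == c] ** 2, axis=1)
    total = energy.sum(axis=1)
    total[total == 0] = 1.0            # unused atoms score 0; avoid 0/0
    return energy.max(axis=1) / total

codes = np.array([[1.0, 1.0, 0.0, 0.0],   # atom 0: used by class 0 only
                  [1.0, 0.0, 1.0, 0.0]])  # atom 1: shared across classes
labels = np.array([0, 0, 1, 1])
print(atom_discriminability(codes, labels, 2))  # [1.  0.5]
```

Highly class-specific atoms (score near 1) are the ones worth keeping in a class sub-dictionary; shared atoms (score near 1/n_classes) carry little discriminative information.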
Structured Dictionary Learning for Classification
Sparsity driven signal processing has gained tremendous popularity in the
last decade. At its core, the assumption is that the signal of interest is
sparse with respect to either a fixed transformation or a signal dependent
dictionary. To better capture the data characteristics, various dictionary
learning methods have been proposed for both reconstruction and classification
tasks. For classification particularly, most approaches proposed so far have
focused on designing explicit constraints on the sparse code to improve
classification accuracy while simply adopting the l0-norm or l1-norm for
sparsity regularization. Motivated by the success of structured sparsity in the
area of Compressed Sensing, we propose a structured dictionary learning
framework (StructDL) that incorporates the structure information on both group
and task levels in the learning process. Its benefits are two-fold: (i) the
label consistency between dictionary atoms and training data is implicitly
enforced; and (ii) the classification performance is more robust in the cases
of a small dictionary size or limited training data than other techniques.
Using the subspace model, we derive the conditions for StructDL to guarantee
the performance and show theoretically that StructDL is superior to l0-norm
or l1-norm regularized dictionary learning for classification. Extensive
experiments have been performed on both synthetic simulations and real world
applications, such as face recognition and object classification, to
demonstrate the validity of the proposed DL framework.
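The group-level structure StructDL exploits is typically enforced through penalties whose proximal operator shrinks whole groups of coefficients at once. A minimal sketch of that building block (block soft-thresholding, shown as a generic illustration rather than StructDL's exact update):

```python
import numpy as np

def block_soft_threshold(u, tau):
    """Proximal operator of tau * ||u||_2: shrink a whole group of
    coefficients toward zero together, the basic mechanism behind
    group-structured sparsity penalties."""
    norm = np.linalg.norm(u)
    if norm <= tau:
        return np.zeros_like(u)
    return (1.0 - tau / norm) * u

g = np.array([3.0, 4.0])               # group with l2 norm 5
print(block_soft_threshold(g, 5.0))    # [0. 0.]: the whole group is removed
print(block_soft_threshold(g, 2.5))    # [1.5 2. ]: scaled by 1 - 2.5/5
```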
Scalable Block-Diagonal Locality-Constrained Projective Dictionary Learning
We propose a novel structured discriminative block-diagonal dictionary
learning method, referred to as scalable Locality-Constrained Projective
Dictionary Learning (LC-PDL), for efficient representation and classification.
To improve the scalability by saving both training and testing time, our LC-PDL
aims at learning a structured discriminative dictionary and a block-diagonal
representation without using costly l0/l1-norms. Besides, it avoids the extra
time-consuming sparse reconstruction process with the well-trained dictionary
for each new sample that many existing models require. More importantly, LC-PDL avoids using
the complementary data matrix to learn the sub-dictionary over each class. To
enhance the performance, we incorporate a locality constraint of atoms into the
DL procedures to keep local information and obtain the codes of samples over
each class separately. A block-diagonal discriminative approximation term is
also derived to learn a discriminative projection to bridge data with their
codes by extracting the special block-diagonal features from data, which can
ensure that the approximate coefficients clearly associate with their label
information. Then, a robust multiclass classifier is trained over the extracted
block-diagonal codes for accurate label predictions. Experimental results
verify the effectiveness of our algorithm.
Comment: Accepted at the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019)
Correlation and Class Based Block Formation for Improved Structured Dictionary Learning
In recent years, the creation of block-structured dictionary has attracted a
lot of interest. Learning such dictionaries involves a two-step process: block
formation and dictionary update. Both these steps are important in producing an
effective dictionary. The existing works mostly assume that the block structure
is known a priori while learning the dictionary. For finding the unknown block
structure given a dictionary, sparse agglomerative clustering (SAC) is commonly
used. It groups atoms based on their consistency in sparse coding with respect
to the unstructured dictionary. This paper explores two innovations towards
improving the reconstruction as well as the classification ability achieved
with the block-structured dictionary. First, we propose a novel block
structuring approach that makes use of the correlation among dictionary atoms.
Unlike the SAC approach, which groups diverse atoms, in the proposed approach
the blocks are formed by grouping the most highly correlated atoms in the
dictionary. The proposed block clustering approach is noted to yield
significant reductions in redundancy as well as to provide direct control over
the block size when compared with the existing SAC-based block structuring.
Later, motivated by works using a supervised a priori known block
structure, we also explore the incorporation of class information in the
proposed block formation approach to further enhance the classification ability
of the block dictionary. The reconstruction ability of the proposed innovations
is assessed on synthetic data, while the classification ability has been
evaluated on a large-variability speaker verification task.
Comment: 9 pages, Submitted to IEEE Transactions on Signal Processing
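A toy sketch of correlation-driven block formation as described above: normalize the atoms, compute their absolute correlation matrix, and greedily group each unassigned atom with its most correlated unassigned partners, which also gives direct control over the block size. The function name and the greedy strategy are illustrative simplifications, not the paper's exact algorithm:

```python
import numpy as np

def correlation_blocks(D, block_size):
    """Greedily group dictionary atoms (columns of D) into blocks of a
    fixed size by absolute correlation: seed each block with an unassigned
    atom and pull in its most correlated unassigned partners."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    C = np.abs(Dn.T @ Dn)              # absolute atom-correlation matrix
    unassigned = list(range(D.shape[1]))
    blocks = []
    while unassigned:
        seed = unassigned.pop(0)
        # rank the remaining atoms by correlation with the seed
        partners = sorted(unassigned, key=lambda j: -C[seed, j])[:block_size - 1]
        for j in partners:
            unassigned.remove(j)
        blocks.append([seed] + partners)
    return blocks

rng = np.random.default_rng(2)
base = rng.standard_normal((8, 2))
# four atoms: two near-copies of each of two base directions
D = np.column_stack([base[:, 0], base[:, 0] + 0.01 * rng.standard_normal(8),
                     base[:, 1], base[:, 1] + 0.01 * rng.standard_normal(8)])
print(correlation_blocks(D, 2))  # [[0, 1], [2, 3]]: the near-copies pair up
```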
Spatial-Aware Dictionary Learning for Hyperspectral Image Classification
This paper presents a structured dictionary-based model for hyperspectral
data that incorporates both spectral and contextual characteristics of a
spectral sample, with the goal of hyperspectral image classification. The idea
is to partition the pixels of a hyperspectral image into a number of spatial
neighborhoods called contextual groups and to model each pixel with a linear
combination of a few dictionary elements learned from the data. Since pixels
inside a contextual group are often made up of the same materials, their linear
combinations are constrained to use common elements from the dictionary. To
this end, dictionary learning is carried out with a joint sparse regularizer to
induce a common sparsity pattern in the sparse coefficients of each contextual
group. The sparse coefficients are then used for classification using a linear
SVM. Experimental results on a number of real hyperspectral images confirm the
effectiveness of the proposed representation for hyperspectral image
classification. Moreover, experiments with simulated multispectral data show
that the proposed model is capable of finding representations that may
effectively be used for classification of multispectral-resolution samples.
Comment: 16 pages, 9 figures
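The joint sparse regularizer described above forces all pixels in a contextual group to draw on the same dictionary rows. A one-shot simplification of that shared-support idea (ranking atoms by total squared correlation with the group, rather than running a full joint-sparse solver):

```python
import numpy as np

def common_support(D, X_group, s):
    """Pick one common set of s atoms for all pixels in a contextual group:
    rank atoms by their total squared correlation with the group, a one-shot
    simplification of joint-sparse coding (the shared row-support idea)."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    scores = np.sum((Dn.T @ X_group) ** 2, axis=1)
    return np.sort(np.argsort(scores)[::-1][:s])

rng = np.random.default_rng(3)
D = np.linalg.qr(rng.standard_normal((10, 10)))[0]   # orthonormal toy dictionary
# a "contextual group": every pixel mixes the same two materials (atoms 2 and 7)
X = D[:, [2, 7]] @ rng.standard_normal((2, 6))
print(common_support(D, X, 2))  # [2 7]: the shared support is recovered
```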
Discriminative Nonlinear Analysis Operator Learning: When Cosparse Model Meets Image Classification
The linear-synthesis-model-based dictionary learning framework has achieved
remarkable performance in image classification in the last decade. Behaving as
a generative feature model, however, it suffers from some intrinsic
deficiencies. In this paper, we propose a novel parametric nonlinear analysis
cosparse model (NACM) with which a unique feature vector will be much more
efficiently extracted. Additionally, we derive a deep insight to demonstrate
that NACM is capable of simultaneously learning the task adapted feature
transformation and regularization to encode our preferences, domain prior
knowledge and task oriented supervised information into the features. The
proposed NACM is devoted to the classification task as a discriminative feature
model and yields a novel discriminative nonlinear analysis operator learning
framework (DNAOL). The theoretical analysis and experimental performances
clearly demonstrate that DNAOL not only achieves better or at least
competitive classification accuracy compared with state-of-the-art algorithms
but also dramatically reduces the time complexity of both the training and
testing phases.
Comment: Accepted by IEEE TIP
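The feed-forward flavor of a nonlinear analysis feature extractor can be sketched as an operator followed by an elementwise nonlinearity, with no iterative sparse coding at test time, which is where such speedups come from. Everything below (the random operator, the soft-thresholding nonlinearity, the threshold value) is an illustrative stand-in, not NACM's actual parametrization:

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 16, 24
Omega = rng.standard_normal((k, d))    # stand-in for a learned analysis operator
x = rng.standard_normal(d)

def nonlinear_analysis_feature(Omega, x, tau=0.5):
    """Apply the analysis operator, then an elementwise soft-threshold:
    small responses are zeroed, yielding a (co)sparse feature in a single
    feed-forward pass with no iterative sparse coding."""
    z = Omega @ x
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

z = nonlinear_analysis_feature(Omega, x)
print(z.shape)  # (24,)
```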
Cross-label Suppression: A Discriminative and Fast Dictionary Learning with Group Regularization
This paper addresses image classification through learning a compact and
discriminative dictionary efficiently. Given a structured dictionary with each
atom (columns in the dictionary matrix) related to some label, we propose
cross-label suppression constraint to enlarge the difference among
representations for different classes. Meanwhile, we introduce group
regularization to enforce representations to preserve label properties of
original samples, meaning the representations for the same class are encouraged
to be similar. Upon the cross-label suppression, we don't resort to the
frequently used l0-norm or l1-norm for coding, and obtain
computational efficiency without losing the discriminative power for
categorization. Moreover, two simple classification schemes are also developed
to take full advantage of the learnt dictionary. Extensive experiments on six
data sets including face recognition, object categorization, scene
classification, texture recognition and sport action categorization are
conducted, and the results show that the proposed approach can outperform many
recently presented dictionary learning algorithms in both recognition accuracy
and computational efficiency.
Comment: 36 pages, 12 figures, 11 tables
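The cross-label suppression idea above can be pictured as penalizing the coefficient mass that samples of one class place on atoms labeled with other classes. A simplified stand-in for that constraint:

```python
import numpy as np

def cross_label_penalty(A, atom_labels, sample_labels):
    """Sum of squared coefficients that samples place on atoms belonging
    to other classes; driving this toward zero is the suppression idea
    (a simplified stand-in for the paper's constraint)."""
    atom_labels = np.asarray(atom_labels)[:, None]      # shape (k, 1)
    sample_labels = np.asarray(sample_labels)[None, :]  # shape (1, n)
    mask = atom_labels != sample_labels                 # cross-label entries
    return float(np.sum((A * mask) ** 2))

A = np.array([[1.0, 0.2],    # atom 0 (class 0): 0.2 leaks onto the class-1 sample
              [0.1, 1.0]])   # atom 1 (class 1): 0.1 leaks onto the class-0 sample
print(cross_label_penalty(A, [0, 1], [0, 1]))  # 0.2**2 + 0.1**2 = 0.05
```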