12 research outputs found

    Logistic Regression Based on Statistical Learning Model with Linearized Kernel for Classification

    Get PDF
    In this paper, we propose a logistic regression classification method based on the integration of a statistical learning model with linearized kernel pre-processing. A single Gaussian kernel and a fusion of Gaussian and cosine kernels are adopted for the linearized kernel pre-processing, respectively. The adopted statistical learning models are the generalized linear model and the generalized additive model. With the generalized linear model, elastic net regularization is adopted to exploit the grouping effect of the linearized kernel feature space. With the generalized additive model, an overlap group-lasso penalty is used to fit sparse generalized additive functions within the linearized kernel feature space. Experimental results on the Extended Yale-B and AR face databases demonstrate the effectiveness of the proposed method. Improved results are also obtained efficiently with our method on the classification of spectral data.
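    As a rough illustration of the idea, the sketch below builds an empirical Gaussian-kernel map (kernel evaluations against the training set used as explicit features) and fits an elastic-net logistic regression on it with scikit-learn. The dataset, kernel width, and mixing ratio are placeholders, and the paper's kernel fusion and generalized additive variant are not reproduced.

```python
# Minimal sketch: empirical Gaussian-kernel "linearization" + elastic-net
# logistic regression. Dataset and hyper-parameters are illustrative only.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gamma = 1.0 / X_tr.shape[1]                  # assumed kernel width
Z_tr = rbf_kernel(X_tr, X_tr, gamma=gamma)   # linearized kernel features
Z_te = rbf_kernel(X_te, X_tr, gamma=gamma)

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
clf.fit(Z_tr, y_tr)
print("test accuracy:", clf.score(Z_te, y_te))
```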

    Reduced Kernel Dictionary Learning

    Full text link
    In this paper we present new algorithms for training reduced-size nonlinear representations in the Kernel Dictionary Learning (KDL) problem. Standard KDL has the drawback of a large kernel matrix when the data set is large. There are several ways of reducing the kernel size, notably Nyström sampling. We propose here a method more in the spirit of dictionary learning, where the kernel vectors are obtained from a trained sparse representation of the input signals. Moreover, we optimize the kernel vectors directly in the KDL process, using gradient descent steps. We show on three data sets that our algorithms provide better representations, despite using a small number of kernel vectors, and also reduce the execution time with respect to standard KDL.
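    One simple way to read the reduced-size idea is to map signals through a small empirical kernel map k(x, C) built from a few kernel vectors C, and then run ordinary dictionary learning on those reduced features. The sketch below does exactly that with a naive random selection of C; the paper's trained selection of kernel vectors and its direct gradient updates are not reproduced, and the dataset and sizes are placeholders.

```python
# Minimal sketch: reduced kernel feature map + ordinary dictionary learning,
# as a stand-in for jointly trained kernel vectors.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import DictionaryLearning
from sklearn.metrics.pairwise import rbf_kernel

X, _ = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

m = 64                                              # assumed number of kernel vectors (m << n)
C = X[rng.choice(len(X), size=m, replace=False)]    # naive selection; the paper trains these
Z = rbf_kernel(X, C, gamma=1e-3)                    # reduced kernel features k(x, C)

dl = DictionaryLearning(n_components=32, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5, max_iter=20, random_state=0)
codes = dl.fit_transform(Z)
recon = codes @ dl.components_
print("relative reconstruction error:", np.linalg.norm(Z - recon) / np.linalg.norm(Z))
```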

    DESIGN OF COMPACT AND DISCRIMINATIVE DICTIONARIES

    Get PDF
    The objective of this research work is to design compact and discriminative dictionaries for effective classification. The motivation stems from the fact that dictionaries inherently contain redundant atoms, because the aim of dictionary learning is reconstruction, not classification. In this thesis, we propose methods to obtain a minimum number of discriminative dictionary atoms for effective classification and reduced computational time. First, we propose a classification scheme in which an example is assigned to a class based on a weighted combination of maximum projection and minimum reconstruction error. Here, the input data is learned by K-SVD dictionary learning, which alternates between sparse coding and dictionary update; orthogonal matching pursuit (OMP) is used for sparse coding and singular value decomposition for the dictionary update. Although this classification scheme is effective, there is still scope to improve dictionary learning by removing redundant atoms, because our goal is not reconstruction. In order to remove such redundant atoms, we propose two approaches based on information theory to obtain compact discriminative dictionaries. In the first approach, we remove redundant atoms from the dictionary while maintaining discriminative information. Specifically, we propose a constrained optimization problem that minimizes the mutual information between the optimized dictionary and the initial dictionary while maximizing the mutual information between class labels and the optimized dictionary. This helps determine the information loss between the dictionary before and after optimization. To compute the information loss, we use the Jensen-Shannon divergence with adaptive weights to compare the class distributions of each dictionary atom. The advantage of the Jensen-Shannon divergence is its computational efficiency compared with calculating the information loss from mutual information.
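    The sketch below illustrates the classification setting only: per-class dictionaries (with scikit-learn's mini-batch learner standing in for K-SVD) and assignment by minimum OMP reconstruction error. The weighted projection/reconstruction rule and the information-theoretic atom pruning proposed in the thesis are not reproduced; the dataset and atom counts are placeholders.

```python
# Minimal sketch: per-class dictionaries + minimum-reconstruction-error classification.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

dicts = {}
for c in np.unique(y_tr):
    dl = MiniBatchDictionaryLearning(n_components=20, batch_size=64,
                                     max_iter=50, random_state=0)
    dl.fit(X_tr[y_tr == c])
    dicts[c] = dl.components_.T              # columns are atoms of class c

def predict(x, n_nonzero=5):
    # Sparse-code x against each class dictionary; pick the smallest residual.
    errs = {}
    for c, D in dicts.items():
        coef = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
        errs[c] = np.linalg.norm(x - D @ coef)
    return min(errs, key=errs.get)

acc = np.mean([predict(x) == t for x, t in zip(X_te, y_te)])
print("min-reconstruction-error accuracy:", acc)
```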

    Kernel recursive least squares dictionary learning algorithm

    Get PDF
    An online dictionary learning algorithm for kernel sparse representation is developed in the current paper. In this framework, the input signal, nonlinearly mapped into the feature space, is sparsely represented based on a virtual dictionary in the same space. At any instant, the dictionary is updated in two steps. In the first step, the input signal samples are sparsely represented in the feature space, using the dictionary that has been updated based on the previous data. In the second step, the dictionary is updated. In this paper, a novel recursive dictionary update algorithm is derived, based on the recursive least squares (RLS) approach. This algorithm gradually updates the dictionary upon receiving one or a mini-batch of training samples. An efficient implementation of the algorithm is also formulated. Experimental results over four datasets in different fields show the superior performance of the proposed algorithm in comparison with its counterparts. In particular, the classification accuracy obtained by the dictionaries trained using the proposed algorithm gradually approaches that of dictionaries trained in batch mode. Moreover, in spite of its lower computational complexity, the proposed algorithm outperforms all existing online kernel dictionary learning algorithms.
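    To give a feel for the recursive update, the sketch below runs an RLS-style dictionary update in the ordinary (linear) signal domain with OMP as the sparse coder, processing one sample at a time. It is only a linear analogue: the paper's virtual dictionary in kernel feature space and its efficient kernel formulation are not reproduced, and the data here are synthetic.

```python
# Minimal sketch: linear-domain RLS-style recursive dictionary update with OMP coding.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n_features, n_atoms, n_samples = 20, 30, 2000

D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
P = 1e3 * np.eye(n_atoms)                      # running inverse code-correlation estimate

for _ in range(n_samples):
    x = rng.standard_normal(n_features)        # stand-in for one streaming sample
    g = orthogonal_mp(D, x, n_nonzero_coefs=3) # step 1: sparse code with current dictionary
    Pg = P @ g
    k = Pg / (1.0 + g @ Pg)                    # RLS gain vector
    D += np.outer(x - D @ g, k)                # step 2: rank-one dictionary correction
    P -= np.outer(k, Pg)                       # recursive update of the inverse
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # keep atoms near unit norm (practical step)

g = orthogonal_mp(D, x, n_nonzero_coefs=3)
print("relative residual on last sample:", np.linalg.norm(x - D @ g) / np.linalg.norm(x))
```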

    Kernel PCA with the Nyström method

    Get PDF
    Kernel methods are powerful but computationally demanding techniques for non-linear learning. A popular remedy, the Nyström method, has been shown to scale kernel methods up to very large datasets with little loss in accuracy. However, kernel PCA with the Nyström method has not been widely studied. In this paper we derive kernel PCA with the Nyström method and study its accuracy, providing a finite-sample confidence bound on the difference between the Nyström and standard empirical reconstruction errors. The behaviour of the method and of the bound is illustrated through extensive computer experiments on real-world data. As an application of the method we present kernel principal component regression with the Nyström method.
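    A quick way to see the idea is to compare exact kernel PCA against PCA on an explicit Nyström feature map, as in the sketch below. This mirrors the approximation being studied but is not the authors' derivation or their confidence bound; the dataset, kernel width, and landmark count are placeholders.

```python
# Minimal sketch: Nyström-approximate kernel PCA vs. exact kernel PCA.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA, PCA
from sklearn.kernel_approximation import Nystroem

X, _ = load_digits(return_X_y=True)
gamma = 1e-3                                   # assumed RBF kernel width

# Exact kernel PCA (requires the full n-by-n kernel matrix).
exact = KernelPCA(n_components=10, kernel="rbf", gamma=gamma).fit_transform(X)

# Nyström: m landmark points give an explicit approximate feature map,
# so kernel PCA reduces to linear PCA on the mapped data.
feat = Nystroem(kernel="rbf", gamma=gamma, n_components=100, random_state=0)
Z = feat.fit_transform(X)
approx = PCA(n_components=10).fit_transform(Z)

# Compare the variance captured by the leading components
# (signs and rotations of the components may differ).
print("exact  variance, top 3:", np.var(exact, axis=0)[:3])
print("approx variance, top 3:", np.var(approx, axis=0)[:3])
```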

    Self-taught semi-supervised dictionary learning with non-negative constraint

    Get PDF
    This paper investigates classification by dictionary learning. A novel unified framework termed self-taught semi-supervised dictionary learning with non-negative constraint (NNST-SSDL) is proposed for simultaneously optimizing the components of a dictionary and a graph Laplacian. Specifically, an atom graph Laplacian regularization is built by using the sparse coefficients to effectively capture the underlying manifold structure. It is more robust to noisy samples and outliers because atoms are more concise and representative than training samples. A non-negative constraint imposed on the sparse coefficients guarantees that each sample is represented as a non-negative combination of its related atoms; in this way the dependency between samples and atoms is made explicit. Furthermore, a self-taught mechanism is introduced to effectively feed back the manifold structure induced by the atom graph Laplacian regularization, together with the supervised information hidden in unlabeled samples, in order to learn a better dictionary. An efficient algorithm, combining a block coordinate descent method with the alternating direction method of multipliers, is derived to optimize the unified framework. Experimental results on several benchmark datasets show the effectiveness of the proposed model.
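    The sketch below only illustrates two ingredients in isolation: non-negative sparse coding and an atom graph Laplacian built from atom similarities, which could then regularize the codes. The joint self-taught optimization (block coordinate descent plus ADMM) and the semi-supervised feedback described in the paper are not reproduced; the dataset, similarity kernel, and penalty weights are placeholders.

```python
# Minimal sketch: non-negative sparse coding + atom graph Laplacian.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import DictionaryLearning
from sklearn.metrics.pairwise import rbf_kernel

X, _ = load_digits(return_X_y=True)

dl = DictionaryLearning(n_components=40, fit_algorithm="cd",
                        transform_algorithm="lasso_cd", transform_alpha=0.5,
                        positive_code=True, max_iter=20, random_state=0)
codes = dl.fit_transform(X)                 # each sample as a non-negative mix of atoms
atoms = dl.components_                      # rows are dictionary atoms

# Atom graph: weight atom pairs by similarity, then form the unnormalized Laplacian.
W = rbf_kernel(atoms, gamma=1e-3)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# The Laplacian would act as a smoothness penalty on the codes, tr(A L A^T).
smoothness = np.sum((codes @ L) * codes)
print("codes non-negative:", bool((codes >= 0).all()),
      "| atom-graph smoothness:", smoothness)
```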