518 research outputs found

    Cross-label Suppression: A Discriminative and Fast Dictionary Learning with Group Regularization

    Full text link
    This paper addresses image classification by efficiently learning a compact and discriminative dictionary. Given a structured dictionary in which each atom (a column of the dictionary matrix) is associated with a label, we propose a cross-label suppression constraint to enlarge the differences among representations of different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, i.e., representations of the same class are encouraged to be similar. Thanks to the cross-label suppression, we do not resort to the frequently used ℓ0-norm or ℓ1-norm for coding, and we obtain computational efficiency without losing discriminative power for categorization. Moreover, two simple classification schemes are developed to take full advantage of the learnt dictionary. Extensive experiments on six data sets covering face recognition, object categorization, scene classification, texture recognition and sport action categorization show that the proposed approach outperforms many recently proposed dictionary learning algorithms in both recognition accuracy and computational efficiency. Comment: 36 pages, 12 figures, 11 tables
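
    A minimal sketch of the coding idea above, under assumptions not taken from the paper: ridge-regularized coding with an extra quadratic penalty on the coefficients of "cross-label" atoms. The `atom_labels` array, the penalty weights and the closed-form solver are illustrative choices, not the paper's exact objective.

```python
import numpy as np

def cross_label_code(x, D, atom_labels, target_class, lam=0.1, mu=1.0):
    """Code sample x over dictionary D (d x k) with ridge regularization,
    extra-penalizing coefficients on atoms whose label differs from target_class.
    A hedged sketch of the idea only, not the paper's objective."""
    k = D.shape[1]
    # Mask selecting "cross-label" atoms (atoms not belonging to target_class).
    cross = (atom_labels != target_class).astype(float)
    # Closed form of min_a ||x - D a||^2 + lam ||a||^2 + mu ||diag(cross) a||^2.
    A = D.T @ D + lam * np.eye(k) + mu * np.diag(cross)
    return np.linalg.solve(A, D.T @ x)

# Toy usage: 20-dimensional samples, 3 classes, 5 atoms per class.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 15))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
atom_labels = np.repeat(np.arange(3), 5)
x = rng.standard_normal(20)
a = cross_label_code(x, D, atom_labels, target_class=1)
```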

    Learning Discriminative Multilevel Structured Dictionaries for Supervised Image Classification

    Full text link
    Sparse representations using overcomplete dictionaries have proved to be a powerful tool in many signal processing applications such as denoising, super-resolution, inpainting, compression and classification. The sparsity of the representation depends strongly on how well the dictionary is adapted to the data at hand. In this paper, we propose a method for learning structured multilevel dictionaries with discriminative constraints that makes them well suited for the supervised pixelwise classification of images. A multilevel tree-structured discriminative dictionary is learnt for each class, with a learning objective based on the reconstruction errors of the image patches around the pixels over each class-representative dictionary. After an initial assignment of class labels to image pixels based on their sparse representations over the learnt dictionaries, the final classification is achieved by smoothing the label image with a graph-cut method and an erosion method. Applied to a common set of texture images, our supervised classification method shows results competitive with the state of the art.
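
    The classification-by-reconstruction-error step can be illustrated with a short sketch, assuming one per-class dictionary learned with scikit-learn's MiniBatchDictionaryLearning and toy patch data; the multilevel tree structure, the discriminative constraints and the graph-cut/erosion smoothing are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_class_dictionaries(patches_by_class, n_atoms=32, alpha=1.0, seed=0):
    """Learn one dictionary per class from that class's training patches."""
    dicts = {}
    for label, patches in patches_by_class.items():
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                         random_state=seed)
        dl.fit(patches)                       # patches: (n_samples, patch_dim)
        dicts[label] = dl
    return dicts

def classify_by_reconstruction(patch, dicts):
    """Assign the label whose dictionary reconstructs the patch best."""
    errors = {}
    for label, dl in dicts.items():
        code = dl.transform(patch[None, :])   # sparse code, shape (1, n_atoms)
        recon = code @ dl.components_         # reconstruction, shape (1, patch_dim)
        errors[label] = float(np.sum((patch - recon) ** 2))
    return min(errors, key=errors.get)

# Toy usage with random "patches" for two classes.
rng = np.random.default_rng(0)
patches_by_class = {0: rng.standard_normal((200, 64)),
                    1: rng.standard_normal((200, 64)) + 0.5}
dicts = learn_class_dictionaries(patches_by_class)
print(classify_by_reconstruction(rng.standard_normal(64) + 0.5, dicts))
```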

    Robust Sparse Coding via Self-Paced Learning

    Full text link
    Sparse coding (SC) is attracting increasing attention owing to its comprehensive theoretical foundations and its excellent performance in many signal processing applications. However, most existing sparse coding algorithms are nonconvex and are thus prone to getting stuck in bad local minima, especially in the presence of outliers and noisy data. To enhance learning robustness, in this paper we propose a unified framework named Self-Paced Sparse Coding (SPSC), which gradually includes matrix elements into SC learning from easy to complex. We also generalize the self-paced learning scheme to different levels of dynamic selection, on samples, features and elements respectively. Experimental results on real-world data demonstrate the efficacy of the proposed algorithms. Comment: submitted to AAAI201
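
    A hedged sketch of sample-level self-paced selection (one of the three levels mentioned above; the element-level variant of SPSC is not shown), assuming a hard easiness cut that grows each round and scikit-learn dictionary learning as the inner solver:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def self_paced_dictionary_learning(X, n_atoms=16, alpha=1.0,
                                   start_frac=0.5, rounds=5, seed=0):
    """Alternate between (1) fitting a dictionary on the currently 'easiest'
    samples and (2) enlarging that set as training proceeds, so outliers are
    only admitted late (or never). A conceptual sketch, not the SPSC objective."""
    selected = np.ones(len(X), dtype=bool)            # first pass uses everything
    fracs = np.linspace(start_frac, 1.0, rounds)      # fraction of samples to keep
    dl = None
    for frac in fracs:
        dl = DictionaryLearning(n_components=n_atoms, alpha=alpha,
                                random_state=seed)
        dl.fit(X[selected])
        codes = dl.transform(X)
        errors = np.sum((X - codes @ dl.components_) ** 2, axis=1)
        # Hard self-paced weights: keep the frac easiest samples for the next fit.
        selected = errors <= np.quantile(errors, frac)
    return dl

# Toy usage: 100 samples in 32 dimensions, a few of them gross outliers.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))
X[:5] += 20.0                                         # outliers, excluded in early rounds
model = self_paced_dictionary_learning(X)
```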

    Semi-supervised dual graph regularized dictionary learning

    Full text link
    In this paper, we propose a semi-supervised dictionary learning method that uses the information in both labelled and unlabelled data and jointly trains a linear classifier on the sparse codes. The manifold structure of the data is preserved in the sparse-code space using the same approach as the Locally Linear Embedding (LLE) method, which enforces the predictive power of the sparse codes of the unlabelled data. We show that our approach provides significant improvements over other methods, and that the results can be further improved by training a simple nonlinear classifier, such as an SVM, on the sparse codes. Comment: in Proceedings of iTWIST'18, Paper-ID: 33, Marseille, France, November 21-23, 201
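
    The LLE-style manifold regularizer mentioned above is typically built from k-nearest-neighbour reconstruction weights; the matrix M = (I - W)^T (I - W) then gives the quadratic penalty trace(A M A^T) that encourages the sparse codes A to preserve local linear structure. A plain-numpy sketch under those standard assumptions, not the paper's full semi-supervised objective:

```python
import numpy as np

def lle_weights(X, k=5, reg=1e-3):
    """Barycentric reconstruction weights: each sample is approximated as a
    weighted combination of its k nearest neighbours (the standard LLE step)."""
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]           # skip the point itself
        Z = X[neighbours] - X[i]                      # local coordinates (k, dim)
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)            # regularize the local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, neighbours] = w / w.sum()                # weights sum to one
    return W

def manifold_penalty(codes, W):
    """Quadratic LLE penalty sum_i ||a_i - sum_j W_ij a_j||^2 on codes (n, n_atoms)."""
    M = (np.eye(len(W)) - W).T @ (np.eye(len(W)) - W)
    return float(np.trace(codes.T @ M @ codes))

# Toy usage: the penalty is small when codes vary smoothly along the data manifold.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 10))
W = lle_weights(X, k=5)
codes = X @ rng.standard_normal((10, 8))              # stand-in for sparse codes
print(manifold_penalty(codes, W))
```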

    Multi-View Task-Driven Recognition in Visual Sensor Networks

    Full text link
    Nowadays, distributed smart cameras are deployed for a wide range of tasks in several application scenarios, including object recognition, image retrieval, and forensic applications. Because bandwidth is limited in distributed systems, efficient coding of local visual features has been an active topic of research. In this paper, we propose a novel approach to obtaining a compact representation of high-dimensional visual data using sensor fusion techniques. We cast visual analysis in resource-limited scenarios as a multi-view representation learning problem, and we show that the key to finding a properly compressed representation is to exploit the relative positions of the cameras as a norm-based regularization in the sparse-coding signal representation. Learning the representation of each camera is viewed as an individual task, and multi-task learning with joint sparsity across all nodes is employed. The proposed representation learning scheme is referred to as multi-view task-driven learning for visual sensor networks (MT-VSN). We demonstrate that MT-VSN outperforms the state of the art in various surveillance recognition tasks. Comment: 5 pages, Accepted in International Conference on Image Processing, 201
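
    Joint sparsity across camera nodes is commonly imposed with an l2,1-type penalty, whose proximal operator shrinks whole rows of the coefficient matrix at once so that all views select the same atoms. A minimal sketch under that assumption, using plain proximal gradient on a shared-dictionary least-squares fit rather than the paper's task-driven objective:

```python
import numpy as np

def prox_l21(A, tau):
    """Row-wise soft thresholding: the proximal operator of tau * sum_i ||A_i,:||_2.
    Each row (atom) is either kept for all views or zeroed for all views."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return A * scale

def joint_sparse_coding(X_views, D, tau=0.1, iters=200):
    """Code one sample per camera view over a shared dictionary D (d x k)
    with a joint (l2,1) sparsity pattern across views, via proximal gradient."""
    X = np.column_stack(X_views)                  # (d, n_views)
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(iters):
        grad = D.T @ (D @ A - X)
        A = prox_l21(A - step * grad, step * tau)
    return A

# Toy usage: three views of the same scene, shared 50-atom dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 50))
D /= np.linalg.norm(D, axis=0)
views = [rng.standard_normal(30) for _ in range(3)]
A = joint_sparse_coding(views, D)
print((np.linalg.norm(A, axis=1) > 0).sum(), "atoms shared across views")
```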

    Scalable Block-Diagonal Locality-Constrained Projective Dictionary Learning

    Full text link
    We propose a novel structured discriminative block-diagonal dictionary learning method, referred to as scalable Locality-Constrained Projective Dictionary Learning (LC-PDL), for efficient representation and classification. To improve scalability by saving both training and testing time, LC-PDL learns a structured discriminative dictionary and a block-diagonal representation without using the costly l0/l1-norm. It also avoids the extra time-consuming sparse reconstruction process over the trained dictionary for each new sample that many existing models require. More importantly, LC-PDL avoids using the complementary data matrix to learn the sub-dictionary of each class. To enhance performance, we incorporate a locality constraint on the atoms into the dictionary learning procedure to preserve local information, and obtain the codes of the samples of each class separately. A block-diagonal discriminative approximation term is also derived to learn a discriminative projection that bridges data with their codes by extracting block-diagonal features from the data, which ensures that the approximate coefficients are clearly associated with their label information. A robust multiclass classifier is then trained on the extracted block-diagonal codes for accurate label prediction. Experimental results verify the effectiveness of our algorithm. Comment: Accepted at the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019)
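
    The locality constraint on atoms is in the spirit of locality-constrained linear coding: each sample is coded only over its nearest atoms, which keeps coding cheap and closed-form, with no l0/l1 optimization. The sketch below shows that generic LLC-style step, not LC-PDL's full block-diagonal model:

```python
import numpy as np

def locality_constrained_code(x, D, k=5):
    """Code x over only its k nearest dictionary atoms, with coefficients
    constrained to sum to one (LLC-style). Solves a tiny k x k linear system,
    so no l0/l1 optimization is needed."""
    dists = np.linalg.norm(D - x[:, None], axis=0)
    idx = np.argsort(dists)[:k]                   # nearest atoms
    Z = D[:, idx].T - x                           # shifted local atoms, shape (k, d)
    C = Z @ Z.T + 1e-6 * np.eye(k)                # local covariance, regularized
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                  # enforce the sum-to-one constraint
    code = np.zeros(D.shape[1])
    code[idx] = w
    return code

# Toy usage.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(20)
a = locality_constrained_code(x, D, k=5)
```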

    Jointly Learning Structured Analysis Discriminative Dictionary and Analysis Multiclass Classifier

    Full text link
    In this paper, we propose an analysis-mechanism-based structured Analysis Discriminative Dictionary Learning (ADDL) framework. ADDL seamlessly integrates analysis discriminative dictionary learning, analysis representation and analysis classifier training into a unified model. The analysis mechanism ensures that the learnt dictionaries, representations and linear classifiers of different classes are as independent and discriminative as possible. The dictionary is obtained by minimizing a reconstruction error together with an analytical incoherence-promoting term that encourages the sub-dictionaries associated with different classes to be independent. To obtain the representation coefficients, ADDL imposes a sparse l2,1-norm constraint on the coding coefficients instead of the l0- or l1-norm, since the l0- or l1-norm constraints used in most existing DL criteria make the training phase time-consuming. The code-extraction projection, which bridges data with the sparse codes by extracting special features from the given samples, is calculated by minimizing a sparse-code approximation term. We then compute a linear classifier based on the approximated sparse codes through an analysis mechanism, so that classification and representation power are considered simultaneously. The classification stage of our model is therefore very efficient, because it avoids the extra time-consuming sparse reconstruction process over the trained dictionary for each new test sample that most existing DL algorithms require. Simulations on real image databases demonstrate that our ADDL model obtains superior performance over other state-of-the-art methods. Comment: Accepted by IEEE TNNL
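
    The test-time efficiency argument amounts to replacing per-sample sparse optimization with two matrix multiplications: a learned projection that extracts approximate codes, followed by a linear classifier. A hedged sketch in which P and W are random stand-ins for parameters that training would produce:

```python
import numpy as np

def projective_dl_predict(X_test, P, W):
    """Test-time pipeline of projective/analysis DL models: codes are obtained by
    a learned linear projection P (no sparse optimization), then classified by a
    linear classifier W. Both P and W are placeholders for learned parameters."""
    codes = X_test @ P.T             # (n, n_atoms): approximate sparse codes
    scores = codes @ W.T             # (n, n_classes): class scores
    return scores.argmax(axis=1)

# Toy usage with randomly generated stand-ins for the learned P and W.
rng = np.random.default_rng(0)
d, n_atoms, n_classes = 64, 40, 5
P = rng.standard_normal((n_atoms, d))           # code-extraction projection
W = rng.standard_normal((n_classes, n_atoms))   # linear classifier on codes
X_test = rng.standard_normal((10, d))
print(projective_dl_predict(X_test, P, W))
```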

    Spatial-Aware Dictionary Learning for Hyperspectral Image Classification

    Full text link
    This paper presents a structured dictionary-based model for hyperspectral data that incorporates both spectral and contextual characteristics of a spectral sample, with the goal of hyperspectral image classification. The idea is to partition the pixels of a hyperspectral image into a number of spatial neighborhoods called contextual groups and to model each pixel with a linear combination of a few dictionary elements learned from the data. Since pixels inside a contextual group are often made up of the same materials, their linear combinations are constrained to use common elements from the dictionary. To this end, dictionary learning is carried out with a joint sparse regularizer that induces a common sparsity pattern in the sparse coefficients of each contextual group. The sparse coefficients are then used for classification with a linear SVM. Experimental results on a number of real hyperspectral images confirm the effectiveness of the proposed representation for hyperspectral image classification. Moreover, experiments with simulated multispectral data show that the proposed model is capable of finding representations that can effectively be used for classification of samples at multispectral resolution. Comment: 16 pages, 9 figures
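
    One common way to realize a joint sparse regularizer is simultaneous orthogonal matching pursuit (SOMP), which forces all pixels of a contextual group to pick atoms from the same small support. The sketch below shows generic SOMP plus a linear SVM on the resulting codes, assuming a fixed random dictionary and stand-in labels; it is not the paper's learning procedure:

```python
import numpy as np
from sklearn.svm import LinearSVC

def simultaneous_omp(Y, D, n_nonzero=5):
    """Jointly code the columns of Y (pixels of one contextual group) over D so
    that they share a common support of at most n_nonzero atoms (SOMP)."""
    residual, support = Y.copy(), []
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the residuals of the whole group.
        corr = np.abs(D.T @ residual).sum(axis=1)
        if support:
            corr[support] = -np.inf                  # do not reselect atoms
        support.append(int(np.argmax(corr)))
        Ds = D[:, support]
        coeffs, *_ = np.linalg.lstsq(Ds, Y, rcond=None)
        residual = Y - Ds @ coeffs
    codes = np.zeros((D.shape[1], Y.shape[1]))
    codes[support, :] = coeffs
    return codes

# Toy usage: one 4x4 contextual group of 30-band "pixels", random dictionary,
# codes fed to a linear SVM (the labels here are stand-ins).
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)
group = rng.standard_normal((30, 16))
codes = simultaneous_omp(group, D)
clf = LinearSVC().fit(codes.T, np.arange(16) % 2)
```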

    Contextual Explanation Networks

    Full text link
    Modern learning algorithms excel at producing accurate but complex models of the data. However, deploying such models in the real world requires extra care: we must ensure their reliability, robustness, and absence of undesired biases. This motivates the development of models that are equally accurate but can also be easily inspected and assessed beyond their predictive performance. To this end, we introduce contextual explanation networks (CENs), a class of architectures that learn to predict by generating and utilizing intermediate, simplified probabilistic models. Specifically, CENs generate parameters for intermediate graphical models, which are further used for prediction and play the role of explanations. Contrary to existing post-hoc model-explanation tools, CENs learn to predict and to explain simultaneously. Our approach offers two major advantages: (i) for each prediction, a valid, instance-specific explanation is generated with no computational overhead, and (ii) prediction via explanation acts as a regularizer and boosts performance in data-scarce settings. We analyze the proposed framework theoretically and experimentally. Our results on image and text classification and survival analysis tasks demonstrate that CENs are not only competitive with state-of-the-art methods but also offer additional insight into each prediction, which can be valuable for decision support. We also show that, while post-hoc methods may produce misleading explanations in certain cases, CENs are consistent and allow such cases to be detected systematically. Comment: 48 pages, 18 figures, to appear in JML
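
    The architecture can be caricatured in a few lines: an encoder maps the raw context to the parameters of a simple linear model, and that generated model is applied to interpretable attributes to produce the prediction, doubling as the explanation. A hedged PyTorch sketch with illustrative dimensions and an encoder chosen for brevity; the paper uses richer probabilistic graphical models as the intermediate explanations.

```python
import torch
import torch.nn as nn

class ContextualExplanationNet(nn.Module):
    """Sketch of the CEN idea: an encoder maps the raw context (e.g. an image or
    text embedding) to the parameters of a simple linear model, which is then
    applied to interpretable attributes to produce the prediction. The encoder
    and all dimensions are illustrative assumptions."""

    def __init__(self, context_dim, attr_dim, n_classes, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, attr_dim * n_classes),   # per-instance linear model
        )
        self.attr_dim, self.n_classes = attr_dim, n_classes

    def forward(self, context, attributes):
        # Generate an instance-specific linear "explanation" W(context).
        W = self.encoder(context).view(-1, self.n_classes, self.attr_dim)
        # Prediction = generated model applied to the interpretable attributes.
        logits = torch.einsum("bca,ba->bc", W, attributes)
        return logits, W                               # W doubles as the explanation

# Toy usage: 8 samples, 512-dim context, 20 interpretable attributes, 3 classes.
model = ContextualExplanationNet(context_dim=512, attr_dim=20, n_classes=3)
ctx, attrs = torch.randn(8, 512), torch.randn(8, 20)
logits, explanations = model(ctx, attrs)
print(logits.shape, explanations.shape)   # torch.Size([8, 3]) torch.Size([8, 3, 20])
```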

    Semi-supervised Sparse Representation with Graph Regularization for Image Classification

    Full text link
    Image classification remains a challenging problem for computers in real-world settings. Many methods can achieve satisfactory performance when sufficient labeled images are available. However, labeled images are still highly limited for certain image classification tasks, whereas large numbers of unlabeled images are available and easy to obtain. Making full use of the available unlabeled data is therefore a potential way to further improve the performance of current image classification methods. In this paper, we propose a discriminative semi-supervised sparse representation algorithm for image classification. In the algorithm, the classification process is combined with sparse coding to learn a data-driven linear classifier. To obtain discriminative predictions, the predicted labels are regularized with three graphs, i.e., the global manifold structure graph, the within-class graph and the between-class graph. The constructed graphs are able to extract the structural information contained in both the labeled and unlabeled data. Moreover, the proposed method is extended to a kernel version to deal with data that cannot be linearly classified. Efficient algorithms are developed to solve the corresponding optimization problems. Experimental results on several challenging databases demonstrate that the proposed algorithm achieves excellent performance compared with related popular methods.
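
    The graph regularization of predicted labels can be sketched with a single kNN manifold graph (the paper combines three graphs and couples them with sparse coding): known labels are fitted where available, and the Laplacian term gamma * tr(F^T L F) smooths the label scores F over the graph, giving a closed-form solution. Assumptions: scikit-learn's kneighbors_graph, an unnormalized Laplacian, and toy Gaussian data.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def graph_regularized_labels(X, y, labeled_mask, k=8, gamma=1.0):
    """Closed-form graph-regularized label scores: fit known labels where
    available and smooth the scores over a kNN graph Laplacian."""
    n = len(X)
    classes = np.unique(y[labeled_mask])
    # One-hot targets for labeled samples, zeros elsewhere.
    Y = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        Y[labeled_mask & (y == c), j] = 1.0
    # Symmetric kNN adjacency and unnormalized graph Laplacian L = D - W.
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W
    J = np.diag(labeled_mask.astype(float))       # fit term only on labeled points
    F = np.linalg.solve(J + gamma * L + 1e-6 * np.eye(n), Y)
    return classes[F.argmax(axis=1)]

# Toy usage: two Gaussian blobs, only about 10% of the labels revealed.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])
y = np.repeat([0, 1], 100)
labeled = rng.random(200) < 0.1
pred = graph_regularized_labels(X, y, labeled)
print("accuracy:", (pred == y).mean())
```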