161 research outputs found

    Second-order Temporal Pooling for Action Recognition

    Full text link
    Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated into video-level representations by computing statistics on these features. Typically, zeroth-order (max) or first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics. Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling, that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolution of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than its first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking Activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes, which, when combined with hand-crafted features (as is standard practice), achieve state-of-the-art accuracy. Comment: Accepted in the International Journal of Computer Vision (IJCV).
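    As a concrete illustration of the idea, the snippet below is a minimal sketch of second-order temporal pooling over clip-level features; it is not the paper's exact formulation, and the function name, the per-dimension z-normalization, and the upper-triangle vectorization are assumptions made for clarity.

```python
import numpy as np

def temporal_correlation_pooling(clip_features, eps=1e-8):
    """Aggregate clip-level features (T x d) into a second-order video descriptor.

    Each feature dimension is z-normalized over time, the d x d correlation
    matrix of the temporal evolutions is computed, and its upper triangle is
    vectorized as the video-level action descriptor.
    """
    X = np.asarray(clip_features, dtype=np.float64)        # shape (T, d): T clips, d-dim features
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + eps)       # normalize each dimension over time
    corr = (X.T @ X) / X.shape[0]                          # d x d temporal correlation matrix
    iu = np.triu_indices(corr.shape[0])                    # symmetric, so keep only the upper triangle
    return corr[iu]

# Example: 10 clips, each described by a 512-dimensional CNN feature
video_descriptor = temporal_correlation_pooling(np.random.randn(10, 512))
```

    Unlike average pooling, which returns a d-dimensional vector, this descriptor captures pairwise co-activations between feature dimensions.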

    Learning compact hashing codes with complex objectives from multiple sources for large scale similarity search

    Get PDF
    Similarity search is a key problem in many real-world applications, including image and text retrieval, content reuse detection, and collaborative filtering. The purpose of similarity search is to identify similar data examples given a query example. Due to the explosive growth of the Internet, a huge amount of data such as texts, images and videos has been generated, making efficient large-scale similarity search increasingly important.

    Hashing methods have become popular for large-scale similarity search due to their computational and memory efficiency. These methods design compact binary codes to represent data examples so that similar examples are mapped to similar codes. This dissertation addresses five major problems for utilizing supervised information from multiple sources in hashing, each with respect to a different objective. Firstly, we address the problem of incorporating semantic tags by modeling the latent correlations between tags and data examples. More precisely, the hashing codes are learned in a unified semi-supervised framework by simultaneously preserving the similarities between data examples and ensuring tag consistency via a latent factor model. Secondly, we solve the missing-data problem by latent subspace learning from multiple sources; the hashing codes are learned by enforcing data consistency among the different sources. Thirdly, we address the problem of hashing on structured data by graph learning: a weighted graph is constructed based on the structured knowledge in the data, and the hashing codes are then learned by preserving the graph similarities. Fourthly, we address the problem of learning hashing codes of high ranking quality by utilizing relevance judgments from users; the hashing code/function is learned by optimizing a commonly used non-smooth, non-convex ranking measure, NDCG. Finally, we deal with the problem of insufficient supervision by active learning: we propose to actively select the most informative data examples and tags in a joint manner, based on the criterion that both the data examples and tags should be most uncertain and most dissimilar from each other.

    Extensive experiments on several large-scale datasets demonstrate the superior performance of the proposed approaches over several state-of-the-art hashing methods from different perspectives.
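    For intuition only, the sketch below shows a generic unsupervised baseline (random-hyperplane hashing followed by Hamming-distance search), not the supervised multi-source methods proposed in the dissertation; all function names and sizes are illustrative.

```python
import numpy as np

def fit_random_hyperplane_hash(dim, n_bits, seed=0):
    """Sample random hyperplanes; the sign of each projection gives one bit."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((dim, n_bits))

def hash_codes(X, planes):
    """Map real-valued examples (n x dim) to n_bits-length binary codes."""
    return (np.asarray(X) @ planes > 0).astype(np.uint8)

def hamming_search(query_code, database_codes, k=5):
    """Indices of the k database codes closest to the query in Hamming distance."""
    dists = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(dists)[:k]

# Example: 64-bit codes for a small synthetic database of 128-dim examples
X = np.random.randn(1000, 128)
planes = fit_random_hyperplane_hash(128, 64)
db_codes = hash_codes(X, planes)
query_code = hash_codes(X[:1], planes)[0]
neighbors = hamming_search(query_code, db_codes)
```

    The point of any such scheme is that similar examples receive nearby binary codes, so search reduces to cheap Hamming-distance comparisons; the dissertation's contribution is in how supervision from multiple sources shapes the codes.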

    Learning Robust and Discriminative Manifold Representations for Pattern Recognition

    Get PDF
    Face and object recognition find applications in domains such as biometrics, surveillance, and human-computer interaction. An important component in any recognition pipeline is to learn pertinent image representations that help the system discriminate one image class from another. These representations enable the system to learn a discriminative function that can classify a wide range of images. In practical situations, the acquired images are often corrupted with occlusions and noise, so robust and discriminative learning is necessary for good classification performance. This thesis explores two scenarios where robust and discriminative manifold representations help recognize face and object images. On the one hand, learning robust manifold projections enables the system to adapt to images across different domains, including cases with noise and occlusions; on the other hand, learning discriminative manifold representations aids image set comparison. The first contribution of this thesis is a robust approach to visual domain adaptation by learning a subspace with L1 principal component analysis (PCA) and an L1 Grassmannian, with applications to object and face recognition. Mapping data from different domains onto a low-dimensional subspace through PCA is a common step in subspace-based unsupervised domain adaptation. Subspaces extracted by PCA are prone to be affected by outliers, which lead to noisy projections; robust subspace learning through L1-PCA helps improve performance. The proposed approach was tested on the Office, Caltech-256, Yale-A and AT&T datasets. Results indicate improved classification accuracy for face and object recognition tasks. The second contribution of this thesis is a biologically motivated manifold learning framework for image set classification by independent component analysis (ICA) for Grassmann manifolds. It has been discovered that the simple cells in the visual cortex learn spatially localized image representations, and similar representations can be learnt using ICA. Motivated by the manifold hypothesis, a Grassmann manifold is learnt from the independent components, which enables compact representation through linear subspaces. The efficacy of the proposed approach is demonstrated for image set classification on face and object recognition datasets such as AT&T, Extended Yale, Labeled Faces in the Wild and ETH-80.
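    To illustrate why an L1 subspace is less sensitive to outliers than standard PCA, here is a sketch of one common fixed-point iteration for a single L1 principal component; it is not the thesis pipeline (which handles multiple components and the Grassmann embedding), and the function name and stopping rule are assumptions.

```python
import numpy as np

def l1_pca_component(X, n_iter=100, seed=0):
    """One L1 principal direction of a mean-centered data matrix X (n_samples x dim).

    Maximizes the L1 dispersion ||X w||_1 instead of the L2 variance, which
    dampens the influence of outlying samples on the learned direction.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)              # per-sample signs of the projections
        s[s == 0] = 1
        w_new = X.T @ s                 # re-weight samples by their sign only
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):       # simple convergence check
            break
        w = w_new
    return w

# Example: one robust direction from mean-centered synthetic data
X = np.random.randn(200, 50)
X -= X.mean(axis=0)
w = l1_pca_component(X)
```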

    Connectionist-Symbolic Machine Intelligence using Cellular Automata based Reservoir-Hyperdimensional Computing

    Full text link
    We introduce a novel framework of reservoir computing that is capable of both connectionist machine intelligence and symbolic computation. A cellular automaton is used as the reservoir dynamical system. The input is randomly projected onto the initial conditions of the automaton cells, and nonlinear computation is performed on the input by applying an automaton rule for a period of time. The evolution of the automaton creates a space-time volume of the automaton state space, and this volume is used as the reservoir. The proposed framework is capable of long short-term memory, and it requires orders of magnitude less computation compared to Echo State Networks. We prove that the cellular automaton reservoir holds a distributed representation of attribute statistics, which provides a more effective computation than a local representation. It is possible to estimate the kernel for linear cellular automata via metric learning, which enables a much more efficient distance computation in a support vector machine framework. Also, binary reservoir feature vectors can be combined using Boolean operations as in hyperdimensional computing, paving a direct way for concept building and symbolic processing. Comment: Corrected typos. Responded to some comments on Section 8. Added appendix for details. Recurrent architecture emphasized.
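    The following is a minimal sketch of the reservoir construction described above, assuming an elementary cellular automaton (rule 110, chosen only for illustration), a random placement of input bits onto the initial state, and a space-time volume that would be read out by a linear classifier; the rule, sizes, and projection scheme are assumptions, not the paper's configuration.

```python
import numpy as np

def ca_step(state, rule=110):
    """One synchronous update of an elementary CA with periodic boundaries."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    neighborhood = (left << 2) | (state << 1) | right   # 3-bit neighborhood index in 0..7
    rule_table = (rule >> np.arange(8)) & 1             # Wolfram rule number as a lookup table
    return rule_table[neighborhood]

def ca_reservoir_features(x_binary, n_cells=256, n_steps=16, seed=0, rule=110):
    """Project binary input onto random CA cells and return the space-time volume.

    The input bits are scattered onto the initial state at fixed random
    positions, the CA is run for n_steps, and all intermediate states are
    concatenated into one binary reservoir feature vector.
    """
    rng = np.random.default_rng(seed)
    positions = rng.choice(n_cells, size=len(x_binary), replace=False)
    state = np.zeros(n_cells, dtype=np.int64)
    state[positions] = x_binary
    volume = [state]
    for _ in range(n_steps):
        state = ca_step(state, rule)
        volume.append(state)
    return np.concatenate(volume)        # read out with a linear classifier

# Example: a 32-bit input mapped to a (n_steps + 1) * n_cells binary feature
features = ca_reservoir_features(np.random.randint(0, 2, size=32))
```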

    Integration of Auxiliary Data Knowledge in Prototype Based Vector Quantization and Classification Models

    Get PDF
    This thesis deals with the integration of auxiliary data knowledge into machine learning methods, especially prototype-based classification models. The problem of classification is diverse, and evaluating the result by accuracy alone is not adequate in many applications. Therefore, the classification tasks are analyzed more deeply. Possibilities to extend prototype-based methods to integrate extra knowledge about the data or the classification goal are presented, in order to obtain problem-adequate models. One of the proposed extensions is a Generalized Learning Vector Quantization for the direct optimization of statistical measures besides the classification accuracy; modifications of the metric adaptation in Generalized Learning Vector Quantization for functional data, i.e. data with lateral dependencies in the features, are also considered. Contents: Symbols and Abbreviations; 1 Introduction (1.1 Motivation and Problem Description; 1.2 Utilized Data Sets); 2 Prototype Based Methods (2.1 Unsupervised Vector Quantization: 2.1.1 C-means, 2.1.2 Self-Organizing Map, 2.1.3 Neural Gas, 2.1.4 Common Generalizations; 2.2 Supervised Vector Quantization: 2.2.1 The Family of Learning Vector Quantizers - LVQ, 2.2.2 Generalized Learning Vector Quantization; 2.3 Semi-Supervised Vector Quantization: 2.3.1 Learning Associations by Self-Organization, 2.3.2 Fuzzy Labeled Self-Organizing Map, 2.3.3 Fuzzy Labeled Neural Gas; 2.4 Dissimilarity Measures: 2.4.1 Differentiable Kernels in Generalized LVQ, 2.4.2 Dissimilarity Adaptation for Performance Improvement); 3 Deeper Insights into Classification Problems - From the Perspective of Generalized LVQ (3.1 Classification Models; 3.2 The Classification Task; 3.3 Evaluation of Classification Results; 3.4 The Classification Task as an Ill-Posed Problem); 4 Auxiliary Structure Information and Appropriate Dissimilarity Adaptation in Prototype Based Methods (4.1 Supervised Vector Quantization for Functional Data: 4.1.1 Functional Relevance/Matrix LVQ, 4.1.2 Enhancement Generalized Relevance/Matrix LVQ; 4.2 Fuzzy Information About the Labels: 4.2.1 Fuzzy Semi-Supervised Self-Organizing Maps, 4.2.2 Fuzzy Semi-Supervised Neural Gas); 5 Variants of Classification Costs and Class Sensitive Learning (5.1 Border Sensitive Learning in Generalized LVQ: 5.1.1 Border Sensitivity by Additive Penalty Function, 5.1.2 Border Sensitivity by Parameterized Transfer Function; 5.2 Optimizing Different Validation Measures by the Generalized LVQ: 5.2.1 Attention Based Learning Strategy, 5.2.2 Optimizing Statistical Validation Measurements for Binary Class Problems in the GLVQ; 5.3 Integration of Structural Knowledge about the Labeling in Fuzzy Supervised Neural Gas); 6 Conclusion and Future Work; My Publications; A Appendix (A.1 Stochastic Gradient Descent (SGD); A.2 Support Vector Machine; A.3 Fuzzy Supervised Neural Gas Algorithm Solved by SGD); Bibliography; Acknowledgements.
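    For readers unfamiliar with the base model these extensions build on, here is a minimal sketch of one stochastic Generalized LVQ (Sato & Yamada) update with an identity transfer function and squared Euclidean distance; the thesis itself works with richer cost terms, transfer functions, and adaptive metrics.

```python
import numpy as np

def glvq_update(x, y, prototypes, proto_labels, lr=0.01):
    """One stochastic GLVQ step on a labeled sample (x, y).

    mu = (d_plus - d_minus) / (d_plus + d_minus) is the relative distance cost;
    the closest correct prototype is pulled towards x and the closest incorrect
    prototype is pushed away, each weighted by the cost gradient.
    """
    d = np.sum((prototypes - x) ** 2, axis=1)
    correct = proto_labels == y
    j = np.where(correct)[0][np.argmin(d[correct])]       # closest correct prototype
    k = np.where(~correct)[0][np.argmin(d[~correct])]     # closest incorrect prototype
    dp, dm = d[j], d[k]
    denom = (dp + dm) ** 2
    prototypes[j] += lr * (4 * dm / denom) * (x - prototypes[j])
    prototypes[k] -= lr * (4 * dp / denom) * (x - prototypes[k])
    return prototypes

# Example: three 2-D prototypes for two classes, updated with one labeled sample
protos = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
labels = np.array([0, 1, 1])
protos = glvq_update(np.array([0.2, 0.1]), 0, protos, labels)
```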

    Image processing system based on similarity/dissimilarity measures to classify binary images from contour-based features

    Get PDF
    Image Processing Systems (IPS) try to solve tasks like image classification or segmentation based on image content. Many authors have proposed a variety of techniques to tackle the image classification task. Plenty of methods address the performance of the IPS [1], as well as the influence of many external circumstances, such as illumination, rotation, and noise [2]. However, there is an increasing interest in classifying shapes from binary images (BI). Shape Classification (SC) from BI considers a segmented image as a sample (background segmentation [3]) and aims to identify objects based on their shape.
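    Purely as an illustration of a contour-based feature that such a system could compare with similarity/dissimilarity measures, the sketch below extracts boundary pixels of a binary shape and histograms their normalized distances to the centroid; the helper names and this particular radial signature are assumptions, not the paper's own features.

```python
import numpy as np

def contour_mask(binary_img):
    """Boundary pixels: shape pixels with at least one 4-neighbour outside the shape."""
    img = binary_img.astype(bool)
    padded = np.pad(img, 1)
    eroded = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
              & padded[1:-1, :-2] & padded[1:-1, 2:])
    return img & ~eroded

def radial_signature(binary_img, n_bins=64):
    """Scale- and translation-invariant contour descriptor of a binary shape.

    Distances from the shape centroid to its contour pixels are normalized by
    their maximum and histogrammed, giving a fixed-length feature vector.
    """
    ys, xs = np.nonzero(binary_img)
    cy, cx = ys.mean(), xs.mean()
    by, bx = np.nonzero(contour_mask(binary_img))
    r = np.hypot(by - cy, bx - cx)
    r /= r.max() + 1e-8
    hist, _ = np.histogram(r, bins=n_bins, range=(0.0, 1.0), density=True)
    return hist

# Example: a filled square as a toy binary image
img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 1
feature = radial_signature(img)
```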

    Grassmann Learning for Recognition and Classification

    Get PDF
    Computational performance associated with high-dimensional data is a common challenge for real-world classification and recognition systems. Subspace learning has received considerable attention as a means of finding an efficient low-dimensional representation that leads to better classification and efficient processing. A Grassmann manifold is a smooth space whose points represent linear subspaces and where the relationship between points is defined through the orthogonal matrices that span them. Grassmann learning involves embedding high-dimensional subspaces and kernelizing the embedding onto a projection space where distance computations can be performed effectively. In this dissertation, Grassmann learning and its benefits for action classification and face recognition, in terms of accuracy and performance, are investigated and evaluated. Grassmannian Sparse Representation (GSR) and Grassmannian Spectral Regression (GRASP) are proposed as Grassmann-inspired subspace learning algorithms. GSR is a novel subspace learning algorithm that combines the benefits of Grassmann manifolds with sparse representations, using a least-squares loss with ℓ1-norm minimization for improved classification. GRASP is a novel subspace learning algorithm that leverages the benefits of Grassmann manifolds and spectral regression in a framework that supports high discrimination between classes and achieves computational benefits by using manifold modeling and avoiding eigen-decomposition. The effectiveness of GSR and GRASP is demonstrated for computationally intensive classification problems: (a) multi-view action classification using the IXMAS Multi-View dataset, the i3DPost Multi-View dataset, and the WVU Multi-View dataset; (b) 3D action classification using the MSRAction3D and MSRGesture3D datasets; and (c) face recognition using the AT&T Face Database, Labeled Faces in the Wild (LFW), and the Extended Yale Face Database B (YALE). Additional contributions include the definition of Motion History Surfaces (MHS) and Motion Depth Surfaces (MDS) as descriptors suitable for activity representations in video sequences and 3D depth sequences. An in-depth analysis of Grassmann metrics is applied to high-dimensional data with different levels of noise and different data distributions, revealing that standardized Grassmann kernels are favorable over geodesic metrics on a Grassmann manifold. Finally, an extensive performance analysis is made that supports Grassmann subspace learning as an effective approach for classification and recognition.
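    To make the Grassmann setting concrete, the sketch below shows how a set of samples becomes a point on the manifold (an orthonormal subspace basis) and how two such points can be compared with a projection kernel or a geodesic, principal-angle based distance; it omits the sparse-representation and spectral-regression machinery of GSR and GRASP, and the helper names are illustrative.

```python
import numpy as np

def subspace_basis(X, d):
    """Orthonormal basis (D x d) spanning the top-d principal directions of X (n x D)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T

def projection_kernel(Y1, Y2):
    """Projection (Frobenius) kernel between two points on the Grassmann manifold."""
    return np.linalg.norm(Y1.T @ Y2, 'fro') ** 2

def geodesic_distance(Y1, Y2):
    """Arc-length distance from the principal angles between the two subspaces."""
    s = np.clip(np.linalg.svd(Y1.T @ Y2, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(s))

# Example: compare two sample sets represented as 5-dimensional subspaces of R^100
A = subspace_basis(np.random.randn(40, 100), 5)
B = subspace_basis(np.random.randn(40, 100), 5)
k_ab, d_ab = projection_kernel(A, B), geodesic_distance(A, B)
```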