327 research outputs found

    Effective Auto Encoder For Unsupervised Sparse Representation

    High dimensionality and the sheer size of unlabeled data available today demand new developments in unsupervised learning of sparse representations. Despite recent advances in representation learning, most current methods are limited when dealing with large-scale unlabeled data. In this study, we propose a new unsupervised method that learns sparse representations from unlabeled data efficiently. We derive a closed-form solution based on sequential minimal optimization (SMO) for training an auto encoder-decoder module, which efficiently extracts sparse and compact features from datasets of varying size. The inference process in the proposed learning algorithm does not require any expensive Hessian computation for solving the underlying optimization problems. Decomposing the non-convex optimization problem in our model enables us to solve each sub-problem analytically. Using several image datasets, including CIFAR-10, CALTECH-101 and the AR face database, we demonstrate the effectiveness of our method in terms of computation time and classification accuracy. The proposed method discovers dictionaries that capture low-level features in larger patches in considerably less execution time than alternative approaches. Detailed experimental results show that our module outperforms similar single-layer state-of-the-art methods, including Sparse Filtering and K-Means clustering.
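    As a concrete illustration of single-layer sparse feature learning of the kind described above, the following numpy sketch alternates ISTA-style sparse encoding with a least-squares dictionary update. It is a generic sketch under those assumptions, not the authors' SMO-based closed-form solver; the function names and hyperparameters are hypothetical.

        import numpy as np

        def soft_threshold(z, lam):
            # Elementwise soft-thresholding: the proximal operator of the l1 penalty.
            return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

        def learn_sparse_features(X, n_atoms=64, lam=0.1, n_outer=30, n_ista=20, seed=0):
            # X: (n_samples, n_features) matrix of flattened, zero-mean image patches.
            # Returns a dictionary D (n_atoms, n_features) and sparse codes S (n_samples, n_atoms).
            rng = np.random.default_rng(seed)
            D = rng.standard_normal((n_atoms, X.shape[1]))
            D /= np.linalg.norm(D, axis=1, keepdims=True)
            S = np.zeros((X.shape[0], n_atoms))
            for _ in range(n_outer):
                # Encoding step: a few ISTA iterations on 0.5*||X - S D||^2 + lam*||S||_1.
                step = 1.0 / np.linalg.norm(D @ D.T, 2)  # inverse Lipschitz constant of the gradient
                for _ in range(n_ista):
                    grad = (S @ D - X) @ D.T
                    S = soft_threshold(S - step * grad, step * lam)
                # Dictionary step: least-squares fit to the current codes, then renormalize atoms.
                D = np.linalg.lstsq(S, X, rcond=None)[0]
                D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-12
            return D, S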

    Sparse MDMO: learning a discriminative feature for micro-expression recognition

    Micro-expressions are rapid movements of facial muscles that can reveal concealed emotions. Recognizing them from video clips has a wide range of applications and has received increasing attention recently. Among existing methods, the main directional mean optical-flow (MDMO) feature achieves state-of-the-art performance for recognizing spontaneous micro-expressions. For a video clip, the MDMO feature is computed by averaging a set of atomic features frame by frame. Despite its simplicity, the averaging operation in MDMO can easily lose the underlying manifold structure inherent in the feature space. In this paper we propose a sparse MDMO feature that learns an effective dictionary from a micro-expression video dataset. In particular, a new distance metric is proposed based on the sparsity of sample points in the MDMO feature space, which can efficiently reveal the underlying manifold structure. The proposed sparse MDMO feature is obtained by incorporating this new metric into the classic graph regularized sparse coding (GraphSC) scheme. We evaluate sparse MDMO and four representative features (LBP-TOP, STCLQP, MDMO and FDM) on three spontaneous micro-expression datasets (SMIC, CASME and CASME II). The results show that sparse MDMO outperforms these representative features.
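    For readers who want to prototype the pipeline above, the sketch below re-encodes per-clip MDMO-style feature vectors with plain l1 sparse coding from scikit-learn and classifies the resulting codes. It stands in for GraphSC only loosely, since the graph-Laplacian regularizer and the MDMO extraction itself are omitted; all names and parameters here are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning, sparse_encode
        from sklearn.svm import LinearSVC

        def sparse_recode(X_train, X_test, n_atoms=128, alpha=1.0):
            # X_train, X_test: (n_clips, n_features) per-clip feature vectors,
            # e.g. averaged directional optical-flow histograms.
            # Learn a dictionary on the training split and sparsely re-encode both splits.
            dico = DictionaryLearning(n_components=n_atoms, alpha=alpha, max_iter=200, random_state=0)
            dico.fit(X_train)
            D = dico.components_
            S_train = sparse_encode(X_train, D, alpha=alpha)
            S_test = sparse_encode(X_test, D, alpha=alpha)
            return S_train, S_test

        def evaluate(X_train, y_train, X_test, y_test):
            # Classify the sparse codes with a linear SVM and report test accuracy.
            S_train, S_test = sparse_recode(X_train, X_test)
            clf = LinearSVC().fit(S_train, y_train)
            return clf.score(S_test, y_test)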

    Emergence of Invariance and Disentanglement in Deep Representations

    Using established principles from Statistics and Information Theory, we show that invariance to nuisance factors in a deep neural network is equivalent to information minimality of the learned representation, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. We then decompose the cross-entropy loss used during training and highlight the presence of an inherent overfitting term. We propose regularizing the loss by bounding such a term in two equivalent ways: one with a Kullback-Leibler term, which relates to a PAC-Bayes perspective; the other using the information in the weights as a measure of complexity of a learned model, yielding a novel Information Bottleneck for the weights. Finally, we show that invariance and independence of the components of the representation learned by the network are bounded above and below by the information in the weights, and therefore are implicitly optimized during training. The theory enables us to quantify and predict sharp phase transitions between underfitting and overfitting of random labels when using our regularized loss, which we verify in experiments, and sheds light on the relation between the geometry of the loss function, invariance properties of the learned representation, and generalization error.
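    The regularization idea summarized above can be approximated by a cross-entropy objective plus a KL penalty on a factorized Gaussian posterior over the weights. The PyTorch-style snippet below is a hypothetical sketch of such a loss with a standard-normal prior and a hand-picked beta, not the exact bound derived in the paper.

        import torch
        import torch.nn.functional as F

        def regularized_loss(logits, targets, w_mu, w_logvar, beta=1e-3):
            # logits: (batch, n_classes) network outputs; targets: (batch,) integer class labels.
            # w_mu, w_logvar: parameters of a factorized Gaussian posterior q(w) = N(w_mu, exp(w_logvar)).
            ce = F.cross_entropy(logits, targets)
            # KL(q(w) || N(0, I)) serves as a proxy for the information stored in the weights.
            kl = 0.5 * torch.sum(w_mu.pow(2) + w_logvar.exp() - w_logvar - 1.0)
            return ce + beta * kl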