3,312 research outputs found

    Effective Auto Encoder For Unsupervised Sparse Representation

    High dimensionality and the sheer size of the unlabeled data available today demand new developments in unsupervised learning of sparse representations. Despite recent advances in representation learning, most current methods are limited when dealing with large-scale unlabeled data. In this study, we propose a new unsupervised method that learns sparse representations from unlabeled data efficiently. We derive a closed-form solution based on sequential minimal optimization (SMO) for training an auto encoder-decoder module, which efficiently extracts sparse and compact features from data sets of various sizes. The inference process in the proposed learning algorithm does not require any expensive Hessian computation to solve the underlying optimization problems. Decomposing the non-convex optimization problem in our model enables us to solve each sub-problem analytically. Using several image datasets, including CIFAR-10, CALTECH-101, and the AR face database, we demonstrate the method's effectiveness in terms of computation time and classification accuracy. The proposed method discovers dictionaries that capture low-level features in larger-dimensional patches in considerably less execution time than the alternatives. Detailed experimental results then show that our module outperforms similar single-layer state-of-the-art methods, including Sparse Filtering and K-Means clustering.
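
    As a minimal, illustrative sketch of the general setting above (not the SMO-based closed-form training the abstract describes), the NumPy snippet below learns a sparse code from unlabeled patches with a tied-weight auto encoder-decoder and an L1 penalty on the hidden activations, trained by plain gradient descent; all sizes and hyperparameters are arbitrary assumptions.

        # Illustrative sketch only: a tied-weight sparse autoencoder trained by
        # gradient descent with an L1 penalty on the hidden code. This is NOT the
        # SMO-based closed-form procedure from the abstract; sizes and learning
        # rates are arbitrary assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        def relu(x):
            return np.maximum(x, 0.0)

        def train_sparse_autoencoder(X, n_hidden=64, lam=0.1, lr=1e-2, epochs=50):
            """X: (n_samples, n_features) zero-mean unlabeled patches."""
            W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))  # tied encoder/decoder weights
            for _ in range(epochs):
                H = relu(X @ W)                              # sparse hidden code (encoder)
                X_hat = H @ W.T                              # reconstruction (decoder)
                R = X_hat - X                                # reconstruction error
                dA = ((R @ W) + lam * np.sign(H)) * (H > 0)  # gradient through the encoder
                W -= lr * (X.T @ dA + R.T @ H) / X.shape[0]  # encoder + decoder terms
            return W

        # Toy usage on random "patches"; in practice these would be whitened image patches.
        X = rng.normal(size=(500, 64))
        X -= X.mean(axis=0)
        W = train_sparse_autoencoder(X)
        codes = relu(X @ W)
        print("fraction of active hidden units:", float((codes > 0).mean()))

    The L1 term pushes most hidden activations to zero, which is the sense in which the learned code is sparse; in this sketch the columns of W play the role of the dictionary atoms mentioned in the abstract.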

    Whole brain Probabilistic Generative Model toward Realizing Cognitive Architecture for Developmental Robots

    Building a human-like integrative artificial cognitive system, that is, an artificial general intelligence, is one of the goals of artificial intelligence and developmental robotics. Furthermore, a computational model that enables an artificial cognitive system to achieve cognitive development will be an excellent reference for brain and cognitive science. This paper describes the development of a cognitive architecture using probabilistic generative models (PGMs) to fully mirror the human cognitive system. The integrative model is called a whole-brain PGM (WB-PGM). It is both brain-inspired and PGM-based. In this paper, the process of building the WB-PGM and of learning from the human brain to build cognitive architectures is described.
    Comment: 55 pages, 8 figures, submitted to Neural Network

    Book reports


    Perceptual Consciousness and Cognitive Access from the Perspective of Capacity-Unlimited Working Memory

    Theories of consciousness divide over whether perceptual consciousness is rich or sparse in specific representational content and whether it requires cognitive access. These two issues are often treated in tandem because of a shared assumption that the representational capacity of cognitive access is fairly limited. Recent research on working memory challenges this shared assumption. This paper argues that abandoning the assumption undermines post-cue-based “overflow” arguments, according to which perceptual consciousness is rich and does not require cognitive access. Abandoning it also dissociates the rich/sparse debate from the access question. The paper then explores attempts to reformulate overflow theses in ways that do not require the assumption of limited capacity. Finally, it discusses the problem of relating seemingly non-probabilistic perceptual consciousness to the probabilistic representations posited by the models that challenge conceptions of cognitive access as capacity-limited.

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
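
    As a toy illustration of the cycle this passage refers to (a prediction sent down, a prediction error sent up, and an internal model updated to reduce that error), and not a model taken from Clark or the commentary, the sketch below infers a hidden cause of a sensory input by iteratively shrinking the prediction error; the generative mapping G and the step size are arbitrary assumptions.

        # Toy predictive-coding loop: top-down prediction, bottom-up prediction
        # error, and an internal-estimate update that reduces the error.
        # The generative mapping G and the step size are arbitrary assumptions.
        import numpy as np

        rng = np.random.default_rng(1)

        G = rng.normal(size=(10, 3))                     # assumed generative (top-down) mapping
        true_cause = np.array([1.0, -0.5, 2.0])
        x = G @ true_cause + 0.05 * rng.normal(size=10)  # noisy sensory input

        mu = np.zeros(3)                                 # internal estimate of the hidden cause
        for _ in range(300):
            prediction = G @ mu                          # feedback: predicted input
            error = x - prediction                       # feedforward: prediction error
            mu += 0.02 * (G.T @ error)                   # revise the internal model to reduce error
        print("estimated cause:", np.round(mu, 2))
        print("true cause     :", true_cause)

    Each pass through the loop mirrors the three elements named above: the prediction, the prediction error it produces, and the internal model that is revised so that future predictions fit the input better.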