17 research outputs found

    Learning probabilistic neural representations with randomly connected circuits

    The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be implemented using biologically realistic mechanisms. Inspired by the sparse and random connectivity of real neuronal circuits, we present a model for neural codes that accurately estimates the likelihood of individual spiking patterns and has a straightforward, scalable, efficient, learnable, and realistic neural implementation. This model’s performance on simultaneously recorded spiking activity of >100 neurons in the monkey visual and prefrontal cortices is comparable with or better than that of state-of-the-art models. Importantly, the model can be learned using a small number of samples and using a local learning rule that utilizes noise intrinsic to neural circuits. Slower, structural changes in random connectivity, consistent with rewiring and pruning processes, further improve the efficiency and sparseness of the resulting neural representations. Our results merge insights from neuroanatomy, machine learning, and theoretical neuroscience to suggest random sparse connectivity as a key design principle for neuronal computation.
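
    As a concrete illustration of the kind of model the abstract describes (not the authors' implementation), the Python sketch below scores binary spike words with an unnormalized log-likelihood built from threshold features of sparse random projections; the feature weights are fitted with a simple noise-contrastive update as a stand-in for the paper's local, noise-driven learning rule. All sizes, thresholds, and the toy data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_projections, sparsity = 100, 200, 5

# Sparse random connectivity: each hidden unit reads a handful of random neurons.
A = np.zeros((n_projections, n_neurons))
for i in range(n_projections):
    A[i, rng.choice(n_neurons, size=sparsity, replace=False)] = 1.0
thresholds = rng.integers(1, sparsity, size=n_projections)

def features(X):
    """Binary features: does each sparse random projection cross its threshold?"""
    return (X @ A.T >= thresholds).astype(float)

def log_score(X, lam):
    """Unnormalized log-likelihood (negative surprise) of spike words X."""
    return features(X) @ lam

# Toy data: spike words with heterogeneous per-neuron firing rates; surrogate
# "noise" words shuffle neuron identities within each pattern, destroying
# which-neuron structure while preserving the population spike count.
rates = rng.uniform(0.02, 0.3, size=n_neurons)
data = (rng.random((5000, n_neurons)) < rates).astype(float)
noise = rng.permuted(data, axis=1)

# Noise-contrastive fit of feature weights (logistic discrimination of data
# versus shuffled surrogates), used here in place of the biological rule.
lam, lr = np.zeros(n_projections), 0.5
for step in range(200):
    p_data = 1.0 / (1.0 + np.exp(-features(data) @ lam))
    p_noise = 1.0 / (1.0 + np.exp(-features(noise) @ lam))
    grad = features(data).T @ (1.0 - p_data) - features(noise).T @ p_noise
    lam += lr * grad / len(data)

print("mean log-score, data :", log_score(data[:500], lam).mean())
print("mean log-score, noise:", log_score(noise[:500], lam).mean())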

    Extreme Data Mining: Inference from Small Datasets

    Neural networks have been applied successfully in many fields. However, satisfactory results are usually obtained only when large training samples are available. With small training sets, performance may degrade, or the learning task may not be accomplished at all. This deficiency severely limits the applications of neural networks. The main reason small datasets cannot provide enough information is that gaps exist between samples, and even the domain of the samples cannot be established reliably. Several computational intelligence techniques have been proposed to overcome the limits of learning from small datasets. We have the following goals: (i) to discuss the meaning of "small" in the context of inferring from small datasets; (ii) to overview computational intelligence solutions for this problem; (iii) to illustrate the introduced concepts with a real-life application.
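
    One family of computational intelligence techniques commonly proposed for this setting is virtual sample generation, i.e. filling the gaps between samples with synthetic points before training. The sketch below is an illustrative assumption of how such a generator might look (interpolation between random sample pairs plus small Gaussian noise); it is not the specific method surveyed in the paper.

import numpy as np

def virtual_samples(X, y, n_new, noise_scale=0.05, seed=0):
    """Create n_new synthetic (x, y) pairs by interpolating random pairs of
    real samples and adding small Gaussian noise scaled to each feature."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), size=n_new)
    j = rng.integers(0, len(X), size=n_new)
    w = rng.random((n_new, 1))                      # interpolation weights
    X_new = w * X[i] + (1 - w) * X[j]
    X_new += noise_scale * X.std(axis=0) * rng.standard_normal(X_new.shape)
    y_new = w[:, 0] * y[i] + (1 - w[:, 0]) * y[j]
    return X_new, y_new

# Usage: a 10-sample dataset padded with 90 virtual samples before training.
X = np.random.rand(10, 3)
y = X.sum(axis=1)
X_aug, y_aug = virtual_samples(X, y, n_new=90)
X_train = np.vstack([X, X_aug])
y_train = np.concatenate([y, y_aug])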

    Combining additive input noise annealing and pattern transformations for improved handwritten character recognition

    Two problems that burden the learning process of Artificial Neural Networks trained with Back Propagation are the need to build a full and representative learning data set, and the avoidance of stalling in local minima. Both problems appear to be closely related when working with the handwritten digits contained in the MNIST dataset. Using a modestly sized ANN, the proposed combination of input data transformations achieves a test error as low as 0.43%, which is on par with more complex neural architectures such as Convolutional or Deep Neural Networks. © 2014 Elsevier Ltd. All rights reserved. The research reported here was supported by the Spanish MICINN under projects TRA2010-20225-C03-01, TRA 2011-29454-C03-02, and TRA 2011-29454-C03-03.
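
    The two ingredients named in the title can be sketched as follows; the noise schedule, transformation magnitudes, and batch handling are assumptions for illustration, not the paper's exact configuration: additive input noise whose amplitude is annealed toward zero over training, combined with small random pattern transformations (shift and rotation) of each digit image.

import numpy as np
from scipy.ndimage import rotate, shift

def transform_pattern(img, rng, max_shift=2, max_angle=10.0):
    """Random small rotation and translation of a 28x28 digit image."""
    img = rotate(img, rng.uniform(-max_angle, max_angle), reshape=False, order=1)
    return shift(img, rng.uniform(-max_shift, max_shift, size=2), order=1)

def noisy_batch(images, epoch, n_epochs, sigma0=0.3, rng=None):
    """Additive input noise annealing: noise amplitude decays linearly to zero."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = sigma0 * (1.0 - epoch / n_epochs)
    batch = np.stack([transform_pattern(im, rng) for im in images])
    return batch + sigma * rng.standard_normal(batch.shape)

# Usage inside an ordinary back-propagation training loop (network omitted):
rng = np.random.default_rng(0)
images = rng.random((32, 28, 28))          # stand-in for an MNIST mini-batch
for epoch in range(10):
    x = noisy_batch(images, epoch, n_epochs=10, rng=rng)
    # forward/backward pass of the ANN would go here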

    Data denoising procedure for neural network performance improvement

    This paper presents a training data denoising procedure for neural network performance improvement. Performance improvement is measured by an evaluation criterion based on the training estimation error and a signal strength factor. The strength factor is obtained by applying a denoising method to a default training signal. The method is based on a noise removal procedure performed on the original signal, as defined by the proposed algorithm. Ten different processed signals are obtained by applying the method to the default noisy signal. These signals are then used as training data for the learning phase of a nonlinear autoregressive neural network. Empirical comparisons made at the end show that the proposed denoising procedure is an effective way to improve network performance when the training set contains a significant noise component.
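
    Since the abstract does not specify the denoising algorithm, the sketch below uses a moving-average filter at ten different window lengths as a stand-in for the ten processed signals, which are then converted into lagged input/target pairs for a nonlinear autoregressive network; all details are illustrative assumptions.

import numpy as np

def moving_average(signal, window):
    """Simple denoising stand-in: centered moving average of a 1-D signal."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def nar_training_pairs(signal, n_lags=5):
    """Lagged inputs x[t-5..t-1] and target x[t] for autoregressive training."""
    X = np.array([signal[t - n_lags:t] for t in range(n_lags, len(signal))])
    y = signal[n_lags:]
    return X, y

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 2000)
noisy = np.sin(t) + 0.3 * rng.standard_normal(t.size)   # default noisy signal

# Ten processed versions of the default signal, one per smoothing strength.
processed = [moving_average(noisy, w) for w in range(3, 23, 2)]
X = np.vstack([nar_training_pairs(s)[0] for s in processed])
y = np.concatenate([nar_training_pairs(s)[1] for s in processed])
# X, y would now feed the learning phase of the nonlinear autoregressive network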