
    Slow feature analysis yields a rich repertoire of complex cell properties

    In this study, we investigate temporal slowness as a learning principle for receptive fields using slow feature analysis, a new algorithm to determine functions that extract slowly varying signals from the input data. We find that the learned functions, trained on image sequences, develop many properties also found experimentally in complex cells of primary visual cortex, such as direction selectivity, non-orthogonal inhibition, end-inhibition, and side-inhibition. Our results demonstrate that a single unsupervised learning principle can account for such a rich repertoire of receptive field properties.
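As a rough illustration of the slowness principle (not the paper's full method, which uses a nonlinear expansion on image sequences), a linear SFA step can be sketched as follows; all variable names are illustrative:

```python
import numpy as np

def linear_sfa(X, n_components=2):
    """Minimal linear slow feature analysis sketch.

    X: (T, d) multivariate time series. Returns the n_components
    output signals that vary most slowly over time.
    """
    X = X - X.mean(axis=0)
    # Whiten the input so all directions have unit variance.
    cov = X.T @ X / len(X)
    eigval, eigvec = np.linalg.eigh(cov)
    Z = X @ (eigvec / np.sqrt(eigval))
    # Covariance of temporal differences: slow directions have
    # small variance in the differenced signal.
    dZ = np.diff(Z, axis=0)
    dcov = dZ.T @ dZ / len(dZ)
    dval, dvec = np.linalg.eigh(dcov)
    # eigh sorts eigenvalues ascending, so the leading columns
    # correspond to the slowest features.
    return Z @ dvec[:, :n_components]
```

On a mixture of a slow and a fast sinusoid, the first output recovers the slow component up to sign and scale.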

    Learning Hybrid System Models for Supervisory Decoding of Discrete State, with applications to the Parietal Reach Region

    Based on Gibbs sampling, a novel method to identify mathematical models of neural activity in response to temporal changes of behavioral or cognitive state is presented. This work is motivated by the developing field of neural prosthetics, where a supervisory controller is required to classify activity of a brain region into suitable discrete modes. Here, neural activity in each discrete mode is modeled with nonstationary point processes, and transitions between modes are modeled as hidden Markov models. The effectiveness of this framework is first demonstrated on a simulated example. The identification algorithm is then applied to extracellular neural activity recorded from multi-electrode arrays in the parietal reach region of a rhesus monkey, and the results demonstrate the ability to decode discrete changes even from small data sets.
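The decoding side of such a framework can be sketched with a simplified model: Poisson spike counts per mode and Markov transitions between modes, decoded by Viterbi. This is a hypothetical reduction (the paper uses nonstationary point processes and Gibbs-sampling identification); `rates`, `trans`, and `init` are assumed given:

```python
import numpy as np

def viterbi_poisson(counts, rates, trans, init):
    """Decode the most likely discrete-mode sequence from spike counts.

    counts: (T,) spike counts per bin; rates: (K,) Poisson rate per mode;
    trans: (K, K) mode transition matrix; init: (K,) initial distribution.
    """
    T, K = len(counts), len(rates)
    # Log Poisson likelihood per bin and mode (dropping the factorial term,
    # which is constant across modes).
    loglik = counts[:, None] * np.log(rates)[None, :] - rates[None, :]
    logv = np.log(init) + loglik[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logv[:, None] + np.log(trans)  # (from, to)
        back[t] = scores.argmax(axis=0)
        logv = scores.max(axis=0) + loglik[t]
    # Backtrack the best path.
    path = np.zeros(T, dtype=int)
    path[-1] = logv.argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```

With sticky transitions, a jump in firing rate is decoded as a single mode switch rather than bin-by-bin flicker.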

    Training Probabilistic Spiking Neural Networks with First-to-spike Decoding

    Third-generation neural networks, or Spiking Neural Networks (SNNs), aim at harnessing the energy efficiency of spike-domain processing by building on computing elements that operate on, and exchange, spikes. In this paper, the problem of training a two-layer SNN is studied for the purpose of classification, under a Generalized Linear Model (GLM) probabilistic neural model that was previously considered within the computational neuroscience literature. Conventional classification rules for SNNs operate offline based on the number of output spikes at each output neuron. In contrast, a novel training method is proposed here for a first-to-spike decoding rule, whereby the SNN can perform an early classification decision once spike firing is detected at an output neuron. Numerical results bring insights into the optimal parameter selection for the GLM neuron and on the accuracy-complexity trade-off performance of conventional and first-to-spike decoding.

    Comment: A shorter version will be published in Proc. IEEE ICASSP 201
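The contrast between the two decoding rules can be sketched on binary output spike trains. This is an illustrative reduction, not the paper's GLM training method: `rate_decode` is the conventional count-based rule, while `first_to_spike` stops at the earliest output spike:

```python
import numpy as np

def rate_decode(out_spikes):
    """Conventional rule: class with the most output spikes over the
    whole window. out_spikes: (T, C) binary matrix."""
    return int(out_spikes.sum(axis=0).argmax())

def first_to_spike(out_spikes):
    """First-to-spike rule: class of the earliest output spike,
    enabling an early decision. Returns (class, time), or (None, None)
    if no output neuron fired. Ties favor the lowest class index."""
    times, classes = np.nonzero(out_spikes)  # row-major: sorted by time
    if len(times) == 0:
        return None, None
    return int(classes[0]), int(times[0])
```

The accuracy-complexity trade-off is visible here: `first_to_spike` can answer after the first spike, while `rate_decode` must observe the full window.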

    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize the known results, and point to relevant references in the literature.