
    Biologically Plausible, Human-scale Knowledge Representation

    Several approaches to implementing symbol-like representations in neurally plausible models have been proposed, including binding through synchrony, mesh binding, and tensor product binding. Recent theoretical work has suggested that these methods will not scale well; that is, they cannot encode human-sized structured representations without making implausible resource assumptions. Here I present an approach, based on the Semantic Pointer Architecture, that does scale appropriately. Specifically, I construct a spiking neural network composed of about 2.5 million neurons that employs semantic pointers to encode and decode the main lexical relations in WordNet, a semantic network containing over 117,000 concepts. I demonstrate the model's capabilities experimentally by measuring its performance on three tasks that test its ability to accurately traverse the WordNet hierarchy, as well as its ability to decode sentences involving WordNet concepts. I argue that these results show this approach to be uniquely well suited to providing a biologically plausible account of the structured representations that underwrite human cognition. I conclude with an investigation of how the connection weights in this spiking neural network can be learned online through biologically plausible learning rules.
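The encoding scheme summarized above rests on the core operation of the Semantic Pointer Architecture: binding vectors with circular convolution. The sketch below is a non-spiking, NumPy-level illustration only; the dimensionality, the example WordNet relation, and the variable names are illustrative assumptions, not details taken from the model.

```python
# Minimal sketch of semantic-pointer binding via circular convolution.
# Dimensionality and the example relation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 512  # semantic pointer dimensionality (illustrative)

def normalize(v):
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution, computed as an element-wise product
    # in the Fourier domain.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Approximate inverse: bind with the involution of a
    # (first element kept, remainder reversed).
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

# Encode one lexical relation for "dog": dog IS-A canine.
is_a = normalize(rng.standard_normal(D))
canine = normalize(rng.standard_normal(D))
dog_pointer = bind(is_a, canine)

# Unbinding the IS-A role recovers a noisy version of "canine".
decoded = unbind(dog_pointer, is_a)
similarity = float(decoded @ canine)  # high for random high-dimensional vectors
```

In the full spiking model, such pointers are represented by neural populations, and a clean-up memory maps the noisy decoded vector back to the nearest stored concept vector.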

    A Novel Predictive-Coding-Inspired Variational RNN Model for Online Prediction and Recognition

    This study introduces PV-RNN, a novel variational RNN inspired by predictive-coding ideas. The model learns to extract the probabilistic structures hidden in fluctuating temporal patterns by dynamically changing the stochasticity of its latent states. Its architecture attempts to address two major concerns of variational Bayes RNNs: how latent variables can learn meaningful representations, and how the inference model can transfer future observations to the latent variables. PV-RNN does both by introducing adaptive vectors mirroring the training data, whose values can then be adapted differently during evaluation. Moreover, prediction errors during backpropagation, rather than external inputs during the forward computation, are used to convey information about the external data to the network. For testing, we introduce error regression for predicting unseen sequences, inspired by predictive coding, which leverages those mechanisms. The model introduces a weighting parameter, the meta-prior, to balance the optimization pressure placed on the two terms of a lower bound on the marginal likelihood of the sequential data. We test the model on two datasets with probabilistic structures and show that with high values of the meta-prior the network develops deterministic chaos through which the data's randomness is imitated, whereas for low values the model behaves as a random process. The network performs best at intermediate values, capturing the latent probabilistic structure with good generalization. Analyzing the meta-prior's impact on the network allows us to study precisely the theoretical value and practical benefits of incorporating stochastic dynamics in our model. We demonstrate better prediction performance on a robot imitation task with our model using error regression than with a standard variational Bayes model lacking such a procedure. Comment: The paper has been accepted in Neural Computation.
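The meta-prior described above can be sketched as a weight on the KL term of the (negative) evidence lower bound. The snippet below is a schematic illustration assuming Gaussian latent states; the function and variable names are assumptions for exposition, not PV-RNN's actual implementation.

```python
# Schematic of a meta-prior-weighted negative ELBO for Gaussian latents.
# All names and shapes are illustrative, not PV-RNN's actual code.
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    # KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ), summed over dimensions.
    return float(np.sum(np.log(sigma_p / sigma_q)
                        + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
                        - 0.5))

def weighted_negative_elbo(recon_error, mu_q, sigma_q, mu_p, sigma_p, meta_prior):
    # The meta-prior rebalances the two terms of the lower bound:
    # a large value presses the posterior toward the prior (more
    # deterministic dynamics), while a small value lets the latent
    # states absorb the data's randomness.
    return recon_error + meta_prior * gaussian_kl(mu_q, sigma_q, mu_p, sigma_p)

# Same posterior/prior pair, evaluated under two meta-prior settings.
mu_q, sigma_q = np.zeros(4), np.full(4, 0.5)
mu_p, sigma_p = np.zeros(4), np.ones(4)
loss_high_w = weighted_negative_elbo(1.0, mu_q, sigma_q, mu_p, sigma_p, meta_prior=2.0)
loss_low_w = weighted_negative_elbo(1.0, mu_q, sigma_q, mu_p, sigma_p, meta_prior=0.1)
```

With the reconstruction error held fixed, raising the meta-prior increases the penalty on posterior-prior divergence, which is the optimization pressure the abstract describes.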

    Predictive processing and mental representation

    According to some (e.g. Friston, 2010), predictive processing (PP) models of cognition have the potential to offer a grand unifying theory of cognition. The framework defines a flexible architecture governed by one simple principle – minimise error. The process of Bayesian inference used to achieve this goal results in an ongoing flow of prediction that both makes sense of perception and unifies it with action. Such a provocative and appealing theory has naturally caused ripples in philosophical circles, prompting several commentaries (e.g. Hohwy, 2012; Clark, 2016). This thesis tackles one outstanding philosophical problem in relation to PP – the question of mental representation. In attempting to understand the nature of mental representations in PP systems, I touch on several contentious points in the philosophy of cognitive science, including the explanatory power of mechanisms vs. dynamics, the internalism vs. externalism debate, and the knotty problem of proper biological function. Exploring these issues enables me to offer a speculative solution to the question of mental representation in PP systems, with further implications for understanding mental representation in a broader context. The result is a conception of mind that is deeply continuous with life, together with an explanation of how normativity emerges in certain classes of self-maintaining systems, of which cognitive systems are a subset. We discover the possibility of a harmonious union between mechanics and dynamics necessary for making sense of PP systems, each playing an indispensable role in our understanding of their internal representations.