83 research outputs found

    Synaptic Theory of Replicator-Like Melioration

    According to the theory of Melioration, organisms in repeated choice settings shift their choice preference in favor of the alternative that provides the highest return. The goal of this paper is to explain how this learning behavior can emerge from microscopic changes in the efficacies of synapses, in the context of a two-alternative repeated-choice experiment. I consider a large family of synaptic plasticity rules in which changes in synaptic efficacies are driven by the covariance between reward and neural activity. I construct a general framework that predicts the learning dynamics of any decision-making neural network that implements this synaptic plasticity rule and show that melioration naturally emerges in such networks. Moreover, the resultant learning dynamics follows the Replicator equation, which is commonly used to phenomenologically describe changes in behavior in operant conditioning experiments. Several examples demonstrate how the learning rate of the network is affected by its properties and by the specifics of the plasticity rule. These results help bridge the gap between cellular physiology and learning behavior.
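
    As a rough sketch in my own notation (the symbols W_i, N_i, R, p_A and the rate factor \eta are assumptions, not taken from the paper), a covariance-driven plasticity rule and the replicator dynamics it is said to produce can be written as:

        % Covariance-driven plasticity: the change in efficacy W_i of synapse i
        % is proportional to the covariance of the reward R and the activity N_i
        \Delta W_i \;\propto\; \mathrm{Cov}(R, N_i) \;=\; \mathbb{E}\!\left[(R - \bar{R})\,(N_i - \bar{N}_i)\right]

        % Replicator dynamics for the probability p_A of choosing alternative A,
        % where r_A is the return of A and \bar{r} is the mean return over alternatives
        \frac{dp_A}{dt} \;=\; \eta\, p_A\, (r_A - \bar{r})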

    Naive Few-Shot Learning: Sequence Consistency Evaluation

    Cognitive psychologists often use the term fluid intelligence to describe the ability of humans to solve novel tasks without any prior training. In contrast to humans, deep neural networks can perform cognitive tasks only after extensive (pre-)training with a large number of relevant examples. Motivated by fluid intelligence research in the cognitive sciences, we built a benchmark task, which we call sequence consistency evaluation (SCE), that can be used to address this gap. Solving the SCE task requires the ability to extract simple rules from sequences, a basic computation that, in humans, is required for solving various intelligence tests. We tested untrained (naive) deep learning models on the SCE task. Specifically, we tested two networks that can learn latent relations, Relation Networks (RN) and Contrastive Predictive Coding (CPC). We found that the latter, which imposes a causal structure on the latent relations, performs better. We then show that naive few-shot learning of sequences can be successfully used for anomaly detection in two different tasks, visual and auditory, without any prior training.
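
    Purely as an illustration (my own minimal construction, not the authors' benchmark or code; the embeddings, the linear predictor W, and the scoring function are all assumptions), a CPC-flavored consistency score for a sequence could look like the following sketch, where a summary of the past elements predicts the next one and the score is the log-probability of the true continuation against distractors:

        # Minimal sketch of an InfoNCE-style sequence-consistency score.
        # A context summary of the past elements is mapped by an (untrained)
        # linear predictor to a guess for the next embedding; the score is the
        # log-probability that the true continuation beats the distractors.
        import numpy as np

        rng = np.random.default_rng(0)

        def consistency_score(seq_emb, distractor_emb, W):
            """seq_emb: (T, d) ordered element embeddings; distractor_emb: (K, d)
            candidate wrong continuations; W: (d, d) prediction matrix."""
            context = seq_emb[:-1].mean(axis=0)            # summary of the past
            pred = W @ context                             # predicted next embedding
            candidates = np.vstack([seq_emb[-1], distractor_emb])
            logits = candidates @ pred                     # similarity to the prediction
            logits -= logits.max()                         # numerical stability
            probs = np.exp(logits) / np.exp(logits).sum()
            return np.log(probs[0])                        # log P(true continuation)

        d = 16
        W = rng.standard_normal((d, d)) / np.sqrt(d)       # naive, i.e. untrained, weights
        smooth_seq = np.cumsum(0.1 * rng.standard_normal((6, d)), axis=0)
        distractors = rng.standard_normal((4, d))
        print(consistency_score(smooth_seq, distractors, W))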

    Do retinal neurons also represent somatosensory inputs? On why neuronal responses are not sufficient to determine what neurons do

    How does neuronal activity give rise to cognitive capacities? To address this question, neuroscientists hypothesize about what neurons ‘represent’, ‘encode’, or ‘compute’, and test these hypotheses empirically. This process is similar to the assessment of hypotheses in other fields of science and as such is subject to the same limitations and difficulties that have been discussed at length by philosophers of science. In this paper, we highlight an additional difficulty in the process of empirical assessment of hypotheses that is unique to the cognitive sciences. We argue that, unlike in other scientific fields, comparing hypotheses according to the extent to which they explain or predict empirical data can lead to absurd results. Other considerations, which are perhaps more subjective, must be taken into account. We focus on one such consideration, which is the purposeful function of the neurons as part of a biological system. We believe that progress in neuroscience critically depends on properly addressing this difficulty.

    Distinct Sources of Deterministic and Stochastic Components of Action Timing Decisions in Rodent Frontal Cortex

    The selection and timing of actions are subject to determinate influences such as sensory cues and internal state as well as to effectively stochastic variability. Although stochastic choice mechanisms are assumed by many theoretical models, their origin and mechanisms remain poorly understood. Here we investigated this issue by studying how neural circuits in the frontal cortex determine action timing in rats performing a waiting task. Electrophysiological recordings from two regions necessary for this behavior, medial prefrontal cortex (mPFC) and secondary motor cortex (M2), revealed an unexpected functional dissociation. Both areas encoded deterministic biases in action timing, but only M2 neurons reflected stochastic trial-by-trial fluctuations. This differential coding was reflected in distinct timescales of neural dynamics in the two frontal cortical areas. These results suggest a two-stage model in which stochastic components of action timing decisions are injected by circuits downstream of those carrying deterministic bias signals.
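
    To make the suggested two-stage picture concrete, here is a toy simulation of my own (not the authors' model; the sinusoidal bias and all parameter values are arbitrary assumptions) in which an upstream stage carries a slowly varying deterministic bias and a downstream stage injects trial-by-trial noise before the waiting time is read out:

        # Toy two-stage model of action timing: a deterministic bias (upstream,
        # mPFC-like) plus trial-by-trial noise injected downstream (M2-like).
        import numpy as np

        rng = np.random.default_rng(1)
        n_trials = 1000

        bias = 3.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, n_trials))  # slow deterministic drift
        noise = rng.normal(0.0, 0.8, size=n_trials)                     # stochastic component
        wait_time = np.clip(bias + noise, 0.1, None)                    # waiting time (s)

        # The upstream signal tracks only the bias; the downstream signal
        # (bias + noise) also accounts for the trial-by-trial fluctuations.
        print(np.corrcoef(bias, wait_time)[0, 1], np.corrcoef(noise, wait_time)[0, 1])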

    Hippocampal neurons with stable excitatory connectivity become part of neuronal representations

    Experiences are represented in the brain by patterns of neuronal activity. Ensembles of neurons representing experience undergo activity-dependent plasticity and are important for learning and recall. They are thus considered cellular engrams of memory. Yet, the cellular events that bias neurons to become part of a neuronal representation are largely unknown. In rodents, turnover of structural connectivity has been proposed to underlie the turnover of neuronal representations and also to be a cellular mechanism defining the time duration for which memories are stored in the hippocampus. If these hypotheses are true, structural dynamics of connectivity should be involved in the formation of neuronal representations and concurrently important for learning and recall. To tackle these questions, we used deep-brain 2-photon (2P) time-lapse imaging in transgenic mice in which neurons expressing the Immediate Early Gene (IEG) Arc (activity-regulated cytoskeleton-associated protein) could be permanently labeled during a specific time window. This enabled us to investigate the dynamics of excitatory synaptic connectivity (using dendritic spines as proxies) of hippocampal CA1 (cornu ammonis 1) pyramidal neurons (PNs) that become part of neuronal representations, with Arc expression serving as the indicator of membership. We discovered that neurons that will prospectively express Arc have slower turnover of synaptic connectivity, suggesting that synaptic stability prior to experience can bias neurons to become part of representations or possibly engrams. We also found a negative correlation between the stability of structural synaptic connectivity and the ability to recall features of a hippocampal-dependent memory, which suggests that faster structural turnover in hippocampal CA1 might be functional for memory.
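
    For readers unfamiliar with how structural turnover is typically quantified in spine-imaging studies, the following sketch uses a commonly used turnover ratio and a Pearson correlation; it is my own illustration with placeholder numbers, not the authors' analysis, and their exact metric may differ:

        # Sketch of a commonly used spine turnover ratio between two imaging
        # sessions, and of correlating per-animal turnover with a recall score.
        import numpy as np

        def turnover_ratio(n_gained, n_lost, n_present_t1, n_present_t2):
            """Spines gained plus lost, normalized by the spines present in the two sessions."""
            return (n_gained + n_lost) / (n_present_t1 + n_present_t2)

        print(turnover_ratio(n_gained=12, n_lost=9, n_present_t1=110, n_present_t2=105))  # placeholder counts

        rng = np.random.default_rng(2)
        turnover = rng.uniform(0.1, 0.4, size=8)    # placeholder per-animal turnover ratios
        recall = rng.uniform(0.3, 0.9, size=8)      # placeholder per-animal recall scores
        print(np.corrcoef(turnover, recall)[0, 1])  # how such a correlation would be computed
                                                    # (values here are meaningless placeholders)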

    Robustness of Learning That Is Based on Covariance-Driven Synaptic Plasticity

    It is widely believed that learning is due, at least in part, to long-lasting modifications of the strengths of synapses in the brain. Theoretical studies have shown that a family of synaptic plasticity rules, in which synaptic changes are driven by covariance, is particularly useful for many forms of learning, including associative memory, gradient estimation, and operant conditioning. However, covariance-based plasticity is inherently sensitive to the tuning of its parameters: even a slight mistuning of a covariance-based plasticity rule is likely to result in substantial changes in synaptic efficacies. Therefore, the biological relevance of covariance-based plasticity models is questionable. Here, we study the effects of mistuning the parameters of the plasticity rule in a decision-making model in which synaptic plasticity is driven by the covariance of reward and neural activity. An exact covariance plasticity rule yields Herrnstein's matching law. We show that although the effect of slight mistuning of the plasticity rule on the synaptic efficacies is large, the behavioral effect is small. Thus, matching behavior is robust to mistuning of the parameters of the covariance-based plasticity rule. Furthermore, the mistuned covariance rule results in undermatching, which is consistent with experimentally observed behavior. These results substantiate the hypothesis that approximate covariance-based synaptic plasticity underlies operant conditioning. However, we show that mistuning of the mean subtraction makes behavior sensitive to changes in the properties of the decision-making network. Thus, there is a tradeoff between the robustness of matching behavior to changes in the plasticity rule and its robustness to changes in the properties of the decision-making network.
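
    In my own notation (not necessarily the paper's; the sensitivity exponent s, the bias term b, and the mistuning parameter \epsilon are conventions I am assuming here), matching, undermatching, and one way to parameterize an imperfect mean subtraction can be written as:

        % Herrnstein's matching law: choice fractions p_i match income fractions I_i
        \frac{p_1}{p_2} \;=\; \frac{I_1}{I_2}

        % Undermatching, in the generalized-matching form, corresponds to a
        % sensitivity exponent s < 1 (b is a bias term)
        \frac{p_1}{p_2} \;=\; b \left(\frac{I_1}{I_2}\right)^{s}, \qquad s < 1

        % Covariance rule with an imperfectly subtracted reward mean: \epsilon = 0
        % recovers the exact covariance rule, \epsilon \neq 0 is a mistuned rule
        \Delta W_i \;\propto\; \mathbb{E}\!\left[\bigl(R - (1-\epsilon)\,\bar{R}\bigr)\bigl(N_i - \bar{N}_i\bigr)\right]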

    Bayesian Inference Underlies the Contraction Bias in Delayed Comparison Tasks

    Delayed comparison tasks are widely used in the study of working memory and perception in psychology and neuroscience. It has long been known, however, that decisions in these tasks are biased. When the two stimuli in a delayed comparison trial are small in magnitude, subjects tend to report that the first stimulus is larger than the second stimulus. In contrast, subjects tend to report that the second stimulus is larger than the first when the stimuli are relatively large. Here we study the computational principles underlying this bias, also known as the contraction bias. We propose that the contraction bias results from a Bayesian computation in which a noisy representation of a magnitude is combined with a priori information about the distribution of magnitudes to optimize performance. We test our hypothesis on choice behavior in a visual delayed comparison experiment by studying the effect of (i) changing the prior distribution and (ii) changing the uncertainty in the memorized stimulus. We show that choice behavior in both manipulations is consistent with the performance of an observer who uses Bayesian inference to improve performance. Moreover, our results suggest that the contraction bias arises during memory retrieval/decision making and not during memory encoding. These results support the notion that the contraction bias illusion can be understood as resulting from optimality considerations.
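
    As a standard illustration of the proposed computation (my own notation and Gaussian assumptions, not necessarily the paper's model), combining a noisy memory of the first stimulus with a prior over stimulus magnitudes shrinks the estimate toward the prior mean, which produces the contraction bias:

        % Gaussian prior over magnitudes and a noisy memory of the first stimulus
        s \sim \mathcal{N}(\mu_p, \sigma_p^2), \qquad x_1 \mid s_1 \sim \mathcal{N}(s_1, \sigma_m^2)

        % Posterior-mean estimate: a weighted average of the memory and the prior mean
        \hat{s}_1 \;=\; \frac{\sigma_p^2\, x_1 + \sigma_m^2\, \mu_p}{\sigma_p^2 + \sigma_m^2}

        % The larger the memory noise \sigma_m^2, the stronger the pull toward \mu_p:
        % small magnitudes are overestimated and large magnitudes underestimated.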