Rule Learning by Habituation Can Be Simulated in Neural Networks
Contrary to a recent claim that neural network models are unable to account for data on infant habituation to artificial language sentences, the present simulations show successful coverage with cascade-correlation networks using analog encoding. The results demonstrate that a symbolic rule-based account is not required by the infant data.
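As a rough illustration of the approach this abstract describes, the sketch below trains a tiny encoder network on ABA-patterned triples built from an analog (continuous) syllable code and uses reconstruction error as a stand-in for infant attention. It is a minimal Python toy, not the paper's cascade-correlation model; the syllable values, network size, and learning settings are all invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical analog encoding: each syllable is a point on a continuous
# dimension (e.g. sonority), jittered slightly from token to token.
def syllable(center):
    return center + rng.normal(0.0, 0.02)

def sentence(pattern, a, b):
    # Build a three-syllable "sentence" such as ABA or ABB.
    return np.array([syllable(a if c == "A" else b) for c in pattern])

# Habituation set: ABA sentences over a small pool of familiar syllables.
train = np.array([sentence("ABA", a, b)
                  for a in (0.2, 0.4, 0.6) for b in (0.3, 0.5, 0.7)])

# Tiny 3-2-3 encoder network trained by plain gradient descent;
# reconstruction error stands in for attention / looking time.
W1 = rng.normal(0.0, 0.5, (3, 2))
W2 = rng.normal(0.0, 0.5, (2, 3))
for _ in range(10000):
    h = np.tanh(train @ W1)
    err = h @ W2 - train
    gW2 = h.T @ err / len(train)
    gW1 = train.T @ ((err @ W2.T) * (1 - h**2)) / len(train)
    W1 -= 0.2 * gW1
    W2 -= 0.2 * gW2

def surprise(x):
    return float(np.mean((np.tanh(x @ W1) @ W2 - x) ** 2))

# Novel syllables: the rule-consistent (ABA) item should be reconstructed
# more accurately than the rule-violating (ABB) item.
print("error, novel ABA:", surprise(sentence("ABA", 0.25, 0.65)))
print("error, novel ABB:", surprise(sentence("ABB", 0.25, 0.65)))

The point of the sketch is only that a network trained on analog encodings of ABA items can, in principle, treat a novel rule-violating ABB item as more surprising than a novel rule-consistent one, without any symbolic rule.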
Coupled feedback loops maintain synaptic long-term potentiation: A computational model of PKMzeta synthesis and AMPA receptor trafficking
In long-term potentiation (LTP), one of the most studied types of neural plasticity, synaptic strength is persistently increased in response to stimulation. Although a number of different proteins have been implicated in the sub-cellular molecular processes underlying induction and maintenance of LTP, the precise mechanisms remain unknown. A particular challenge is to demonstrate that a proposed molecular mechanism can provide the level of stability needed to maintain memories for months or longer, in spite of the fact that many of the participating molecules have much shorter life spans. Here we present a computational model that combines simulations of several biochemical reactions that have been suggested in the LTP literature and show that the resulting system does exhibit the required stability. At the core of the model are two interlinked feedback loops of molecular reactions, one involving the atypical protein kinase PKMζ and its messenger RNA, the other involving PKMζ and GluA2-containing AMPA receptors. We demonstrate that robust bistability, with stable equilibria in both the synapse's potentiated and unpotentiated states, can arise from a set of simple molecular reactions. The model is able to account for a wide range of empirical results, including induction and maintenance of late-phase LTP, cellular memory reconsolidation, and the effects of different pharmaceutical interventions.
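The following toy model illustrates the kind of bistability the abstract argues for, though it is not the paper's reaction scheme: two coupled variables, a PKMζ level p and a synaptic AMPA receptor level a, reinforce each other through a cooperative (Hill-type) term while both decay continuously. All rate constants and the induction stimulus are illustrative, not values fitted by the authors.

import numpy as np

def derivatives(p, a, stim):
    hill = p**2 / (1.0 + p**2)              # cooperative activation by PKMzeta
    dp = 0.01 + 1.0 * a - 0.4 * p + stim    # synthesis driven by the loop, minus turnover
    da = 0.5 * hill - 0.5 * a               # receptor level relaxes toward hill(p)
    return dp, da

def simulate(t_end=200.0, dt=0.01, stim_on=20.0, stim_off=25.0, stim_amp=2.0):
    p, a = 0.02, 0.0                         # start near the unpotentiated fixed point
    trace = []
    for i in range(int(t_end / dt)):
        t = i * dt
        stim = stim_amp if stim_on <= t < stim_off else 0.0
        dp, da = derivatives(p, a, stim)
        p, a = p + dp * dt, a + da * dt      # forward Euler integration
        trace.append((t, p, a))
    return np.array(trace)

trace = simulate()
pre = trace[trace[:, 0] < 20.0][-1]
post = trace[-1]
print("before induction: p=%.3f a=%.3f" % (pre[1], pre[2]))
print("long after      : p=%.3f a=%.3f" % (post[1], post[2]))

Run as is, the system sits near its low fixed point until the brief stimulus arrives, then settles into the high (potentiated) state and stays there despite continuous turnover of both species, which is the qualitative behavior the abstract describes.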
Outgroup Homogeneity Bias Causes Ingroup Favoritism
Ingroup favoritism, the tendency to favor ingroup over outgroup, is often explained as a product of intergroup conflict, or correlations between group tags and behavior. Such accounts assume that group membership is meaningful, whereas human data show that ingroup favoritism occurs even when it confers no advantage and groups are transparently arbitrary. Another possibility is that ingroup favoritism arises due to perceptual biases like outgroup homogeneity, the tendency for humans to have greater difficulty distinguishing outgroup members than ingroup ones. We present a prisoner's dilemma model, where individuals use Bayesian inference to learn how likely others are to cooperate, and then act rationally to maximize expected utility. We show that, when such individuals exhibit outgroup homogeneity bias, ingroup favoritism between arbitrary groups arises through direct reciprocity. However, this outcome may be mitigated by: (1) raising the benefits of cooperation, (2) increasing population diversity, and (3) imposing a more restrictive social structure.
Consonance Network Simulations of Arousal Phenomena in Cognitive Dissonance
The consonance constraint satisfaction model, recently used to simulate the major paradigms of cognitive dissonance theory, is extended to deal with emotional arousal phenomena in dissonance. The impact of arousing drugs is implemented in the simulations by a scalar that modulates the intensity of unit activations representing the relevant cognitions and the connection weights representing their implications. The simulations show that even exotic dissonance phenomena can be explained in terms of the relatively common process of constraint satisfaction.
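The sketch below is a generic constraint-satisfaction network in the spirit of the description above, not the published consonance model: cognitions are units, implications are signed weights, and activations settle so as to increase consonance. An arousal scalar multiplies both the weights and the units' net input, loosely mirroring how the abstract says arousing drugs are implemented; the scenario, weights, and saturating update rule are all illustrative choices.

import numpy as np

# Cognitions in an illustrative induced-compliance scenario:
# 0: "the task was enjoyable"         (attitude, free to change)
# 1: "I told someone it was fun"      (behaviour, clamped)
# 2: "I was paid only a small reward" (justification, clamped)
weights = np.array([[0.0,  0.5,  0.0],
                    [0.5,  0.0, -0.5],    # behaviour conflicts with poor justification
                    [0.0, -0.5,  0.0]])
clamped = np.array([False, True, True])

def settle(arousal=1.0, steps=30, rate=0.1):
    act = np.array([-0.3, 1.0, 1.0])         # mildly negative initial attitude
    w = arousal * weights                     # arousal scales implication strength...
    for _ in range(steps):
        net = arousal * (w @ act)             # ...and the intensity of the cognitions' input
        grow = act + rate * net * (1.0 - act)      # saturating move toward +1
        shrink = act + rate * net * (act + 1.0)    # saturating move toward -1
        new = np.where(net > 0, grow, shrink)
        act = np.where(clamped, act, new)
    return act

for arousal in (0.5, 1.0, 2.0):
    print("arousal %.1f -> final attitude %+.2f" % (arousal, settle(arousal)[0]))

With a fixed settling time, higher arousal produces a larger shift in the free attitude unit, which is the qualitative pattern one would want when simulating arousal effects on dissonance reduction.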
Computational Power and Realistic Cognitive Development
We explore the ability of a static connectionist algorithm to model children's acquisition of velocity, time, and distance concepts under architectures of different levels of computational power. Diagnosis of rules learned by networks indicated that static networks were either too powerful or too weak to capture the developmental course of children's concepts. Networks with too much power missed intermediate stages; those with too little power failed to reach terminal stages. These results were robust under a variety of learning parameter values. We argue that a generative connectionist algorithm provides a better model of development of these concepts by gradually increasing representational power.
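To make the rule-diagnosis idea concrete, here is a small Python sketch, not the paper's procedure or networks: a fixed ("static") backpropagation network is trained to infer velocity from distance and time, and at a few checkpoints its outputs over a sample of problems are correlated with candidate rules (v = d, v = d - t, v = d / t) to see which rule it currently approximates best. The architecture, training regime, and checkpoints are arbitrary choices for illustration, and the sketch is not claimed to reproduce the stage results described above.

import numpy as np

rng = np.random.default_rng(0)
d = rng.uniform(1.0, 5.0, 200)
t = rng.uniform(1.0, 5.0, 200)
X = np.column_stack([d, t])
y = (d / t).reshape(-1, 1)                 # correct integration rule as the training target

rules = {"v = d": d, "v = d - t": d - t, "v = d / t": d / t}

# Fixed two-layer network trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

def diagnose(pred):
    # Report which candidate rule the network's outputs correlate with most.
    corrs = {name: np.corrcoef(pred.ravel(), vals)[0, 1] for name, vals in rules.items()}
    return max(corrs, key=corrs.get), corrs

for epoch in range(3001):
    h, pred = forward(X)
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= 0.05 * gW2; b2 -= 0.05 * gb2; W1 -= 0.05 * gW1; b1 -= 0.05 * gb1
    if epoch in (0, 300, 3000):
        best, corrs = diagnose(forward(X)[1])
        print("epoch %4d: best-fitting rule %-9s  %s"
              % (epoch, best, {k: round(v, 2) for k, v in corrs.items()}))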