Study of classical conditioning in Aplysia through the implementation of computational models of its learning circuit
This is an Accepted Manuscript of an article published by Taylor & Francis in Journal of Experimental & Theoretical Artificial Intelligence on 04 Jul 2007, available online: http://www.tandfonline.com/DOI:10.1080/09528130601052177.
The learning phenomenon can be analysed at various levels, but in this paper we treat a specific paradigm of artificial intelligence, i.e. artificial neural networks (ANNs), whose main virtue is their capacity to seek unified and mutually satisfactory solutions relevant to biological and psychological models. Many of the procedures and methods proposed previously have used biological and/or psychological principles, models, and data; here, we focus on models which seek a greater degree of coherence. We therefore analyse and compare all aspects of the Gluck–Thompson and Hawkins ANN models. A multithread computer model is developed for analysis of these models in order to study simple learning phenomena in a marine invertebrate (Aplysia californica) and to check their applicability to research in psychology and neurobiology. The predictive capacities of the models differ significantly: the Hawkins model provides a better analysis of the behavioural repertory of Aplysia at both the associative and the non-associative learning level. The scope of the ANN modelling technique is broadened by integration with neurobiological and behavioural models of associative learning, allowing enhancement of some architectures and procedures that are currently in use.
A Neural Model of Biased Oscillations in Aplysia Head-Waving Behavior
A long-term bias in the exploratory head-waving behavior of Aplysia can be induced using bright lights as an aversive stimulus: coupling onset of the lights with head movements to one side results in a bias away from that side (Cook & Carew, 1986). This bias has been interpreted as a form of operant conditioning, and has previously been simulated with a neural network model based on associative synaptic facilitation (Raymond, Baxter, Buonomano, & Byrne, 1992). In this article we simulate the head-waving behavior using a recurrent gated dipole, a nonlinear dynamical neural model that has previously been used to explain various data including oscillatory behavior in biological pacemakers. Within the recurrent gated dipole, two channels operate antagonistically to generate oscillations, which drive the side-to-side head waving. The frequency of oscillations depends on transmitter mobilization dynamics, which exhibit both short- and long-term adaptation. We assume that light onset results in a nonspecific increase in arousal to both channels of the dipole. Repeated pairing of arousal increments with activation of one channel (the "reinforced" channel) of the dipole leads to a bias in transmitter dynamics, which causes the oscillation to last a shorter time on the reinforced channel than on the non-reinforced channel. Our model provides a parsimonious explanation of the observed behavior, and it avoids some of the unexpected results obtained with the Raymond et al. model. In addition, our model makes predictions concerning the rate of onset and extinction of the biases, and it suggests new lines of experimentation to test the nature of the head-waving behavior.
Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N0014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499); A.P. Sloan Foundation (BR-3122)
Event Timing in Associative Learning: From Biochemical Reaction Dynamics to Behavioural Observations
Associative learning relies on event timing. Fruit flies, for example, once trained with an odour that precedes electric shock, subsequently avoid this odour (punishment learning); if, on the other hand, the odour follows the shock during training, it is approached later on (relief learning). During training, an odour-induced Ca2+ signal and a shock-induced dopaminergic signal converge in the Kenyon cells, synergistically activating a Ca2+/calmodulin-sensitive adenylate cyclase, which likely leads to the synaptic plasticity underlying the conditioned avoidance of the odour. In Aplysia, the effect of serotonin on the corresponding adenylate cyclase is bi-directionally modulated by Ca2+, depending on the relative timing of the two inputs. Using a computational approach, we quantitatively explore this biochemical property of the adenylate cyclase and show that it can generate the effect of event timing on associative learning. We overcome the shortage of behavioural data in Aplysia and biochemical data in Drosophila by combining findings from both systems.
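The timing dependence described above can be caricatured as a signed kernel over the CS–US interval. This is an illustrative sketch, not the paper's fitted model: the function name, time constants, and gains are all invented to show how one biochemical coincidence detector can yield opposite valences for forward and backward pairing.

```python
import numpy as np

def cyclase_activation(delta_t, tau_fwd=15.0, tau_bwd=30.0,
                       gain_fwd=1.0, gain_bwd=0.4):
    """Hypothetical timing kernel for a Ca2+-modulated adenylate cyclase.

    delta_t > 0: the odour-evoked Ca2+ signal precedes the shock-evoked
    aminergic signal; activation rises above baseline and the odour
    acquires negative valence (punishment learning).
    delta_t < 0: the shock precedes the odour; activation falls below
    baseline and the odour acquires positive valence (relief learning).
    All constants are illustrative, not measured values.
    """
    if delta_t >= 0.0:
        return gain_fwd * np.exp(-delta_t / tau_fwd)
    return -gain_bwd * np.exp(delta_t / tau_bwd)

for delta_t in (5.0, -5.0):
    a = cyclase_activation(delta_t)
    behaviour = "avoidance" if a > 0 else "approach"
    print(f"CS-US interval {delta_t:+}: activation {a:+.3f} -> {behaviour}")
```

The asymmetry of the two exponential branches is what turns a purely biochemical property into the behavioural distinction between punishment and relief learning.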
Grounding Mental Representations in a Virtual Multi-Level Functional Framework
According to the associative theory of learning, reactive behaviors described by stimulus-response pairs result in the progressive wiring of a plastic brain. In contrast, flexible behaviors are supposedly driven by neurologically grounded mental states that involve computations on informational contents. These theories appear complementary, but are generally opposed to each other. The former is favored by neuroscientists who explore the low-level biological processes supporting cognition, and the latter by cognitive psychologists who look for higher-level structures. This situation can be clarified through an analysis that independently defines abstract neurological and informational functionalities, and then relates them through a virtual interface. This framework is validated through a modeling of the first stage of Piaget’s cognitive development theory, whose reported end experiments demonstrate the emergence of mental representations of object displacements. The neural correlates grounding this emergence are given in the isomorphic format of an associative memory. As a child’s exploration of the world progresses, their mental models will eventually include representations of space, time and causality. Only then will epistemological concepts, such as beliefs, give rise to higher-level mental representations in a possibly richer propositional format. This raises the question of which additional neurological functionalities, if any, would be required in order to include these extensions into a comprehensive grounded model. We relay previously expressed views, which in summary hypothesize that the ability to learn has evolved from associative reflexes and memories, to suggest that the functionality of associative memories could well provide sufficient means for grounding cognitive capacities.
Olfactory learning in Drosophila
Animals are able to form associative memories and benefit from past experience. In classical conditioning, an animal is trained to associate an initially neutral stimulus with a stimulus that triggers an innate response by pairing the two. The neutral stimulus is commonly referred to as the conditioned stimulus (CS) and the reinforcing stimulus as the unconditioned stimulus (US). The underlying neuronal mechanisms and structures are an intensely investigated topic.
The fruit fly Drosophila melanogaster is a prime model animal to investigate the mechanisms of learning. In this thesis we propose fundamental circuit motifs that explain aspects of aversive olfactory learning as it is observed in the fruit fly. Changing parameters of the learning paradigm affects the behavioral outcome in different ways.
The relative timing between CS and US affects the hedonic value of the CS. Reversing the order changes the behavioral response from conditioned avoidance to conditioned approach. We propose a timing-dependent biochemical reaction cascade, which can account for this phenomenon.
In addition to forming odor-specific memories, flies are able to associate a specific odor intensity. In aversive olfactory conditioning they show less avoidance of lower and higher intensities of the same odor. However, the layout of the first two olfactory processing layers does not support this kind of learning, due to a nested representation of odor intensity. We propose a basic circuit motif that transforms the nested monotonic intensity representation into a non-monotonic representation that supports intensity-specific learning.
Flies are able to bridge a stimulus-free interval between CS and US to form an association. It is so far unclear where the stimulus trace of the CS is represented in the fly's nervous system. We analyze recordings from the first three layers of olfactory processing with an advanced machine learning approach. We argue that third-order neurons are likely to harbor the stimulus trace.
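The intensity-coding problem above can be made concrete with a toy circuit. The following sketch is illustrative only, with invented thresholds and slopes: it shows how subtracting a higher-threshold monotonic input from a lower-threshold one (e.g. via an inhibitory interneuron) converts a nested monotonic code into a concentration-tuned one.

```python
import numpy as np

def layer_responses(log_c, thresholds=(-3.0, -2.0, -1.0), slope=4.0):
    """Nested monotonic code (a stand-in for the first two olfactory
    layers): every unit increases with log concentration, and units
    with higher thresholds are recruited later."""
    return np.array([1.0 / (1.0 + np.exp(-slope * (log_c - t)))
                     for t in thresholds])

def tuned_unit(log_c):
    """Hypothetical readout motif: the difference of a low- and a
    high-threshold unit, rectified at zero, responds maximally at an
    intermediate concentration and so can support intensity-specific
    learning."""
    low, high, _ = layer_responses(log_c)
    return max(low - high, 0.0)

responses = {c: tuned_unit(c) for c in (-4.0, -2.5, 0.0)}
# the subtractive unit responds most at the intermediate concentration
```

A bank of such units with staggered threshold pairs would tile the concentration axis, giving each trained intensity its own distinguishable channel.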
Towards neuro-inspired symbolic models of cognition: linking neural dynamics to behaviors through asynchronous communications
A computational architecture modeling the relation between perception and action is proposed. Basic brain processes representing synaptic plasticity are first abstracted through asynchronous communication protocols and implemented as virtual microcircuits. These are used in turn to build mesoscale circuits embodying parallel cognitive processes. Encoding these circuits into symbolic expressions finally gives rise to neuro-inspired programs that are compiled into pseudo-code to be interpreted by a virtual machine. Quantitative evaluation measures are given by the modification of synapse weights over time. This approach is illustrated by models of simple forms of behaviors exhibiting cognition up to the third level of animal awareness. As a potential benefit, symbolic models of emergent psychological mechanisms could lead to the discovery of the learning processes involved in the development of cognition. The executable specifications of an experimental platform allowing for the reproduction of simulated experiments are given in the “Appendix”.
Synthetic associative learning in engineered multicellular consortia
Associative learning is one of the key mechanisms displayed by living organisms in order to adapt to their changing environments. It was early recognized to be a general trait of complex multicellular organisms, but it is also found in "simpler" ones. It has also been explored within synthetic biology, using molecular circuits directly inspired by neural network models of conditioning. These designs involve complex wiring diagrams to be implemented within one single cell, and the presence of diverse molecular wires becomes a challenge that might be very difficult to overcome. Here we present three alternative circuit designs based on two-cell microbial consortia able to properly display associative learning responses to two classes of stimuli and exhibiting both long- and short-term memory (i.e. the association can be lost with time). These designs might be a helpful approach for engineering the human gut microbiome or even synthetic organoids, defining a new class of decision-making biological circuits capable of memory and adaptation to changing conditions. The potential implications and extensions are outlined.
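The division of labor in such a consortium can be caricatured in a few lines. This is not one of the paper's gene circuits; it is a toy discrete-time sketch with invented rates, showing how splitting sensing between two cells, linked by a diffusible signal, still supports a decaying (short-term) association.

```python
def consortium_response(schedule, learn=0.5, decay=0.05):
    """Toy two-cell consortium (all rates invented for illustration).

    Cell B senses the unconditioned stimulus (US) and secretes a
    diffusible signal; cell A senses the conditioned stimulus (CS),
    detects the coincidence, and stores the association in a decaying
    internal state, so the memory is lost without reinforcement.
    """
    memory = 0.0
    out = []
    for cs, us in schedule:
        wire = 1.0 if us else 0.0                # cell B's diffusible "wire"
        if cs and wire:
            memory += learn * (1.0 - memory)     # saturating association
        memory *= 1.0 - decay                    # short-term memory fades
        out.append(memory if cs else 0.0)        # conditioned response to CS
    return out

schedule = [(1, 1)] * 5 + [(1, 0)] * 20          # paired training, CS-only test
resp = consortium_response(schedule)
# the CS alone evokes a response after training, and it decays over time
```

Setting `decay` to zero would turn the same motif into a long-term memory, which is the distinction the designs above make explicit at the circuit level.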
Modeling the Synchronization of Multimodal Perceptions as a Basis for the Emergence of Deterministic Behaviors.
Living organisms have either innate or acquired mechanisms for reacting to percepts with an appropriate behavior, e.g., by escaping from the source of a perception detected as a threat, or conversely by approaching a target perceived as potential food. In the case of artifacts, such capabilities must be built in through either wired connections or software. The problem addressed here is to define a neural basis for such behaviors to be possibly learned by bio-inspired artifacts. Toward this end, a thought experiment involving an autonomous vehicle is first simulated as a random search. The stochastic decision tree that drives this behavior is then transformed into a plastic neuronal circuit. This leads the vehicle to adopt a deterministic behavior by learning and applying a causality rule, just as a conscious human driver would do. From there, a principle of using synchronized multimodal perceptions in association with the Hebb principle of wiring together neuronal cells is induced. This overall framework is implemented as a virtual machine, i.e., a concept widely used in software engineering. It is argued that such an interface, situated at a meso-scale level between abstracted micro-circuits representing synaptic plasticity, on one hand, and the emergence of behaviors, on the other, allows for a strict delineation of successive levels of complexity. More specifically, isolating levels allows for simulating yet unknown processes of cognition independently of their underlying neurological grounding.
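The "wiring together" of synchronized percepts can be sketched with a minimal Hebbian rule. This is an illustrative toy, not the paper's virtual-machine implementation: the learning rate, saturation, and the visual/tactile labels are assumptions chosen to show why synchrony, and not mere co-occurrence of activity over time, is what forms the connection.

```python
def hebb_wire(episodes, eta=0.2, w=0.0, w_max=1.0):
    """Hebbian coincidence rule (toy version of "wire together"):
    the connection between two percepts grows only when both are
    active in the same episode, saturating at w_max. The rate eta
    is illustrative."""
    for visual, tactile in episodes:
        w += eta * visual * tactile * (w_max - w)
    return w

synchronous = [(1, 1)] * 10           # multimodal percepts co-occur
asynchronous = [(1, 0), (0, 1)] * 5   # same total activity, never synchronized

w_sync = hebb_wire(synchronous)
w_async = hebb_wire(asynchronous)
# only synchronized perceptions become wired together
```

Once `w` exceeds a firing threshold, the learned connection can replace a branch of the stochastic decision tree, which is how a random search turns into a deterministic causality-driven behavior.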
A Heterosynaptic Spiking Neural System for the Development of Autonomous Agents
Artificial neural systems for computation were first proposed three quarters of a century ago, and the concepts developed by the pioneers still shape the field today. The first generation of neural systems was developed in the nineteen forties, in the context of analogue electronics and the theoretical research in logic and mathematics that led to the first digital computers of the nineteen forties and fifties. The second generation of neural systems, implemented on digital computers, was born in the nineteen fifties, and great progress was made in the subsequent half century, with neural networks being applied to many problems in pattern recognition and machine learning. Throughout this history there has been an interplay between biologically inspired neural systems and their implementation by engineers on digital machines. This thesis concerns the third generation of neural networks, Spiking Neural Networks, which is making possible the creation of new kinds of brain-inspired computing architectures that offer the potential to increase the level of realism and sophistication in autonomous machine behaviour and cognitive computing. This thesis presents the development and demonstration of a new theoretical architecture for third generation neural systems: an Integrate-and-Fire based Spiking Neural Model with extended Neuro-modulated Spike Timing Dependent Plasticity capabilities. The proposed architecture overcomes a limitation of the homosynaptic architecture underlying existing implementations of spiking neural networks: it lacks a natural spike timing dependent plasticity regulation mechanism, which results in 'run-away' dynamics. Until now, ad hoc procedures have had to be implemented to suppress the run-away dynamics that emerge from the use of spike timing dependent plasticity, among other Hebbian-based plasticity rules.
The new heterosynaptic architecture explicitly abstracts the modulation of complex biochemical mechanisms into a simplified mechanism that is suitable for the engineering of artificial systems with low computational complexity. Neurons work by receiving input signals from other neurons through synapses. The difference between homosynaptic and heterosynaptic plasticity is that in the former, a change in the properties of a synapse (e.g. synaptic efficacy) depends on the point-to-point activity of the sending and receiving neurons, whereas in heterosynaptic plasticity a change in the properties of a synapse can be elicited by neurons that are not necessarily presynaptic or postsynaptic to the synapse in question. The new architecture is tested by a number of implementations in simulated and real environments. These include experiments with a simulation environment implemented in NetLogo, and an implementation using Lego Mindstorms as the physical robot platform. These experiments demonstrate the problems with the traditional homosynaptic spike timing dependent plasticity architecture and how the new heterosynaptic approach can overcome them. It is concluded that the new theoretical architecture provides a natural, theoretically sound, and practical new direction for research into the role of modulatory neural systems applied to spiking neural networks.
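The run-away problem and the heterosynaptic remedy can be contrasted in a toy simulation. This is not the thesis architecture itself: the weight rule, the fixed resource budget, and the spiking statistics are all invented for illustration of the general principle that unregulated Hebbian potentiation diverges while a resource-conserving heterosynaptic step keeps weights bounded.

```python
import numpy as np

def train_weights(heterosynaptic, steps=200, eta=0.05, budget=3.0, seed=0):
    """Toy contrast: multiplicative Hebbian/STDP-like potentiation
    alone 'runs away', while an added heterosynaptic step that
    conserves each neuron's total synaptic resource keeps the
    weights bounded. All quantities are illustrative."""
    rng = np.random.default_rng(seed)
    w = np.full(3, 1.0)                      # three synapses onto one neuron
    for _ in range(steps):
        pre = rng.random(3) < 0.5            # random presynaptic spikes
        if pre.any():                        # postsynaptic spike follows
            w[pre] += eta * w[pre]           # homosynaptic potentiation
            if heterosynaptic:
                w *= budget / w.sum()        # other synapses give up resource
    return w

w_runaway = train_weights(heterosynaptic=False)
w_bounded = train_weights(heterosynaptic=True)
# without regulation the weights explode; with it, their sum stays fixed
```

The normalisation step stands in for the modulatory third factor: potentiating one synapse implicitly depresses its neighbours, so no ad hoc clipping is needed to stop the divergence.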