310 research outputs found

    A feedback model of perceptual learning and categorisation

    Top-down (feedback) influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the interactions between the cortical regions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised, and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.

    Artificial ontogenesis: a connectionist model of development

    This thesis suggests that ontogenetic adaptive processes are important for generating intelligent behaviour. It is thus proposed that such processes, as they occur in nature, need to be modelled, and that such a model could be used for generating artificial intelligence, and specifically robotic intelligence. Hence, this thesis focuses on how mechanisms of intelligence are specified.

    A major problem in robotics is the need to predefine the behaviour to be followed by the robot. This makes design intractable for all but the simplest tasks, and results in controllers that are specific to that particular task and are brittle when faced with unforeseen circumstances. These problems can be resolved by providing the robot with the ability to adapt the rules it follows and to autonomously create new rules for controlling behaviour. This solution thus depends on the predefinition of how rules to control behaviour are to be learnt, rather than the predefinition of the rules for behaviour themselves.

    Learning new rules for behaviour occurs during the developmental process in biology. Changes in the structure of the cerebral cortex underlie behavioural and cognitive development throughout infancy and beyond. The uniformity of the neocortex suggests that there is significant computational uniformity across the cortex, resulting from uniform mechanisms of development, and holds out the possibility of a general model of development. Development is an interactive process between genetic predefinition and environmental influences. This interactive process is constructive: qualitatively new behaviours are learnt by using simple abilities as a basis for learning more complex ones. The progressive increase in competence provided by development may be essential to make tractable the process of acquiring higher-level abilities.

    While simple behaviours can be triggered by direct sensory cues, more complex behaviours require the use of more abstract representations. There is thus a need to find representations at the level of abstraction appropriate to controlling each ability. In addition, finding the correct level of abstraction makes tractable the task of associating sensory representations with motor actions. Hence, finding appropriate representations is important both for learning behaviours and for controlling behaviours. Representations can be found by recording regularities in the world, or by discovering re-occurring patterns through repeated sensory-motor interactions. By recording regularities within the representations thus formed, more abstract representations can be found. Simple, non-abstract representations thus provide the basis for learning more complex, abstract representations.

    A modular neural network architecture is presented as a basis for a model of development. The pattern of activity of the neurons in an individual network constitutes a representation of the input to that network. This representation is formed through a novel, unsupervised learning algorithm which adjusts the synaptic weights to improve the representation of the input data. Representations are formed by neurons learning to respond to correlated sets of inputs. Neurons thus become feature detectors or pattern recognisers. Because the nodes respond to patterns of inputs, they encode more abstract features of the input than are explicitly encoded in the input data itself. In this way simple representations provide the basis for learning more complex representations. The algorithm allows both more abstract representations to be formed, by associating correlated, coincident features together, and invariant representations to be formed, by associating correlated, sequential features together.

    The algorithm robustly learns accurate and stable representations, in a format most appropriate to the structure of the input data received: it can represent both single and multiple input features, in both the discrete and continuous domains, using either topologically or non-topologically organised nodes. The output of one neural network is used to provide inputs for other networks. The robustness of the algorithm enables each neural network to be implemented using an identical algorithm. This allows a modular 'assembly' of neural networks to be used for learning more complex abilities: the output activations of a network can be used as the input to other networks, which can then find representations of more abstract information within the same input data; and, by defining the output activations of neurons in certain networks to have behavioural consequences, it is possible to learn sensory-motor associations, enabling sensory representations to be used to control behaviour.
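The abstract does not specify the learning rule itself; a minimal competitive-Hebbian sketch of the general idea (neurons coming to respond to correlated sets of inputs and so becoming feature detectors) might look like the following, where the cluster data, unit count, and learning rate are all illustrative assumptions rather than the thesis's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_feature_detectors(data, n_units=4, lr=0.1, epochs=20):
    """Competitive Hebbian learning: the winning unit's weights move toward
    each input it wins on, so units specialise on correlated sets of inputs.
    (A stand-in for the thesis's unsupervised rule, not a reimplementation.)"""
    w = rng.random((n_units, data.shape[1]))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in data:
            winner = np.argmax(w @ x)            # unit with strongest response
            w[winner] += lr * (x - w[winner])    # Hebbian move toward the input
            w[winner] /= np.linalg.norm(w[winner])  # keep weights bounded
    return w

# Two correlated groups of input features; units should specialise on each.
cluster_a = rng.normal([1.0, 1.0, 0.0, 0.0], 0.05, size=(50, 4))
cluster_b = rng.normal([0.0, 0.0, 1.0, 1.0], 0.05, size=(50, 4))
data = np.vstack([cluster_a, cluster_b])
rng.shuffle(data)
w = train_feature_detectors(data)
responses_a = w @ cluster_a.mean(axis=0)   # per-unit response to each pattern
responses_b = w @ cluster_b.mean(axis=0)
```

Stacking such networks, with one network's output activations as another's inputs, would mirror the modular 'assembly' the thesis describes.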

    The fine scale structure of synaptic inputs in developing hippocampal neurons


    Dopaminergic Regulation of Neuronal Circuits in Prefrontal Cortex

    Neuromodulators like dopamine have considerable influence on the processing capabilities of neural networks. This has, for instance, been shown in the working memory functions of prefrontal cortex, which may be regulated by altering the dopamine level. Experimental work provides evidence on the biochemical and electrophysiological actions of dopamine receptors, but there are few theories concerning their significance for computational properties (ServanPrintzCohen90, Hasselmo94). We point to experimental data on neuromodulatory regulation of temporal properties of excitatory neurons and depolarization of inhibitory neurons, and suggest computational models employing these effects. Changes in membrane potential may be modelled by the firing threshold, and temporal properties by a parameterization of neuronal responsiveness according to the preceding spike interval. We apply these concepts to two examples using spiking neural networks. In the first, a change in the input synchronization of neuronal groups leads to changes in the formation of synchronized neuronal ensembles. In the second, the threshold of the interneurons influences lateral inhibition, and with it the switch from a winner-take-all network to a parallel feedforward mode of processing. Both concepts are interesting for the modelling of cognitive functions and may have explanatory power for behavioural changes associated with dopamine regulation.
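The second example (the interneuron threshold switching the circuit between winner-take-all and parallel processing) can be caricatured with a simple rate model rather than spiking neurons; the pooled-inhibition weight, relaxation constant, and threshold values below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def settle(inputs, inh_threshold, w_inh=10.0, tau=0.1, steps=200):
    """Excitatory units driven by fixed inputs, with one pooled inhibitory
    interneuron. Dopamine-like depolarization of the interneuron is modelled
    simply as a lowered firing threshold."""
    drive = np.asarray(inputs, dtype=float)
    act = drive.copy()
    for _ in range(steps):
        inh = w_inh * max(0.0, act.sum() - inh_threshold)  # interneuron output
        act += tau * (np.maximum(0.0, drive - inh) - act)  # relax toward target
    return act

inputs = [1.0, 0.8, 0.6]
wta = settle(inputs, inh_threshold=0.0)    # depolarized interneuron: winner-take-all
ff = settle(inputs, inh_threshold=10.0)    # high threshold: parallel feedforward
```

With a low threshold, only the strongest unit survives the pooled inhibition; with a high threshold, the interneuron stays silent and all inputs pass through in parallel.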

    Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers

    This PhD thesis is focused on the central idea that single neurons in the brain should be regarded as temporally precise and highly complex spatio-temporal pattern recognizers, as opposed to the view, still prevalent among neuroscientists today, of biological neurons as simple and mainly spatial pattern recognizers. In this thesis, I will attempt to demonstrate that this is an important distinction, predominantly because the above-mentioned computational properties of single neurons have far-reaching implications for the various brain circuits that neurons compose, and for how information is encoded by neuronal activity in the brain; namely, that these particular "low-level" details at the single-neuron level have substantial system-wide ramifications. In the introduction we highlight the main components that comprise a neural microcircuit capable of performing useful computations, and illustrate the inter-dependence of these components from a system perspective. In chapter 1 we discuss the great complexity of the spatio-temporal input-output relationship of cortical neurons, which results from the morphological structure and biophysical properties of the neuron. In chapter 2 we demonstrate that single neurons can generate temporally precise output patterns in response to specific spatio-temporal input patterns using a very simple, biologically plausible learning rule. In chapter 3 we use a differentiable deep-network analog of a realistic cortical neuron as a tool to approximate the gradient of the neuron's output with respect to its input, and use this capability in an attempt to teach the neuron to perform a nonlinear XOR operation. In chapter 4 we extend the ideas of chapter 3 to networks composed of many realistic biological spiking neurons, representing either small microcircuits or entire brain regions.
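As a toy illustration of a neuron acting as a temporally precise spatio-temporal detector, one can give each synapse a conduction delay and integrate the delayed inputs leakily at the soma; the delays, weights, and threshold below are invented for the sketch and are not taken from the thesis:

```python
import numpy as np

def membrane_trace(spike_times, delays, weights, T=50, tau=5.0):
    """Leaky integration of delayed synaptic inputs: each input spike adds
    its weight at time (spike + delay); the membrane decays with constant tau."""
    drive = np.zeros(T)
    for t_spk, d, w in zip(spike_times, delays, weights):
        t = int(t_spk + d)
        if t < T:
            drive[t] += w
    v = np.zeros(T)
    v[0] = drive[0]
    for t in range(1, T):
        v[t] = v[t - 1] * np.exp(-1.0 / tau) + drive[t]
    return v

delays = [6, 4, 2, 0]              # per-synapse delays, in time steps
weights = [1.0, 1.0, 1.0, 1.0]
threshold = 3.0
# Input pattern matched to the delays: all four inputs arrive at the soma together.
matched = membrane_trace([0, 2, 4, 6], delays, weights)
# The same spikes in reverse order: arrivals are spread out in time.
reversed_ = membrane_trace([6, 4, 2, 0], delays, weights)
print(matched.max() >= threshold, reversed_.max() < threshold)  # True True
```

Only the spatio-temporal pattern matched to the delay structure drives the membrane over threshold, so the cell responds selectively, and at a precise time, to one input pattern among its permutations.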

    A model of non-linear interactions between cortical top-down and horizontal connections explains the attentional gating of collinear facilitation

    Past physiological and psychophysical experiments have shown that attention can modulate the effects of contextual information appearing outside the classical receptive field of a cortical neuron. Specifically, it has been suggested that attention, operating via cortical feedback connections, gates the effects of the long-range horizontal connections underlying collinear facilitation in cortical area V1. This article proposes a novel mechanism, based on the computations performed within the dendrites of cortical pyramidal cells, that can account for these observations. Furthermore, it is shown that the top-down gating signal into V1 can result from a process of biased competition occurring in extrastriate cortex. A model based on these two assumptions is used to replicate the results of physiological and psychophysical experiments on collinear facilitation and attentional modulation.
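The proposed dendritic gating can be caricatured as a branch nonlinearity that passes horizontal input only when top-down feedback helps it over a local threshold; the numbers here are illustrative and are not fitted to the article's model:

```python
def branch_output(horizontal, feedback, branch_threshold=0.8):
    """A dendritic branch passes its horizontal (collinear) input only if
    the summed branch drive crosses a local threshold."""
    return horizontal if horizontal + feedback >= branch_threshold else 0.0

def v1_response(feedforward, horizontal, feedback):
    """Somatic response: feedforward drive plus whatever the branch passes."""
    return feedforward + branch_output(horizontal, feedback)

target_alone = v1_response(1.0, 0.0, 0.0)         # no flankers
flankers_unattended = v1_response(1.0, 0.5, 0.0)  # horizontal input gated off
flankers_attended = v1_response(1.0, 0.5, 0.5)    # feedback opens the gate
```

Collinear facilitation (the response increase produced by the flankers) appears only in the attended case, which is the gating behaviour the abstract describes.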

    Top-down Dendritic Input Increases the Gain of Layer 5 Pyramidal Neurons

    The cerebral cortex is organized so that an important component of feedback input from higher to lower cortical areas arrives at the distal apical tufts of pyramidal neurons. Yet distal inputs are predicted to have much less impact on firing than proximal inputs. Here we show that even weak asynchronous dendritic input to the distal tuft region can significantly increase the gain of layer 5 pyramidal neurons, and thereby the output of columns in the primary somatosensory cortex of the rat. Noisy currents injected as ramps at different dendritic locations showed that the initial slope of the frequency-current (f/I) relationship increases with the distance of the current injection from the soma. The increase was due to the interaction of dendritic depolarization with back-propagating action potentials, which activated dendritic calcium conductances. Gain increases were accompanied by a change of firing mode from isolated spikes to bursting, in which the timing of bursts coded the presence of coincident somatic and dendritic inputs. We propose that this dendritic gain modulation and the timing of bursts may serve to associate top-down and bottom-up input on different time scales.
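The reported effect, distal input raising the slope of the f/I curve rather than simply adding somatic current, can be written as a multiplicative gain on a threshold-linear f/I relation; the constants below are placeholders, not fitted values from the paper:

```python
def firing_rate(i_soma, i_dend, rheobase=0.2, base_gain=10.0, dend_gain=30.0):
    """Toy f/I curve in which distal dendritic input multiplies the slope
    (gain) instead of shifting the rheobase, following the observation that
    the initial f/I slope grows with input distance from the soma."""
    gain = base_gain + dend_gain * i_dend
    return gain * max(0.0, i_soma - rheobase)

quiet = firing_rate(0.4, i_dend=0.0)    # somatic drive alone
boosted = firing_rate(0.4, i_dend=0.5)  # same drive plus distal input: higher gain
silent = firing_rate(0.1, i_dend=0.5)   # distal input alone stays below rheobase
```

Note the asymmetry this captures: dendritic input cannot fire the cell by itself, but it scales the response to whatever somatic (bottom-up) drive is present, which is what makes it a gain modulation rather than an additive input.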

    Revealing the distribution of transmembrane currents along the dendritic tree of a neuron from extracellular recordings.

    Revealing the current-source distribution along the neuronal membrane is a key step on the way to understanding neural computations; however, the experimental and theoretical tools to achieve sufficient spatiotemporal resolution for this estimation remain to be established. Here we address this problem using extracellularly recorded potentials, with arbitrarily distributed electrodes, for a neuron of known morphology. We use simulations of models with varying complexity to validate the proposed method and to give recommendations for experimental applications. The method is applied to in vitro data from rat hippocampus.
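With the morphology known, the estimation problem reduces to inverting a forward model from membrane currents to electrode potentials. A minimal noiseless sketch, using the point-source approximation and a least-squares inverse (the geometry, conductivity, and current values are invented for illustration; the paper's actual method is more elaborate):

```python
import numpy as np

sigma = 0.3  # extracellular conductivity in S/m (a typical textbook value)

def forward_matrix(electrodes, sources):
    """A[j, i] maps a point current at source i to the potential at
    electrode j: phi = I / (4*pi*sigma*r), the point-source approximation."""
    r = np.linalg.norm(electrodes[:, None, :] - sources[None, :, :], axis=2)
    return 1.0 / (4.0 * np.pi * sigma * r)

# Three current sources along a 'dendrite' and eight electrodes beside it
# (coordinates in metres; 1e-6 = 1 micrometre).
sources = np.array([[0.0, 0.0], [0.0, 50e-6], [0.0, 100e-6]])
electrodes = np.array([[30e-6, z] for z in np.linspace(-20e-6, 120e-6, 8)])

A = forward_matrix(electrodes, sources)
true_I = np.array([1.0, -2.0, 1.0])  # a sink flanked by return currents (arb. units)
phi = A @ true_I                     # simulated noiseless recordings
est_I, *_ = np.linalg.lstsq(A, phi, rcond=None)
print(np.allclose(est_I, true_I))  # True: exact recovery in the noiseless case
```

With noisy recordings the inverse needs regularisation, which is where validation on simulated model data, as described in the abstract, becomes essential.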

    Towards Brains in the Cloud: A Biophysically Realistic Computational Model of Olfactory Bulb

    The increasing availability of experimental data and computational power has resulted in increasingly detailed and sophisticated models of brain structures. Biophysically realistic models allow detailed investigation of the mechanisms that operate within those structures. In this work, published mouse experimental data were synthesized to develop an extensible, open-source platform for modeling the mouse main olfactory bulb and other brain regions. A "virtual slice" model of a main olfactory bulb glomerular column, including detailed models of tufted, mitral, and granule cells, was created to investigate the underlying mechanisms of a gamma-frequency oscillation pattern (the "gamma fingerprint") often observed in rodent bulbar local field potential recordings. The gamma fingerprint was reproduced by the model, and a mechanistic hypothesis to explain aspects of the fingerprint was developed. A series of computational experiments tested the hypothesis. The results demonstrate the importance of interactions between electrical synapses, differences in principal cell synaptic input strength, and granule cell inhibition in the formation of the gamma fingerprint. The model, data, results, and reproduction materials are accessible at https://github.com/justasb/olfactorybulb. The discussion includes a detailed description of the mechanisms underlying the gamma fingerprint and of how the model predictions can be tested experimentally. In summary, the modeling platform can be extended to include other cell types, mechanisms, and brain regions, and can be used to investigate a wide range of experimentally testable hypotheses.
    Doctoral Dissertation, Neuroscience, 201