Efficient spiking neural network model of pattern motion selectivity in visual cortex
Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and the analysis scripts is publicly available. © 2014 Springer Science+Business Media New York
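The CDS-to-PDS pooling stage described above can be sketched roughly as follows. This is a toy rate-based version using cosine-tuned, half-rectified pooling weights, which is one common modelling choice; it is an illustration, not the authors' spiking implementation.

```python
import numpy as np

def pds_response(cds_rates, preferred_dirs, pattern_dir):
    """Pool component-direction-selective (CDS) responses into a
    pattern-direction-selective (PDS) response.

    cds_rates: firing rates of the CDS cells (one per preferred direction)
    preferred_dirs: each CDS cell's preferred direction, in radians
    pattern_dir: the PDS cell's own preferred direction, in radians

    Weights are cosine-tuned and half-rectified (an assumed, common
    modelling choice): CDS cells preferring directions near the pattern
    direction contribute, opponent directions are ignored.
    """
    weights = np.cos(np.asarray(preferred_dirs) - pattern_dir)
    weights = np.maximum(weights, 0.0)   # half-rectification
    return float(np.dot(weights, cds_rates))

# Toy example: 8 CDS cells, with the 0-rad component driven hardest
dirs = np.linspace(0, 2 * np.pi, 8, endpoint=False)
rates = np.maximum(np.cos(dirs), 0.0)    # illustrative component responses
print(pds_response(rates, dirs, pattern_dir=0.0))
```

A PDS cell built this way inherits its direction preference from the weighting, not from any single component input, which is the essence of the pooling scheme the abstract describes.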
A flexible component-based robot control architecture for hormonal modulation of behaviour and affect
This document is the Accepted Manuscript of a paper published in Proceedings of the 18th Annual Conference, TAROS 2017, Guildford, UK, July 19–21, 2017. Under embargo; embargo end date: 20 July 2018. The final publication is available at Springer via https://link.springer.com/chapter/10.1007%2F978-3-319-64107-2_36. © 2017 Springer, Cham. In this paper we present the foundations of an architecture that will support the wider context of our work, which is to explore the link between affect, perception and behaviour from an embodied perspective and to assess their relevance to Human-Robot Interaction (HRI). Our approach builds upon existing affect-based architectures by combining artificial hormones with discrete abstract components that are designed with explicit consideration of influencing, and being receptive to, the wider affective state of the robot.
Dendritic Morphology Predicts Pattern Recognition Performance in Multi-compartmental Model Neurons with and without Active Conductances
This is an Open Access article published under the Creative Commons Attribution license CC BY 4.0, which allows users to read, copy, distribute and make derivative works, as long as the author of the original work is cited. In this paper we examine how a neuron’s dendritic morphology can affect its pattern recognition performance. We use two different algorithms to systematically explore the space of dendritic morphologies: an algorithm that generates all possible dendritic trees with 22 terminal points, and one that creates representative samples of trees with 128 terminal points. Based on these trees, we construct multi-compartmental models. To assess the performance of the resulting neuronal models, we quantify their ability to discriminate learnt and novel input patterns. We find that the dendritic morphology does have a considerable effect on pattern recognition performance and that the neuronal performance is inversely correlated with the mean depth of the dendritic tree. The results also reveal that the asymmetry index of the dendritic tree does not correlate with the performance for the full range of tree morphologies. The performance of neurons with dendritic tapering is best predicted by the mean and variance of the electrotonic distance of their synapses to the soma. All relationships found for passive neuron models also hold, in even more accentuated form, for neurons with active membranes.
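The mean depth referred to above can be made concrete with a small sketch. The nested-tuple tree encoding and the example trees are our own illustration, not the paper's code.

```python
def _terminal_depths(tree, depth=0):
    """Return (sum of terminal depths, number of terminals) for a
    dendritic branching pattern encoded as nested 2-tuples; any
    non-tuple leaf counts as a terminal point. (This encoding is
    assumed here purely for illustration.)"""
    if not isinstance(tree, tuple):
        return depth, 1
    total, count = 0, 0
    for child in tree:
        s, n = _terminal_depths(child, depth + 1)
        total += s
        count += n
    return total, count

def mean_depth(tree):
    """Mean topological depth of the tree's terminal points."""
    total, count = _terminal_depths(tree)
    return total / count

# A fully symmetric tree and a fully asymmetric ('chain') tree,
# both with 4 terminal points:
print(mean_depth(((1, 1), (1, 1))))   # symmetric tree
print(mean_depth((1, (1, (1, 1)))))   # chain tree: larger mean depth
```

With terminal count held fixed, the more asymmetric tree has the larger mean depth, which is the quantity the abstract reports as inversely correlated with pattern recognition performance.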
Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison
A confusingly wide variety of temporally asymmetric learning rules exists related to reinforcement learning and/or spike-timing-dependent plasticity, many of which look exceedingly similar while displaying strongly different behavior. These rules often find use in control tasks, for example in robotics, and for this rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them, providing a better overview of their different properties. Two main classes will be discussed: temporal difference (TD) rules and correlation-based (differential Hebbian) rules, along with some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine-learning (non-neuronal) context, a solid mathematical theory for TD learning has existed for several years. This can partly be transferred to a neuronal framework as well. Only recently has a more complete theory also emerged for differential Hebbian rules. In general, rules differ in their convergence conditions and numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition ensuring that the δ-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time. Thus, it is necessary to remember the first stimulus in order to relate it to the second, later-occurring one. To this end, different types of so-called eligibility traces are used by these two types of rules. This aspect again leads to different properties of TD and differential Hebbian learning, as discussed here.
Thus, this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, offering some guidance for possible applications.
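The two rule classes compared above can be caricatured in discrete time as follows. This is a minimal sketch with illustrative symbols and parameters, not the time-continuous formulations analyzed in the paper.

```python
def td_update(w, x, x_next, reward, e, alpha=0.1, gamma=0.9, lam=0.8):
    """TD(lambda) step: the delta-error drives the weight change via
    eligibility traces on the inputs. Convergence hinges on the
    delta-error averaging to zero ('output control')."""
    v = sum(wi * xi for wi, xi in zip(w, x))
    v_next = sum(wi * xi for wi, xi in zip(w, x_next))
    delta = reward + gamma * v_next - v
    e = [gamma * lam * ei + xi for ei, xi in zip(e, x)]
    w = [wi + alpha * delta * ei for wi, ei in zip(w, e)]
    return w, e, delta

def diff_hebb_update(w, x_trace, dv, mu=0.01):
    """Differential Hebbian step: dw_i = mu * x_i * v', correlating a
    (low-pass filtered) input trace with the temporal derivative of
    the output. Learning stops when the input trace drops to zero
    ('input control')."""
    return [wi + mu * xi * dv for wi, xi in zip(w, x_trace)]

# One step of each rule on toy two-input / one-input examples:
w, e, delta = td_update([0.0, 0.0], [1.0, 0.0], [0.0, 1.0], 1.0, [0.0, 0.0])
print(w, delta)
print(diff_hebb_update([0.0], [2.0], 0.5))
```

The sketch makes the structural contrast visible: the TD rule multiplies a global error signal into per-synapse traces, whereas the differential Hebbian rule is purely local, pairing each input trace with the output derivative.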
Convergence among Non-Sister Dendritic Branches: An Activity-Controlled Mean to Strengthen Network Connectivity
The manner by which axons distribute synaptic connections along dendrites remains a fundamental unresolved issue in neuronal development and physiology. We found in vitro and in vivo indications that dendrites determine the density, location and strength of their synaptic inputs by controlling the distance of their branches from those of their neighbors. Such control occurs through collective branch convergence, a behavior promoted by AMPA and NMDA glutamate receptor activity. At hubs of convergence sites, the incidence of axo-dendritic contacts, as well as the clustering levels, pre- and post-synaptic protein content and secretion capacity of synaptic connections, is higher than found elsewhere. This coupling between synaptic distribution and the pattern of dendritic overlapping results in an ‘Economical Small World Network’, a network configuration that enables single axons to innervate multiple and remote dendrites using short wiring lengths. Thus, activity-mediated regulation of the proximity among dendritic branches serves to pattern and strengthen neuronal connectivity.
Automated Three-Dimensional Detection and Shape Classification of Dendritic Spines from Fluorescence Microscopy Images
A fundamental challenge in understanding how dendritic spine morphology controls learning and memory has been quantifying three-dimensional (3D) spine shapes with sufficient precision to distinguish morphologic types, and sufficient throughput for robust statistical analysis. The necessity to analyze large volumetric data sets accurately, efficiently, and in true 3D has been a major bottleneck in deriving reliable relationships between altered neuronal function and changes in spine morphology. We introduce a novel system for automated detection, shape analysis and classification of dendritic spines from laser scanning microscopy (LSM) images that directly addresses these limitations. The system is more accurate, and at least an order of magnitude faster, than existing technologies. By operating fully in 3D the algorithm resolves spines that are undetectable with standard two-dimensional (2D) tools. Adaptive local thresholding, voxel clustering and Rayburst Sampling generate a profile of diameter estimates used to classify spines into morphologic types, while minimizing optical smear and quantization artifacts. The technique opens new horizons for the objective evaluation of spine changes with synaptic plasticity, normal development and aging, and with neurodegenerative disorders that impair cognitive function.
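The first pipeline step named above, adaptive local thresholding, can be sketched generically. This is a 2D toy version assumed for illustration; the actual system operates on 3D LSM volumes and combines this step with voxel clustering and Rayburst Sampling.

```python
import numpy as np

def adaptive_local_threshold(img, radius=1, offset=0.0):
    """Binarize an image by comparing each pixel to the mean of its
    local (2*radius+1)**2 neighbourhood, so bright structures are
    detected relative to the local background rather than a single
    global cutoff (a generic sketch, not the published algorithm)."""
    padded = np.pad(img.astype(float), radius, mode="edge")
    h, w = img.shape
    local_mean = np.zeros((h, w), dtype=float)
    # Accumulate the box sum by summing shifted copies of the image.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            local_mean += padded[radius + dy: radius + dy + h,
                                 radius + dx: radius + dx + w]
    local_mean /= (2 * radius + 1) ** 2
    return img > local_mean + offset

# A single bright pixel on a dark background is kept; the dark
# surround is rejected even though the global mean is nonzero:
img = np.array([[0, 0, 0], [0, 10, 0], [0, 0, 0]])
print(adaptive_local_threshold(img))
```

The `offset` parameter (an assumption of this sketch) is the usual knob for trading sensitivity against noise rejection in adaptive thresholding.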
Impact of Dendritic Size and Dendritic Topology on Burst Firing in Pyramidal Cells
Neurons display a wide range of intrinsic firing patterns. A particularly relevant pattern for neuronal signaling and synaptic plasticity is burst firing, the generation of clusters of action potentials with short interspike intervals. Besides ion-channel composition, dendritic morphology appears to be an important factor modulating firing pattern. However, the underlying mechanisms are poorly understood, and the impact of morphology on burst firing remains insufficiently known. Dendritic morphology is not fixed but can undergo significant changes in many pathological conditions. Using computational models of neocortical pyramidal cells, we here show that not only the total length of the apical dendrite but also the topological structure of its branching pattern markedly influences inter- and intraburst spike intervals and even determines whether or not a cell exhibits burst firing. We found that there is only a range of dendritic sizes that supports burst firing, and that this range is modulated by dendritic topology. Either reducing or enlarging the dendritic tree, or merely modifying its topological structure without changing total dendritic length, can transform a cell's firing pattern from bursting to tonic firing. Interestingly, the results are largely independent of whether the cells are stimulated by current injection at the soma or by synapses distributed over the dendritic tree. By means of a novel measure called mean electrotonic path length, we show that the influence of dendritic morphology on burst firing is attributable to the effect both dendritic size and dendritic topology have, not on somatic input conductance, but on the average spatial extent of the dendritic tree and the spatiotemporal dynamics of the dendritic membrane potential. 
Our results suggest that alterations in the size or topology of pyramidal cell morphology, such as observed in Alzheimer's disease, mental retardation, epilepsy, and chronic stress, could change neuronal burst firing and thus ultimately affect information processing and cognition.
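As a rough illustration of the kind of measure involved: mean electrotonic path length averages the soma-to-compartment distances expressed in units of the membrane length constant λ, so two trees can score differently even at similar size if their topology spreads the dendrite out differently. The function, the λ value and the path lengths below are hypothetical, not values from the study.

```python
def mean_electrotonic_path_length(path_lengths_um, lam_um=500.0):
    """Average soma-to-compartment path length, expressed in units of
    the membrane length constant lambda (all values illustrative)."""
    return sum(p / lam_um for p in path_lengths_um) / len(path_lengths_um)

# Two toy sets of measurement points on different morphologies:
paths_symmetric = [300.0, 300.0, 300.0, 300.0]   # compact, branchy tree
paths_chain = [200.0, 400.0, 600.0, 800.0]       # elongated tree
print(mean_electrotonic_path_length(paths_symmetric))
print(mean_electrotonic_path_length(paths_chain))
```

The elongated tree scores higher: its distal compartments sit electrotonically farther from the soma, the property the abstract links to the spatiotemporal dynamics of the dendritic membrane potential.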
Order in Spontaneous Behavior
Brains are usually described as input/output systems: they transform sensory input into motor output. However, the motor output of brains (behavior) is notoriously variable, even under identical sensory conditions. The question of whether this behavioral variability merely reflects residual deviations due to extrinsic random noise in such otherwise deterministic systems or an intrinsic, adaptive indeterminacy trait is central for the basic understanding of brain function. Instead of random noise, we find a fractal order (resembling Lévy flights) in the temporal structure of spontaneous flight maneuvers in tethered Drosophila fruit flies. Lévy-like probabilistic behavior patterns are evolutionarily conserved, suggesting a general neural mechanism underlying spontaneous behavior. Drosophila can produce these patterns endogenously, without any external cues. The fly's behavior is controlled by brain circuits which operate as a nonlinear system with unstable dynamics far from equilibrium. These findings suggest that both general models of brain function and autonomous agents ought to include biologically relevant nonlinear, endogenous behavior-initiating mechanisms if they strive to realistically simulate biological brains or out-compete other agents.
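The contrast between Lévy-like heavy-tailed statistics and plain random noise can be illustrated with a small sketch. The exponent, cutoff and rate below are illustrative values, not the fitted parameters from the fly data.

```python
import math
import random

def levy_interval(mu=2.0, x_min=1.0, rng=random):
    """Draw an inter-turn interval from a power law p(x) ~ x**(-mu)
    for x >= x_min, via inverse-CDF sampling: the heavy-tailed
    statistics associated with Levy flights (mu, x_min illustrative)."""
    u = rng.random()                       # u in [0, 1)
    return x_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

def exponential_interval(rate=1.0, rng=random):
    """Poisson-process baseline: exponentially distributed intervals,
    the signature of memoryless 'random noise' behavior."""
    return -math.log(1.0 - rng.random()) / rate

random.seed(0)
levy = [levy_interval() for _ in range(100_000)]
expo = [exponential_interval() for _ in range(100_000)]
# The power law yields vastly more very long intervals than an
# exponential of comparable typical scale:
print(sum(x > 20 for x in levy), sum(x > 20 for x in expo))
```

For the power law with exponent 2, intervals longer than 20 occur with probability 1/20, while for the unit-rate exponential they are essentially absent; this kind of tail comparison is how Lévy-like temporal structure is distinguished from residual random noise.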