
    Wiring optimization explanation in neuroscience: What is Special about it?

    This paper examines the explanatory distinctness of wiring optimization models in neuroscience. Wiring optimization models aim to represent the organizational features of neural and brain systems as optimal (or near-optimal) solutions to wiring optimization problems. My claim is that wiring optimization models provide design explanations. In particular, they support ideal interventions on the decision variables of the relevant design problem and assess the impact of such interventions on the viability of the target system.
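    As a toy illustration of the kind of design problem such models target, the sketch below (hypothetical layout and connectivity, not drawn from the paper) poses a miniature wiring optimization problem and performs an ideal intervention on one decision variable, a component's placement, to assess its impact on total wiring cost:

```python
# Hypothetical sketch of a wiring optimization problem: given a fixed
# connectivity matrix, choose component positions that minimize total
# wiring length. An "ideal intervention" on a decision variable then
# amounts to perturbing one placement and re-assessing the cost.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6
connections = rng.integers(0, 2, size=(n, n))  # which components wire to which
connections = np.triu(connections, 1)          # undirected, no self-loops
slots = np.array([[i, j] for i in range(2) for j in range(3)], float)  # grid slots

def wiring_cost(order):
    """Total Euclidean wire length when component k sits in slots[order[k]]."""
    pos = slots[list(order)]
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    return (connections * d).sum()

# Exhaustive search over placements (feasible only for tiny n).
best = min(itertools.permutations(range(n)), key=wiring_cost)
print("optimal layout:", best, "cost:", wiring_cost(best))

# Ideal intervention: swap two components and measure the impact on cost.
perturbed = list(best)
perturbed[0], perturbed[1] = perturbed[1], perturbed[0]
print("cost after intervention:", wiring_cost(tuple(perturbed)))
```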

    Evolutionary Neural Gas (ENG): A Model of Self Organizing Network from Input Categorization

    Despite their claimed biological plausibility, most self-organizing networks have strict topological constraints and consequently cannot take into account a wide range of external stimuli. Furthermore, their evolution is governed by deterministic laws that are often not correlated with the structural parameters and the global state of the network, as they would be in a real biological system. In nature, environmental inputs are noisy and fuzzy, which raises the question of whether emergent behaviour is possible in a loosely constrained network subjected to varied inputs. We present a new model of Evolutionary Neural Gas (ENG) free of topological constraints, trained by probabilistic laws that depend on the local distortion errors and the network dimension. The network is treated as a population of nodes that coexist in an ecosystem, sharing local and global resources. These features allow the network to adapt quickly to the environment, according to its dimensions. Analysis of the ENG model shows that the network evolves as a scale-free graph, and justifies, in a deeply physical sense, the term "gas" used here.
    Comment: 16 pages, 8 figures
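    The paper's exact probabilistic update laws are not reproduced in this listing; the sketch below shows only the generic neural-gas mechanics the abstract alludes to, with connectivity emerging from input statistics rather than a fixed grid, and accumulated local distortion error driving the insertion of new nodes (as in growing neural gas):

```python
# A generic neural-gas-style sketch (not ENG's evolved probabilistic laws):
# units have no fixed grid topology; edges appear between the two units
# closest to each stimulus, and local distortion error drives growth.
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.random((2, 2))            # start with two units in input space
error = np.zeros(2)                   # local distortion error per unit
edges = set()
eps_b, eps_n, max_nodes = 0.05, 0.01, 30

for t in range(3000):
    x = rng.random(2)                              # noisy environmental input
    d = np.linalg.norm(nodes - x, axis=1)
    s1, s2 = np.argsort(d)[:2]                     # two nearest units
    error[s1] += d[s1] ** 2                        # accumulate local distortion
    nodes[s1] += eps_b * (x - nodes[s1])           # winner moves most
    nodes[s2] += eps_n * (x - nodes[s2])           # runner-up moves a little
    edges.add((min(s1, s2), max(s1, s2)))          # emergent connectivity
    if t % 100 == 99 and len(nodes) < max_nodes:   # grow where error is high
        q = int(np.argmax(error))
        nodes = np.vstack([nodes, nodes[q] + 0.01 * rng.standard_normal(2)])
        error = np.append(error * 0.5, error[q] * 0.5)

print(len(nodes), "nodes,", len(edges), "edges")
```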

    Coordinated optimization of visual cortical maps: 2. Numerical studies

    In the juvenile brain, the synaptic architecture of the visual cortex remains in a state of flux for months after the natural onset of vision and the initial emergence of feature selectivity in visual cortical neurons. It is an attractive hypothesis that visual cortical architecture is shaped during this extended period of juvenile plasticity by the coordinated optimization of multiple visual cortical maps, such as orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we introduced a class of analytically tractable coordinated optimization models and solved representative examples in which a spatially complex organization of the OP map is induced by interactions between the maps. We found that these solutions near the symmetry-breaking threshold predict a highly ordered map layout. Here we examine the time course of the convergence towards attractor states and optima of these models. In particular, we determine the timescales on which map optimization takes place and how these timescales compare to those of visual cortical development and plasticity. We also assess whether our models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold, when the spatial periodicities of the two maps are detuned and when more than two feature dimensions are considered. We show that, although maps typically undergo substantial rearrangement, no solutions other than pinwheel crystals and stripes dominate the emerging layouts. Pinwheel crystallization takes place on a rather short timescale and can also occur for detuned wavelengths of different maps. Our numerical results thus support the view that neither minimal-energy states nor intermediate transient states of our coordinated optimization models successfully explain the architecture of the visual cortex. We discuss several alternative scenarios that may improve the agreement between model solutions and biological observations.
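    A minimal numerical sketch of this model class (an assumed coupled Swift-Hohenberg relaxation with placeholder parameters, not the paper's exact equations) illustrates how a complex-valued orientation-preference field and a real ocular-dominance field can be co-optimized on a grid:

```python
# Assumed-form sketch: relax a complex OP field z and a real OD field o
# under coupled Swift-Hohenberg dynamics, a standard setting for
# coordinated map optimization; pinwheels sit at the zeros of z.
import numpy as np

N, kc, dt, gamma = 64, 2 * np.pi / 8, 0.1, 0.5   # grid, wavenumber, step, coupling
k = 2 * np.pi * np.fft.fftfreq(N)
k2 = k[:, None] ** 2 + k[None, :] ** 2
L = 0.1 - (kc ** 2 - k2) ** 2 / kc ** 4          # Swift-Hohenberg linear operator

rng = np.random.default_rng(2)
z = 0.01 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
o = 0.01 * rng.standard_normal((N, N))

for _ in range(2000):
    # exact linear step in Fourier space, then local nonlinearities + coupling
    z = np.fft.ifft2(np.fft.fft2(z) * np.exp(dt * L))
    o = np.fft.ifft2(np.fft.fft2(o) * np.exp(dt * L)).real
    z += dt * (-np.abs(z) ** 2 * z - gamma * o ** 2 * z)
    o += dt * (-o ** 3 - gamma * np.abs(z) ** 2 * o)

print("mean |z|:", np.abs(z).mean(), " OD contrast:", o.std())
```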

    Learning with Delayed Synaptic Plasticity

    The plasticity property of biological neural networks allows them to learn and optimize their behavior by changing their configuration. Inspired by biology, plasticity can be modeled in artificial neural networks by using Hebbian learning rules, i.e. rules that update synapses based on neuron activations and reinforcement signals. However, a distal reward problem arises when the reinforcement signals are not available immediately after each network output, making it difficult to associate the reward with the neuron activations that contributed to receiving it. In this work, we extend Hebbian plasticity rules to allow learning in distal reward cases. We propose the use of neuron activation traces (NATs), additional data storage in each synapse that keeps track of the activation of the neurons. Delayed reinforcement signals are provided after each episode, based on the network's performance relative to the previous episode. We employ genetic algorithms to evolve delayed synaptic plasticity (DSP) rules that perform synaptic updates based on NATs and delayed reinforcement signals. We compare DSP with an analogous hill-climbing (HC) algorithm that does not incorporate the domain knowledge introduced by the NATs, and show that the synaptic updates performed by the DSP rules yield more effective training than the HC algorithm.
    Comment: GECCO201
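    The sketch below (a toy task with placeholder rule coefficients, not the evolved DSP rules) illustrates the mechanics the abstract describes: each synapse accumulates a neuron activation trace during an episode, and the Hebbian update is applied only once the delayed, episode-level reinforcement signal arrives:

```python
# Hedged sketch of delayed synaptic plasticity with neuron activation
# traces (NATs): co-activation statistics are stored per synapse during an
# episode, and weights change only when the delayed reward signal arrives.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out = 4, 2
w = 0.1 * rng.standard_normal((n_out, n_in))

def run_episode(w, steps=20):
    """Collect a NAT per synapse: here, a running sum of pre*post activity."""
    nat = np.zeros_like(w)
    reward = 0.0
    for _ in range(steps):
        x = rng.random(n_in)                 # presynaptic activations
        y = np.tanh(w @ x)                   # postsynaptic activations
        nat += np.outer(y, x)                # trace of co-activations
        reward += float(y.sum())             # toy episodic performance measure
    return nat, reward

baseline = None
for episode in range(50):
    nat, reward = run_episode(w)
    if baseline is not None:
        # delayed reinforcement signal: did we improve on the last episode?
        signal = np.sign(reward - baseline)
        w += 0.01 * signal * nat / nat.size  # NAT-gated Hebbian update
    baseline = reward

print("final episodic reward:", run_episode(w)[1])
```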

    Characterizing Self-Developing Biological Neural Networks: A First Step Towards their Application To Computing Systems

    Carbon nanotubes are often seen as the only alternative technology to silicon transistors. While they are the most likely short-term alternative, other longer-term alternatives should be studied as well. While contemplating biological neurons as an alternative component may seem preposterous at first sight, significant recent progress in CMOS-neuron interfaces suggests this direction may not be unrealistic; moreover, biological neurons are known to self-assemble into very large networks capable of complex information processing tasks, something that has yet to be achieved with other emerging technologies. The first step to designing computing systems on top of biological neurons is to build an abstract model of self-assembled biological neural networks, much like computer architects manipulate abstract models of transistors and circuits. In this article, we propose a first model of the structure of biological neural networks. We provide empirical evidence that this model matches the biological neural networks found in living organisms and exhibits the small-world graph structure properties commonly found in many large and self-organized systems, including biological neural networks. More importantly, we extract the simple local rules and characteristics governing the growth of such networks, enabling the development of potentially large but realistic biological neural networks, as would be needed for complex information processing/computing tasks. Based on this model, future work will target understanding the evolution and learning properties of such networks, and how they can be used to build computing systems.
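    As an illustration of the property such a model must capture, the sketch below (an assumed growth rule, not the one extracted in the paper) grows a mostly locally wired graph with occasional long-range links and checks the small-world signature of high clustering combined with short path lengths:

```python
# Assumed growth rule: wire each node to its spatial neighbors, with rare
# long-range shortcuts, then measure the small-world signature.
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
n = 200
pos = rng.random((n, 2))                    # neurons scattered in 2D space
G = nx.Graph()
G.add_nodes_from(range(n))

for i in range(n):
    d = np.linalg.norm(pos - pos[i], axis=1)
    d[i] = np.inf
    for j in np.argsort(d)[:4]:             # mostly local wiring...
        G.add_edge(i, int(j))
    if rng.random() < 0.1:                  # ...plus rare long-range links
        j = int(rng.integers(n))
        if j != i:
            G.add_edge(i, j)

G = G.subgraph(max(nx.connected_components(G), key=len))
print("clustering:", nx.average_clustering(G))        # high in small worlds
print("avg path length:", nx.average_shortest_path_length(G))  # yet short paths
```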