6,201 research outputs found
Neuron as a reward-modulated combinatorial switch and a model of learning behavior
This paper proposes a neuronal circuitry layout and synaptic plasticity
principles that allow the (pyramidal) neuron to act as a "combinatorial
switch". Namely, the neuron learns to be more prone to generate spikes given
those combinations of firing input neurons for which a previous spiking of the
neuron had been followed by a positive global reward signal. The reward signal
may be mediated by certain modulatory hormones or neurotransmitters, e.g.,
dopamine. More generally, a trial-and-error learning paradigm is suggested in
which a global reward signal triggers long-term enhancement or weakening of a
neuron's spiking response to the preceding neuronal input firing pattern. Thus,
rewards provide a feedback pathway that informs neurons whether their spiking
was beneficial or detrimental for a particular input combination. The neuron's
ability to discern specific combinations of firing input neurons is achieved
through a random or predetermined spatial distribution of input synapses on
dendrites that creates synaptic clusters that represent various permutations of
input neurons. The corresponding dendritic segments, or the enclosed individual
spines, are capable of being particularly excited, due to local sigmoidal
thresholding involving voltage-gated channel conductances, if the segment's
excitatory inputs are temporally coincident and its inhibitory inputs are absent. Such
nonlinear excitation corresponds to a particular firing combination of input
neurons, and it is posited that the excitation strength encodes the
combinatorial memory and is regulated by long-term plasticity mechanisms. It is
also suggested that the spine calcium influx that may result from the
spatiotemporal synaptic input coincidence may cause the spine head actin
filaments to undergo mechanical (muscle-like) contraction, with the ensuing
cytoskeletal deformation transmitted to the axon initial segment where it
may...
The Neural Particle Filter
The robust estimation of dynamically changing features, such as the position
of prey, is one of the hallmarks of perception. On an abstract, algorithmic
level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing
signals based on the history of observations, provides a mathematical framework
for dynamic perception in real time. Since the general, nonlinear filtering
problem is analytically intractable, particle filters are considered among the
most powerful approaches to approximating the solution numerically. Yet, these
algorithms prevalently rely on importance weights, and thus it remains an
unresolved question how the brain could implement such an inference strategy
with a neuronal population. Here, we propose the Neural Particle Filter (NPF),
a weightless particle filter that can be interpreted as the neuronal dynamics
of a recurrently connected neural network that receives feed-forward input from
sensory neurons and represents the posterior probability distribution in terms
of samples. Specifically, this algorithm bridges the gap between the
computational task of online state estimation and an implementation that allows
networks of neurons in the brain to perform nonlinear Bayesian filtering. The
model captures not only the properties of temporal and multisensory integration
according to Bayesian statistics, but also allows online learning with a
maximum likelihood approach. With an example from multisensory integration, we
demonstrate that the numerical performance of the model is adequate to account
for both filtering and identification problems. Due to the weightless approach,
our algorithm alleviates the 'curse of dimensionality' and thus outperforms
conventional, weighted particle filters in higher dimensions for a limited
number of particles.
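The weightless dynamics can be sketched in one dimension. The drift, observation function, gain, and noise level below are illustrative assumptions, not the paper's model; the point is that each particle follows the prior dynamics plus a feed-forward prediction-error term, with no importance weights anywhere.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D setup: hidden state with linear (Ornstein-Uhlenbeck)
# prior dynamics, observed directly through Gaussian noise.
def f(x):
    return -x          # prior drift of the hidden dynamics

def g(x):
    return x           # observation function

def npf_step(particles, y, dt=0.01, gain=1.0, sigma=0.5):
    """One Euler step of a weightless particle filter: every particle
    ("neuron") integrates the prior drift, a gain-scaled innovation
    y - g(x) acting as feed-forward sensory input, and diffusion noise.
    The particle ensemble itself represents the posterior as samples."""
    innovation = y - g(particles)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(particles.shape)
    return particles + (f(particles) + gain * innovation) * dt + noise

particles = rng.standard_normal(100)
for _ in range(500):
    y = 2.0            # a constant observation pulls the posterior upward
    particles = npf_step(particles, y)
posterior_mean = particles.mean()   # sample-based posterior estimate
```

With this linear toy model the ensemble settles around the equilibrium gain*y/(1+gain) = 1.0, and the posterior spread is read directly off the sample variance, with cost independent of any weight normalization.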
Adaptive process control in rubber industry
This paper describes the problems and an adaptive solution for process control in the rubber industry. We show that the human and economic benefits of an adaptive solution for the approximation of process parameters are very attractive. The industrial problem is modeled by means of artificial neural networks. For the example of the extrusion of a rubber profile in tire production, our method shows good results even when using only a few training samples.
The evolutionary origins of hierarchy
Hierarchical organization -- the recursive composition of sub-modules -- is
ubiquitous in biological networks, including neural, metabolic, ecological, and
genetic regulatory networks, and in human-made systems, such as large
organizations and the Internet. To date, most research on hierarchy in networks
has been limited to quantifying this property. However, an open, important
question in evolutionary biology is why hierarchical organization evolves in
the first place. It has recently been shown that modularity evolves because of
the presence of a cost for network connections. Here we investigate whether
such connection costs also tend to cause a hierarchical organization of such
modules. In computational simulations, we find that networks without a
connection cost do not evolve to be hierarchical, even when the task has a
hierarchical structure. However, with a connection cost, networks evolve to be
both modular and hierarchical, and these networks exhibit higher overall
performance and evolvability (i.e. faster adaptation to new environments).
Additional analyses confirm that hierarchy independently improves adaptability
after controlling for modularity. Overall, our results suggest that the same
force--the cost of connections--promotes the evolution of both hierarchy and
modularity, and that these properties are important drivers of network
performance and adaptability. In addition to shedding light on the emergence of
hierarchy across the many domains in which it appears, these findings will also
accelerate future research into evolving more complex, intelligent
computational brains in the fields of artificial intelligence and robotics.
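The selection pressure at the heart of this result is easy to state as code. The fitness form and cost weight below are illustrative assumptions, not the paper's exact objective: fitness trades task performance against a cost proportional to the number of network connections, so sparser (and, per the paper, more modular and hierarchical) networks are favored at equal performance.

```python
import numpy as np

def fitness(adj, task_performance, cost_weight=0.1):
    """Task performance minus a cost proportional to the number of
    connections in the adjacency matrix `adj` (a hedged sketch of a
    connection-cost selection pressure)."""
    return task_performance - cost_weight * np.count_nonzero(adj)

# Two hypothetical 8-node networks with equal task performance:
dense = np.ones((8, 8), dtype=int)                  # fully connected
sparse = np.triu(np.ones((8, 8), dtype=int), k=1)   # far fewer links
```

At equal performance, `fitness(sparse, 1.0) > fitness(dense, 1.0)`, so evolution under this objective prunes wiring; the paper's contribution is showing that this same pruning pressure also produces hierarchical module composition.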
Nonoptimal Component Placement, but Short Processing Paths, due to Long-Distance Projections in Neural Systems
It has been suggested that neural systems across several scales of
organization show optimal component placement, in which any spatial
rearrangement of the components would lead to an increase of total wiring.
Using extensive connectivity datasets for diverse neural networks combined with
spatial coordinates for network nodes, we applied an optimization algorithm to
the network layouts, in order to search for wire-saving component
rearrangements. We found that optimized component rearrangements could
substantially reduce total wiring length in all tested neural networks.
Specifically, total wiring among 95 primate (Macaque) cortical areas could be
decreased by 32%, and wiring of neuronal networks in the nematode
Caenorhabditis elegans could be reduced by 48% on the global level, and by 49%
for neurons within frontal ganglia. Wiring length reductions were possible due
to the existence of long-distance projections in neural networks. We explored
the role of these projections by comparing the original networks with minimally
rewired networks of the same size, which possessed only the shortest possible
connections. In the minimally rewired networks, the number of processing steps
along the shortest paths between components was significantly increased
compared to the original networks. Additional benchmark comparisons also
indicated that neural networks are more similar to network layouts that
minimize the length of processing paths, rather than wiring length. These
findings suggest that neural systems are not exclusively optimized for minimal
global wiring, but for a variety of factors including the minimization of
processing steps.
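The wire-saving rearrangement search can be sketched with a simple hill climber. The network, coordinates, and greedy pairwise-swap strategy below are illustrative assumptions, not the paper's optimization algorithm: components are exchanged between fixed spatial slots whenever the swap shortens total wiring length.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(3)

n = 10
coords = rng.random((n, 2))                     # fixed spatial slots
adj = (rng.random((n, n)) < 0.3).astype(int)    # synthetic connectivity
np.fill_diagonal(adj, 0)

def total_wiring(perm):
    """Summed Euclidean length of all connections when component i
    occupies spatial slot perm[i]."""
    placed = coords[perm]
    d = np.linalg.norm(placed[:, None] - placed[None, :], axis=-1)
    return (adj * d).sum()

def optimize(perm, sweeps=20):
    """Greedy pairwise-swap hill climbing: accept any exchange of two
    components' positions that reduces total wiring; stop at a local
    optimum or after `sweeps` passes."""
    perm = perm.copy()
    for _ in range(sweeps):
        improved = False
        for i, j in combinations(range(n), 2):
            trial = perm.copy()
            trial[i], trial[j] = trial[j], trial[i]
            if total_wiring(trial) < total_wiring(perm):
                perm, improved = trial, True
        if not improved:
            break
    return perm

best = optimize(np.arange(n))
```

The paper's finding is that such searches succeed on real neural networks (32-49% wiring reduction), precisely because long-distance projections make the original layouts non-optimal for wiring, while shortening processing paths.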
Visual motion processing and human tracking behavior
The accurate visual tracking of a moving object is a fundamental human skill
that reduces the relative slip and instability of the object's image
on the retina, thus granting a stable, high-quality vision. In order to
optimize tracking performance across time, a quick estimate of the object's
global motion properties needs to be fed to the oculomotor system and
dynamically updated. Concurrently, performance can be greatly improved in terms
of latency and accuracy by taking into account predictive cues, especially
under variable conditions of visibility and in the presence of ambiguous retinal
information. Here, we review several recent studies focusing on the integration
of retinal and extra-retinal information for the control of human smooth
pursuit. By dynamically probing the tracking performance with well-established
paradigms in the visual perception and oculomotor literature we provide the
basis to test theoretical hypotheses within the framework of dynamic
probabilistic inference. In particular, we present the applications of
these results in light of state-of-the-art computer vision algorithms.
Using RBF nets in rubber industry process control
This paper describes the use of a radial basis function (RBF) neural network to approximate the process parameters for the extrusion of a rubber profile used in tyre production. After introducing the problem, we describe the RBF net algorithm and the modeling of the industrial problem. The algorithm shows good results even when using only a few training samples. It turns out that the "curse of dimensionality" plays an important role in the model. The paper concludes with a discussion of possible systematic error influences and improvements.
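A minimal RBF network of the kind described can be sketched in a few lines. The data, basis width, and least-squares fit below are illustrative assumptions standing in for the extrusion process parameters; the structure (Gaussian basis functions centered on training samples, linear output weights) is the standard RBF-net recipe.

```python
import numpy as np

def rbf_design(X, centers, width=0.2):
    """Design matrix of Gaussian basis functions centered on `centers`."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_rbf(X, y, centers, width=0.2):
    """Fit the linear output weights by least squares."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, w, width=0.2):
    return rbf_design(X, centers, width) @ w

rng = np.random.default_rng(4)
X = rng.random((20, 2))              # only a few training samples
y = np.sin(3 * X[:, 0]) + X[:, 1]    # synthetic stand-in for a process parameter
w = fit_rbf(X, y, X)                 # centers placed at the training points
```

With centers at the training points the network interpolates the samples exactly; the "curse of dimensionality" mentioned above shows up because the number of basis functions needed to cover the input space grows rapidly with its dimension.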