Computational Aspects of Feedback in Neural Circuits
It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on
continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We
investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but
in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the
case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes
the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any
conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational
model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We
demonstrate these computational implications of feedback both theoretically, and through computer simulations of
detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the
application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables
such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike
trains over longer periods of time, and to process new information contained in such spike trains in diverse ways
according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with
feedback provide a new model for working memory that is consistent with a large set of biological constraints.
Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical
principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also shed new light on the computational role of feedback in other complex biological dynamical systems, such as genetic regulatory networks.
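The abstract's core mechanism, a readout trained by simple linear regression whose output is fed back into a recurrent circuit, can be sketched in an echo-state-style toy. Everything below (circuit size, weight scales, the sine target) is an illustrative assumption, not the authors' actual microcircuit model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                           # circuit size (assumed)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1: fading memory
W_fb = rng.uniform(-1.0, 1.0, N)                  # output-feedback weights

T = 500
target = np.sin(np.arange(T) * 0.1)               # pattern the circuit should hold

# Teacher forcing: drive the circuit with the desired output as feedback
# while collecting its internal states.
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    fb = target[t - 1] if t > 0 else 0.0
    x = np.tanh(W @ x + W_fb * fb)
    states[t] = x

# Readout trained by plain linear regression (ridge), as in the abstract.
washout = 50
S, y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
train_mse = float(np.mean((S @ W_out - y) ** 2))
```

In closed loop, the readout's own output replaces the teacher signal as the feedback term, which is what turns the fading-memory circuit into an autonomous generator of the pattern.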
Towards a Theory of the Laminar Architecture of Cerebral Cortex: Computational Clues from the Visual System
One of the most exciting and open research frontiers in neuroscience is that of seeking to understand the functional roles of the layers of cerebral cortex. New experimental techniques for probing the laminar circuitry of cortex have recently been developed, opening up novel opportunities for investigating how its six-layered architecture contributes to perception and cognition. The task of trying to interpret this complex structure can be facilitated by theoretical analyses of the types of computations that cortex is carrying out, and of how these might be implemented in specific cortical circuits. We have recently developed a detailed neural model of how the parvocellular stream of the visual cortex utilizes its feedforward, feedback, and horizontal interactions for purposes of visual filtering, attention, and perceptual grouping. This model, called LAMINART, shows how these perceptual processes relate to the mechanisms which ensure stable development of cortical circuits in the infant, and to the continued stability of learning in the adult. The present article reviews this laminar theory of visual cortex, considers how it may be generalized towards a more comprehensive theory that encompasses other cortical areas and cognitive processes, and shows how its laminar framework generates a variety of testable predictions.
Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-0409); National Science Foundation (IRI 94-01659); Office of Naval Research (N00014-92-1-1309, N00014-95-1-0657)
How Does Our Visual System Achieve Shift and Size Invariance?
The question of shift and size invariance in the primate
visual system is discussed. After a short review of the relevant neurobiology and psychophysics, a more detailed analysis of computational models is given. The two main types of networks considered are the dynamic routing circuit model and invariant feature networks, such as the neocognitron. Some specific open questions in the context of these models are raised and possible solutions discussed.
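The invariant-feature idea the abstract attributes to neocognitron-style networks can be illustrated in a few lines: replicate one feature detector across positions and pool the responses with a max, so the pooled output no longer depends on where the feature sits. The template, signal length, and positions below are illustrative assumptions, not details from any model in the review.

```python
import numpy as np

def invariant_response(signal, template):
    # Correlate the template at every valid shift, then pool with max.
    L = len(template)
    scores = [float(np.dot(signal[i:i + L], template))
              for i in range(len(signal) - L + 1)]
    return max(scores)

template = np.array([1.0, -1.0, 1.0])
base = np.zeros(20)
base[3:6] = template            # pattern at position 3
shifted = np.zeros(20)
shifted[11:14] = template       # same pattern at position 11

r1 = invariant_response(base, template)
r2 = invariant_response(shifted, template)
# r1 == r2: the pooled response is unchanged by the shift
```

Size invariance requires more machinery (e.g. pooling over scaled copies of the detector), which is part of why the question in the title stays open.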
Collective stability of networks of winner-take-all circuits
The neocortex has a remarkably uniform neuronal organization, suggesting that
common principles of processing are employed throughout its extent. In
particular, the patterns of connectivity observed in the superficial layers of
the visual cortex are consistent with the recurrent excitation and inhibitory
feedback required for cooperative-competitive circuits such as the soft
winner-take-all (WTA). WTA circuits offer interesting computational properties
such as selective amplification, signal restoration, and decision making. But,
these properties depend on the signal gain derived from positive feedback, and
so there is a critical trade-off between providing feedback strong enough to
support the sophisticated computations, while maintaining overall circuit
stability. We consider the question of how to reason about stability in very
large distributed networks of such circuits. We approach this problem by
approximating the regular cortical architecture as many interconnected
cooperative-competitive modules. We demonstrate that by properly understanding
the behavior of this small computational module, one can reason over the
stability and convergence of very large networks composed of these modules. We
obtain parameter ranges in which the WTA circuit operates in a high-gain
regime, is stable, and can be aggregated arbitrarily to form large stable
networks. We use nonlinear Contraction Theory to establish conditions for
stability in the fully nonlinear case, and verify these solutions using
numerical simulations. The derived bounds allow modes of operation in which the
WTA network is multi-stable and exhibits state-dependent persistent activities.
Our approach is sufficiently general to reason systematically about the
stability of any network, biological or technological, composed of networks of
small modules that express competition through shared inhibition.
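The soft WTA dynamics described above (recurrent self-excitation plus shared inhibitory feedback) can be sketched with two rate units. The gain and inhibition values here are illustrative assumptions chosen to sit in a stable high-gain regime; they are not the parameter bounds derived in the paper.

```python
import numpy as np

alpha, beta = 1.2, 1.0       # self-excitation gain / shared inhibition
I = np.array([1.0, 0.8])     # external inputs; unit 0 should win
x = np.zeros(2)
dt = 0.1

for _ in range(500):
    # Each unit sees its own excitation minus inhibition driven by the
    # summed activity, rectified (rate units cannot go negative).
    drive = alpha * x - beta * x.sum() + I
    x = x + dt * (-x + np.maximum(drive, 0.0))

# Unit 0 settles above its own input (selective amplification from the
# positive feedback) while unit 1 is suppressed to zero (decision making).
```

The trade-off the abstract describes is visible here: raising `alpha` relative to `beta` increases the winner's gain, but push it too far and the positive feedback destabilizes the circuit.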
The body as a reservoir: locomotion and sensing with linear feedback
It is known that mass-spring nets have computational power and can be trained to reproduce oscillating patterns.
In this work, we extend this idea to locomotion and sensing. We simulate systems made out of bars and springs and show that stable gaits can be maintained by these structures with only linear feedback.
We then conduct a classification experiment in which the system has to distinguish terrains while maintaining an oscillatory pattern.
These experiments indicate that the control of compliant robots can be simplified if one exploits the computational power of the body’s dynamics.
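A minimal caricature of maintaining an oscillation with only linear feedback: a damped mass-spring whose damping loss is cancelled by a feedback force proportional to velocity, so the "gait" persists instead of dying out. All parameter values are illustrative assumptions, not taken from the paper's bar-and-spring simulations.

```python
def simulate(feedback, steps=20000, dt=0.001):
    m, k, c = 1.0, 4.0, 0.5              # mass, stiffness, damping (assumed)
    x, v = 1.0, 0.0                      # initial stretch, velocity
    peak = 0.0
    for i in range(steps):
        u = c * v if feedback else 0.0   # linear feedback on velocity
        a = (-k * x - c * v + u) / m
        v += a * dt                      # semi-implicit Euler integration
        x += v * dt
        if i >= steps // 2:              # peak amplitude over second half
            peak = max(peak, abs(x))
    return peak

sustained = simulate(feedback=True)      # oscillation persists
decayed = simulate(feedback=False)       # oscillation has died away
```

The real systems in the paper are networks of many bars and springs, but the principle is the same: a simple linear law closed around the body's own dynamics is enough to keep a stable rhythmic pattern going.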