On probabilistic analog automata
We consider probabilistic automata on a general state space and study their
computational power. The model is based on the concept of language recognition
by probabilistic automata due to Rabin and models of analog computation in a
noisy environment suggested by Maass and Orponen, and Maass and Sontag. Our
main result is a generalization of Rabin's reduction theorem that implies that
under very mild conditions, the computational power of the automaton is limited
to regular languages.
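To make the recognition model concrete, here is a minimal sketch of Rabin-style cut-point recognition for a probabilistic finite automaton, the finite-state special case of the model studied above. All transition probabilities and the cut-point are hypothetical example values, not taken from the paper:

```python
import numpy as np

# One row-stochastic transition matrix per input symbol; rows are the
# transition distributions out of each state (example values only).
T = {
    "0": np.array([[0.9, 0.1],
                   [0.2, 0.8]]),
    "1": np.array([[0.5, 0.5],
                   [0.3, 0.7]]),
}
initial = np.array([1.0, 0.0])    # start in state 0
accepting = np.array([0.0, 1.0])  # state 1 is the accepting state
CUT_POINT = 0.5

def accept_probability(word: str) -> float:
    """Push the state distribution through one stochastic matrix per symbol."""
    dist = initial
    for symbol in word:
        dist = dist @ T[symbol]
    return float(dist @ accepting)

def recognizes(word: str) -> bool:
    """The recognized language: words whose acceptance probability exceeds the cut-point."""
    return accept_probability(word) > CUT_POINT

print(recognizes("1101"))  # True: acceptance probability is about 0.60
```

The generalized reduction theorem is what limits such devices: under very mild conditions, moving from this finite chain to a general (analog) state space with noise adds no recognition power beyond the regular languages.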
A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize known results, and
point to relevant references in the literature.
State-Dependent Computation Using Coupled Recurrent Networks
Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how
networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent
networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.
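The persistence mechanism can be illustrated with a few lines of rate-model code. In this sketch the weights (`W_SELF`, `W_INH`, `W_CROSS`) and the saturating rectified units are hypothetical simplifications, not the paper's calibrated circuit: each map alone has an effective gain below one, so its activity fades, but a cross-coupled pair of units is self-sustaining, and a selected state therefore outlives the input that elicited it.

```python
import numpy as np

N = 4                                # units per map = number of embeddable states
W_SELF, W_INH, W_CROSS = 1.2, 0.5, 0.4   # hypothetical example weights

def step(x1, x2, I1):
    """One synchronous update of two coupled sWTA maps; rates are
    clipped to [0, 1] to model saturation."""
    def swta(x, drive):
        # self-excitation minus shared inhibition, plus external drive
        return np.clip(W_SELF * x - W_INH * x.sum() + drive, 0.0, 1.0)
    new_x1 = swta(x1, W_CROSS * x2 + I1)   # map 1 receives the external input
    new_x2 = swta(x2, W_CROSS * x1)        # map 2 only sees map 1
    return new_x1, new_x2

x1 = np.zeros(N)
x2 = np.zeros(N)
pulse = np.zeros(N)
pulse[2] = 1.0                        # transiently select state 2

for t in range(30):
    I = pulse if t < 10 else np.zeros(N)   # input withdrawn at t = 10
    x1, x2 = step(x1, x2, I)

print(np.round(x1, 2))  # unit 2 is still active: the state outlived its input
```

Transitions between embedded states would be implemented analogously, by transition units that bias the competition toward a different cross-coupled pair; this sketch demonstrates only persistence.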
Computational Aspects of Feedback in Neural Circuits
It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on
continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We
investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but
in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the
case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes
the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any
conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational
model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We
demonstrate these computational implications of feedback both theoretically and through computer simulations of
detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the
application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables
such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike
trains over longer periods of time, and to process new information contained in such spike trains in diverse ways
according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with
feedback provide a new model for working memory that is consistent with a large set of biological constraints.
Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical
principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also throw new light on the computational role of feedback in other complex biological dynamical systems, such as genetic regulatory networks.
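The feedback principle can be sketched in echo-state style. The network size, weight scalings, and the latch task below are illustrative assumptions, not the article's detailed cortical microcircuit models: a readout trained by linear regression under teacher forcing is fed back into an otherwise fading-memory circuit, after which the closed loop can hold a bit, a simple nonfading (working) memory.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Fixed random reservoir, scaled below the edge of chaos, so on its
# own it has only a rapidly fading memory.
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, (N, 2))   # two input channels: "set" and "reset" pulses
W_fb = rng.uniform(-1, 1, N)        # feedback weights for the trained readout

def run(inputs, w_out=None, teacher=None):
    """Drive the circuit, feeding back the teacher signal during training
    or the readout's own previous output during autonomous operation."""
    x = np.zeros(N)
    y = 0.0
    states = []
    for t, u in enumerate(inputs):
        fb = teacher[t - 1] if teacher is not None and t > 0 else y
        x = np.tanh(W @ x + W_in @ u + W_fb * fb)
        states.append(x.copy())
        if w_out is not None:
            y = w_out @ x
    return np.array(states)

# Target: a latch, i.e. a nonfading memory. The output should stay at +1
# after a pulse on channel 0 and at -1 after a pulse on channel 1.
T_steps = 2000
u = np.zeros((T_steps, 2))
y_target = np.empty(T_steps)
level = -1.0
for t in range(T_steps):
    if rng.random() < 0.02:            # occasional random set/reset pulses
        ch = int(rng.integers(2))
        u[t, ch] = 1.0
        level = 1.0 if ch == 0 else -1.0
    y_target[t] = level

X = run(u, teacher=y_target)                         # teacher forcing
w_out = np.linalg.lstsq(X, y_target, rcond=None)[0]  # linear-regression readout

# With its own output fed back, the circuit holds the bit between pulses.
X_auto = run(u, w_out=w_out)
print(np.mean(np.sign(X_auto @ w_out) == y_target))  # fraction of correct steps
```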
Theory and Practice of Computing with Excitable Dynamics
Reservoir computing (RC) is a promising paradigm for time series processing. In this paradigm, the desired output is computed by combining measurements of an excitable system that responds to time-dependent exogenous stimuli. The excitable system is called a reservoir, and measurements of its state are combined using a readout layer to produce a target output. The power of RC is attributed to an emergent short-term memory in dynamical systems and has been analyzed mathematically for both linear and nonlinear dynamical systems. The theory of RC treats only the macroscopic properties of the reservoir, without reference to the underlying medium it is made of. As a result, RC is particularly attractive for building computational devices using emerging technologies whose structure is not exactly controllable, such as self-assembled nanoscale circuits. However, RC has lacked a formal framework for performance analysis and prediction that goes beyond memory properties. To provide such a framework, a mathematical theory of memory and information processing in ordered and disordered linear dynamical systems is developed here. This theory analyzes the optimal readout layer for a given task. Its focus is a standard model of RC, the echo state network (ESN): a fixed recurrent neural network driven by an external signal, whose dynamics are combined linearly, using readout weights, to produce the desired output. The readout weights are calculated using linear regression.
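A minimal ESN of the kind described can be written down directly; the reservoir size, weight scalings, and the delayed-recall task here are illustrative assumptions, and the task probes exactly the short-term memory property the theory quantifies:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, DELAY = 100, 5000, 5     # reservoir size, training length, recall delay

# Fixed random reservoir and input weights (hypothetical scalings).
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
w_in = rng.uniform(-1, 1, N)

# Drive the reservoir with a random scalar stream and record its states.
u = rng.uniform(-1, 1, T)
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Task: reproduce the input from DELAY steps ago (short-term memory).
y = np.roll(u, DELAY)
X, y = X[DELAY:], y[DELAY:]    # drop the first DELAY steps (undefined target)

# Readout by ridge-regularized linear regression:
# solve (X^T X + lam I) w = X^T y.
lam = 1e-6
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
print("training MSE:", np.mean((X @ w_out - y) ** 2))
```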
Using an analysis of the regression equations, the readout weights can be calculated from the statistical properties of the reservoir dynamics, the input signal, and the desired output alone. They can also be calculated from a priori knowledge of the desired function to be computed and the weight matrix of the reservoir. This formulation depends explicitly on the input weights, the reservoir weights, and the statistics of the target function, and it is used to bound the expected error of the system for a given target function. On this basis, the effects of input-output correlation and of complex network structure in the reservoir on the computational performance of the system are characterized mathematically. Far from the chaotic regime, ordered linear networks exhibit a homogeneous decay of memory across dimensions, which keeps the input history coherent. As disorder is introduced into the structure of the network, memory decay becomes inhomogeneous along different dimensions, causing decoherence in the input history and degradation in task-solving performance. Close to the chaotic regime, ordered systems lose temporal information about the input history and therefore cannot solve the tasks; introducing disorder, and with it a heterogeneous decay of memory, preserves that temporal information and recovers task-solving performance. Thus, for systems at the edge of chaos, disordered structure may enhance temporal information processing. Although the current framework applies only to linear systems, in principle it can be used to describe the properties of physical reservoir computing, e.g., photonic RC using short coherence-length light.
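What the abstract describes is, in outline, the normal-equation (Wiener-Hopf) form of the least-squares readout. The standard version is sketched below; the dissertation's exact expressions, which evaluate these expectations analytically from the input and reservoir weight matrices for linear reservoirs, are not reproduced here:

```latex
% Least-squares readout in purely statistical form: the optimal weights
% are determined by the auto-correlation of the reservoir state x_t and
% its cross-correlation with the target y_t.
W_{\mathrm{out}}
  = \operatorname*{arg\,min}_{W}\; \mathbb{E}\!\left[\lVert W x_t - y_t \rVert^2\right]
  = \mathbb{E}\!\left[y_t x_t^{\top}\right]\,\mathbb{E}\!\left[x_t x_t^{\top}\right]^{-1}
```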