An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks
We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that assesses their computational power in terms of the significance of their attractor dynamics. This measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata into the Boolean neural network context. As a result, we obtain a hierarchical classification of Boolean neural networks based on their attractor dynamics, which constitutes the announced refined attractor-based complexity measurement. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractor properties. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results provide new foundational elements for understanding the complexity of real brain circuits.
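The attractor analysis underlying such a measurement can be pictured concretely. The sketch below, using an arbitrary toy network that is not taken from the work above, enumerates the attractors of a small Boolean recurrent network by exhaustive state-space search:

```python
# Hypothetical sketch: find the attractors of a small Boolean recurrent
# network by following every initial state until its trajectory cycles.
# Weights, thresholds, and the update rule are illustrative assumptions.
from itertools import product

def step(state, weights, thresholds):
    """Synchronous Boolean update: unit i fires iff its weighted
    input meets its threshold."""
    return tuple(
        int(sum(w * s for w, s in zip(row, state)) >= t)
        for row, t in zip(weights, thresholds)
    )

def attractors(n, weights, thresholds):
    """Follow each of the 2^n initial states until a state repeats;
    the repeated portion of the trajectory is its attractor cycle."""
    found = set()
    for init in product((0, 1), repeat=n):
        seen, state = {}, init
        while state not in seen:
            seen[state] = len(seen)
            state = step(state, weights, thresholds)
        cycle_start = seen[state]
        cycle = tuple(sorted(s for s, i in seen.items() if i >= cycle_start))
        found.add(cycle)
    return found

# A toy 3-unit network (weights and thresholds chosen arbitrarily).
W = [[0, 1, -1], [1, 0, 0], [0, 1, 0]]
T = [1, 1, 1]
for a in sorted(attractors(3, W, T)):
    print(a)
```

For n units the state space has 2^n states, so this brute-force enumeration is feasible only for small networks; the classification discussed above concerns the structure and significance of the resulting attractors, not the search procedure itself.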
On the Executability of Interactive Computation
The model of interactive Turing machines (ITMs) has been proposed to
characterise which stream translations are interactively computable; the model
of reactive Turing machines (RTMs) has been proposed to characterise which
behaviours are reactively executable. In this article we provide a comparison
of the two models. We show, on the one hand, that the behaviour exhibited by
ITMs is reactively executable, and, on the other hand, that the stream
translations naturally associated with RTMs are interactively computable. We
conclude from these results that the theory of reactive executability subsumes
the theory of interactive computability. Inspired by the existing model of ITMs
with advice, which provides a model of evolving computation, we also consider
RTMs with advice and we establish that a facility of advice considerably
upgrades the behavioural expressiveness of RTMs: every countable transition
system can be simulated by some RTM with advice up to a fine notion of
behavioural equivalence.
Can biological quantum networks solve NP-hard problems?
There is a widespread view that the human brain is so complex that it cannot
be efficiently simulated by universal Turing machines. During the last decades
the question has therefore been raised whether we need to consider quantum
effects to explain the imagined cognitive power of a conscious mind.
This paper presents a personal view of several fields of philosophy and
computational neurobiology in an attempt to suggest a realistic picture of how
the brain might work as a basis for perception, consciousness and cognition.
The purpose is to be able to identify and evaluate instances where quantum
effects might play a significant role in cognitive processes.
Not surprisingly, the conclusion is that quantum-enhanced cognition and
intelligence are very unlikely to be found in biological brains. Quantum
effects may certainly influence the functionality of various components and
signalling pathways at the molecular level in the brain network, like ion
ports, synapses, sensors, and enzymes. This may well influence the
functionality of some nodes, and perhaps even the overall intelligence of the
brain network, but it can hardly give the network any dramatically enhanced functionality. So,
the conclusion is that biological quantum networks can only approximately solve
small instances of NP-hard problems.
On the other hand, artificial intelligence and machine learning implemented
in complex dynamical systems based on genuine quantum networks can certainly be
expected to show enhanced performance and quantum advantage compared with
classical networks. Nevertheless, even quantum networks can only be expected to
efficiently solve NP-hard problems approximately. In the end it is a question
of precision: Nature is approximate.
The Super-Turing Computational Power of Interactive Evolving Recurrent Neural Networks
Understanding the dynamical and computational capabilities of neural models represents an issue of central importance. Here, we consider a model of first-order recurrent neural networks provided with the possibility to evolve over time and involved in a basic interactive and memory-active computational paradigm. In this context, we prove that the so-called interactive evolving recurrent neural networks are computationally equivalent to interactive Turing machines with advice, and hence capable of super-Turing potentialities. We further provide a precise characterisation of the ω-translations realised by these networks. Therefore, the consideration of evolving capabilities in a first-order neural model provides the possibility of breaking the Turing barrier.
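As a rough intuition for what "evolving" means here, the following toy sketch (all names and the update rule are illustrative assumptions, not the paper's construction) runs a rational-weighted recurrent network whose weight matrix may change at every time step while it translates an input bit stream into an output bit stream:

```python
# Illustrative sketch of an interactive, evolving recurrent network:
# rational weights, one input bit consumed and one output bit emitted
# per step, with a weight *schedule* standing in for evolution over time.
from fractions import Fraction

def sigma(x):
    """Saturated-linear activation, common in rational-weighted models."""
    return max(Fraction(0), min(Fraction(1), x))

def run(input_bits, weight_schedule, n_units=3):
    state = [Fraction(0)] * n_units
    out = []
    for t, u in enumerate(input_bits):
        W = weight_schedule(t)  # evolving weights: W may differ at each step
        state = [
            sigma(sum(W[i][j] * state[j] for j in range(n_units))
                  + Fraction(u))  # the input bit is fed to every unit
            for i in range(n_units)
        ]
        out.append(int(state[0] >= Fraction(1, 2)))  # unit 0 is the output cell
    return out

# A toy schedule under which the weights decay slowly over time.
sched = lambda t: [[Fraction(1, 2 + t)] * 3 for _ in range(3)]
print(run([1, 0, 1, 1], sched))
```

In the paper's setting the (non-computable) evolution of the weights plays the role of the advice tape of an interactive Turing machine with advice; the computable schedule above is only meant to show the interactive input-output loop.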
The Algebraic Counterpart of the Wagner Hierarchy
The algebraic study of formal languages shows that ω-rational languages are exactly the sets recognizable by finite ω-semigroups. Within this framework, we provide a construction of the algebraic counterpart of the Wagner hierarchy. We adopt a hierarchical game approach, by translating the Wadge theory from the ω-rational language to the ω-semigroup context.
More precisely, we first define a reduction relation on finite pointed ω-semigroups by means of a Wadge-like infinite two-player game. The collection of these algebraic structures ordered by this reduction is then proven to be isomorphic to the Wagner hierarchy, namely a decidable and well-founded partial ordering of width 2 and height ω^ω.
Expressive Power of Non-deterministic Evolving Recurrent Neural Networks in Terms of Their Attractor Dynamics
We provide a characterization of the expressive powers of several models of nondeterministic recurrent neural networks according to their attractor dynamics. More precisely, we consider two forms of nondeterministic neural networks. In the first case, nondeterminism is expressed as an external binary guess stream processed by means of an additional Boolean guess cell. In the second case, nondeterminism is expressed as a set of possible evolving patterns that the synaptic connections of the network might follow over the successive time steps. In these two contexts, ten models of nondeterministic neural networks are considered, according to the nature of their synaptic weights. Overall, we prove that the static rational-weighted neural networks of type 1 are computationally equivalent to nondeterministic Muller Turing machines. They recognize the class of all effectively analytic (lightface Σ¹₁) sets. The nine other models of analog and/or evolving neural networks of types 1 and 2 are all computationally equivalent to each other, and strictly more powerful than nondeterministic Muller Turing machines. They recognize the class of all analytic (boldface Σ¹₁) sets.
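The first, "guess cell" form of nondeterminism can be pictured with a small sketch: a deterministic update receives an extra external guess bit at each step, and acceptance is existential over all guess streams. The dynamics below are a made-up stand-in, not the networks of the paper (which, moreover, operate on infinite words with Muller-style acceptance rather than on finite inputs):

```python
# Illustrative sketch of guess-stream nondeterminism: a finite input is
# accepted iff SOME binary guess sequence drives the system into an
# accepting state. The toy 2-bit transition function is an assumption.
from itertools import product

def step(state, inp, guess):
    # Toy dynamics: each bit of the state XOR-accumulates one stream.
    a, b = state
    return (a ^ inp, b ^ guess)

def accepts(input_bits, accepting=((1, 1),)):
    """Existential semantics: try every guess stream of the same
    length as the input."""
    for guesses in product((0, 1), repeat=len(input_bits)):
        state = (0, 0)
        for u, g in zip(input_bits, guesses):
            state = step(state, u, g)
        if state in accepting:
            return True
    return False

print(accepts([1, 0]))  # → True: guesses (1, 0) reach state (1, 1)
```

Enumerating guess streams costs 2^n trials for an input of length n; this brute force only conveys the existential flavour of the guess cell, not the infinite-word attractor dynamics studied in the paper.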
An infinite game on omega-semigroups
Jean-Éric Pin introduced the structure of an ω-semigroup in [PerPin04] as an algebraic counterpart to the concept of an automaton reading infinite words. It has been well studied since, especially by Carton and Perrin [CarPer97], [CarPer99], and by Wilke. We introduce a reduction relation on subsets of ω-semigroups defined by way of an infinite two-player game. Both the Wadge hierarchy and the Wagner hierarchy become special cases of the hierarchy induced by this reduction relation. Conversely, set-theoretical properties that occur naturally when studying these hierarchies turn out to have decisive algebraic counterparts. A game-theoretical characterization of basic algebraic concepts follows.