Discrete and fuzzy dynamical genetic programming in the XCSF learning classifier system
A number of representation schemes have been presented for use within
learning classifier systems, ranging from binary encodings to neural networks.
This paper presents results from an investigation into using discrete and fuzzy
dynamical system representations within the XCSF learning classifier system. In
particular, asynchronous random Boolean networks are used to represent the
traditional condition-action production system rules in the discrete case and
asynchronous fuzzy logic networks in the continuous-valued case. It is shown
that self-adaptive, open-ended evolution can be used to design an ensemble of
such dynamical systems within XCSF to solve a number of well-known test
problems.
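As a rough illustration of the discrete representation described above, a minimal asynchronous random Boolean network can be sketched as follows. The node count, connectivity K, and single-node update scheme are illustrative assumptions, not the paper's exact configuration:

```python
import random

class AsyncRandomBooleanNetwork:
    """Minimal asynchronous random Boolean network (RBN) sketch.

    Hypothetical parameters: n_nodes, k (inputs per node), and a
    one-random-node-per-step update order are assumptions for
    illustration, not the configuration used in the paper.
    """

    def __init__(self, n_nodes=5, k=2, seed=0):
        rng = random.Random(seed)
        self.n = n_nodes
        # Each node reads K randomly chosen input nodes.
        self.inputs = [rng.sample(range(n_nodes), k) for _ in range(n_nodes)]
        # Each node has a random Boolean lookup table over its 2^K input patterns.
        self.tables = [[rng.randint(0, 1) for _ in range(2 ** k)]
                       for _ in range(n_nodes)]
        self.state = [rng.randint(0, 1) for _ in range(n_nodes)]
        self._rng = rng

    def step(self):
        """Asynchronous update: one randomly chosen node fires per step."""
        i = self._rng.randrange(self.n)
        # Pack the node's inputs into an index for its lookup table.
        idx = 0
        for src in self.inputs[i]:
            idx = (idx << 1) | self.state[src]
        self.state[i] = self.tables[i][idx]
        return self.state
```

In an XCSF-style setting, each classifier would carry one such network as its condition/action structure, with the network's topology and tables subject to evolutionary variation.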
Effect of dilution in asymmetric recurrent neural networks
We use numerical simulation to study the possible limit behaviors of
synchronous, discrete-time, deterministic recurrent neural networks composed of
N binary neurons, as a function of the network's degree of dilution and asymmetry.
The network dilution measures the fraction of neuron couples that are
connected, and the network asymmetry measures to what extent the underlying
connectivity matrix is asymmetric. For each given neural network, we study the
dynamical evolution of all the different initial conditions, thus
characterizing the full dynamical landscape without imposing any learning rule.
Because of the deterministic dynamics, each trajectory converges to an
attractor, which can be either a fixed point or a limit cycle. These attractors
form the set of all possible limit behaviors of the neural network. For
each network, we then determine the convergence times, the limit cycles'
lengths, the number of attractors, and the sizes of the attractors' basins. We
show that there are two network structures that maximize the number of possible
limit behaviors. The first optimal network structure is fully connected and
symmetric. In contrast, the second optimal network structure is highly
sparse and asymmetric; the latter resembles what is observed in various
biological neuronal circuits. These observations lead us to hypothesize that,
independently of any particular learning model, an efficient and effective
biological network that stores a number of limit behaviors close to its
maximum capacity tends to develop a connectivity structure similar to one of
the optimal networks we found.
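The attractor search described above can be sketched as follows. Because the dynamics are deterministic and the state space is finite, every trajectory must eventually revisit a state, at which point the transient length and cycle length can be read off. The sign nonlinearity, Gaussian couplings, and dilution mask here are illustrative assumptions rather than the paper's exact model:

```python
import numpy as np

def find_attractor(J, x0, max_steps=10_000):
    """Iterate x_{t+1} = sign(J @ x_t) synchronously until a state repeats.

    Returns (transient_length, cycle_length); a cycle length of 1 is a
    fixed point. The +-1 neuron encoding and sign update rule are
    assumptions for illustration.
    """
    seen = {}                                 # state bytes -> first visit time
    x = np.asarray(x0, dtype=int)
    for t in range(max_steps):
        key = x.tobytes()
        if key in seen:
            return seen[key], t - seen[key]   # (transient, cycle length)
        seen[key] = t
        h = J @ x
        x = np.where(h >= 0, 1, -1)           # synchronous deterministic update
    raise RuntimeError("no attractor found within max_steps")

# Example: a diluted, asymmetric random coupling matrix. The dilution mask
# keeps each directed coupling independently with probability `dilution`.
rng = np.random.default_rng(0)
N, dilution = 8, 0.5
J = rng.normal(size=(N, N)) * (rng.random((N, N)) < dilution)
transient, cycle = find_attractor(J, rng.choice([-1, 1], size=N))
```

Enumerating all 2^N initial conditions with this routine and collecting the distinct cycles gives the full dynamical landscape (attractor count, cycle lengths, and basin sizes) for one sampled network.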
A model of the emergence and evolution of integrated worldviews
It is proposed that the ability of humans to flourish in diverse
environments and evolve complex cultures reflects the following two
underlying cognitive transitions. The transition from the
coarse-grained associative memory of Homo habilis to the
fine-grained memory of Homo erectus enabled limited
representational redescription of perceptually similar episodes,
abstraction, and analytic thought, the last of which is modeled as
the formation of states and of lattices of properties and contexts
for concepts. The transition to the modern mind of Homo
sapiens is proposed to have resulted from the onset of the capacity to
spontaneously and temporarily shift to an associative mode of thought
conducive to interaction amongst seemingly disparate concepts,
modeled as the forging of conjunctions resulting in states of
entanglement. The fruits of associative thought became ingredients
for analytic thought, and vice versa. The ratio of
associative pathways to concepts surpassed a percolation threshold,
resulting in the emergence of a self-modifying, integrated internal
model of the world, or worldview.