Connecting adaptive behaviour and expectations in models of innovation: The Potential Role of Artificial Neural Networks
In this methodological work, I explore the possibility of explicitly modelling the expectations that condition firms' R&D decisions. To isolate this problem from the controversies of cognitive science, I propose a black-box strategy built around the concept of an "internal model". The last part of the article uses artificial neural networks to model firms' expectations in a model of industry dynamics based on Nelson & Winter (1982).
Learning, Generalization, and Functional Entropy in Random Automata Networks
It has been shown (Broeck 1990; Patarnello 1987) that
feedforward Boolean networks can learn to perform specific simple tasks and
generalize well when only a subset of the examples is provided during
training. Here, we extend this body of work and show experimentally that random
Boolean networks (RBNs), where both the interconnections and the Boolean
transfer functions are chosen at random initially, can be evolved by using a
state-topology evolution to solve simple tasks. We measure the learning and
generalization performance, investigate the influence of the average node
connectivity and the system size, and introduce a new measure that better
describes the network's learning and generalization behavior. We show
that the connectivity of the maximum-entropy networks scales as a power law
of the system size. Our results show that networks with higher average
connectivity (supercritical) achieve higher memorization and partial
generalization, whereas networks near critical connectivity show higher
perfect generalization on the even-odd task.
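The setup described above — random wiring plus random Boolean transfer functions, updated synchronously — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the node count, connectivity, and seed are arbitrary.

```python
import random

def random_rbn(n, k, seed=0):
    """Build a random Boolean network: each of n nodes reads k randomly
    chosen inputs and applies a random Boolean transfer function,
    stored as a truth table of length 2**k."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs' current values."""
    new = []
    for node in range(len(state)):
        idx = 0
        for src in inputs[node]:
            idx = (idx << 1) | state[src]  # pack input bits into a table index
        new.append(tables[node][idx])
    return new

inputs, tables = random_rbn(n=8, k=2)
state = [0, 1, 0, 1, 1, 0, 0, 1]
for _ in range(5):
    state = step(state, inputs, tables)
```

A state-topology evolution would mutate both `inputs` (the wiring) and `tables` (the transfer functions) while selecting for task performance.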
The evolution of representation in simple cognitive networks
Representations are internal models of the environment that can provide
guidance to a behaving agent, even in the absence of sensory information. It is
not clear how representations are developed and whether or not they are
necessary or even essential for intelligent behavior. We argue here that the
ability to represent relevant features of the environment is the expected
consequence of an adaptive process, give a formal definition of representation
based on information theory, and quantify it with a measure R. To measure how R
changes over time, we evolve two types of networks---an artificial neural
network and a network of hidden Markov gates---to solve a categorization task
using a genetic algorithm. We find that the capacity to represent increases
during evolutionary adaptation, and that agents form representations of their
environment during their lifetime. This ability allows the agents to act on
sensorial inputs in the context of their acquired representations and enables
complex and context-dependent behavior. We examine which concepts (features of
the environment) our networks are representing, how the representations are
logically encoded in the networks, and how they form as an agent behaves to
solve a task. We conclude that R should be able to quantify the representations
within any cognitive system, and should be predictive of an agent's long-term
adaptive success. Comment: 36 pages, 10 figures, one table
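The measure R above is defined information-theoretically but not spelled out in the abstract. A plausible reading (an assumption here, not the paper's stated formula) is the conditional mutual information I(E; M | S): how much the agent's internal (memory) states share with the environment beyond what its sensors already carry. A plug-in estimator from sampled (environment, memory, sensor) triples:

```python
from collections import Counter
from math import log2

def cond_mutual_info(triples):
    """Plug-in estimate of I(E; M | S) from (environment, memory, sensor)
    samples: information shared between environment and memory states
    beyond what the sensors already provide. Assumed form of R."""
    n = len(triples)
    p_ems = Counter(triples)
    p_es = Counter((e, s) for e, m, s in triples)
    p_ms = Counter((m, s) for e, m, s in triples)
    p_s = Counter(s for e, m, s in triples)
    total = 0.0
    for (e, m, s), c in p_ems.items():
        p_joint = c / n
        total += p_joint * log2(
            (p_joint * (p_s[s] / n)) /
            ((p_es[(e, s)] / n) * (p_ms[(m, s)] / n))
        )
    return total
```

With memory perfectly tracking a binary environment and uninformative sensors, the estimate is 1 bit; with memory independent of the environment given the sensors, it is 0.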
A comprehensible SOM-based scoring system.
The significant growth of consumer credit has resulted in a wide range of statistical and non-statistical methods for classifying applicants into 'good' and 'bad' risk categories. Traditionally, (logistic) regression has been one of the most popular methods for this task, but recently newer techniques such as neural networks and support vector machines have shown excellent classification performance. Self-organizing maps (SOMs) have existed for decades, and although they have been used in various application areas, little research has investigated their appropriateness for credit scoring. In this paper, it is shown how a trained SOM can be used for classification and how the basic SOM algorithm can be integrated with supervised techniques like the multi-layered perceptron. Classification accuracy of the models is benchmarked against previously reported results.
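One common way to use a trained SOM for classification, consistent with (though not necessarily identical to) the approach above, is to label each map unit with the majority class of the training applicants it wins, then classify new applicants by their best-matching unit (BMU). A minimal sketch, assuming the map's prototype vectors are already trained:

```python
import numpy as np

def label_units(prototypes, X_train, y_train):
    """Assign each SOM unit the majority class of the training samples
    for which it is the best-matching unit (BMU); None if it wins none."""
    votes = [[] for _ in prototypes]
    for x, y in zip(X_train, y_train):
        bmu = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
        votes[bmu].append(y)
    return [max(set(v), key=v.count) if v else None for v in votes]

def classify(prototypes, labels, x):
    """Classify a new applicant by the label of its BMU."""
    return labels[int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))]

# Hypothetical 2-unit map over 2-D applicant features.
prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = label_units(prototypes,
                     np.array([[0.1, 0.0], [0.9, 1.0]]),
                     ['good', 'bad'])
```

Integrating the SOM with a supervised model (e.g. feeding BMU coordinates into a multi-layered perceptron) would replace the majority vote with a trained classifier.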
Evolution and Analysis of Embodied Spiking Neural Networks Reveals Task-Specific Clusters of Effective Networks
Elucidating principles that underlie computation in neural networks is
currently a major research topic of interest in neuroscience. Transfer Entropy
(TE) is increasingly used as a tool to bridge the gap between network
structure, function, and behavior in fMRI studies. Computational models allow
us to bridge the gap even further by directly associating individual neuron
activity with behavior. However, most computational models that have analyzed
embodied behaviors have employed non-spiking neurons. On the other hand,
computational models that employ spiking neural networks tend to be restricted
to disembodied tasks. We present, for the first time, the artificial evolution
and TE analysis of embodied spiking neural networks performing a
cognitively interesting behavior. Specifically, we evolved an agent controlled
by an Izhikevich neural network to perform a visual categorization task. The
smallest networks capable of performing the task were found by repeating
evolutionary runs with different network sizes. Informational analysis of the
best solution revealed task-specific TE-network clusters, suggesting that
within-task homogeneity and across-task heterogeneity were key to behavioral
success. Moreover, analysis of the ensemble of solutions revealed that
task-specificity of TE-network clusters correlated with fitness. This provides
an empirically testable hypothesis that links network structure to behavior. Comment: Camera-ready version accepted for GECCO'1
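The Izhikevich model used for the agent's neurons above combines a quadratic membrane equation with a slow recovery variable and a reset rule. A minimal single-neuron sketch with Euler integration (the parameters shown are the standard regular-spiking defaults, not necessarily those evolved in the paper):

```python
def izhikevich(I, T=1000, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron under constant input current I
    for T ms with Euler steps of dt ms; returns spike times in ms.
    v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u);
    on v >= 30 mV: v <- c, u <- u + d."""
    v, u = c, b * c               # membrane potential and recovery variable
    spikes = []
    for i in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:             # spike: record time, then reset
            spikes.append(i * dt)
            v = c
            u += d
    return spikes

spikes = izhikevich(I=10.0)       # sustained input -> tonic firing
```

A TE analysis like the one described would then be run on the binary spike trains of many such coupled neurons, recorded while the embodied agent performs the categorization task.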