A Cognitive Science Based Machine Learning Architecture
In an attempt to illustrate the application of cognitive science principles to hard AI problems in machine learning, we propose the LIDA technology, a cognitive-science-based architecture capable of more human-like learning. A LIDA-based software agent or cognitive robot will be capable of three fundamental, continuously active, human-like learning mechanisms:
1) perceptual learning, the learning of new objects, categories, relations, etc.;
2) episodic learning of events: the what, where, and when;
3) procedural learning, the learning of new actions and action sequences with which to accomplish new tasks.
The paper argues for the use of modular components, each specializing in implementing an individual facet of human and animal cognition, as a viable approach towards achieving general intelligence.
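The three learning mechanisms and the modular decomposition the abstract argues for can be illustrated with a minimal sketch. All class and method names here are hypothetical, not taken from the LIDA implementation:

```python
# Minimal sketch of three continuously active learning modules composed
# into one agent, mirroring the modular-components argument above.
# (Illustrative only; not the actual LIDA architecture.)

class PerceptualLearning:
    """Learns new objects/categories by recording previously unseen percepts."""
    def __init__(self):
        self.known = set()
    def observe(self, percept):
        novel = percept not in self.known
        self.known.add(percept)
        return novel  # True the first time a percept is encountered

class EpisodicLearning:
    """Stores events as (what, where, when) triples."""
    def __init__(self):
        self.episodes = []
    def record(self, what, where, when):
        self.episodes.append((what, where, when))

class ProceduralLearning:
    """Remembers action sequences that accomplished a task."""
    def __init__(self):
        self.procedures = {}
    def reinforce(self, task, actions):
        self.procedures[task] = list(actions)

class Agent:
    """Modular composition: each module specializes in one facet of cognition."""
    def __init__(self):
        self.perceptual = PerceptualLearning()
        self.episodic = EpisodicLearning()
        self.procedural = ProceduralLearning()

agent = Agent()
agent.perceptual.observe("cup")
agent.episodic.record("saw cup", "kitchen", 0)
agent.procedural.reinforce("grasp cup", ["reach", "close gripper"])
```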
Synthetic Semiotics: on modelling and simulating the emergence of sign processes
Based on formal-theoretical principles about the sign processes involved, we have built synthetic experiments to investigate the emergence of communication based on symbols and indexes in a distributed system of sign users, following theoretical constraints from C. S. Peirce's theory of signs, in a Synthetic Semiotics approach. In this paper, we summarize these computational experiments and results regarding associative learning processes of the symbolic sign modality and the cognitive conditions, in an evolutionary process, for the emergence of either symbol-based or index-based communication.
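The associative learning of a sign-referent mapping that the abstract summarizes can be sketched in a few lines. This is a deliberately simplified model, not the authors' actual experimental setup:

```python
# Hedged sketch of associative sign learning: an agent strengthens a
# (sign, referent) association each time the two co-occur, and interprets
# a sign as its most strongly associated referent.
from collections import defaultdict

class SignUser:
    def __init__(self):
        self.assoc = defaultdict(float)  # (sign, referent) -> strength

    def learn(self, sign, referent, rate=0.2):
        self.assoc[(sign, referent)] += rate

    def interpret(self, sign, candidates):
        return max(candidates, key=lambda r: self.assoc[(sign, r)])

# Repeated co-occurrence (indexical grounding) dominates a single
# spurious pairing, so the sign comes to stand for its referent.
user = SignUser()
for _ in range(10):
    user.learn("alarm", "predator")
user.learn("alarm", "food")
assert user.interpret("alarm", ["predator", "food"]) == "predator"
```

In Peircean terms, the learned association is indexical while it depends on co-occurrence in context; symbol-based use would require the association to function even when the referent is absent, which is the cognitive condition the experiments probe.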
Neural Networks Architecture Evaluation in a Quantum Computer
In this work, we propose a quantum algorithm to evaluate neural network architectures, named Quantum Neural Network Architecture Evaluation (QNNAE). The proposed algorithm is based on a quantum associative memory and the learning algorithm for artificial neural networks. Unlike conventional algorithms for evaluating neural network architectures, QNNAE does not depend on the initialization of weights. The proposed algorithm has a binary output and returns 0 with probability proportional to the performance of the network. Its computational cost is equal to the cost of training a single neural network.
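The key property claimed in the abstract, a binary output that is 0 with probability proportional to the network's performance, can be mimicked classically to see how such an oracle would be used. This sketch is hypothetical and is not the quantum algorithm itself:

```python
# Classical simulation of a QNNAE-style binary oracle: sampling the
# output many times recovers an estimate of the architecture's
# performance, which can then be used to rank architectures.
# (Illustrative assumption: P(output = 0) equals the performance score.)
import random

def estimate_performance(performance, trials=1000, rng=random.Random(42)):
    """Estimate performance by counting how often the 0/1 oracle returns 0."""
    zeros = sum(1 for _ in range(trials) if rng.random() < performance)
    return zeros / trials

# An architecture with true performance 0.9 produces 0 far more often
# than one with true performance 0.3, so repeated sampling ranks them.
```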
Self-directedness, integration and higher cognition
In this paper I discuss connections between self-directedness, integration and higher cognition. I present a model of self-directedness as a basis for approaching higher cognition from a situated cognition perspective. According to this model, increases in sensorimotor complexity create pressure for integrative higher-order control and learning processes for acquiring information about the context in which action occurs. This generates complex articulated abstractive information processing, which forms the major basis for higher cognition. I present evidence that the same integrative characteristics found in lower cognitive processes such as motor adaptation are present in a range of higher cognitive processes, including conceptual learning. This account helps explain situated cognition phenomena in humans because the integrative processes by which the brain adapts to control interaction are relatively agnostic concerning the source of the structure participating in the process. Thus, from the perspective of the motor control system, using a tool is not fundamentally different from simply controlling an arm.
Economic growth, innovation systems, and institutional change: a trilogy in five parts
Development and growth are products of the interplay and interaction among heterogeneous actors operating in specific institutional settings. There is a much alluded-to, but under-investigated, link between economic growth, innovation systems, and institutions. There is widespread agreement among most economists on the positive reinforcing link between innovation and growth. However, the importance of institutions as catalysts in this link has not been adequately examined. The concept of innovation systems has the potential to fill this gap, but studies of innovation systems have not conducted in-depth institutional analyses or focussed on institutional transformation processes, thereby failing to link growth theory to the substantive institutional tradition in economics. In this paper we draw attention to the main shortcomings of orthodox and heterodox growth theories, some of which have been addressed by the more descriptive literature on innovation systems. Critical overviews of the literatures on growth and innovation systems are used as a foundation to propose a new perspective on the role of institutions and a framework for conducting institutional analysis using a multi-dimensional typology of institutions. The framework is then applied to the cases of Taiwan and South Korea to highlight the instrumental role played by institutions in facilitating and curtailing economic development and growth.
A model of the emergence and evolution of integrated worldviews
It is proposed that the ability of humans to flourish in diverse environments and evolve complex cultures reflects the following two underlying cognitive transitions. The transition from the coarse-grained associative memory of Homo habilis to the fine-grained memory of Homo erectus enabled limited representational redescription of perceptually similar episodes, abstraction, and analytic thought, the last of which is modeled as the formation of states and of lattices of properties and contexts for concepts. The transition to the modern mind of Homo sapiens is proposed to have resulted from the onset of the capacity to spontaneously and temporarily shift to an associative mode of thought conducive to interaction amongst seemingly disparate concepts, modeled as the forging of conjunctions resulting in states of entanglement. The fruits of associative thought became ingredients for analytic thought, and vice versa. The ratio of associative pathways to concepts surpassed a percolation threshold, resulting in the emergence of a self-modifying, integrated internal model of the world, or worldview.
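The percolation claim at the end of the abstract has a standard random-graph reading: once the ratio of associative pathways (edges) to concepts (nodes) passes a threshold, one giant connected cluster emerges. The following sketch illustrates that transition; the parameters are illustrative, not the paper's model:

```python
# Percolation sketch: build a random graph with (ratio * n) edges over n
# concept-nodes using union-find, then measure the largest connected
# cluster as a fraction of all nodes. Below the threshold clusters stay
# small; above it most concepts join one integrated component.
import random

def largest_cluster(n, ratio, seed=0):
    rng = random.Random(seed)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for _ in range(int(ratio * n)):        # edges = ratio * nodes
        a, b = rng.randrange(n), rng.randrange(n)
        parent[find(a)] = find(b)
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

# A sparse network (ratio 0.2) fragments into small islands, while a
# denser one (ratio 1.5) integrates nearly all concepts into one cluster.
sparse = largest_cluster(2000, 0.2)
dense = largest_cluster(2000, 1.5)
```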
Understanding Evolutionary Potential in Virtual CPU Instruction Set Architectures
We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs that are used both as model systems in evolutionary biology and as underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general-purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions; (2) memory: we provided an increased number of registers in the virtual CPUs; (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions; (5) position-relative search instructions; and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments. Some of the remaining tested modifications were detrimental, though most exhibited no systematic effects on evolutionary potential, highlighting the robustness of digital evolution. Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges.
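A linear genetic program of the kind the abstract studies is simply a flat list of register-level instructions executed in order. The following toy interpreter illustrates the idea, including the decoupled input/output operations evaluated as feature (3); the instruction names and register set are illustrative, not Avida's actual ISA:

```python
# Toy virtual CPU for a linear genetic program: the genome is a list of
# (opcode, *args) tuples operating on named registers, with separated
# input (sensor) and output (actuator) instructions.

def run(genome, inputs):
    regs = {"AX": 0, "BX": 0}
    out = []
    stream = iter(inputs)
    for op, *args in genome:
        if op == "input":            # decoupled sensor: read into a register
            regs[args[0]] = next(stream, 0)
        elif op == "add":
            regs[args[0]] += regs[args[1]]
        elif op == "nand":           # logic primitive common in digital evolution
            regs[args[0]] = ~(regs[args[0]] & regs[args[1]]) & 0xFF
        elif op == "output":         # decoupled actuator: emit a register value
            out.append(regs[args[0]])
    return out

# A hand-written "genome" that reads two values and outputs their sum;
# in evolutionary computation such genomes are mutated and selected
# rather than written by hand.
genome = [("input", "AX"), ("input", "BX"), ("add", "AX", "BX"), ("output", "AX")]
assert run(genome, [2, 3]) == [5]
```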