Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience
Physical symbol systems are needed for open-ended cognition. A good way to
understand physical symbol systems is by comparison of thought to chemistry.
Both have systematicity, productivity and compositionality. The state of the
art in cognitive architectures for open-ended cognition is critically assessed.
I conclude that a cognitive architecture that evolves symbol structures in the
brain is a promising candidate to explain open-ended cognition. Part 2 of the
paper presents such a cognitive architecture.
Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living Machines 2013, Natural History Museum, London.
Rethinking the Physical Symbol Systems Hypothesis
It is now more than a half-century since the Physical Symbol Systems
Hypothesis (PSSH) was first articulated as an empirical hypothesis. More recent
evidence from work with neural networks and cognitive architectures has
weakened it, but it has not yet been replaced in any satisfactory manner. Based
on a rethinking of the nature of computational symbols -- as atoms or
placeholders -- and thus also of the systems in which they participate, a
hybrid approach is introduced that responds to these challenges while also
helping to bridge the gap between symbolic and neural approaches, resulting in
two new hypotheses, one intended to replace the PSSH and the other focused more
directly on cognitive architectures.
Comment: Final version published at the 16th Annual AGI Conference, 202
Meaning, autonomy, symbolic causality, and free will
As physical entities that translate symbols into physical actions, computers offer insights
into the nature of meaning and agency.
• Physical symbol systems, generically known as agents, link abstractions to material actions.
The meaning of a symbol is defined as the physical actions an agent takes when the symbol is
encountered.
• An agent has autonomy when it has the power to select actions based on internal decision
processes. Autonomy offers a partial escape from constraints imposed by direct physical
influences such as gravity and the transfer of momentum. Swimming upstream is an
example.
• Symbols are names that can designate other entities. It appears difficult to explain the use of
names and symbols in terms of more primitive functionality. The ability to use names and
symbols, i.e., symbol grounding, may be a fundamental cognitive building block.
• The standard understanding of causality—wiggling X results in Y wiggling—applies to both
physical causes (e.g., one billiard ball hitting another) and symbolic causes (e.g., a traffic light
changing color). Because symbols are abstract, they cannot produce direct physical effects.
For a symbol to be a cause requires that the affected entity determine its own response. This
is called autonomous causality.
• This analysis of meaning and autonomy offers new perspectives on free will.
Hierarchical modularity in human brain functional networks
The idea that complex systems have a hierarchical modular organization
originates in the early 1960s and has recently attracted fresh support from
quantitative studies of large-scale, real-life networks. Here we investigate
the hierarchical modular (or "modules-within-modules") decomposition of human
brain functional networks, measured using functional magnetic resonance imaging
(fMRI) in 18 healthy volunteers under no-task or resting conditions. We used a
customized template to extract networks with more than 1800 regional nodes, and
we applied a fast algorithm to identify nested modular structure at several
hierarchical levels. We used mutual information, 0 < I < 1, to estimate the
similarity of community structure of networks in different subjects, and to
identify the individual network that is most representative of the group.
Results show that human brain functional networks have a hierarchical modular
organization with a fair degree of similarity between subjects, I=0.63. The
largest 5 modules at the highest level of the hierarchy were medial occipital,
lateral occipital, central, parieto-frontal and fronto-temporal systems;
occipital modules demonstrated less sub-modular organization than modules
comprising regions of multimodal association cortex. Connector nodes and hubs,
with a key role in inter-modular connectivity, were also concentrated in
association cortical areas. We conclude that methods are available for
hierarchical modular decomposition of large numbers of high resolution brain
functional networks using computationally expedient algorithms. This could
enable future investigations of Simon's original hypothesis that hierarchy or
near-decomposability of physical symbol systems is a critical design feature
for their fast adaptivity to changing environmental conditions.
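The partition-similarity measure described above (mutual information between community structures, normalized to lie between 0 and 1) can be sketched as follows. This is a minimal illustration of the general technique, not code from the study; the example labelings are invented.

```python
# Minimal sketch: normalized mutual information (NMI) between two modular
# decompositions, each given as a list of module labels, one per node.
from collections import Counter
import math

def normalized_mutual_information(a, b):
    """NMI in [0, 1]; 1 means identical community structure."""
    n = len(a)
    assert n == len(b) and n > 0
    pa, pb = Counter(a), Counter(b)
    joint = Counter(zip(a, b))
    # Mutual information: sum over co-occurring labels of p(x,y) log(p(x,y) / (p(x) p(y)))
    mi = 0.0
    for (x, y), nxy in joint.items():
        pxy = nxy / n
        mi += pxy * math.log(pxy * n * n / (pa[x] * pb[y]))
    # Entropies of each partition, used for normalization
    ha = -sum((c / n) * math.log(c / n) for c in pa.values())
    hb = -sum((c / n) * math.log(c / n) for c in pb.values())
    if ha == 0 or hb == 0:
        return 1.0 if a == b else 0.0  # degenerate single-module partitions
    return mi / math.sqrt(ha * hb)

# Identical partitions score 1; dissimilar ones score closer to 0.
subj1 = [0, 0, 0, 1, 1, 1, 2, 2]
subj2 = [0, 0, 1, 1, 1, 2, 2, 2]
print(round(normalized_mutual_information(subj1, subj1), 2))  # → 1.0
```

Averaging such pairwise scores across subjects is one way to pick the individual network most representative of the group, as the abstract describes.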
Algebraic symbolism as a conceptual barrier in learning mathematics
The use of symbolism in mathematics is probably the most frequently cited reason people give for their lack of understanding and their difficulties in learning mathematics. We consider symbolism as a conceptual barrier, drawing on some recent findings in historical epistemology and cognitive psychology. Instead of relying on the narrow psychological interpretation of epistemic obstacles, we use the barrier to situate symbolism in the ‘ontogeny recapitulates phylogeny’ debate. Drawing on a recent study within historical epistemology, we show how early symbolism functioned in a way similar to concrete operational schemes. Furthermore, we discuss several studies from cognitive psychology which conclude that symbolism is not as abstract and arbitrary as commonly assumed but often relies on perceptually organized grouping and concrete spatial relations. We use operations on fractions to show that the reliance on concrete spatial operations also provides opportunities for teaching. We conclude by arguing that a better conceptual understanding of symbolism will prepare teachers for the difficulties that students may be confronted with in the classroom.
Simon's Bounded Rationality. Origins and use in economic theory
The paper aims to show how Simon's notion of bounded rationality should be interpreted in the light of its connection with artificial intelligence. This connection points out that bounded rationality is a highly structured concept, and sheds light on several implications of Simon's general views on rationality. Finally, offering three paradigmatic examples, the article presents the view that recent approaches, which refer to Simon's heterodox theory, only partially accept the teachings of their inspirer, splitting bounded rationality from the context of artificial intelligence.
Symbol grounding and its implications for artificial intelligence
In response to Searle's well-known Chinese room argument against Strong AI (and, more generally, computationalism), Harnad proposed that if the symbols manipulated by a robot were sufficiently grounded in the real world, then the robot could be said to literally understand. In this article, I expand on the notion of symbol groundedness in three ways. Firstly, I show how a robot might select the best set of categories describing the world, given that fundamentally continuous sensory data can be categorised in an almost infinite number of ways. Secondly, I discuss the notion of grounded abstract (as opposed to concrete) concepts. Thirdly, I give an objective criterion for deciding when a robot's symbols become sufficiently grounded for "understanding" to be attributed to it. This deeper analysis of what symbol groundedness actually is weakens Searle's position in significant ways; in particular, whilst Searle may be able to refute Strong AI in the specific context of present-day digital computers, he cannot refute computationalism in general.
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding the dynamics of symbol systems is crucially important both for
understanding human social interactions and for developing a robot that can
communicate smoothly with human users over the long term. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, that enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual,
haptic, and auditory information and acoustic speech signals, in a totally
unsupervised manner. Finally, we suggest future directions of research in SER.
Comment: Submitted to Advanced Robotics.
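The unsupervised multimodal categorization mentioned above can be illustrated with a toy sketch: concatenate feature vectors from several modalities (here, invented "visual" and "haptic" features) and cluster them without any labels. This is my own minimal illustration of the general idea, not code or data from the survey.

```python
# Toy multimodal categorization: plain k-means over concatenated
# per-modality feature vectors, with deterministic farthest-point
# initialization so the result is reproducible.

def kmeans(points, k, iters=20):
    """Return a cluster index for each point (lists of floats)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    # Farthest-point initialization: start from the first point, then
    # repeatedly add the point farthest from all chosen centers.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(d2(p, c) for c in centers)))
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: d2(p, centers[c]))
        # Update step: each center becomes the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return labels

# Each "object" = [visual features] + [haptic feature], made-up data:
# two perceptually distinct groups of objects.
soft_red  = [[0.9, 0.1, 0.2 + 0.05 * i] for i in range(4)]
hard_blue = [[0.1, 0.9, 0.8 - 0.05 * i] for i in range(4)]
labels = kmeans(soft_red + hard_blue, k=2)
print(labels)  # → [0, 0, 0, 0, 1, 1, 1, 1]
```

Real SER systems use far richer models (e.g., multimodal latent variable models over raw sensor streams), but the core move is the same: categories emerge from the statistics of the data rather than from supervised labels.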
What Is Cognitive Psychology? A Consideration
There are two areas in cognitive psychology. In an important paper, Newell (1990) described some properties of SOAR within cognitive science. Newell notes that computation is necessarily local-internal; symbols are therefore needed to represent the external world. Newell's SOAR architecture is the most ambitious PSS (Physical Symbol System) to date. SOAR uses a production system as its foundation and functions as a recognize-act system. Modularity has been offered as another framework for studying the fundamental level of behavior. The ecological approach rejects the notion that perception is a form of knowing, and ecological realists reject the internal processes of the perceiver.