Tacit Representations and Artificial Intelligence: Hidden Lessons from an Embodied Perspective on Cognition
In this paper, I explore how an embodied perspective on cognition might
inform research on artificial intelligence. Many embodied cognition theorists object
to the central role that representations play on the traditional view of cognition.
Based on these objections, it may seem that the lesson from embodied cognition
is that AI should abandon representation as a central component of intelligence.
However, I argue that the lesson from embodied cognition is actually that AI
research should shift its focus from how to utilize explicit representations to how
to create and use tacit representations. To develop this suggestion, I provide an
overview of the commitments of the classical view and distinguish three critiques
of the role that representations play in that view. I provide further exploration and
defense of Daniel Dennett’s distinction between explicit and tacit representations.
I argue that we should understand the embodied cognition approach using a
framework that includes tacit representations. Given this perspective, I will explore
some AI research areas that may be recommended by an embodied perspective on
cognition.
Information Processing, Computation and Cognition
Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both – although others disagree vehemently. Yet different cognitive scientists use ‘computation’ and ‘information processing’ to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on the one hand and classicism and connectionism/computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates’ empirical aspects.
The Knowledge Level in Cognitive Architectures: Current Limitations and Possible Developments
In this paper we identify and characterize two problematic aspects affecting the representational level of cognitive architectures (CAs), namely: the limited size and the homogeneous typology of the encoded and processed knowledge.
We argue that these aspects constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviours in general scenarios, but also an epistemological one, since they limit the plausibility of comparing the CAs' knowledge representation and processing mechanisms with those used by humans in their everyday activities. In the final part of the paper we explore further directions of research that aim to address these current limitations and future challenges.
Towards a Quantum-Like Cognitive Architecture for Decision-Making
We propose an alternative and unifying framework for decision-making that, by
using quantum mechanics, provides more generalised cognitive and decision
models with the ability to represent more information than classical models.
This framework can accommodate and predict several cognitive biases reported by
Lieder & Griffiths without heavy reliance on heuristics or on assumptions about
the computational resources of the mind.
A Defence of Cartesian Materialism
One of the principal tasks Dennett sets himself in "Consciousness Explained" is to demolish the Cartesian theatre model of phenomenal consciousness, which in its contemporary garb takes the form of Cartesian materialism: the idea that conscious experience is a process of presentation realized in the physical materials of the brain. The now standard response to Dennett is that, in focusing on Cartesian materialism, he attacks an impossibly naive account of consciousness held by no one currently working in cognitive science or the philosophy of mind. Our response is quite different. We believe that, once properly formulated, Cartesian materialism is no straw man. Rather, it is an attractive hypothesis about the relationship between the computational architecture of the brain and phenomenal consciousness, and hence one that is worthy of further exploration. Consequently, our primary aim in this paper is to defend Cartesian materialism from Dennett's assault. We do this by showing that Dennett's argument against this position is founded on an implicit assumption (about the relationship between phenomenal experience and information coding in the brain), which while valid in the context of classical cognitive science, is not forced on connectionism
A Comparison of Different Cognitive Paradigms Using Simple Animats in a Virtual Laboratory, with Implications to the Notion of Cognition
In this thesis I present a virtual laboratory which implements five different models for controlling animats: a rule-based system, a behaviour-based system, a concept-based system, a neural network, and a Braitenberg architecture. Through different experiments, I compare the performance of the models and conclude that there is no best model, since different models are better for different things in different contexts. The models I chose, although quite simple, represent different approaches for studying cognition. Using the results as an empirical philosophical aid, I note that there is no best approach for studying cognition, since different approaches all have advantages and disadvantages, because they study different aspects of cognition from different contexts. This has implications for current debates on proper approaches to cognition: all approaches are somewhat proper, but none will be proper enough. I draw remarks on the notion of cognition abstracting from all the approaches used to study it, and propose a simple classification for different types of cognition.
A Connectionist Theory of Phenomenal Experience
When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as
many of them have been doing recently, there are two fundamentally distinct approaches available. Either
consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys; or
it is to be explained in terms of the computational processes defined over these vehicles. We call versions of
these two approaches vehicle and process theories of consciousness, respectively. However, while there may
be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because
of the influence exerted, on the one hand, by a large body of research which purports to show that the
explicit representation of information in the brain and conscious experience are dissociable, and on the
other, by the classical computational theory of mind – the theory that takes human cognition to be a species
of symbol manipulation. But two recent developments in cognitive science combine to suggest that a
reappraisal of this situation is in order. First, a number of theorists have recently been highly critical of the
experimental methodologies employed in the dissociation studies – so critical, in fact, that it’s no longer
reasonable to assume that the dissociability of conscious experience and explicit representation has been
adequately demonstrated. Second, classicism, as a theory of human cognition, is no longer as dominant in
cognitive science as it once was. It now has a lively competitor in the form of connectionism; and
connectionism, unlike classicism, does have the computational resources to support a robust vehicle theory
of consciousness. In this paper we develop and defend this connectionist vehicle theory of consciousness. It
takes the form of the following simple empirical hypothesis: phenomenal experience consists in the explicit
representation of information in neurally realized PDP networks. This hypothesis leads us to re-assess some
common wisdom about consciousness, but, we will argue, in fruitful and ultimately plausible ways.
Self-directedness, integration and higher cognition
In this paper I discuss connections between self-directedness, integration and higher cognition. I present a model of self-directedness as a basis for approaching higher cognition from a situated cognition perspective. According to this model, increases in sensorimotor complexity create pressure for integrative higher-order control and learning processes that acquire information about the context in which action occurs. This generates complex, articulated abstractive information processing, which forms the major basis for higher cognition. I present evidence indicating that the same integrative characteristics found in lower cognitive processes such as motor adaptation are present in a range of higher cognitive processes, including conceptual learning. This account helps explain situated cognition phenomena in humans because the integrative processes by which the brain adapts to control interaction are relatively agnostic concerning the source of the structure participating in the process. Thus, from the perspective of the motor control system, using a tool is not fundamentally different from simply controlling an arm.
Interactivist approach to representation in epigenetic agents
Interactivism is a vast and rather ambitious philosophical
and theoretical system originally developed by Mark
Bickhard, which covers a plethora of aspects related to
mind and person. Within interactivism, an agent is
regarded as an action system: an autonomous, self-organizing,
self-maintaining entity, which can exercise
actions and sense their effects in the environment it
inhabits. In this paper, we argue that interactivism is especially
suited to treating the problem of representation in
epigenetic agents. More precisely, we elaborate on a
process-based ontology for representations, and
sketch a way of discussing architectures for
epigenetic agents in a general manner.