Tensor Products and Split-Level Architecture: Foundational Issues in the Classicism-Connectionism Debate
This paper responds to criticisms levelled by Fodor, Pylyshyn and McLaughlin against connectionism. Specifically, I will rebut the charge that connectionists cannot account for representational systematicity without implementing a classical architecture. This will be accomplished by drawing on Paul Smolensky's Tensor Product model of representation and on his insights about split-level architectures.
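The core idea of Smolensky's Tensor Product representation can be sketched numerically: a filler (constituent) is bound to a role (structural position) by their outer product, and the bindings are superposed into a single distributed state. The following minimal NumPy sketch is not from the paper; the particular role and filler vectors are illustrative assumptions, with roles chosen orthonormal so that exact unbinding works.

```python
import numpy as np

# Roles (structural positions) as orthonormal vectors (an assumption that
# makes unbinding exact; Smolensky also treats the non-orthonormal case):
r1 = np.array([1.0, 0.0])   # role: first position
r2 = np.array([0.0, 1.0])   # role: second position

# Fillers (constituents) as distributed vectors (illustrative values):
john = np.array([1.0, 0.0, 0.0])
mary = np.array([0.0, 1.0, 0.0])

# Bind each filler to its role with an outer product, then superpose:
# the structure is represented by a single tensor, not by concatenated symbols.
s = np.outer(john, r1) + np.outer(mary, r2)

# Unbind: because the roles are orthonormal, contracting the tensor with a
# role vector recovers the filler bound to that role.
print(np.allclose(s @ r1, john))  # True
print(np.allclose(s @ r2, mary))  # True
```

Because the bound structure lives in a vector space rather than in a symbol string, this is the kind of construction used to argue that systematic structure need not be carried by a classical symbolic architecture.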
A Connectionist Theory of Phenomenal Experience
When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as
many of them have been doing recently, there are two fundamentally distinct approaches available. Either
consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys; or
it is to be explained in terms of the computational processes defined over these vehicles. We call versions of
these two approaches vehicle and process theories of consciousness, respectively. However, while there may
be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because
of the influence exerted, on the one hand, by a large body of research which purports to show that the
explicit representation of information in the brain and conscious experience are dissociable, and on the
other, by the classical computational theory of mind – the theory that takes human cognition to be a species
of symbol manipulation. But two recent developments in cognitive science combine to suggest that a
reappraisal of this situation is in order. First, a number of theorists have recently been highly critical of the
experimental methodologies employed in the dissociation studies – so critical, in fact, that it’s no longer
reasonable to assume that the dissociability of conscious experience and explicit representation has been
adequately demonstrated. Second, classicism, as a theory of human cognition, is no longer as dominant in
cognitive science as it once was. It now has a lively competitor in the form of connectionism; and
connectionism, unlike classicism, does have the computational resources to support a robust vehicle theory
of consciousness. In this paper we develop and defend this connectionist vehicle theory of consciousness. It
takes the form of the following simple empirical hypothesis: phenomenal experience consists in the explicit
representation of information in neurally realized PDP networks. This hypothesis leads us to re-assess some
common wisdom about consciousness, but, we will argue, in fruitful and ultimately plausible ways.
The Mode of Computing
The Turing Machine is the paradigmatic case of computing machines, but there
are others, such as Artificial Neural Networks, Table Computing,
Relational-Indeterminate Computing and diverse forms of analogical computing,
each of which is based on a particular underlying intuition of the phenomenon of
computing. This variety can be captured in terms of system levels,
re-interpreting and generalizing Newell's hierarchy, which includes the
knowledge level at the top and the symbol level immediately below it. In this
re-interpretation the knowledge level consists of human knowledge and the
symbol level is generalized into a new level that here is called The Mode of
Computing. Natural computing performed by the brains of humans and non-human
animals with a developed enough neural system should be understood in terms of
a hierarchy of system levels too. By analogy from standard computing machinery
there must be a system level above the neural circuitry levels and directly
below the knowledge level that is named here The Mode of Natural Computing. A
central question for Cognition is the characterization of this mode. The Mode
of Computing provides a novel perspective on the phenomena of computing,
interpreting, the representational and non-representational views of cognition,
and consciousness. Comment: 35 pages, 8 figures.
A Defence of Cartesian Materialism
One of the principal tasks Dennett sets himself in "Consciousness Explained" is to demolish the Cartesian theatre model of phenomenal consciousness, which in its contemporary garb takes the form of Cartesian materialism: the idea that conscious experience is a process of presentation realized in the physical materials of the brain. The now standard response to Dennett is that, in focusing on Cartesian materialism, he attacks an impossibly naive account of consciousness held by no one currently working in cognitive science or the philosophy of mind. Our response is quite different. We believe that, once properly formulated, Cartesian materialism is no straw man. Rather, it is an attractive hypothesis about the relationship between the computational architecture of the brain and phenomenal consciousness, and hence one that is worthy of further exploration. Consequently, our primary aim in this paper is to defend Cartesian materialism from Dennett's assault. We do this by showing that Dennett's argument against this position is founded on an implicit assumption (about the relationship between phenomenal experience and information coding in the brain) which, while valid in the context of classical cognitive science, is not forced on connectionism.
Does Connectionism undermine Fodor’s Language of Thought Hypothesis?
In 1975, Fodor hypothesised that thought is structured in much the same way as language. Thoughts have semantics, a combinatorial syntax, and store information symbolically. In the 1980s, Connectionism looked to undermine his view. It suggested that mental information is stored non-symbolically in neural nets; it was considered a “paradigm shift” for cognitive theories. In the 1990s, further work by Chalmers and Rowlands undermined Fodor’s Language of Thought Hypothesis. Modern cognitive research into Deep Learning uses an inherently Connectionist framework.
This paper separates Fodor’s hypothesis from his arguments in its support. It argues that Fodor’s Language of Thought Hypothesis is still a legitimate theory of cognition. However, it accepts that Fodor’s arguments in favour of his hypothesis are fallacious. The paper examines three of Fodor’s arguments for a language of thought: the only game in town argument, the argument from systematicity and productivity, and the argument from isomorphism. It shows each to be flawed.
Further, this paper dismisses the dilemma Fodor and Pylyshyn present the Connectionist: that they must either merely implement his Language of Thought Hypothesis or concede that it is an inadequate theory of cognition. The paper uses Chalmers’ Backpropagation Model, a system that encodes grammatical information without using symbols, to escape the dilemma.
Throughout, I argue that although Connectionism successfully undermines Fodor’s arguments, it does not undermine his Language of Thought Hypothesis itself. I provide two positive reasons for upholding the Language of Thought Hypothesis. This paper concludes that – at present – neither Connectionism nor Fodor’s Language of Thought Hypothesis has undermined the other.
Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience
Physical symbol systems are needed for open-ended cognition. A good way to
understand physical symbol systems is by comparison of thought to chemistry.
Both have systematicity, productivity and compositionality. The state of the
art in cognitive architectures for open-ended cognition is critically assessed.
I conclude that a cognitive architecture that evolves symbol structures in the
brain is a promising candidate to explain open-ended cognition. Part 2 of the
paper presents such a cognitive architecture. Comment: Darwinian Neurodynamics. Submitted as a
two-part paper to Living Machines 2013, Natural History Museum, London.
Introduction
Jerry Fodor, by common agreement, is one of the world’s leading philosophers. At the
forefront of the cognitive revolution since the 1960s, his work has determined much of
the research agenda in the philosophy of mind and the philosophy of psychology for
well over 40 years. This special issue dedicated to his work is intended both as a tribute
to Fodor and as a contribution to the fruitful debates that his work has generated.
One philosophical thesis that has dominated Fodor’s work since the 1960s is realism
about the mental. Are there really mental states, events and processes? From his
first book, Psychological Explanation (1968), onwards, Fodor has always answered
this question with a resolute yes. From his early rejection of Wittgensteinian and
behaviourist conceptions of the mind, to his later disputes with philosophers of mind
of the eliminativist ilk, he has always been opposed to views that try to explain
away mental phenomena. On his view, there are minds, and minds can change the
world.
What's Connectionism got to do with IT?!
In this paper, I shall prove why old-fashioned Artificial Intelligence cannot be the right answer. Keywords: GOFAI, Fodor, Pylyshyn, Connectionism, Turing machines, Turing-undecidability, Universal Turing machines