
    A Connectionist Theory of Phenomenal Experience

    When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as many of them have been doing recently, there are two fundamentally distinct approaches available. Either consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys; or it is to be explained in terms of the computational processes defined over these vehicles. We call versions of these two approaches vehicle and process theories of consciousness, respectively. However, while there may be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because of the influence exerted, on the one hand, by a large body of research which purports to show that the explicit representation of information in the brain and conscious experience are dissociable, and on the other, by the classical computational theory of mind – the theory that takes human cognition to be a species of symbol manipulation. But two recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order. First, a number of theorists have recently been highly critical of the experimental methodologies employed in the dissociation studies – so critical, in fact, that it’s no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated. Second, classicism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. It now has a lively competitor in the form of connectionism; and connectionism, unlike classicism, does have the computational resources to support a robust vehicle theory of consciousness. In this paper we develop and defend this connectionist vehicle theory of consciousness. It takes the form of the following simple empirical hypothesis: phenomenal experience consists in the explicit representation of information in neurally realized PDP networks. This hypothesis leads us to re-assess some common wisdom about consciousness, but, we will argue, in fruitful and ultimately plausible ways.

    Connectionism, Analogicity and Mental Content

    In Connectionism and the Philosophy of Psychology, Horgan and Tienson (1996) argue that cognitive processes, pace classicism, are not governed by exceptionless, “representation-level” rules; they are instead the work of defeasible cognitive tendencies subserved by the non-linear dynamics of the brain’s neural networks. Many theorists are sympathetic with the dynamical characterisation of connectionism and the general (re)conception of cognition that it affords. But in all the excitement surrounding the connectionist revolution in cognitive science, it has largely gone unnoticed that connectionism adds, to the traditional focus on computational processes, a new focus – one on the vehicles of mental representation, the entities that carry content through the mind. Indeed, if Horgan and Tienson’s dynamical characterisation of connectionism is on the right track, then so intimate is the relationship between computational processes and representational vehicles that connectionist cognitive science is committed to a resemblance theory of mental content.

    Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies

    An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven to be effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness occurs because they lack the ability to refine the topology of the neural networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the REGENT algorithm which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm is able to significantly increase generalization compared to a standard connectionist theory-refinement system, as well as our previous algorithm for growing knowledge-based networks.
    Comment: See http://www.jair.org/ for any accompanying file
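
    The genetic topology search the abstract describes can be pictured with the minimal sketch below. The topology encoding, the seed-from-domain-theory step, and the fitness function are illustrative assumptions, not REGENT's actual implementation, which builds knowledge-based networks from a domain theory and evaluates candidates by training them and measuring generalization on held-out data.

# Hypothetical sketch of a genetic search over network topologies in the
# spirit of REGENT: a domain theory seeds the initial population, then
# crossover and mutation explore alternative topologies.
import random

def seed_topology_from_domain_theory():
    """Stand-in for building a knowledge-based network; here a topology is
    just a list of hidden-layer sizes suggested by (hypothetical) rules."""
    return [8, 4]

def mutate(topology):
    """Add a node to a random layer, or insert a new small layer."""
    t = list(topology)
    if random.random() < 0.5 and t:
        t[random.randrange(len(t))] += 1
    else:
        t.insert(random.randrange(len(t) + 1), 2)
    return t

def crossover(a, b):
    """Splice a prefix of one parent topology onto a suffix of the other."""
    return a[:random.randint(0, len(a))] + b[random.randint(0, len(b)):]

def fitness(topology):
    """Placeholder: REGENT would train the network and score generalization
    on a validation set; here we merely penalize overall size."""
    return -sum(topology)

def genetic_search(generations=20, pop_size=10):
    population = [mutate(seed_topology_from_domain_theory()) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(genetic_search())

    Swapping the placeholder fitness for an actual train-and-validate loop is what would turn this toy loop into a theory-refinement search of the kind the paper evaluates.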

    Platonic model of mind as an approximation to neurodynamics

    A hierarchy of approximations involved in the simplification of microscopic theories, from the sub-cellular to the whole-brain level, is presented. A new approximation to neural dynamics is described, leading to a Platonic-like model of mind based on psychological spaces. Objects and events in these spaces correspond to quasi-stable states of brain dynamics and may be interpreted from a psychological point of view. The Platonic model bridges the gap between the neurosciences and the psychological sciences. Static and dynamic versions of this model are outlined, and Feature Space Mapping, a neurofuzzy realization of the static version of the Platonic model, is described. Categorization experiments with human subjects are analyzed from the neurodynamical and Platonic model points of view.

    Tensor Products and Split-Level Architecture: Foundational Issues in the Classicism-Connectionism Debate

    This paper responds to criticisms levelled by Fodor, Pylyshyn and McLaughlin against connectionism. Specifically, I will rebut the charge that connectionists cannot account for representational systematicity without implementing a classical architecture. This will be accomplished by drawing on Paul Smolensky's Tensor Product model of representation and on his insights about split-level architectures.
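
    For readers unfamiliar with Smolensky's proposal, the sketch below illustrates the core idea of tensor product representation: each constituent (filler) is bound to a structural role by an outer product, and the whole structure is the superposition of those bindings. The example sentence, the vector dimensions, and the approximate unbinding step are assumptions for illustration, not Smolensky's own model or the paper's argument.

# Illustrative sketch of tensor product variable binding.
import numpy as np

rng = np.random.default_rng(0)

# Filler vectors (constituents) and role vectors (structural positions).
fillers = {"John": rng.normal(size=8), "loves": rng.normal(size=8), "Mary": rng.normal(size=8)}
roles = {"agent": rng.normal(size=6), "verb": rng.normal(size=6), "patient": rng.normal(size=6)}

# Bind each filler to its role with an outer product and superimpose the bindings.
structure = sum(np.outer(fillers[f], roles[r])
                for f, r in [("John", "agent"), ("loves", "verb"), ("Mary", "patient")])

def unbind(structure, role):
    """Approximate unbinding: project the structure onto a role vector to
    recover a noisy version of the filler occupying that role."""
    return structure @ role / (role @ role)

recovered = unbind(structure, roles["agent"])
best = max(fillers, key=lambda f: float(np.dot(recovered, fillers[f]))
           / (np.linalg.norm(recovered) * np.linalg.norm(fillers[f])))
print(best)  # the most similar stored filler; likely "John" when role vectors are near-orthogonal

    The point of the sketch is that structured, systematic representations can live in a single distributed activity pattern, which is the kind of resource the paper draws on in answering Fodor, Pylyshyn and McLaughlin.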

    Classical and Connectionist Models: Levels of Description

    To begin, I introduce an analysis of interlevel relations that allows us to offer an initial characterization of the debate about the way classical and connectionist models relate. Subsequently, I examine a compatibility thesis and a conditional claim on this issue. With respect to the compatibility thesis, I argue that, even if classical and connectionist models are not necessarily incompatible, the emergence of the latter seems to undermine the best arguments for the Language of Thought Hypothesis, which is essential to the former. I attack the conditional claim of connectionism to eliminativism, presented by Ramsey et al. (1990), by discrediting their discrete characterization of common-sense psychological explanations and pointing to the presence of a moderate holistic constraint. Finally, I conclude that neither of the arguments considered excludes the possibility of viewing connectionist models as forming a part of a representational theory of cognition that dispenses with the Language of Thought Hypothesis.

    Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience

    Physical symbol systems are needed for open-ended cognition. A good way to understand physical symbol systems is by comparison of thought to chemistry. Both have systematicity, productivity and compositionality. The state of the art in cognitive architectures for open-ended cognition is critically assessed. I conclude that a cognitive architecture that evolves symbol structures in the brain is a promising candidate to explain open-ended cognition. Part 2 of the paper presents such a cognitive architecture.
    Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living Machines 2013, Natural History Museum, London