
    A Defense of Pure Connectionism

    Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the richest and most distinctively human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production. Consonant with much previous philosophical work on connectionism, I argue that a core principle—that proximal representations in a vector space have similar semantic values—is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena.
My work here differs from preceding work in philosophy in several respects: (1) I compare a wide variety of connectionist responses to the systematicity challenge and isolate two main strands that are both historically important and reflected in ongoing work today: (a) vector symbolic architectures and (b) (compositional) vector space semantic models; (2) I consider very recent applications of these approaches, including their deployment on large-scale machine learning tasks such as machine translation; (3) I argue, again mostly on the basis of recent developments, for a continuity in representation and processing across natural language, image processing, and other domains; (4) I explicitly link broad, abstract features of connectionist representation to recent proposals in cognitive science similar in spirit, such as hierarchical Bayesian and free energy minimization approaches, and offer a single rebuttal of criticisms of these related paradigms; (5) I critique recent alternative proposals that argue for a hybrid Classical (i.e. serial symbolic)/statistical model of mind; (6) I argue that defending the most plausible form of a connectionist cognitive architecture requires rethinking certain distinctions that have figured prominently in the history of the philosophy of mind and language, such as that between word- and phrase-level semantic content, and between inference and association.
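The core principle named above, that proximal representations in a vector space have similar semantic values, can be illustrated with a toy cosine-similarity computation. The vectors below are hypothetical, not drawn from any trained model:

```python
import math

def cosine(u, v):
    # Cosine similarity: proximity in vector space as a proxy for
    # similarity in semantic value.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy word embeddings (illustrative values only).
cat = [0.9, 0.8, 0.1]
dog = [0.8, 0.9, 0.2]
car = [0.1, 0.2, 0.9]

# Semantically related words sit closer together than unrelated ones.
print(cosine(cat, dog) > cosine(cat, car))
```

In real compositional vector space semantic models the embeddings are learned from data, but the geometric intuition is the same.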

    Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking

    This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then verify these models with the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues in cognitive neuroscience, including the debate over whether symbolic processing or connectionism is the more suitable representation of cognitive systems, and the problem of integrating symbolic techniques, such as formal methods, with complex neural networks. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
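To give a flavour of the symbolic abstraction involved, a neuron can be cast as a finite automaton with explicit states and transitions. The sketch below is a hypothetical simplification for illustration, not the paper's actual communicating or timed automata model:

```python
# A neuron viewed as a finite automaton with two states, "resting" and
# "firing". The threshold and refractory behaviour are illustrative
# assumptions, not the model verified in the paper.

THRESHOLD = 1.0

def step(state, total_input):
    if state == "resting" and total_input >= THRESHOLD:
        return "firing"       # sufficient input drives a spike
    if state == "firing":
        return "resting"      # refractory: fall back after one step
    return state              # sub-threshold input leaves the state unchanged

trace = []
state = "resting"
for x in [0.2, 1.3, 0.0, 2.0]:
    state = step(state, x)
    trace.append(state)

print(trace)
```

Once a network is expressed this way, properties of its behaviour (e.g. "the neuron eventually fires") become queries a model checker such as Uppaal can answer over the automaton's state space.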

    Reinforcing connectionism: learning the statistical way

    Connectionism's main contribution to cognitive science will prove to be the renewed impetus it has imparted to learning. Learning can be integrated into the existing theoretical foundations of the subject, and the combination, statistical computational theories, provides a framework within which many connectionist mathematical mechanisms naturally fit. Examples from supervised and reinforcement learning demonstrate this. Statistical computational theories already exist for certain associative matrix memories. This work is extended to allow real-valued synapses and arbitrarily biased inputs. It shows that a covariance learning rule optimises the signal-to-noise ratio, a measure of the potential quality of the memory, and quantifies the performance penalty incurred by other rules. In particular, two rules that have been suggested as occurring naturally are shown to be asymptotically optimal in the limit of sparse coding. The mathematical model is justified in comparison with other treatments whose results differ. Reinforcement comparison is a way of hastening the learning of reinforcement learning systems in statistical environments. Previous theoretical analysis has not distinguished between different comparison terms, even though, empirically, a covariance rule has been shown to be better than a constant one. The workings of reinforcement comparison are investigated through a second-order analysis of the expected statistical performance of learning, and an alternative rule is proposed and empirically justified. The existing proof that temporal difference prediction learning converges in the mean is extended from a special case involving adjacent time steps to the general case involving arbitrary ones. The interaction between the statistical mechanism of temporal difference and the linear representation is particularly stark. The performance of the method given a linearly dependent representation is also analysed.
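The temporal difference prediction learning discussed at the end can be sketched as TD(0) on a tiny deterministic chain. The chain, rewards, and learning rate below are illustrative assumptions, not the thesis's own examples:

```python
# TD(0) prediction on a deterministic three-state chain:
# s0 -> s1 -> terminal, with reward 1 on the final transition.
# True undiscounted values: V(s0) = V(s1) = 1.

alpha = 0.1          # learning rate
V = [0.0, 0.0, 0.0]  # V[2] is the terminal state, fixed at 0

for _ in range(500):
    # One episode: each state's estimate is nudged toward the
    # one-step bootstrapped target (reward plus next state's value).
    V[0] += alpha * (0.0 + V[1] - V[0])
    V[1] += alpha * (1.0 + V[2] - V[1])

print(round(V[0], 3), round(V[1], 3))
```

In this deterministic setting the estimates converge to the true values; the thesis's convergence-in-the-mean result concerns the harder stochastic case with predictions over arbitrary time separations.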

    What Is Cognitive Psychology?

    What Is Cognitive Psychology? identifies the theoretical foundations of cognitive psychology—foundations which have received very little attention in modern textbooks. Beginning with the basics of information processing, Michael R. W. Dawson explores what experimental psychologists infer about these processes and considers what scientific explanations are required when we assume cognition is rule-governed symbol manipulation. From these foundations, psychologists can identify the architecture of cognition and better understand its role in debates about its true nature. This volume offers a deeper understanding of cognitive psychology and presents ideas for integrating traditional cognitive psychology with more modern fields like cognitive neuroscience.

    Abductive hypotheses generation in neural-symbolic systems

    The goal of this work was to develop an abductive procedure that operates within a neural-symbolic system. The abductive procedure is understood as an extended algorithmic interpretation of abductive reasoning: given a knowledge base Γ and a phenomenon ᵠ that cannot be derived from Γ, we create a new knowledge base Γ’, a modified version of Γ, from which ᵠ is derivable. The abductive hypothesis is defined as the symmetric difference between Γ and Γ’. We also require the resulting abductive hypotheses to satisfy certain conditions, such as consistency with the knowledge base. The procedure is implemented in a neural-symbolic system in which Γ and the abductive goal ᵠ are represented as a logic program that is translated into an artificial neural network; the network is trained by means of the backpropagation algorithm so that ᵠ becomes derivable (the process of forming the abductive hypothesis), and the trained network is then translated back into a logic program representing Γ’. The symmetric difference between Γ and Γ’ yields the abductive hypothesis.
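The central construction, the abductive hypothesis as the symmetric difference between Γ and Γ’, can be sketched directly. The clause strings below are hypothetical placeholders, not a logic program from the thesis:

```python
# Knowledge bases as sets of clauses (strings stand in for real clauses).
# Γ' is the revision of Γ produced, in the actual system, by translating
# Γ to a neural network, training it so the goal becomes derivable, and
# translating back.

gamma       = {"p :- q", "q :- r", "s"}        # Γ: original knowledge base
gamma_prime = {"p :- q", "q :- r", "s", "r"}   # Γ': revised knowledge base

# The abductive hypothesis: everything added or removed by the revision.
hypothesis = gamma ^ gamma_prime               # symmetric difference
print(hypothesis)
```

Here the revision only added the fact `r`, so the hypothesis is the singleton `{"r"}`; in general the symmetric difference also captures clauses the revision removed.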

    PThomas: An adaptive information retrieval system on the Connection Machine.

    This paper reports the state of development of PThomas, a network-based document retrieval system implemented on a massively parallel fine-grained computer, the Connection Machine. The program is written in C*, an enhancement of the C programming language which exploits the parallelism of the Connection Machine. The system is based on Oddy’s original Thomas program, which was highly parallel in concept, and makes use of the Connection Machine’s single instruction multiple data (SIMD) processing capabilities. After an introduction to systems like Thomas and their relationship to spreading activation and neural network models, the current state of PThomas is described, including details of the network representation and the parallel operations that are executed during a typical PThomas session.
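A single step of spreading activation, the kind of operation PThomas parallelises across the Connection Machine's processors, might look like the sequential sketch below. The nodes, weights, and decay factor are illustrative assumptions, not PThomas's actual network:

```python
# One propagation step over a small term-document association network.
# Activation flows from active nodes to their neighbours, scaled by
# link weight and a decay factor.

links = {
    "query":     {"neural": 0.8, "retrieval": 0.6},
    "neural":    {"doc1": 0.9},
    "retrieval": {"doc1": 0.4, "doc2": 0.7},
}
activation = {"query": 1.0}   # only the query node starts active
decay = 0.5

new_act = {}
for node, act in activation.items():
    for neighbour, weight in links.get(node, {}).items():
        new_act[neighbour] = new_act.get(neighbour, 0.0) + decay * act * weight

print(new_act)
```

On a SIMD machine each node's update runs in lockstep on its own processor; iterating the step lets activation reach the document nodes, whose final levels rank the retrieved documents.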

    Reasoning in non-probabilistic uncertainty: logic programming and neural-symbolic computing as examples

    This article aims to achieve two goals: to show that probability is not the only way of dealing with uncertainty (and, moreover, that there are kinds of uncertainty which, for principled reasons, are not addressable with probabilistic means); and to provide evidence that logic-based methods can effectively support reasoning with uncertainty. For the latter claim, two paradigmatic examples are presented: Logic Programming with Kleene semantics, for modelling reasoning from information in a discourse to an interpretation of the state of affairs of the intended model, and a neural-symbolic implementation of Input/Output logic, for dealing with uncertainty in dynamic normative contexts.
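The Kleene semantics referred to here adds a third truth value, "unknown", alongside true and false. A minimal sketch of strong Kleene negation and conjunction, offered as an illustration rather than the article's implementation:

```python
# Strong Kleene three-valued connectives, with None standing for "unknown".

def k_not(a):
    # Negation of an unknown value is still unknown.
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False          # a definite falsehood settles the conjunction
    if a is None or b is None:
        return None           # otherwise an unknown conjunct is contagious
    return True

print(k_and(True, None), k_and(False, None), k_not(None))
```

Note the asymmetry that makes the semantics useful for partial information: `False and unknown` is definitely false, while `True and unknown` remains unknown.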