
    ANNABELL, a cognitive system able to learn different languages

    ANNABELL is a cognitive system based entirely on a large-scale neural architecture, capable of learning to communicate through natural language starting from a tabula rasa condition. To shed light on the level of cognitive development required for language acquisition, in this work the model is used to study the acquisition of a new language, Albanian, in addition to English. The aim is to evaluate, in a completely different and more complex language, the ability of the model to acquire new information through several examples introduced in the new language and to process the acquired information, answering questions that require the use of different language patterns. The results show that the system is capable of learning cumulatively in either language and of developing a broad range of language-processing functionalities in both languages.

    Hebbian learning in recurrent neural networks for natural language processing

    This research project examines Hebbian learning in recurrent neural networks for natural language processing and attempts to interpret language at the level of a two-and-one-half-year-old child. Five neural networks were built to interpret natural language: a Simple Recurrent Network with Hebbian learning, a Jordan network with Hebbian learning and one hidden layer, a Jordan network with Hebbian learning and no hidden layers, a Simple Recurrent Network with backpropagation learning, and a non-recurrent neural network with backpropagation learning. Hebbian learning is known to work well when the input vectors are orthogonal, but, as this project shows, it does not perform well in recurrent neural networks for natural language processing even when the input vectors for the individual words are approximately orthogonal. The project shows that, given approximately orthogonal vectors to represent each word in the vocabulary, the input vectors for a given command are not approximately orthogonal, and the internal representations that the network builds are therefore similar for different commands. As the data show, the Hebbian-learning networks were unable to perform the natural language interpretation task, while the backpropagation networks were much more successful. Hebbian learning therefore does not work well in recurrent neural networks for natural language processing, even when the input vectors for the individual words are approximately orthogonal.
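    The sketch below illustrates the orthogonality point at the heart of this abstract; it is a minimal toy associator, not a reconstruction of the project's five networks. With the plain Hebbian outer-product rule, stored patterns are recalled cleanly only when the key vectors are mutually orthogonal; correlated keys (such as word vectors combined into a single command vector) produce crosstalk.

```python
import numpy as np

# Minimal sketch (illustrative, not the project's actual networks):
# store key -> value associations with the Hebbian outer-product rule
# W = sum_i value_i key_i^T, then recall with W @ key.
rng = np.random.default_rng(0)
dim = 64

def hebbian_store(keys, values):
    """Accumulate the Hebbian weight matrix over all pairs."""
    W = np.zeros((values.shape[1], keys.shape[1]))
    for k, v in zip(keys, values):
        W += np.outer(v, k)  # Hebbian outer-product update
    return W

# Orthonormal keys: recall is exact, with zero crosstalk.
keys = np.linalg.qr(rng.standard_normal((dim, dim)))[0][:3]
values = rng.standard_normal((3, dim))
W = hebbian_store(keys, values)
print(np.allclose(W @ keys[0], values[0]))       # True

# Correlated keys: the cross-terms no longer vanish, so the recalled
# pattern is contaminated by the other stored associations.
corr = keys + 0.5 * rng.standard_normal((3, dim))
corr /= np.linalg.norm(corr, axis=1, keepdims=True)
W2 = hebbian_store(corr, values)
print(np.linalg.norm(W2 @ corr[0] - values[0]))  # nonzero: crosstalk
```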

    Connectionism, Chinese Rooms, and Intuition Pumps

    John Searle's famous Chinese Room argument is perhaps the most well-known attack on computational views of mind. At the center of this argument is a thought experiment in which the reader (thinker) is led to an intuition that computational models of mind are deeply flawed due to their syntactic (or formal) nature. In this paper, I argue that the resulting intuition of this thought experiment is dampened when the 'Classical' program contained in the original thought experiment is replaced with a 'Connectionist' program. The resulting thought experiment - the Korean Room - helps show that the intuitive results of Searle's 'intuition pump' can change as a result of relatively small changes in what we're asked to imagine.

    Connectionist Inference Models

    The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
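    One classic connectionist answer to the variable-binding problem surveyed in this literature is Smolensky-style tensor-product binding: a role vector and a filler vector are bound with an outer product, bindings are superposed by addition, and a filler is recovered by contracting the structure with its role. A minimal sketch, assuming orthonormal role vectors (the role and filler names are hypothetical):

```python
import numpy as np

# Tensor-product variable binding (Smolensky): bind role and filler
# with an outer product; superpose bindings by addition; unbind by
# contracting the structure with a role vector.
rng = np.random.default_rng(1)
dim = 32

def orthonormal(n, dim, rng):
    """Return n mutually orthonormal vectors of the given dimension."""
    return np.linalg.qr(rng.standard_normal((dim, dim)))[0][:n]

roles = orthonormal(2, dim, rng)          # e.g., AGENT, PATIENT
fillers = rng.standard_normal((2, dim))   # e.g., "John", "Mary"

# Distributed representation of {AGENT=John, PATIENT=Mary}.
structure = sum(np.outer(f, r) for f, r in zip(fillers, roles))

# Unbinding: contract with a role to retrieve its filler.
recovered = structure @ roles[0]
print(np.allclose(recovered, fillers[0]))  # True (orthonormal roles)
```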

    Model of models -- Part 1

    This paper proposes a new cognitive model, acting as the main component of an AGI agent. The model is introduced in its mature intelligence state, as an extension of previous models, DENN and especially AKREM, by including operational models (frames/classes) and will. The model's core assumption is that cognition is about operating on accumulated knowledge, with the guidance of an appropriate will. We also assume that actions, as part of knowledge, are learned to be aligned with will during the evolution phase that precedes the mature intelligence state. In addition, the model is based mainly on a duality principle found in every known intelligent aspect, such as exhibiting both top-down and bottom-up model learning, generalization versus specialization, and more. Furthermore, a holistic approach is advocated for AGI design, and cognition under constraints, i.e., efficiency, is proposed, in the form of reusability and simplicity. Finally, reaching this mature state is described via a cognitive evolution from infancy to adulthood, utilizing a consolidation principle. The final product of this cognitive model is a dynamic operational memory of models and instances. Lastly, some examples and preliminary ideas for the evolution phase to reach the mature state are presented.

    A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge

    Scripts representing common sense knowledge about stereotyped sequences of events have been shown to be a valuable resource for NLP applications. We present a hierarchical Bayesian model for unsupervised learning of script knowledge from crowdsourced descriptions of human activities. Events and constraints on event ordering are induced jointly in one unified framework. We use a statistical model over permutations which captures event ordering constraints in a more flexible way than previous approaches. In order to alleviate the sparsity problem caused by using relatively small datasets, we incorporate in our hierarchical model an informed prior on word distributions. The resulting model substantially outperforms a state-of-the-art method on the event ordering task.
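    A "statistical model over permutations" in work of this kind is often a Mallows-style distribution, which concentrates probability mass around a canonical event order. The sketch below illustrates that general idea only; it is not the paper's hierarchical model, and the script events and concentration parameter theta are hypothetical.

```python
import math
from itertools import permutations

# Mallows model over event orderings: P(pi) is proportional to
# exp(-theta * d(pi, sigma)), where d is the Kendall tau distance
# (number of pairwise inversions) to a canonical order sigma.

def kendall_tau(pi, sigma):
    """Count event pairs ordered differently in pi and sigma."""
    pos = {e: i for i, e in enumerate(sigma)}
    ranks = [pos[e] for e in pi]
    return sum(1 for i in range(len(ranks))
                 for j in range(i + 1, len(ranks))
                 if ranks[i] > ranks[j])

def mallows_prob(pi, sigma, theta):
    """Exact probability of pi by enumerating all permutations."""
    scores = {p: math.exp(-theta * kendall_tau(p, sigma))
              for p in permutations(sigma)}
    return scores[tuple(pi)] / sum(scores.values())

canonical = ("enter", "order", "eat", "pay")   # hypothetical script
print(mallows_prob(("enter", "order", "eat", "pay"), canonical, 1.0))
print(mallows_prob(("pay", "eat", "order", "enter"), canonical, 1.0))
```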

    Learning to See Analogies: A Connectionist Exploration

    The goal of this dissertation is to integrate learning and analogy-making. Although learning and analogy-making both have long histories as active areas of research in cognitive science, not enough attention has been given to the ways in which they may interact. To that end, this project focuses on developing a computer program, called Analogator, that learns to make analogies by seeing examples of many different analogy problems and their solutions. That is, it learns to make analogies by analogy. This approach stands in contrast to most existing computational models of analogy, in which particular analogical mechanisms are assumed a priori to exist. Rather than assuming certain principles about analogy-making mechanisms, the goal of the Analogator project is to learn what it means to make an analogy. This unique notion is the focus of this dissertation.
