
    Symbolic-Connectionist Representational Model for Optimizing Decision Making Behavior in Intelligent Systems

    Modeling higher-order cognitive processes such as human decision making follows three representational approaches, namely symbolic, connectionist, and symbolic-connectionist. Many connectionist neural network models for optimizing decision-making behaviour have evolved over the decades, and agents built on them are also in place. There have been attempts to implement symbolic structures within connectionist architectures with distributed representations. Our work aimed to propose an enhanced connectionist approach to optimizing decisions within the framework of a symbolic cognitive model. The action selection module of this framework is at the forefront of evolving intelligent agents through a variety of soft computing models. As a continuing effort, a Connectionist Cognitive Model (CCN) was evolved by taking the traditional symbolic cognitive process model LIDA as the inspiration for a feed-forward neural network model for optimizing decision-making behaviours in intelligent agents. Significant progress was observed when comparing its performance with other variants.
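    To make the kind of action-selection module described above concrete, here is a minimal sketch, assuming a generic feed-forward scorer rather than the authors' actual CCN/LIDA-based model; the layer sizes, random weights, and function names are all illustrative placeholders.

```python
import numpy as np

# Minimal sketch, not the authors' CCN: a feed-forward network that scores a
# fixed set of candidate actions from a perceptual-state vector.
rng = np.random.default_rng(0)
STATE_DIM, HIDDEN_DIM, N_ACTIONS = 8, 16, 4
W1 = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN_DIM))
W2 = rng.normal(scale=0.1, size=(HIDDEN_DIM, N_ACTIONS))

def select_action(state):
    """Forward pass state -> hidden -> action scores; return the best-scoring action."""
    hidden = np.tanh(state @ W1)   # hidden layer with tanh nonlinearity
    scores = hidden @ W2           # one score per candidate action
    return int(np.argmax(scores)), scores

state = rng.normal(size=STATE_DIM) # stand-in for a perceived situation
action, scores = select_action(state)
print("chosen action:", action)
```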

    High level cognitive information processing in neural networks

    Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized here, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and also serves to expand the horizons of the artificial neural net field.
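    As a rough illustration of the winner-take-all family of techniques mentioned in item (1), the sketch below implements a generic rate-based winner-take-all competition; the grant's temporal variant selects a winner by firing time rather than activation level, and the gain, inhibition, and example values here are invented for the example.

```python
import numpy as np

# Generic rate-based winner-take-all: units excite themselves and inhibit each
# other until only the most active unit stays above zero.
def winner_take_all(activations, self_excite=1.2, inhibit=0.4, steps=50):
    a = np.asarray(activations, dtype=float)
    for _ in range(steps):
        total = a.sum()
        a = np.maximum(0.0, self_excite * a - inhibit * (total - a))
    return int(np.argmax(a)), a

winner, final = winner_take_all([0.3, 0.9, 0.5, 0.8])
print("winning unit:", winner)  # the unit with the largest initial activation
```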

    Bring ART into the ACT

    ACT is compared with a particular type of connectionist model that cannot handle symbols and uses non-biological operations that cannot learn in real time. This focus continues an unfortunate trend of straw man "debates" in cognitive science. Adaptive Resonance Theory, or ART, neural models of cognition can handle both symbols and sub-symbolic representations, and meet the Newell criteria at least as well as these models. Air Force Office of Scientific Research (F49620-01-1-0397); Office of Naval Research (N00014-01-1-0624)

    Connectionist Inference Models

    The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
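    One classical binding scheme that surveys of this kind typically cover is tensor-product variable binding (Smolensky, 1990); the toy sketch below, with invented role and filler vectors, shows how a role-filler pair is bound with an outer product and approximately recovered by contraction. It is offered only as an illustration of the general idea, not of any particular system reviewed in the paper.

```python
import numpy as np

# Toy tensor-product binding (after Smolensky, 1990): a role and a filler are
# bound by their outer product, bindings are superimposed by addition, and a
# filler is approximately recovered by contracting with its role vector.
rng = np.random.default_rng(1)
DIM = 32

def unit(v):
    return v / np.linalg.norm(v)

role_agent = unit(rng.normal(size=DIM))
role_patient = unit(rng.normal(size=DIM))
john = unit(rng.normal(size=DIM))
mary = unit(rng.normal(size=DIM))

# Encode "agent = john, patient = mary" as a single matrix of superimposed bindings.
structure = np.outer(role_agent, john) + np.outer(role_patient, mary)

# Unbind the agent role: the result is close to john, with a little crosstalk
# because the random role vectors are only approximately orthogonal.
retrieved = role_agent @ structure
print("similarity to john:", float(retrieved @ john))  # close to 1
print("similarity to mary:", float(retrieved @ mary))  # close to 0
```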

    A Survey of Brain Inspired Technologies for Engineering

    Cognitive engineering is a multi-disciplinary field and hence it is difficult to find a review article consolidating the leading developments in the field. The incredible pace at which technology is advancing pushes the boundaries of what is achievable in cognitive engineering. There are also differing approaches to cognitive engineering brought about by the multi-disciplinary nature of the field and the vastness of possible applications. Thus research communities require more frequent reviews to keep up to date with the latest trends. In this paper we shall discuss some of the approaches to cognitive engineering holistically to clarify the reasoning behind the different approaches and to highlight their strengths and weaknesses. We shall then show how developments from seemingly disjointed views could be integrated to achieve the same goal of creating cognitive machines. By reviewing the major contributions in the different fields and showing the potential for a combined approach, this work intends to assist the research community in devising more unified methods and techniques for developing cognitive machines.

    Connectionist natural language parsing

    The key developments of two decades of connectionist parsing are reviewed. Connectionist parsers are assessed according to their ability to learn to represent syntactic structures from examples automatically, without being presented with symbolic grammar rules. This review also considers the extent to which connectionist parsers offer computational models of human sentence processing and provide plausible accounts of psycholinguistic data. In considering these issues, special attention is paid to the level of realism, the nature of the modularity, and the type of processing that is to be found in a wide range of parsers.

    Training neural networks to encode symbols enables combinatorial generalization

    Combinatorial generalization - the ability to understand and produce novel combinations of already familiar elements - is considered to be a core capacity of the human mind and a major challenge to neural network models. A significant body of research suggests that conventional neural networks cannot solve this problem unless they are endowed with mechanisms specifically engineered for the purpose of representing symbols. In this paper we introduce a novel way of representing symbolic structures in connectionist terms - the vectors approach to representing symbols (VARS), which allows training standard neural architectures to encode symbolic knowledge explicitly at their output layers. In two simulations, we show that neural networks not only can learn to produce VARS representations, but in doing so they achieve combinatorial generalization in their symbolic and non-symbolic output. This adds to other recent work that has shown improved combinatorial generalization under specific training conditions, and raises the question of whether specific mechanisms or training routines are needed to support symbolic processing.
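    As a loose sketch of the general idea, not the paper's exact VARS scheme, the example below writes a simple role-filler structure onto a fixed-size output vector made of concatenated slots, which is the kind of target a standard feed-forward network could be trained to produce by regression; the symbol set, vector sizes, and helper names are all invented for illustration.

```python
import numpy as np

# Loose illustration: a relational structure is written onto a fixed-size output
# vector as concatenated role slots, each slot holding the code of the symbol
# that fills it. A standard network could be trained to emit such targets.
rng = np.random.default_rng(2)
SYMBOLS = ["john", "mary", "loves", "sees"]
SYM_DIM = 16

def unit(v):
    return v / np.linalg.norm(v)

codes = {s: unit(rng.normal(size=SYM_DIM)) for s in SYMBOLS}  # random symbol codes

def encode(agent, relation, patient):
    """Target output: agent, relation, and patient slots concatenated in order."""
    return np.concatenate([codes[agent], codes[relation], codes[patient]])

def decode_slot(output, slot):
    """Read one slot back by nearest-neighbour matching against the symbol codes."""
    chunk = output[slot * SYM_DIM:(slot + 1) * SYM_DIM]
    return max(SYMBOLS, key=lambda s: float(chunk @ codes[s]))

target = encode("john", "loves", "mary")
print([decode_slot(target, i) for i in range(3)])  # expected: ['john', 'loves', 'mary']
```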