    Connectionist Inference Models

    The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
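
    The central technical question above, how a network binds variables to values during rule-based reasoning, can be made concrete with a small sketch. The snippet below illustrates one distributed scheme (tensor-product binding, in the spirit of the approaches such a survey covers); it is not a reimplementation of any model reviewed in the paper, and the roles, fillers, and dimensions are invented for the example.

```python
# Illustrative sketch of tensor-product variable binding (an assumption for
# this example, not code from the surveyed systems). Roles (variables) and
# fillers (values) are random vectors; binding is an outer product, and a
# filler is recovered by projecting the stored memory with the role vector.
import numpy as np

rng = np.random.default_rng(0)
dim = 64

roles = {name: rng.standard_normal(dim) for name in ("agent", "patient")}
fillers = {name: rng.standard_normal(dim) for name in ("john", "mary")}

def bind(role, filler):
    """Bind a filler to a role as a rank-1 outer-product matrix."""
    return np.outer(role, filler)

# Store the fact loves(john, mary) as a superposition of two bindings.
fact = bind(roles["agent"], fillers["john"]) + bind(roles["patient"], fillers["mary"])

def unbind(memory, role):
    """Approximately recover the filler bound to `role` in the superposition."""
    return role @ memory / (role @ role)

# Decoding the agent slot should land closest to "john" (up to cross-talk noise).
decoded = unbind(fact, roles["agent"])
cosine = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(max(fillers, key=lambda name: cosine(decoded, fillers[name])))  # -> john
```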

    Application of Analogical Reasoning for Use in Visual Knowledge Extraction

    There is a continual push to make Artificial Intelligence (AI) as human-like as possible; however, this is a difficult task because of its inability to learn beyond its current comprehension. Analogical reasoning (AR) has been proposed as one method to achieve this goal. Current literature lacks a technical comparison of psychologically inspired and natural-language-processing-based AR algorithms with consistent metrics on multiple-choice word-based analogy problems. Assessment is based on “correctness” and “goodness” metrics. There is no one-size-fits-all algorithm for all textual problems. As a contribution to visual AR, a convolutional neural network (CNN) is integrated with the AR vector space model Global Vectors (GloVe) in the proposed Image Recognition Through Analogical Reasoning Algorithm (IRTARA). Given images outside of the CNN’s training data, IRTARA produces contextual information by leveraging semantic information from GloVe. IRTARA’s quality of results is measured by definition-based, AR, and human factors evaluation methods, which showed consistency at the extreme ends. The research shows the potential for AR to facilitate a more human-like AI through its ability to understand concepts beyond its foundational knowledge in both a textual and a visual problem space.
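
    The CNN-plus-GloVe integration described above can be pictured with a hedged sketch: take the labels a classifier assigns to an unfamiliar image, look up their GloVe vectors, and report nearby vocabulary words as extra context. This is only a guess at the general shape of such a pipeline, not the authors' IRTARA implementation; the GloVe file path and the cnn_top_labels() helper are hypothetical placeholders.

```python
# Hedged sketch of a label-embedding pipeline in the spirit of the abstract
# (not the authors' IRTARA code). The GloVe path and cnn_top_labels() are
# placeholders to be replaced with a real embedding file and classifier.
import numpy as np

def load_glove(path):
    """Parse a standard GloVe text file into {word: vector}."""
    vectors = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors

def cnn_top_labels(image_path, k=5):
    """Placeholder for any image classifier returning its top-k label strings."""
    raise NotImplementedError("plug in your own CNN here")

def contextual_terms(labels, glove, n_terms=10):
    """Average the label embeddings and return the nearest vocabulary words."""
    known = [glove[label] for label in labels if label in glove]
    if not known:
        return []
    query = np.mean(known, axis=0)

    def cosine(vec):
        return float(vec @ query / (np.linalg.norm(vec) * np.linalg.norm(query) + 1e-9))

    ranked = sorted(glove, key=lambda word: cosine(glove[word]), reverse=True)
    return [word for word in ranked if word not in labels][:n_terms]

# Example usage (paths and labels are illustrative):
# glove = load_glove("glove.6B.100d.txt")
# print(contextual_terms(cnn_top_labels("unseen_image.jpg"), glove))
```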

    Reinforcing connectionism: learning the statistical way

    Connectionism's main contribution to cognitive science will prove to be the renewed impetus it has imparted to learning. Learning can be integrated into the existing theoretical foundations of the subject, and the combination, statistical computational theories, provides a framework within which many connectionist mathematical mechanisms naturally fit. Examples from supervised and reinforcement learning demonstrate this. Statistical computational theories already exist for certain associative matrix memories. This work is extended, allowing real-valued synapses and arbitrarily biased inputs. It shows that a covariance learning rule optimises the signal/noise ratio, a measure of the potential quality of the memory, and quantifies the performance penalty incurred by other rules. In particular, two rules that have been suggested as occurring naturally are shown to be asymptotically optimal in the limit of sparse coding. The mathematical model is justified in comparison with other treatments whose results differ. Reinforcement comparison is a way of hastening the learning of reinforcement learning systems in statistical environments. Previous theoretical analysis has not distinguished between different comparison terms, even though empirically a covariance rule has been shown to be better than just a constant one. The workings of reinforcement comparison are investigated by a second-order analysis of the expected statistical performance of learning, and an alternative rule is proposed and empirically justified. The existing proof that temporal difference prediction learning converges in the mean is extended from a special case involving adjacent time steps to the general case involving arbitrary ones. The interaction between the statistical mechanism of temporal difference and the linear representation is particularly stark. The performance of the method given a linearly dependent representation is also analysed.
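
    The covariance learning rule whose signal/noise optimality the thesis analyses can be illustrated with a minimal numerical sketch (an illustration only, not the thesis's derivation): store random sparse binary pattern pairs in a heteroassociative matrix memory with the rule W = Σ (y - a)(x - a)^T, where a is the coding level, and check recall against a simple threshold. The sizes and coding level below are arbitrary choices.

```python
# Minimal sketch (not the thesis's analysis) of the covariance rule for a
# heteroassociative matrix memory with sparse binary patterns. All sizes and
# the coding level `a` are arbitrary choices for the illustration.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, n_pairs, a = 200, 200, 30, 0.1   # a = coding level, P[unit active]

xs = (rng.random((n_pairs, n_in)) < a).astype(float)   # cue patterns
ys = (rng.random((n_pairs, n_out)) < a).astype(float)  # target patterns

# Covariance rule: subtract the mean activity from pre- and post-synaptic terms.
W = (ys - a).T @ (xs - a)

def recall(x):
    """Cue the memory and threshold the dendritic sums.

    The threshold sits midway between the expected sums for bits that should
    be on ((1 - a) * s0) and off (-a * s0), with s0 = n_in * a * (1 - a).
    """
    s0 = n_in * a * (1 - a)
    theta = s0 * (1 - 2 * a) / 2
    return (W @ (x - a) > theta).astype(float)

# Bitwise recall accuracy over the stored pairs (should be close to 1 here).
accuracy = np.mean([np.mean(recall(x) == y) for x, y in zip(xs, ys)])
print(f"mean bitwise recall accuracy: {accuracy:.3f}")
```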

    Models and evaluation of human-machine systems

    "September 1993.""Prepared for: International Atomic Energy Association [sic], Wagramerstrasse 5, P. 0. Box 100 A-1400 Vienna, Austria."Part of appendix A and bibliography missingIncludes bibliographical referencesThe field of human-machine systems and human-machine interfaces is very multidisciplinary. We have to navigate between the knowledge waves brought by several areas of the human learning: cognitive psychology, artificial intelligence, philosophy, linguistics, ergonomy, control systems engineering, neurophysiology, sociology, computer sciences, among others. At the present moment, all these disciplines seek to be close each other to generate synergy. It is necessary to homogenize the different nomenclatures and to make that each one can benefit from the results and advances found in the other. Accidents like TMI, Chernobyl, Challenger, Bhopal, and others demonstrated that the human beings shall deal with complex systems that are created by the technological evolution more carefully. The great American writer Allan Bloom died recently wrote in his book 'The Closing of the American Mind' (1987) about the universities curriculum that are commonly separated in tight departments. This was a necessity of the industrial revolution that put emphasis in practical courses in order to graduate specialists in many fields. However, due the great complexity of our technological world, we feel the necessity to integrate again those disciplines that one day were separated to make possible their fast development. This Report is a modest trial to do this integration in a holistic way, trying to capture the best tendencies in those areas of the human learning mentioned in the first lines above. I expect that it can be useful to those professionals who, like me, would desire to build better human-machine systems in order to avoid those accidents also mentioned above

    Research in the Language, Information and Computation Laboratory of the University of Pennsylvania

    This report takes its name from the Computational Linguistics Feedback Forum (CLiFF), an informal discussion group for students and faculty. However, the scope of the research covered in this report is broader than the title might suggest; this is the yearly report of the LINC Lab, the Language, Information and Computation Laboratory of the University of Pennsylvania. It may at first be hard to see the threads that bind together the work presented here, work by faculty, graduate students and postdocs in the Computer Science and Linguistics Departments, and the Institute for Research in Cognitive Science. It includes prototypical Natural Language fields such as Combinatory Categorial Grammars, Tree Adjoining Grammars, syntactic parsing and the syntax-semantics interface; but it extends to statistical methods, plan inference, instruction understanding, intonation, causal reasoning, free word order languages, geometric reasoning, medical informatics, connectionism, and language acquisition. Naturally, this introduction cannot spell out all the connections between these abstracts; we invite you to explore them on your own. In fact, with this issue it’s easier than ever to do so: this document is accessible on the “information superhighway”. Just call up http://www.cis.upenn.edu/~cliff-group/94/cliffnotes.html. In addition, you can find many of the papers referenced in the CLiFF Notes on the net. Most can be obtained by following links from the authors’ abstracts in the web version of this report. The abstracts describe the researchers’ many areas of investigation, explain their shared concerns, and present some interesting work in Cognitive Science. We hope its new online format makes the CLiFF Notes a more useful and interesting guide to Computational Linguistics activity at Penn.

    Decision Support Systems

    Decision support systems (DSS) have evolved over the past four decades from theoretical concepts into real-world computerized applications. DSS architecture contains three key components: a knowledge base, a computerized model, and a user interface. DSS simulate the cognitive decision-making functions of humans based on artificial intelligence methodologies (including expert systems, data mining, machine learning, connectionism, logistical reasoning, etc.) in order to perform decision support functions. The applications of DSS cover many domains, ranging from aviation monitoring, transportation safety, clinical diagnosis, weather forecasting, and business management to internet search strategy. By combining knowledge bases with inference rules, DSS are able to provide suggestions to end users to improve decisions and outcomes. This book is written as a textbook so that it can be used in formal courses examining decision support systems. It may be used by both undergraduate and graduate students from diverse computer-related fields. It will also be of value to established professionals as a text for self-study or for reference.
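
    The combination of a knowledge base with inference rules described above can be made concrete with a toy sketch: a forward-chaining engine that fires rules over known facts until no new suggestions can be derived. The domain, facts, and rules below are invented for illustration and are not taken from the book.

```python
# Toy forward-chaining inference over a small knowledge base (illustrative
# only; the facts, rules, and aviation-monitoring domain are invented).
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    conditions: frozenset   # facts that must all be present
    conclusion: str         # fact (or suggestion) added when they are

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.conditions <= derived and rule.conclusion not in derived:
                derived.add(rule.conclusion)
                changed = True
    return derived

rules = [
    Rule(frozenset({"low_fuel", "far_from_airport"}), "suggest_divert"),
    Rule(frozenset({"icing_detected"}), "suggest_activate_deicing"),
    Rule(frozenset({"suggest_divert", "bad_weather_at_alternate"}), "suggest_declare_emergency"),
]

facts = {"low_fuel", "far_from_airport", "bad_weather_at_alternate"}
print(sorted(f for f in forward_chain(facts, rules) if f.startswith("suggest_")))
# -> ['suggest_declare_emergency', 'suggest_divert']
```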

    Neuere Entwicklungen der deklarativen KI-Programmierung : proceedings

    The field of declarative AI programming is briefly characterized. Its recent developments in Germany are reflected by a workshop held as part of the scientific congress KI-93 at the Berlin Humboldt University. Three tutorials introduce the state of the art in deductive databases, the programming language Gödel, and the evolution of knowledge bases. Eleven contributed papers treat knowledge revision/program transformation, types, constraints, and type-constraint combinations.