
    Conference on Artificial Intelligence: Question-Answering Systems

    The last decade has produced several profound and exciting results in computer science theory and application. Some of these results have prepared the ground for disciplines now recognized as significant branches of computer-based science: the theory of formal grammars and automatic compiler construction, information retrieval and database management, the theory of communication and computer networks, and problem solving and artificial intelligence are examples of new computer sciences. In the area of artificial intelligence (AI), theoretical and applied research related to knowledge representation in computers, natural language analysis, deductive inference, and automatic learning represent the most interesting topics and promise to become the basis for a new style of computer use. The general idea of this style is to allow the user to tell the computer "what to do" instead of "how to do it". The computer system in this case behaves as an intelligent adviser and interpreter of predefined rules of the game in any particular problem area. Its advantages over human advisers and interpreters rest on its ability to store and handle gigantic amounts of structured data of which the end user can have only a vague idea. This approach becomes particularly attractive in different areas of applied systems analysis, where computer-programmed mathematical models give additional analytical power to an "intelligent" computer system. The challenging and promising features of AI research resulted in the organization by IIASA of an international Conference on Artificial Intelligence and Question-Answering Systems in June 1975. This Conference was held in accordance with the long-range research strategy of the Computer Science Project and attracted 27 computer specialists from 12 National Member Organizations.
    Two basic points were discussed: scientific problems and basic results in the development of question-answering systems with natural language input and inference capability, and possible IIASA efforts in establishing an intelligent question-answering system with a database for IIASA's applied projects. This publication contains papers devoted mostly to the first point. The particular subjects covered include natural language analysis, knowledge representation, and deductive inference mechanisms. An important practical consequence of the Conference was a proposal from the Conference Working Group to IIASA for the implementation of a question-answering system for database management at IIASA. Apart from the obvious scientific results, the meeting also helped to establish contacts between the NMOs involved in AI research. Participants agreed on future cooperation among their institutions in various AI areas.

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
    Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar

    Translating Neuralese

    Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents' messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.
    Comment: Fixes typos and cleans up some model presentation details
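    The translation criterion described above (a message and a string "mean the same thing" if they induce the same listener belief) can be illustrated with a toy sketch. The belief distributions and message/string names below are hypothetical hard-coded stand-ins for the learned listener models in the paper; only the selection rule (pick the string whose induced belief is closest to the message's, here in KL divergence) reflects the stated idea.

```python
import math

# Toy world: three referents the listener can believe the speaker means.
WORLD = ["red", "green", "blue"]

# Hypothetical listener beliefs induced by agent messages and by
# natural-language strings. In the paper these come from learned
# listener models; these numbers are illustrative assumptions.
AGENT_BELIEF = {
    "m1": {"red": 0.8, "green": 0.1, "blue": 0.1},
    "m2": {"red": 0.1, "green": 0.1, "blue": 0.8},
}
HUMAN_BELIEF = {
    "the red one": {"red": 0.7, "green": 0.2, "blue": 0.1},
    "the blue one": {"red": 0.1, "green": 0.2, "blue": 0.7},
}

def kl(p, q):
    """KL divergence between two belief distributions over WORLD."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in WORLD)

def translate(message):
    """Return the string whose induced listener belief is closest
    (in KL divergence) to the belief induced by the agent message."""
    return min(HUMAN_BELIEF,
               key=lambda s: kl(AGENT_BELIEF[message], HUMAN_BELIEF[s]))
```

    With these toy numbers, `translate("m1")` selects "the red one", since its induced belief is far closer to m1's than "the blue one" is; no parallel (message, string) pairs are ever consulted, matching the no-parallel-data setting of the abstract.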

    Reasoning About Pragmatics with Neural Listeners and Speakers

    We present a model for pragmatically describing scenes, in which contrastive behavior results from a combination of inference-driven pragmatics and learned semantics. Like previous learned approaches to language generation, our model uses a simple feature-driven architecture (here a pair of neural "listener" and "speaker" models) to ground language in the world. Like inference-driven approaches to pragmatics, our model actively reasons about listener behavior when selecting utterances. For training, our approach requires only ordinary captions, annotated _without_ demonstration of the pragmatic behavior the model ultimately exhibits. In human evaluations on a referring expression game, our approach succeeds 81% of the time, compared to a 69% success rate using existing techniques.
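    The "actively reasons about listener behavior" step can be sketched in miniature with a rational-speech-acts-style computation: the speaker scores each true utterance by how likely a literal listener is to pick the intended target under it. This is a toy sketch of inference-driven pragmatics in general, not the paper's neural listener/speaker models; the objects, utterances, and semantics below are invented for illustration.

```python
# Toy referring-expression game: two objects and a literal semantics
# saying which objects each utterance is true of.
OBJECTS = ["small blue square", "big blue circle"]
TRUE_OF = {
    "blue": {"small blue square", "big blue circle"},
    "square": {"small blue square"},
    "circle": {"big blue circle"},
}

def literal_listener(utterance):
    """Uniform belief over the objects consistent with the utterance."""
    consistent = [o for o in OBJECTS if o in TRUE_OF[utterance]]
    return {o: 1.0 / len(consistent) for o in consistent}

def pragmatic_speaker(target):
    """Choose the true utterance that maximizes the literal listener's
    probability of identifying the target."""
    candidates = [u for u in TRUE_OF if target in TRUE_OF[u]]
    return max(candidates,
               key=lambda u: literal_listener(u).get(target, 0.0))
```

    For the small blue square, both "blue" and "square" are literally true, but "blue" leaves the listener at 50/50 while "square" pins the referent down, so the speaker chooses "square". This is the contrastive behavior the abstract describes emerging from inference rather than from annotated demonstrations.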

    Visual Entailment: A Novel Task for Fine-Grained Image Understanding

    Existing visual reasoning datasets, such as Visual Question Answering (VQA), often suffer from biases conditioned on the question, image, or answer distributions. The recently proposed CLEVR dataset addresses these limitations and requires fine-grained reasoning, but the dataset is synthetic and consists of similar objects and sentence structures across the dataset. In this paper, we introduce a new inference task, Visual Entailment (VE) - consisting of image-sentence pairs whereby a premise is defined by an image, rather than a natural language sentence as in traditional Textual Entailment tasks. The goal of a trained VE model is to predict whether the image semantically entails the text. To realize this task, we build a dataset SNLI-VE based on the Stanford Natural Language Inference corpus and the Flickr30k dataset. We evaluate various existing VQA baselines and build a model called Explainable Visual Entailment (EVE) to address the VE task. EVE achieves up to 71% accuracy and outperforms several other state-of-the-art VQA-based models. Finally, we demonstrate the explainability of EVE through cross-modal attention visualizations. The SNLI-VE dataset is publicly available at https://github.com/necla-ml/SNLI-VE.
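    The structure of the VE task can be sketched as a data record and an evaluation loop: each example pairs an image premise with a sentence hypothesis, labeled with the three-way scheme SNLI-VE inherits from SNLI (entailment, neutral, contradiction). The field names below are illustrative assumptions, not the dataset's actual schema, and the predictor is a placeholder for a trained model such as EVE.

```python
from dataclasses import dataclass

# Three-way label scheme inherited from the SNLI corpus.
LABELS = ("entailment", "neutral", "contradiction")

@dataclass
class VEExample:
    image_path: str   # the premise is an image, not a sentence
    hypothesis: str   # natural-language hypothesis about the image
    label: str        # gold label, one of LABELS

def accuracy(examples, predict):
    """Fraction of examples where predict(image_path, hypothesis)
    matches the gold label."""
    correct = sum(predict(e.image_path, e.hypothesis) == e.label
                  for e in examples)
    return correct / len(examples)
```

    A model that always predicts "entailment" scores 1/3 on a balanced test set, which is the baseline the reported 71% EVE accuracy should be read against.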