
    LangPro: Natural Language Theorem Prover

    LangPro is an automated theorem prover for natural language (https://github.com/kovvalsky/LangPro). Given a set of premises and a hypothesis, it can prove semantic relations between them. The prover is based on a version of the analytic tableau method specially designed for natural logic. The proof procedure operates on logical forms that preserve linguistic expressions to a large extent; this property makes the logical forms easily obtainable from syntactic trees, in particular Combinatory Categorial Grammar derivation trees. The proofs are deductive and transparent. On the FraCaS and SICK textual entailment datasets, the prover achieves results comparable to the state of the art. Comment: 6 pages, 8 figures, Conference on Empirical Methods in Natural Language Processing (EMNLP) 201
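
    The tableau strategy the abstract refers to can be illustrated on a propositional toy case: the negated hypothesis is added to the premises and the tableau rules are applied; the entailment holds exactly when every branch closes on a contradiction. The Python sketch below shows only this generic mechanism, under the assumption of a tuple-based formula encoding; LangPro's actual rules operate on natural-logic lambda terms obtained from CCG derivations, and the names entails and has_open_branch are illustrative, not part of the tool.

        # Minimal analytic-tableau sketch for propositional formulas (illustrative only).
        # Formulas: an atom string, ("not", f), ("and", f, g) or ("or", f, g).

        def entails(premises, hypothesis):
            """Premises entail the hypothesis iff premises + not(hypothesis) close every branch."""
            return not has_open_branch(list(premises) + [("not", hypothesis)], set(), set())

        def has_open_branch(todo, pos, neg):
            """Return True if a branch can be completed without contradiction (a counter-model)."""
            if not todo:
                return True
            f, rest = todo[0], todo[1:]
            if isinstance(f, str):                      # positive literal
                return False if f in neg else has_open_branch(rest, pos | {f}, neg)
            if f[0] == "not":
                g = f[1]
                if isinstance(g, str):                  # negative literal
                    return False if g in pos else has_open_branch(rest, pos, neg | {g})
                if g[0] == "not":                       # double negation
                    return has_open_branch([g[1]] + rest, pos, neg)
                if g[0] == "and":                       # negated conjunction: two branches
                    return (has_open_branch([("not", g[1])] + rest, pos, neg) or
                            has_open_branch([("not", g[2])] + rest, pos, neg))
                if g[0] == "or":                        # negated disjunction: both negations on one branch
                    return has_open_branch([("not", g[1]), ("not", g[2])] + rest, pos, neg)
            if f[0] == "and":                           # conjunction: expand on one branch
                return has_open_branch([f[1], f[2]] + rest, pos, neg)
            if f[0] == "or":                            # disjunction: two branches
                return (has_open_branch([f[1]] + rest, pos, neg) or
                        has_open_branch([f[2]] + rest, pos, neg))

        # "p and q" entails "p"; "p or q" does not.
        print(entails([("and", "p", "q")], "p"))        # True
        print(entails([("or", "p", "q")], "p"))         # False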

    Analysis criteria of logic and linguistic models of natural language sentences

    To carry out content analysis of electronic text documents, the use of formal logic-linguistic models is proposed. The aim of the article is to describe criteria for analyzing formal models that can reflect the content of natural language sentences and that are built with the mathematical apparatus of predicate logic; these criteria are needed to construct formal models of electronic text documents. The article first describes the main text models used today as tools for content processing of electronic text documents. For content analysis, the author proposes formal logic-linguistic models based on functional relationships between the principal and subordinate parts of natural language sentences. To this end, the study examines the principles of constructing logic-linguistic models of natural language sentences and formulates four analysis criteria. The first criterion analyzes the number of simple predicates in a logic-linguistic model, which provides information about the type and composition of the natural language sentence. The second criterion analyzes the cardinality of the set of predicate variables and constants of the model, which affects the number of simple predicates and identifies the type of individual forms of the model. The third criterion focuses on the logical operations used in the model, which makes it possible to analyze the sequence of reasoning expressed in the sentence. The fourth criterion examines the presence of identical components in logic-linguistic models built over different sets of predicate variables and constants. The described analysis criteria are required to build formal models of electronic text documents using the mathematical apparatus of predicate logic.
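
    As a reading aid, the four criteria can be phrased as simple measurements over an explicit encoding of a logic-linguistic model. The sketch below assumes a model is a set of simple predicates over variables and constants plus the connectives joining them; the dataclasses and function names are hypothetical and only mirror the criteria described above.

        # Hypothetical encoding of a logic-linguistic model and its four analysis criteria.
        from dataclasses import dataclass, field

        @dataclass
        class Predicate:
            name: str          # e.g. the functional relation "subject_of"
            arguments: tuple   # predicate variables and constants

        @dataclass
        class LogicLinguisticModel:
            predicates: list                               # simple predicates of the sentence
            operations: set = field(default_factory=set)   # logical operations, e.g. {"and"}

        def criterion_1(model):
            """Number of simple predicates: type and composition of the sentence."""
            return len(model.predicates)

        def criterion_2(model):
            """Cardinality of the set of predicate variables and constants."""
            return len({arg for p in model.predicates for arg in p.arguments})

        def criterion_3(model):
            """Logical operations used, i.e. the sequence of reasoning in the sentence."""
            return model.operations

        def criterion_4(model_a, model_b):
            """Identical components shared by two models over different term sets."""
            return {p.name for p in model_a.predicates} & {p.name for p in model_b.predicates}

        # Toy model of "the system processes documents".
        m = LogicLinguisticModel(
            predicates=[Predicate("subject_of", ("system", "processes")),
                        Predicate("object_of", ("documents", "processes"))],
            operations={"and"})
        print(criterion_1(m), criterion_2(m), sorted(criterion_3(m)))   # 2 3 ['and']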

    Explicit Reasoning over End-to-End Neural Architectures for Visual Question Answering

    Many vision and language tasks require commonsense reasoning beyond data-driven image and natural language processing. Here we adopt Visual Question Answering (VQA) as an example task, where a system is expected to answer a natural language question about an image. Current state-of-the-art systems attempt to solve the task with deep neural architectures and achieve promising performance, but the resulting systems are generally opaque and struggle with questions that require extra knowledge. In this paper, we present an explicit reasoning layer on top of a set of penultimate neural network based systems. The reasoning layer enables answering questions for which additional knowledge is required, and at the same time provides an interpretable interface to end users. Specifically, the reasoning layer adopts a Probabilistic Soft Logic (PSL) based engine to reason over a basket of inputs: visual relations, the semantic parse of the question, and background ontological knowledge from word2vec and ConceptNet. Experimental analysis of the answers and of the key evidential predicates generated on the VQA dataset validates our approach. Comment: 9 pages, 3 figures, AAAI 201
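
    The role of the PSL layer can be made concrete with a heavily simplified soft-logic rule: a candidate answer is supported when a detected visual relation matches the relation in the question's semantic parse and the answer is ontologically related to the detected object. The sketch below uses the Łukasiewicz conjunction that underlies PSL, but the rule, the inputs, and the helper names (lukasiewicz_and, score_answer) are assumptions for illustration, not the paper's actual engine or rule set.

        # Illustrative soft-logic scoring in the spirit of a PSL reasoning layer (assumed, simplified).

        def lukasiewicz_and(*truth_values):
            """Łukasiewicz t-norm, the soft conjunction used in PSL."""
            return max(0.0, sum(truth_values) - (len(truth_values) - 1))

        def score_answer(answer, visual_relations, question_relation, relatedness):
            """Soft truth that `answer` satisfies the question, given detector outputs."""
            best = 0.0
            for (subj, rel, obj), confidence in visual_relations.items():
                rule_body = lukasiewicz_and(
                    confidence,                                  # vision detector confidence
                    1.0 if rel == question_relation else 0.0,    # relation from the question parse
                    relatedness.get((answer, obj), 0.0))         # word2vec / ConceptNet relatedness
                best = max(best, rule_body)
            return best

        # "What is the man riding?" with detected relation (man, riding, horse).
        visual = {("man", "riding", "horse"): 0.8}
        related = {("horse", "horse"): 1.0, ("bicycle", "horse"): 0.1}
        print(score_answer("horse", visual, "riding", related))     # about 0.8
        print(score_answer("bicycle", visual, "riding", related))   # 0.0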

    Converting Natural Language Phrases in Lambda Calculus to Generalized Constraint Language

    This study explores one aspect of bridging Computing with Words and Natural Language Processing: connecting the extraction capabilities of Natural Language Processing with the inference capabilities of Computing with Words. Computing with Words uses the Generalized Constraint Language to express the logical proposition carried by a given expression. A program was written to convert a logic-based lambda calculus representation of an English natural language expression into the Generalized Constraint Language. The scope of the project is limited to tagging parts of speech in simple expressions and provides a foundation for extending the conversion to more complex lambda calculus expressions. The program tags the parts of speech in the lambda calculus expression and outputs the Generalized Constraint Language form of the expression, showing the constraint on an idea in the original sentence. The project establishes an entry point and is designed with further improvements and modifications in mind. Its output is useful for understanding how Natural Language Processing and Computing with Words can be bridged, as the program creates a baseline for extracting parts of speech from a sentence and highlighting the significant meaning of that sentence.
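
    To make the described pipeline concrete, the sketch below converts one very simple logic-based lambda calculus expression into a Zadeh-style generalized constraint of the form X isr R. The regular expression, the attribute lexicon, and the function to_gcl are illustrative assumptions, not the project's actual program.

        # Hypothetical mini-converter: lambda calculus expression -> generalized constraint.
        import re

        # Maps an adjectival predicate to the attribute it constrains (assumed lexicon).
        ATTRIBUTE = {"tall": "Height", "old": "Age", "expensive": "Price"}

        def to_gcl(expression):
            """Convert e.g. '(\\x. tall(x))(John)' into 'Height(John) is tall'."""
            match = re.match(r"\(\\(\w+)\.\s*(\w+)\((\w+)\)\)\((\w+)\)", expression)
            if not match:
                raise ValueError("unsupported expression: " + expression)
            var, predicate, body_arg, argument = match.groups()
            if body_arg != var:
                raise ValueError("unbound variable in body")
            attribute = ATTRIBUTE.get(predicate, predicate.capitalize())
            # Possibilistic constraint: the copula "is" stands for "isr" with r left blank.
            return f"{attribute}({argument}) is {predicate}"

        print(to_gcl(r"(\x. tall(x))(John)"))   # Height(John) is tall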