2,124 research outputs found

    A Prolog application for reasoning on maths puzzles with diagrams

    Get PDF
    Despite the indisputable progress of artificial intelligence, some tasks that are rather easy for a human being are still challenging for a machine. An emblematic example is the resolution of mathematical puzzles with diagrams. Sub-symbolic approaches have proven successful in fields like image recognition and natural language processing, but combining these techniques into a multimodal approach that identifies the puzzle’s answer appears to be a matter of reasoning, more suited to a symbolic technique. In this work, we employ logic programming to perform spatial reasoning on the puzzle’s diagram and integrate the derived knowledge into the solving process. Analysing the resolution strategies required by the puzzles of an international competition for humans, we derive the design principles of a Prolog reasoning library, which interacts with image-processing software to formulate the puzzle’s constraints. The library integrates knowledge from different sources and relies on the Prolog inference engine to provide the answer. This work can be considered a first step towards the ambitious goal of a machine autonomously solving a problem in a generic context starting from its textual-graphical presentation, an ability that could potentially benefit every human–machine interaction.
    Buscaroli, Riccardo; Chesani, Federico; Giuliani, Giulia; Loreti, Daniela; Mello, Paola
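
    A minimal sketch of the general idea, for illustration only: the paper's library is written in Prolog and driven by its inference engine, whereas the snippet below uses plain Python, and the diagram facts, predicates, and constraint are invented rather than taken from the paper.

        # Spatial facts that an image-processing step might emit: (shape, row, column).
        diagram_facts = [
            ("circle", 0, 0), ("square", 0, 1),
            ("square", 1, 0), ("circle", 1, 1),
        ]

        def left_of(a, b, facts):
            """True if some shape a sits in a smaller column than some shape b on the same row."""
            return any(sa == a and sb == b and ra == rb and ca < cb
                       for (sa, ra, ca) in facts
                       for (sb, rb, cb) in facts)

        # A puzzle constraint expressed over the extracted facts; candidate
        # answers are filtered by simple search instead of Prolog resolution.
        def satisfies(answer, facts):
            return answer == "circle" and left_of("circle", "square", facts)

        candidates = ["circle", "square"]
        print([a for a in candidates if satisfies(a, diagram_facts)])  # ['circle']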

    Machine comprehension of text using combinatory categorial grammar and answer set programs

    Get PDF
    We present an automated method for generating Answer Set Programs from narratives written in English and demonstrate how such a representation can be used to answer questions about text. The proposed approach relies on a transparent interface between the syntax and semantics of natural language provided by Combinatory Categorial Grammars to translate text into Answer Set Programs, hence creating a knowledge base that, together with background knowledge, can be queried.
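
    As a toy illustration only (not the authors' CCG-based pipeline): the snippet below maps a single invented sentence pattern to an ASP-style fact string, just to make the target representation concrete.

        def sentence_to_asp_fact(sentence):
            # "<Subject> <verb>s <Object>." -> "<verb>(<subject>, <object>)."
            subj, verb, obj = sentence.rstrip(".").split()
            return f"{verb.rstrip('s')}({subj.lower()}, {obj.lower()})."

        print(sentence_to_asp_fact("John loves Mary."))  # love(john, mary).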

    GLIMPSED: Improving natural language processing with gaze data

    Get PDF

    Large Language Models Are Human-Level Prompt Engineers

    Full text link
    By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However, task performance depends significantly on the quality of the prompt used to steer the model, and the most effective prompts have been handcrafted by humans. Inspired by classical program synthesis and the human approach to prompt engineering, we propose Automatic Prompt Engineer (APE) for automatic instruction generation and selection. In our method, we treat the instruction as the "program," optimized by searching over a pool of instruction candidates proposed by an LLM in order to maximize a chosen score function. To evaluate the quality of the selected instruction, we evaluate the zero-shot performance of another LLM following the selected instruction. Experiments on 24 NLP tasks show that our automatically generated instructions outperform the prior LLM baseline by a large margin and achieve better or comparable performance to the instructions generated by human annotators on 19/24 tasks. We conduct extensive qualitative and quantitative analyses to explore the performance of APE. We show that APE-engineered prompts can be applied to steer models toward truthfulness and/or informativeness, as well as to improve few-shot learning performance by simply prepending them to standard in-context learning prompts. Please check out our webpage at https://sites.google.com/view/automatic-prompt-engineer
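
    A hedged sketch of the search loop the abstract describes: candidate instructions are proposed, each is scored by how well a model following it answers a small held-out set, and the highest-scoring instruction is kept. The two llm_* functions are stand-ins, not a real API, and the scoring here is plain execution accuracy rather than the paper's full set of score functions.

        def llm_propose_instructions(demonstrations, n=4):
            # Placeholder for "ask an LLM to propose instructions from demonstrations".
            return [f"Instruction candidate {i} for the task" for i in range(n)]

        def llm_follow(instruction, x):
            # Placeholder for the model that follows the instruction zero-shot.
            return x.upper()  # pretend the underlying task is upper-casing

        def score(instruction, dev_set):
            # Fraction of held-out examples answered correctly under this instruction.
            return sum(llm_follow(instruction, x) == y for x, y in dev_set) / len(dev_set)

        dev_set = [("cat", "CAT"), ("dog", "DOG")]
        candidates = llm_propose_instructions(dev_set)
        best = max(candidates, key=lambda ins: score(ins, dev_set))
        print(best)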

    A Theme-Rewriting Approach for Generating Algebra Word Problems

    Full text link
    Texts present coherent stories that have a particular theme or overall setting, for example science fiction or western. In this paper, we present a text generation method called "rewriting" that edits existing human-authored narratives to change their theme without changing the underlying story. We apply the approach to math word problems, where it might help students stay more engaged by quickly transforming all of their homework assignments to the theme of their favorite movie without changing the math concepts that are being taught. Our rewriting method uses a two-stage decoding process, which proposes new words from the target theme and scores the resulting stories according to a number of factors defining aspects of syntactic, semantic, and thematic coherence. Experiments demonstrate that the final stories typically represent the new theme well while still testing the original math concepts, outperforming a number of baselines. We also release a new dataset of human-authored rewrites of math word problems in several themes. Comment: To appear EMNLP 201
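
    A rough sketch of a propose-then-score rewrite step in the spirit of the two-stage decoding the abstract describes (not the authors' model): theme words are proposed as substitutes for source words, and each candidate rewrite is scored by a toy stand-in for the paper's syntactic, semantic, and thematic coherence factors. The lexicon and scoring function are invented.

        theme_lexicon = {"apples": "asteroids", "baskets": "cargo pods"}  # invented mapping

        def propose(sentence):
            # Yield rewrites that swap one source word for its themed counterpart.
            words = sentence.split()
            for i, w in enumerate(words):
                if w in theme_lexicon:
                    yield " ".join(words[:i] + [theme_lexicon[w]] + words[i + 1:])

        def coherence_score(sentence):
            # Placeholder score: prefer rewrites that keep the numbers (the math) intact.
            return sum(tok.isdigit() for tok in sentence.split())

        original = "Maria picks 12 apples and fills 3 baskets"
        best = max(propose(original), key=coherence_score, default=original)
        print(best)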

    Language Models Can Teach Themselves to Program Better

    Full text link
    Recent Language Models (LMs) achieve breakthrough performance in code generation when trained on human-authored problems, even solving some competitive-programming problems. Self-play has proven useful in games such as Go, and thus it is natural to ask whether LMs can generate their own instructive programming problems to improve their performance. We show that it is possible for an LM to synthesize programming problems and solutions, which are filtered for correctness by a Python interpreter. The LM's performance is then seen to improve when it is fine-tuned on its own synthetic problems and verified solutions; thus the model 'improves itself' using the Python interpreter. Problems are specified formally as programming puzzles [Schuster et al., 2021], a code-based problem format where solutions can easily be verified for correctness by execution. In experiments on publicly-available LMs, test accuracy more than doubles. This work demonstrates the potential for code LMs, with an interpreter, to generate instructive problems and improve their own performance. Comment: 22 pages, 14 figures
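
    A sketch of the verify-by-execution filter the abstract relies on, using the programming-puzzle format it cites (a puzzle f is solved when f(solution) evaluates to True). The candidate pairs below are hard-coded stand-ins for model output; in the paper both puzzles and solutions come from the LM, and only verified pairs are kept for fine-tuning.

        candidate_pairs = [
            ("def f(x): return x * x == 25", "5"),     # solution passes the check
            ("def f(x): return len(x) == 3", "'hi'"),  # solution fails the check
        ]

        def verified(puzzle_src, solution_src):
            # Run the puzzle and its proposed solution; keep the pair only if f(solution) is True.
            env = {}
            try:
                exec(puzzle_src, env)
                return bool(env["f"](eval(solution_src, env)))
            except Exception:
                return False

        training_set = [(p, s) for p, s in candidate_pairs if verified(p, s)]
        print(training_set)  # only the x * x == 25 pair survives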

    A Tool for Encoding Controlled Natural Language Specifications as ASP Rules.

    Get PDF
    Answer Set Programming (ASP) is a popular declarative programming language for solving hard combinatorial problems. Although ASP has been widely adopted in both academic and industrial contexts, it can be difficult to use for people who are not familiar with logic programming conventions. In this paper, we propose a translation of English sentences expressed in a controlled natural language (CNL) form into ASP. In particular, we first provide a definition of the type of sentences allowed by our CNL and their translation as ASP rules, and then exemplify the usage of CNL for the specification of well-known combinatorial problems.
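
    For illustration only (the sentence pattern and output are invented, not the tool's actual CNL or translation): a restricted sentence of the form "Every X is a Y" can be rendered as the ASP rule y(A) :- x(A), which is the flavour of encoding the paper targets.

        import re

        def cnl_to_asp(sentence):
            # Translate one supported CNL pattern into an ASP rule string.
            m = re.fullmatch(r"Every (\w+) is a (\w+)\.", sentence)
            if not m:
                raise ValueError("sentence is outside the supported CNL fragment")
            x, y = (w.lower() for w in m.groups())
            return f"{y}(A) :- {x}(A)."

        print(cnl_to_asp("Every node is a vertex."))  # vertex(A) :- node(A).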