Invariant Synthesis for Incomplete Verification Engines
We propose a framework for synthesizing inductive invariants for incomplete
verification engines, which soundly reduce logical problems in undecidable
theories to decidable theories. Our framework is based on the
counterexample-guided inductive synthesis (CEGIS) principle and allows verification engines to
communicate non-provability information to guide invariant synthesis. We show
precisely how the verification engine can compute such non-provability
information and how to build effective learning algorithms when invariants are
expressed as Boolean combinations of a fixed set of predicates. Moreover, we
evaluate our framework in two verification settings, one in which verification
engines need to handle quantified formulas and one in which verification
engines have to reason about heap properties expressed in an expressive but
undecidable separation logic. Our experiments show that our invariant synthesis
framework based on non-provability information can both effectively synthesize
inductive invariants and adequately strengthen contracts across a large suite
of programs.
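The shape of such a CEGIS loop can be sketched in miniature. The following is a hypothetical toy, not the paper's implementation: candidate invariants are conjunctions over a fixed predicate set, plain enumeration stands in for the learner, and a brute-force "verification engine" over a finite slice of states plays the role of the incomplete engine, returning tagged counterexamples as its non-provability information.

```python
from itertools import product

# Toy program:  x := 0; while x < 10: x := x + 2
# Goal: an inductive invariant, as a conjunction over `predicates`,
# strong enough to prove 0 <= x <= 10 at loop exit.
predicates = {
    "x >= 0": lambda x: x >= 0,
    "x <= 10": lambda x: x <= 10,
    "x % 2 == 0": lambda x: x % 2 == 0,
}

def holds(candidate, x):
    return all(predicates[p](x) for p in candidate)

def check(candidate):
    """Brute-force 'verification engine' over a finite state slice.
    Returns None on success, otherwise a tagged counterexample --
    the non-provability information guiding the next candidate."""
    if not holds(candidate, 0):                          # initiation
        return ("init", 0)
    for x in range(-20, 21):                             # consecution
        if holds(candidate, x) and x < 10 and not holds(candidate, x + 2):
            return ("step", x)
    for x in range(-20, 21):                             # safety at exit
        if holds(candidate, x) and not (x < 10) and not (0 <= x <= 10):
            return ("safe", x)
    return None

def cegis():
    # Enumerate conjunctions over the predicate set, weakest first,
    # until the engine can no longer produce a counterexample.
    for bits in sorted(product([False, True], repeat=len(predicates)), key=sum):
        candidate = [p for p, b in zip(predicates, bits) if b]
        if check(candidate) is None:
            return candidate
    return None

print(cegis())
```

A real engine would of course use an SMT solver rather than finite enumeration; the point of the sketch is the protocol, in which the engine's answer to a failed candidate is structured feedback rather than a bare "unknown".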
A Computational-Hermeneutic Approach for Conceptual Explicitation
We present a computer-supported approach for the logical analysis and
conceptual explicitation of argumentative discourse. Computational hermeneutics
harnesses recent progress in automated reasoning for higher-order logics and
aims at formalizing natural-language argumentative discourse using flexible
combinations of expressive non-classical logics. In doing so, it allows us to
render explicit the tacit conceptualizations implicit in argumentative
discursive practices. Our approach operates on networks of structured arguments
and is iterative and two-layered. At one layer we search for logically correct
formalizations for each of the individual arguments. At the next layer we
select among those correct formalizations the ones which honor the argument's
dialectic role, i.e. attacking or supporting other arguments as intended. We
operate at these two layers in parallel and continuously rate sentences'
formalizations by using, primarily, inferential adequacy criteria. An
interpretive, logical theory will thus gradually evolve. This theory is
composed of meaning postulates serving as explications for concepts playing a
role in the analyzed arguments. Such a recursive, iterative approach to
interpretation does justice to the inherent circularity of understanding: the
whole is understood compositionally on the basis of its parts, while each part
is understood only in the context of the whole (hermeneutic circle). We
summarily discuss previous work on exemplary applications of human-in-the-loop
computational hermeneutics in metaphysical discourse. We also discuss some of
the main challenges involved in fully-automating our approach. By sketching
some design ideas and reviewing relevant technologies, we argue for the
technological feasibility of a highly automated computational hermeneutics.
Comment: 29 pages, 9 figures, to appear in A. Nepomuceno, L. Magnani, F.
Salguero, C. Barés, M. Fontaine (eds.), Model-Based Reasoning in Science
and Technology. Inferential Models for Logic, Language, Cognition and
Computation, Series "Sapere", Springer
Representation of research hypotheses
BACKGROUND: Hypotheses are now being automatically produced on an industrial scale by computers in biology: for example, the annotation of a genome is essentially a large set of hypotheses generated by sequence-similarity programs, and robot scientists enable the full automation of a scientific investigation, including the generation and testing of research hypotheses. RESULTS: This paper proposes a logically defined way of recording automatically generated hypotheses in a machine-amenable form. The proposed formalism allows the description of complete hypothesis sets as specified input and output for scientific investigations. The formalism supports the decomposition of research hypotheses into more specialised hypotheses where an application requires it. Hypotheses are represented in an operational way: it is possible to design an experiment to test them. The explicit formal description of research hypotheses promotes the explicit formal description of the results and conclusions of an investigation. The paper also proposes a framework for automated hypothesis generation. We demonstrate how the key components of the proposed framework are implemented in the Robot Scientist 'Adam'. CONCLUSIONS: A formal representation of automatically generated research hypotheses can help to improve the way humans produce, record, and validate research hypotheses. AVAILABILITY: http://www.aber.ac.uk/en/cs/research/cb/projects/robotscientist/results
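As an illustration of what a machine-amenable hypothesis record might look like, here is a minimal sketch in Python. The class and field names are our own assumptions, not the paper's formalism; it shows two of the abstract's requirements: decomposition into more specialised sub-hypotheses, and the operational requirement that a hypothesis (or each of its specialisations) be linked to an experiment that can test it.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Hypothesis:
    """Hypothetical machine-amenable record of a research hypothesis."""
    statement: str                       # the proposition under test
    experiment: Optional[str] = None     # identifier of a testing experiment
    status: str = "untested"             # untested / supported / refuted
    sub_hypotheses: List["Hypothesis"] = field(default_factory=list)

    def specialise(self, statement, experiment=None):
        """Decompose into a more specialised sub-hypothesis."""
        child = Hypothesis(statement, experiment=experiment)
        self.sub_hypotheses.append(child)
        return child

    def operational(self):
        """A hypothesis is operational if it, or every one of its
        specialisations, has an experiment designed to test it."""
        if self.experiment is not None:
            return True
        return bool(self.sub_hypotheses) and all(
            h.operational() for h in self.sub_hypotheses)

# Usage: a genome-annotation hypothesis decomposed into a testable part
# (the gene name and assay identifier below are invented for illustration).
h = Hypothesis("gene YER152C encodes an enzyme in lysine biosynthesis")
h.specialise("deleting YER152C blocks growth without lysine",
             experiment="auxotrophy-growth-assay")
print(h.operational())   # True: every leaf is linked to an experiment
```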
Learning Commonsense Knowledge Through Interactive Dialogue
One of the most difficult problems in Artificial Intelligence is acquiring commonsense knowledge: creating a collection of facts and information that an ordinary person would know. In this work, we present a system that, from limited background knowledge, learns to form simple concepts through interactive dialogue with a user. We approach the problem using a syntactic parser, along with a mechanism to check for synonymy, to translate sentences into logical formulas represented in the Event Calculus using Answer Set Programming (ASP). Reasoning and learning tasks are then automatically generated for the translated text, with learning initiated through question answering. The system is capable of learning with no contextual knowledge prior to the dialogue. The system has been evaluated on stories inspired by Facebook's bAbI question-answering tasks and, through appropriate question answering, is able to respond accurately to these dialogues.
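The Event Calculus reasoning such a system relies on can be illustrated with a minimal sketch. The predicate names (initiates, terminates, holdsAt) follow the standard calculus, but the story and fluents below are invented, and a real system would delegate this inference to an ASP solver such as clingo rather than the forward simulation shown here.

```python
# Hypothetical mini Event Calculus: which events make fluents true or false.
initiates = {("pick_up", "holding")}    # pick_up initiates the fluent "holding"
terminates = {("put_down", "holding")}  # put_down terminates it

def holds_at(fluent, t, narrative):
    """holdsAt(F, T): F holds at T if some earlier event initiated it and
    no later event before T terminated it (the law of inertia)."""
    value = False
    for time, event in sorted(narrative):
        if time >= t:
            break
        if (event, fluent) in initiates:
            value = True
        elif (event, fluent) in terminates:
            value = False
    return value

# A two-event story, as might be extracted from a parsed dialogue.
story = [(1, "pick_up"), (3, "put_down")]
print(holds_at("holding", 2, story))   # True: picked up, not yet put down
print(holds_at("holding", 4, story))   # False: terminated at time 3
```

Question answering then reduces to queries of this form: "is the agent holding the object at time 2?" becomes a holdsAt check against the narrative accumulated from the dialogue.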