Repairing Inconsistent Databases: A Model-Theoretic Approach and Abductive Reasoning
In this paper we consider two points of view on the problem of coherent
integration of distributed data. First we give a purely model-theoretic analysis
of the possible ways to `repair' a database. We do so by characterizing the
possibilities to `recover' consistent data from an inconsistent database in
terms of those models of the database that exhibit as little inconsistent
information as reasonably possible. Then we introduce an abductive application
to restore the consistency of a given database. This application is based on an
abductive solver (A-system) that implements an SLDNFA-resolution procedure, and
computes a list of data-facts that should be inserted into the database or
retracted from it in order to keep the database consistent. The two approaches
for coherent data integration are related by soundness and completeness
results.

Comment: 15 pages. Originally published in proc. PCL 2002, a FLoC workshop;
eds. Hendrik Decker, Dina Goldin, Jorgen Villadsen, Toshiharu Waragai
(http://floc02.diku.dk/PCL/)
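As a toy illustration of the repair notion (my own sketch in Python, not the paper's A-system or its SLDNFA procedure; the `dept` relation and all function names are invented), one can model integrity constraints as forbidden sets of facts and enumerate the subset-minimal deletion repairs of an inconsistent database:

```python
from itertools import combinations

def violates(db, constraints):
    """A constraint here is a set of facts that must not all hold together."""
    return any(c <= db for c in constraints)

def minimal_repairs(db, constraints):
    """Brute-force search for subset-minimal sets of deletions that
    restore consistency (insertions are omitted for brevity)."""
    facts = sorted(db)
    repairs = []
    # Increasing size guarantees subsets are seen before their supersets.
    for r in range(len(facts) + 1):
        for deleted in combinations(facts, r):
            candidate = db - set(deleted)
            if not violates(candidate, constraints):
                if not any(d < set(deleted) for d in repairs):
                    repairs.append(set(deleted))
    return repairs

# Example: one employee recorded in two departments, which the
# (invented) constraint forbids; each single deletion is a repair.
db = {("dept", "ann", "cs"), ("dept", "ann", "math")}
constraints = [{("dept", "ann", "cs"), ("dept", "ann", "math")}]
print(minimal_repairs(db, constraints))
```

Each minimal repair corresponds to one way of keeping as much of the original data as possible, echoing the minimal-inconsistency models of the abstract.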
A New Algorithm to Automate Inductive Learning of Default Theories
In inductive learning of a broad concept, an algorithm should be able to
distinguish concept examples from exceptions and noisy data. An approach
through recursively finding patterns in exceptions turns out to correspond to
the problem of learning default theories. Default logic is what humans employ
in common-sense reasoning. Therefore, learned default theories are better
understood by humans. In this paper, we present new algorithms to learn default
theories in the form of non-monotonic logic programs. Experiments reported in
this paper show that our algorithms are a significant improvement over
traditional approaches based on inductive logic programming.

Comment: Paper presented at the 33rd International Conference on Logic
Programming (ICLP 2017), Melbourne, Australia, August 28 to September 1, 2017.
16 pages, LaTeX, 3 PDF figures (arXiv:YYMM.NNNNN)
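The idea of treating exceptions as learnable patterns can be sketched at toy scale (a one-level illustration of the general idea only, not the paper's algorithm; the data and all names are invented): pick the attribute covering the most positives as the default body, then look for an attribute that characterizes the covered negatives.

```python
def learn_default_rule(examples):
    """Toy one-level sketch: `examples` maps a name to (attributes, label).
    Returns (d, E), read as the default rule  head :- d, not e  for each
    exception attribute e in E."""
    attrs = set().union(*(a for a, _ in examples.values()))
    # Default body: the attribute covering the most positive examples.
    default = max(attrs, key=lambda t: sum(
        1 for a, y in examples.values() if t in a and y))
    covered_neg = [a for a, y in examples.values() if default in a and not y]
    exceptions = set()
    if covered_neg:
        # Exceptions: attributes of covered negatives never seen on positives.
        pos_attrs = set().union(*(a for a, y in examples.values() if y))
        exceptions = set().union(*covered_neg) - pos_attrs
    return default, exceptions

# Birds fly by default; penguins are the exception.
examples = {
    "tweety": ({"bird"}, True),
    "woody":  ({"bird"}, True),
    "pingu":  ({"bird", "penguin"}, False),
}
print(learn_default_rule(examples))  # ('bird', {'penguin'})
```

The recursive version alluded to in the abstract would apply the same step again inside the exceptions (exceptions to exceptions), yielding a non-monotonic program.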
Location-Based Reasoning about Complex Multi-Agent Behavior
Recent research has shown that surprisingly rich models of human activity can
be learned from GPS (positional) data. However, most effort to date has
concentrated on modeling single individuals or statistical properties of groups
of people. Moreover, prior work focused solely on modeling actual successful
executions (and not failed or attempted executions) of the activities of
interest. We, in contrast, take on the task of understanding human
interactions, attempted interactions, and intentions from noisy sensor data in
a fully relational multi-agent setting. We use a real-world game of capture the
flag to illustrate our approach in a well-defined domain that involves many
distinct cooperative and competitive joint activities. We model the domain
using Markov logic, a statistical-relational language, and learn a theory that
jointly denoises the data and infers occurrences of high-level activities, such
as a player capturing an enemy. Our unified model combines constraints imposed
by the geometry of the game area, the motion model of the players, and the
rules and dynamics of the game in a probabilistically and logically sound
fashion. We show that while it may be impossible to directly detect a
multi-agent activity due to sensor noise or malfunction, the occurrence of the
activity can still be inferred by considering both its impact on the future
behaviors of the people involved and the events that could have preceded
it. Further, we show that given a model of successfully performed multi-agent
activities, along with a set of examples of failed attempts at the same
activities, our system automatically learns an augmented model that is capable
of recognizing success and failure, as well as the goals of people's actions, with
high accuracy. We compare our approach with other alternatives and show that
our unified model, which takes into account not only relationships among
individual players, but also relationships among activities over the entire
length of a game, although more computationally costly, is significantly more
accurate. Finally, we demonstrate that explicitly modeling unsuccessful
attempts boosts performance on other important recognition tasks.
All-Path Reachability Logic
This paper presents a language-independent proof system for reachability
properties of programs written in non-deterministic (e.g., concurrent)
languages; this proof system is referred to as all-path reachability logic. It derives
partial-correctness properties with all-path semantics (a state satisfying a
given precondition reaches states satisfying a given postcondition on all
terminating execution paths). The proof system takes as axioms any
unconditional operational semantics, and is sound (partially correct) and
(relatively) complete, independent of the object language. The soundness has
also been mechanized in Coq. This approach is implemented in a tool for
semantics-based verification as part of the K framework (http://kframework.org).
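To illustrate the all-path semantics only (not the proof system itself, nor its K or Coq artifacts; all names here are invented), the property can be checked exhaustively on a finite transition system: every stuck state reachable from a precondition state must satisfy the postcondition.

```python
from collections import deque

def all_path_holds(states, step, pre, post):
    """Check all-path partial correctness on a finite transition system:
    on every terminating path from a state satisfying `pre`, the final
    (stuck) state satisfies `post`.  Equivalent formulation: every stuck
    state reachable from a pre-state satisfies `post`."""
    frontier = deque(s for s in states if pre(s))
    seen = set(frontier)
    while frontier:
        s = frontier.popleft()
        succs = step(s)
        if not succs and not post(s):  # a terminating path ends badly
            return False
        for t in succs:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True

# Toy non-deterministic counter: from n > 0, step to n-1 or n-2.
step = lambda n: [m for m in (n - 1, n - 2) if n > 0]
print(all_path_holds(range(6), step, pre=lambda n: n == 5,
                     post=lambda n: n <= 0))  # True
```

Non-terminating behaviors are simply never flagged, matching the partial-correctness reading in the abstract.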
Unified Correspondence and Proof Theory for Strict Implication
The unified correspondence theory for distributive lattice expansion logics
(DLE-logics) is specialized to strict implication logics. As a consequence of a
general semantic conservativity result, a wide range of strict implication
logics can be conservatively extended to Lambek Calculi over the bounded
distributive full non-associative Lambek calculus (BDFNL). Many strict
implication sequents can be transformed into analytic rules employing one of
the main tools of unified correspondence theory, namely (a suitably modified
version of) the Ackermann-lemma-based algorithm ALBA. Gentzen-style
cut-free sequent calculi are developed for BDFNL and its extensions with the
analytic rules obtained from strict implication sequents.

Comment: This is a pre-publication version of a submission to the Journal of
Logic and Computation.
MirrorShard: Proof by Computational Reflection with Verified Hints
We describe a method for building composable and extensible verification
procedures within the Coq proof assistant. Unlike traditional methods that rely
on run-time generation and checking of proofs, we use verified-correct
procedures with Coq soundness proofs. Though they are internalized in Coq's
logic, our provers support sound extension by users with hints over new
domains, enabling automated reasoning about user-defined abstract predicates.
We maintain soundness by developing an architecture for modular packaging,
construction, and composition of hint databases, which had previously only been
implemented in Coq at the level of its dynamically typed, proof-generating
tactic language. Our provers also include rich handling of unification
variables, enabling integration with other tactic-based deduction steps within
Coq. We have implemented our techniques in MirrorShard, an open-source
framework for reflective verification. We demonstrate its applicability by
instantiating it to separation logic in order to support imperative program
verification.
Blocksworld Revisited: Learning and Reasoning to Generate Event-Sequences from Image Pairs
The process of identifying changes or transformations in a scene along with
the ability of reasoning about their causes and effects, is a key aspect of
intelligence. In this work we go beyond recent advances in computational
perception, and introduce a more challenging task, Image-based Event-Sequencing
(IES). In IES, the task is to predict a sequence of actions required to
rearrange objects from the configuration in an input source image to the one in
the target image. IES also requires systems to possess inductive
generalizability. Motivated from evidence in cognitive development, we compile
the first IES dataset, the Blocksworld Image Reasoning Dataset (BIRD) which
contains images of wooden blocks in different configurations, and the sequence
of moves to rearrange one configuration to the other. We first explore the use
of existing deep learning architectures and show that these end-to-end methods
under-perform in inferring temporal event-sequences and fail at inductive
generalization. We then propose a modular two-step approach: Visual Perception
followed by Event-Sequencing, and demonstrate improved performance by combining
learning and reasoning. Finally, by showing an extension of our approach on
natural images, we seek to pave the way for future research on event sequencing
for real-world scenes.

Comment: 10 pages, 5 figures; for associated dataset, see
https://asu-active-perception-group.github.io/bird_dataset_web
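The symbolic half of the task, finding a move sequence between two block configurations, can be sketched with a plain breadth-first planner (a toy baseline of my own, not the paper's perception-plus-sequencing model; all names are invented):

```python
from collections import deque

def canon(stacks):
    """Canonical form of a configuration: drop empty stacks, sort the rest.
    Each stack is a tuple of blocks written bottom-to-top."""
    return tuple(sorted(tuple(s) for s in stacks if s))

def successors(config):
    """All single moves: lift the top block of one stack and put it on
    another stack or on the table.  Yields (move, next_config) pairs,
    a move being (block, destination)."""
    out = []
    for i, src in enumerate(config):
        block = src[-1]
        rest = [list(s) for s in config]
        rest[i] = rest[i][:-1]
        for j, dst in enumerate(config):
            if j != i:
                nxt = [list(s) for s in rest]
                nxt[j].append(block)
                out.append(((block, dst[-1]), canon(nxt)))
        if len(src) > 1:  # moving a lone block to the table is a no-op
            out.append(((block, "table"), canon(rest + [[block]])))
    return out

def plan(source, target):
    """Breadth-first search for the shortest move sequence."""
    start, goal = canon(source), canon(target)
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        cfg, path = frontier.popleft()
        if cfg == goal:
            return path
        for move, nxt in successors(cfg):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [move]))
    return None

# Turn "B on A" into "A on B".
print(plan((("A", "B"),), (("B", "A"),)))  # [('B', 'table'), ('A', 'B')]
```

In the paper's modular pipeline, a perception module would first produce the two symbolic configurations from the image pair; the planner above stands in only for the sequencing step.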
The Next 700 Challenge Problems for Reasoning with Higher-Order Abstract Syntax Representations: Part 1-A Common Infrastructure for Benchmarks
A variety of logical frameworks support the use of higher-order abstract
syntax (HOAS) in representing formal systems. Although these systems seem
superficially the same, they differ in a variety of ways: for example, in how they
handle a context of assumptions and in which theorems about a given formal system
can be concisely expressed and proved. Our contributions in this paper are
three-fold: 1) we develop a common infrastructure for representing benchmarks
for systems supporting reasoning with binders, 2) we present several concrete
benchmarks, which highlight a variety of different aspects of reasoning within
a context of assumptions, and 3) we design an open repository, ORBI (Open
challenge problem Repository for systems supporting reasoning with BInders).
Our work sets the stage for providing a basis for qualitative comparison of
different systems. This allows us to review and survey the state of the art,
which we do in great detail for four systems in Part 2 of this paper (Felty et
al, 2015). It also allows us to outline future fundamental research questions
regarding the design and implementation of meta-reasoning systems.

Comment: 42 pages, 5 figures
Computing Stable Models of Normal Logic Programs Without Grounding
We present a method for computing stable models of normal logic programs,
i.e., logic programs extended with negation, in the presence of predicates with
arbitrary terms. Such programs need not have a finite grounding, so traditional
methods do not apply. Our method relies on the use of a non-Herbrand universe,
as well as coinduction, constructive negation and a number of other novel
techniques. Using our method, a normal logic program with predicates can be
executed directly under the stable model semantics without requiring it to be
grounded either before or during execution and without requiring that its
variables range over a finite domain. As a result, our method is quite general
and supports the use of terms as arguments, including lists and complex data
structures. A prototype implementation and non-trivial applications have been
developed to demonstrate the feasibility of our method.
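For contrast with the grounding-free method described above, here is the classical grounded guess-and-check that the paper avoids: a candidate set of atoms is stable iff it equals the least model of its Gelfond-Lifschitz reduct. This sketch and its names are mine, and it only works for finite ground programs, which is exactly the limitation the abstract addresses.

```python
from itertools import combinations

def least_model(rules):
    """Least model of a definite (negation-free) ground program;
    rules are (head, positive_body) pairs."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(program, atoms):
    """Guess-and-check over all interpretations: M is stable iff M is the
    least model of the Gelfond-Lifschitz reduct of the program w.r.t. M.
    Rules are (head, positive_body, negative_body) over ground atoms."""
    models = []
    for r in range(len(atoms) + 1):
        for m in map(set, combinations(sorted(atoms), r)):
            # Reduct: drop rules whose negative body intersects M,
            # then keep only the positive bodies.
            reduct = [(h, p) for h, p, n in program if not (n & m)]
            if least_model(reduct) == m:
                models.append(m)
    return models

# p :- not q.   q :- not p.   -- two stable models, {p} and {q}
prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(stable_models(prog, {"p", "q"}))  # [{'p'}, {'q'}]
```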
Induction of Non-Monotonic Rules From Statistical Learning Models Using High-Utility Itemset Mining
We present a fast and scalable algorithm to induce non-monotonic logic
programs from statistical learning models. We reduce the problem of searching
for the best clauses to instances of the High-Utility Itemset Mining (HUIM) problem. In
the HUIM problem, feature values and their importance are treated as
transactions and utilities respectively. We make use of TreeExplainer, a fast
and scalable implementation of the Explainable AI tool SHAP, to extract locally
important features and their weights from ensemble tree models. Our experiments
with UCI standard benchmarks suggest a significant improvement in terms of
classification evaluation metrics and running time of the training algorithm
compared to ALEPH, a state-of-the-art Inductive Logic Programming (ILP) system.

Comment: arXiv admin note: text overlap with arXiv:1808.0062
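The core of the reduction, feature values as items and importance weights as utilities, can be sketched at toy scale (brute force only, not the scalable HUIM miners or the TreeExplainer pipeline the paper uses; the data and all names are invented):

```python
from itertools import combinations

def best_itemset(transactions, max_size=2):
    """Brute-force sketch of high-utility itemset mining: each transaction
    maps items (feature-value pairs) to utilities (importance weights);
    an itemset's utility is its summed utility over every transaction
    that contains all of its items."""
    items = set().union(*transactions)
    best, best_u = None, 0.0
    for r in range(1, max_size + 1):
        for s in combinations(sorted(items), r):
            u = sum(sum(t[i] for i in s) for t in transactions
                    if all(i in t for i in s))
            if u > best_u:
                best, best_u = s, u
    return best, best_u

# Three toy "transactions" built from per-example feature importances.
transactions = [
    {"outlook=sunny": 0.5, "humidity=high": 0.25},
    {"outlook=sunny": 0.5},
    {"windy=true": 0.125},
]
print(best_itemset(transactions))  # (('outlook=sunny',), 1.0)
```

A high-utility itemset then serves as a candidate clause body; the paper's contribution lies in doing this search scalably and turning the result into non-monotonic rules.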