The pragmatic proof: hypermedia API composition and execution
Machine clients are increasingly making use of the Web to perform tasks. While Web services traditionally mimic remote procedure calling interfaces, a new generation of so-called hypermedia APIs works through hyperlinks and forms, in a way similar to how people browse the Web. This means that existing composition techniques, which determine a procedural plan upfront, are not sufficient to consume hypermedia APIs, which need to be navigated at runtime. Clients instead need a more dynamic plan that allows them to follow hyperlinks and use forms with a preset goal. Therefore, in this paper, we show how compositions of hypermedia APIs can be created by generic Semantic Web reasoners. This is achieved through the generation of a proof based on semantic descriptions of the APIs' functionality. To pragmatically verify the applicability of compositions, we introduce the notion of pre-execution and post-execution proofs. The runtime interaction between a client and a server is guided by proofs but driven by hypermedia, allowing the client to react to the application's actual state indicated by the server's response. We describe how to generate compositions from descriptions, discuss a computer-assisted process to generate descriptions, and verify reasoner performance on various composition tasks using a benchmark suite. The experimental results lead to the conclusion that proof-based consumption of hypermedia APIs is a feasible strategy at Web scale.
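As an illustration of the proof-guided but hypermedia-driven interaction loop this abstract describes, the following Python sketch shows a minimal, hypothetical client: the composition is modelled as an ordered list of steps with pre- and postconditions, and each step is executed only if its precondition holds in the state reported by the server. The names (`Step`, `execute_composition`), the JSON responses, and the `links` field are assumptions for illustration, not the paper's actual client stack.

```python
import requests  # assumed HTTP client; the paper's implementation may differ

class Step:
    """One step of a composition: a hypermedia affordance plus conditions."""
    def __init__(self, description, precondition, postcondition):
        self.description = description      # human-readable label
        self.precondition = precondition    # callable: state -> bool (pre-execution check)
        self.postcondition = postcondition  # callable: state -> bool (post-execution check)

def execute_composition(start_url, steps):
    """Follow hyperlinks step by step, checking each step's conditions
    against the application state returned by the server."""
    state = requests.get(start_url).json()
    for step in steps:
        if not step.precondition(state):
            # Pre-execution check failed: the plan no longer matches reality.
            raise RuntimeError(f"Precondition failed before: {step.description}")
        next_url = state["links"][step.description]   # follow the link the server advertises
        state = requests.get(next_url).json()
        if not step.postcondition(state):
            # Post-execution check failed: the client would need to replan here.
            raise RuntimeError(f"Postcondition failed after: {step.description}")
    return state
```

The key design point mirrored here is that the plan constrains which affordances to use, while the server's responses, not the plan, determine the URLs actually followed.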
Planning accessible explanations for entailments in OWL ontologies
A useful enhancement of an NLG system for verbalising ontologies would be a module capable of explaining undesired entailments of the axioms encoded by the developer. This task raises interesting issues of content planning. One approach, useful as a baseline, is simply to list the subset of axioms relevant to inferring the entailment; however, in many cases it will still not be obvious, even to OWL experts, why the entailment follows. We suggest an approach in which further statements are added in order to construct a proof tree, with every step based on a relatively simple deduction rule of known difficulty; we also describe an empirical study through which the difficulty of these simple deduction patterns has been measured.
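A minimal sketch of the content-planning idea, under invented assumptions: starting from asserted axioms, intermediate statements are added so that every step of the resulting proof tree is an instance of a simple, named deduction rule. The axiom representation (pairs of class names) and the single transitivity rule below are illustrative stand-ins, not the paper's rule set.

```python
# Illustrative proof-tree construction for a subsumption entailment.
# Axioms are (sub, super) pairs; the only rule used is transitivity of
# SubClassOf, standing in for the "simple deduction rules" of the paper.

def prove_subclass(sub, sup, axioms):
    """Return a proof tree (nested dict) showing why sub is subsumed by sup."""
    if (sub, sup) in axioms:
        return {"conclusion": (sub, sup), "rule": "asserted", "premises": []}
    for (a, b) in axioms:
        if a == sub:
            subproof = prove_subclass(b, sup, axioms - {(a, b)})
            if subproof is not None:
                return {
                    "conclusion": (sub, sup),
                    "rule": "SubClassOf transitivity",
                    "premises": [
                        {"conclusion": (a, b), "rule": "asserted", "premises": []},
                        subproof,
                    ],
                }
    return None

axioms = {("Dog", "Mammal"), ("Mammal", "Animal"), ("Animal", "LivingThing")}
print(prove_subclass("Dog", "LivingThing", axioms))
```

Each node names the rule that licenses it, which is what makes it possible to attach a measured difficulty to every step of the explanation.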
Generating Natural Language Explanations For Entailments In Ontologies
Building an error-free, high-quality ontology in OWL (Web Ontology Language), the latest standard ontology language endorsed by the World Wide Web Consortium, is not an easy task for domain experts, who usually have limited knowledge of OWL and logic. One sign of an erroneous ontology is the occurrence of undesired inferences (or entailments), often caused by interactions among (apparently innocuous) axioms within the ontology. This suggests the need for a tool that allows developers to inspect why such an entailment follows from the ontology in order to debug and repair it.
This thesis aims to address the above problem by advancing knowledge and techniques in generating explanations for entailments in OWL ontologies. We build on earlier work on identifying minimal subsets of the ontology from which an entailment can be drawn, known technically as justifications. Our main focus is on planning (at a logical level) an explanation that links a justification (premises) to its entailment (conclusion); we also consider how best to express the explanation in English. Among other innovations, we propose a method for assessing the understandability of explanations, so that the easiest can be selected from a set of alternatives.
Our findings make a theoretical contribution to Natural Language Generation and Knowledge Representation. They could also play a practical role in improving the explanation facilities in ontology development tools, considering especially the requirements of users who are not experts in OWL.
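The notion of a justification, a minimal subset of the ontology from which the entailment follows, can be illustrated with a brute-force sketch. Real systems use optimised algorithms backed by an OWL reasoner; here `entails(axioms, goal)` is a placeholder oracle the reader would supply, and the function name is invented.

```python
from itertools import combinations

def justifications(ontology, entailment, entails):
    """Enumerate minimal subsets of `ontology` (a set of axioms) from which
    `entailment` follows according to the supplied `entails(axioms, goal)`
    oracle. Brute force: fine for toy inputs, not for real ontologies."""
    found = []
    for size in range(1, len(ontology) + 1):
        for subset in combinations(ontology, size):
            candidate = frozenset(subset)
            # Keep only entailing subsets that contain no smaller justification.
            if entails(candidate, entailment) and not any(j <= candidate for j in found):
                found.append(candidate)
    return found
```

With a toy oracle (for example, one computing the transitive closure of SubClassOf axioms), this returns exactly the minimal axiom sets the thesis takes as the premises of an explanation.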
Proof Explanation in the DR-DEVICE System
Trust is a vital feature for the Semantic Web: if users (humans and agents) are to use and integrate system answers, they must trust them. Thus, systems should be able to explain their actions, sources, and beliefs, and this issue is the topic of the proof layer in the design of the Semantic Web. This paper presents the design and implementation of a system for proof explanation on the Semantic Web, based on defeasible reasoning. The basis of this work is the DR-DEVICE system, which is extended to handle proofs. A critical aspect is the representation of proofs in an XML language, which is achieved by a RuleML language extension.
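To illustrate how a proof might be serialised in an XML rule language, the fragment below builds a small defeasible proof tree with Python's standard `xml.etree.ElementTree`. The element and attribute names are invented for the example and do not reproduce the actual RuleML extension used by DR-DEVICE.

```python
import xml.etree.ElementTree as ET

def proof_step(conclusion, rule, premises):
    """Build one XML proof step: a conclusion, the rule that licenses it,
    and nested premise steps. Element names are illustrative only."""
    step = ET.Element("ProofStep", {"rule": rule})
    ET.SubElement(step, "Conclusion").text = conclusion
    premises_elem = ET.SubElement(step, "Premises")
    for p in premises:
        premises_elem.append(p)
    return step

leaf = proof_step("bird(tweety)", "fact", [])
root = proof_step("flies(tweety)", "defeasible: birds usually fly", [leaf])
print(ET.tostring(root, encoding="unicode"))
```

Serialising the proof as a tree of steps is what lets a client inspect not only the answer but also the (possibly defeasible) rules and facts it rests on.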
The use of data-mining for the automatic formation of tactics
This paper discusses the use of data-mining for the automatic formation of tactics. It was presented at the Workshop on Computer-Supported Mathematical Theory Development held at IJCAR in 2004. The aim of this project is to evaluate the applicability of data-mining techniques to the automatic formation of tactics from large corpora of proofs. We data-mine information from large proof corpora to find commonly occurring patterns. These patterns are then evolved into tactics using genetic programming techniques.
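A very small sketch of the mining step: counting frequently occurring subsequences of tactic applications across a proof corpus, which would then serve as raw material for the genetic-programming stage. The corpus, tactic names, and thresholds below are made up; the paper's actual mining pipeline is not reproduced here.

```python
from collections import Counter

def frequent_patterns(proofs, length=2, min_support=2):
    """Count contiguous subsequences of tactic applications of a given
    length across a proof corpus and keep those above a support threshold."""
    counts = Counter()
    for proof in proofs:                       # each proof is a list of tactic names
        for i in range(len(proof) - length + 1):
            counts[tuple(proof[i:i + length])] += 1
    return {pattern: n for pattern, n in counts.items() if n >= min_support}

corpus = [
    ["intro", "rewrite", "simplify", "auto"],
    ["intro", "rewrite", "auto"],
    ["case_split", "intro", "rewrite", "simplify"],
]
print(frequent_patterns(corpus))  # ("intro", "rewrite") occurs 3 times, ("rewrite", "simplify") twice
```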
Persuasive Explanation of Reasoning Inferences on Dietary Data
Explainable AI aims at building intelligent systems that are able to provide a clear and human-understandable justification of their decisions. This holds for both rule-based and data-driven methods. In the management of chronic diseases, the users of such systems are patients who must follow strict dietary rules to manage their condition. After receiving the intake food as input, the system performs reasoning to determine whether the user is following an unhealthy behaviour. Subsequently, the system has to communicate the results in a clear and effective way; that is, the output message has to persuade users to follow the right dietary rules. In this paper, we address the main challenges in building such systems: i) the natural language generation of messages that explain the inconsistency detected by the reasoner; and ii) the effectiveness of such messages at persuading users. Results show that persuasive explanations are able to reduce users' unhealthy behaviours.
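As a toy illustration of the two challenges the abstract names, the sketch below first checks an intake log against a dietary rule and then renders the detected inconsistency as a short persuasive message. Both the rule format and the message template are invented for the example, not the system's actual reasoner or NLG component.

```python
def check_rule(intake, rule):
    """Return the amount by which a weekly intake exceeds a dietary rule,
    or None if the rule is respected. Rule format is illustrative."""
    total = sum(portion for food, portion in intake if food == rule["food"])
    excess = total - rule["max_per_week"]
    return excess if excess > 0 else None

def explain(rule, excess):
    """Render the detected inconsistency as a persuasive, user-directed message."""
    return (f"This week you had {excess} more portion(s) of {rule['food']} than "
            f"recommended. Cutting back to {rule['max_per_week']} portion(s) per week "
            f"will help you keep your condition under control.")

intake = [("red meat", 2), ("fish", 1), ("red meat", 2)]
rule = {"food": "red meat", "max_per_week": 3}
excess = check_rule(intake, rule)
if excess:
    print(explain(rule, excess))
```

The separation between the checking step and the message-rendering step mirrors the paper's split between reasoning over the dietary rules and generating the persuasive explanation.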