Reason Maintenance - Conceptual Framework
This paper describes the conceptual framework for reason maintenance developed as part of WP2.
Default Conceptual Graph Rules: Preliminary Results for an Agronomy Application
In this paper, we extend Simple Conceptual Graphs with Reiter's default rules. The motivation for this extension came from the type of reasoning involved in an agronomy application, namely the simulation of food processing. Our contribution is fourfold: first, the expressivity of this new language corresponds to our modeling purposes. Second, we provide an effective characterization of sound and complete reasoning in this language. Third, we identify a decidable subclass of Reiter's default logics. Last, we identify our language as a superset of SREC-, and provide the semantics that the latter language lacks.
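The default-rule mechanism the abstract refers to can be illustrated, in a drastically simplified propositional form, by the following sketch. All names are hypothetical, and this is only an illustration: genuine Reiter semantics requires computing extensions, not the single naive fixpoint loop shown here.

```python
# Naive credulous application of propositional normal defaults.
# A default (prerequisites : consistent unless blocked / conclusion)
# fires when its prerequisites hold and no blocking atom is known.
def apply_defaults(facts, defaults):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for prereqs, conclusion, blockers in defaults:
            if prereqs <= facts and conclusion not in facts \
                    and not (blockers & facts):
                facts.add(conclusion)
                changed = True
    return facts

# "Birds normally fly, unless known to be penguins."
defaults = [(frozenset({"bird"}), "flies", frozenset({"penguin"}))]
print(apply_defaults({"bird"}, defaults))             # "flies" is derived
print(apply_defaults({"bird", "penguin"}, defaults))  # "flies" is blocked
```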
Worst-case Optimal Query Answering for Greedy Sets of Existential Rules and Their Subclasses
The need for an ontological layer on top of data, associated with advanced
reasoning mechanisms able to exploit the semantics encoded in ontologies, has
been acknowledged both in the database and knowledge representation
communities. We focus in this paper on the ontological query answering problem,
which consists of querying data while taking ontological knowledge into
account. More specifically, we establish complexities of the conjunctive query
entailment problem for classes of existential rules (also called
tuple-generating dependencies, Datalog+/- rules, or forall-exists-rules). Our
contribution is twofold. First, we introduce the class of greedy
bounded-treewidth sets (gbts) of rules, which covers guarded rules, and their
most well-known generalizations. We provide a generic algorithm for query
entailment under gbts, which is worst-case optimal for combined complexity with
or without bounded predicate arity, as well as for data complexity and query
complexity. Second, we classify several gbts classes, whose complexity was
unknown, with respect to combined complexity (with both unbounded and bounded
predicate arity) and data complexity to obtain a comprehensive picture of the
complexity of existential rule fragments that are based on diverse guardedness
notions. Upper bounds are provided by showing that the proposed algorithm is optimal for all of them.
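To make the setting concrete, here is a minimal, illustrative forward-chase step for existential rules, sketched in Python. The atom and rule encodings are ours, and this is not the paper's worst-case optimal gbts algorithm; it only shows what applying a tuple-generating dependency means.

```python
# One chase step for existential rules (TGDs): head variables absent
# from the body are instantiated with fresh labelled nulls.
import itertools

_fresh = itertools.count()

def _matches(facts, body, subst=None):
    """Enumerate substitutions mapping every body atom to a fact."""
    subst = dict(subst or {})
    if not body:
        yield subst
        return
    (pred, *vars_), rest = body[0], body[1:]
    for fact in facts:
        if fact[0] != pred or len(fact) - 1 != len(vars_):
            continue
        s = dict(subst)
        if all(s.setdefault(v, c) == c for v, c in zip(vars_, fact[1:])):
            yield from _matches(facts, rest, s)

def chase_step(facts, rule):
    """One parallel application of an existential rule."""
    body, head = rule
    new = set()
    for s in _matches(facts, body):
        ext = {}  # shared fresh nulls for this match
        for pred, *vars_ in head:
            args = tuple(
                s[v] if v in s else ext.setdefault(v, f"_n{next(_fresh)}")
                for v in vars_)
            new.add((pred, *args))
    return new - facts

# person(X) -> exists Y. hasParent(X, Y), person(Y)
rule = ([("person", "X")], [("hasParent", "X", "Y"), ("person", "Y")])
facts = {("person", "ann")}
facts |= chase_step(facts, rule)
# facts now also contains hasParent(ann, _n) and person(_n) for a
# fresh null; iterating chase_step never terminates on this rule,
# which is exactly why restricted classes such as gbts matter.
```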
Improving the Forward Chaining Algorithm for Conceptual Graphs Rules
Simple Conceptual Graphs (SGs) are used to represent entities and relations between these entities: they can be translated into positive, conjunctive, existential first-order logic, without function symbols. Sound and complete reasoning w.r.t. the associated logical formulas is obtained through a kind of graph homomorphism called projection. Conceptual Graph Rules (or CG rules) are a standard extension of SGs that keeps reasoning sound and complete w.r.t. the associated logical formulas (they have the same form as tuple-generating dependencies in databases): these graphs represent knowledge of the form ''IF ... THEN''. We present here an optimization of the natural forward chaining algorithm for CG rules. Generating a graph of rule dependencies makes subsequent sequences of rule applications far more efficient, and the structure of this graph can be used to obtain new decidability results.
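The optimization described above can be sketched as follows, in plain Datalog rather than conceptual graphs. The dependency test here is deliberately coarse (shared predicates between a head and a body), whereas the paper's test is finer; the point is only that, after a rule fires, only the rules depending on it need to be re-examined.

```python
# Forward chaining guided by a (coarse) rule-dependency graph.
from collections import deque

def _matches(facts, body, subst=None):
    """Enumerate substitutions mapping every body atom to a fact."""
    subst = dict(subst or {})
    if not body:
        yield subst
        return
    (pred, *vars_), rest = body[0], body[1:]
    for fact in facts:
        if fact[0] != pred or len(fact) - 1 != len(vars_):
            continue
        s = dict(subst)
        if all(s.setdefault(v, c) == c for v, c in zip(vars_, fact[1:])):
            yield from _matches(facts, rest, s)

def _apply(facts, rule):
    """All new head atoms produced by one rule (no existentials)."""
    body, head = rule
    return {(head[0], *(s[v] for v in head[1:]))
            for s in _matches(facts, body)} - facts

def forward_chain(facts, rules):
    heads = [{rule[1][0]} for rule in rules]
    bodies = [{a[0] for a in rule[0]} for rule in rules]
    # rule i may trigger rule j if a head predicate of i occurs in j's body
    deps = {i: [j for j in range(len(rules)) if heads[i] & bodies[j]]
            for i in range(len(rules))}
    agenda, queued = deque(range(len(rules))), set(range(len(rules)))
    facts = set(facts)
    while agenda:
        i = agenda.popleft()
        queued.discard(i)
        new = _apply(facts, rules[i])
        if new:
            facts |= new
            for j in deps[i]:          # only dependent rules are re-checked
                if j not in queued:
                    agenda.append(j)
                    queued.add(j)
    return facts

# edge(X,Y) -> path(X,Y);  path(X,Y), path(Y,Z) -> path(X,Z)
rules = [([("edge", "X", "Y")], ("path", "X", "Y")),
         ([("path", "X", "Y"), ("path", "Y", "Z")], ("path", "X", "Z"))]
closure = forward_chain({("edge", "a", "b"), ("edge", "b", "c")}, rules)
# closure contains path(a, c)
```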
Constructive Reasoning for Semantic Wikis
One of the main design goals of social software, such as wikis, is to
support and facilitate interaction and collaboration. This dissertation
explores challenges that arise from extending social software with
advanced facilities such as reasoning and semantic annotations and
presents tools in form of a conceptual model, structured tags, a rule
language, and a set of novel forward chaining and reason maintenance
methods for processing such rules that help to overcome the
challenges.
Wikis and semantic wikis have usually been developed in an ad hoc manner, without much thought given to the underlying concepts. A conceptual
model suitable for a semantic wiki that takes advanced features
such as annotations and reasoning into account is proposed. Moreover,
so-called structured tags are proposed as a semi-formal knowledge
representation step between informal and formal annotations.
The focus of rule languages for the Semantic Web has been predominantly
on expert users and on the interplay of rule languages
and ontologies. KWRL, the KiWi Rule Language, is proposed as a rule language for a semantic wiki that is easily understandable for users, since it is aware of the conceptual model of a wiki and is inconsistency-tolerant, and that can be evaluated efficiently, since it builds upon Datalog concepts.
The requirement for fast response times of interactive software translates in our work to bottom-up evaluation (materialization) of rules (views) ahead of time, that is, when rules or data change rather than when they are queried. Materialized views have to be updated when
data or rules change. While incremental view maintenance was intensively
studied in the past and literature on the subject is abundant,
the existing methods have surprisingly many disadvantages: they do not provide all the information desirable for explaining derived information; they require evaluation of possibly substantially larger Datalog programs with negation; they recompute the whole extension of a predicate even if only a small part of it is affected by a change; and they require adaptation to handle general rule changes.
A particular contribution of this dissertation consists in a set of
forward chaining and reason maintenance methods with a simple declarative
description that are efficient and derive and maintain information
necessary for reason maintenance and explanation. The reasoning
methods and most of the reason maintenance methods are described
in terms of a set of extended immediate consequence operators the
properties of which are proven in the classical logic programming
framework. In contrast to existing methods, the reason maintenance methods in this dissertation work by evaluating the original Datalog program (they do not introduce negation if it is not present in the input program), and only the affected part of a predicate’s extension is recomputed. Moreover, our methods directly handle changes in both data and rules; a rule change does not need to be handled as a special case.
A framework of support graphs, a data structure inspired by justification
graphs of classical reason maintenance, is proposed. Support
graphs enable a unified description and a formal comparison of the
various reasoning and reason maintenance methods and define a notion
of a derivation such that the number of derivations of an atom is
always finite even in the recursive Datalog case.
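As a toy illustration of the role such support structures play (our encoding, not the dissertation's actual data structure), consider atoms whose supports are sets of premise atoms. An atom survives a change only if it is well-supported from the base facts, which is what rules out atoms kept alive solely by a cycle of mutual supports:

```python
def well_supported(base, supports):
    """Least fixpoint over a support graph: an atom holds if it is a
    base fact, or if some support has all its premises holding.
    Starting from the base facts rules out cyclic self-support."""
    holds = set(base)
    changed = True
    while changed:
        changed = False
        for atom, sups in supports.items():
            if atom not in holds and any(s <= holds for s in sups):
                holds.add(atom)
                changed = True
    return holds

# p is supported by base fact b and by q; q is supported by p.
supports = {"p": [{"b"}, {"q"}], "q": [{"p"}]}
print(well_supported({"b"}, supports))  # b, p and q all hold
print(well_supported(set(), supports))  # retracting b kills the p/q cycle
```

The second call shows why counting-based view maintenance fails for recursive Datalog: p and q still support each other after b is retracted, yet neither is derivable.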
A practical approach to implementing reasoning, reason maintenance,
and explanation in the KiWi semantic platform is also investigated. It
is shown how an implementation may benefit from using a graph
database instead of, or along with, a relational database.
Object-oriented knowledge acquisition: Integrating construction of and reasoning in object-oriented knowledge bases
Päivikki Parpola presents in this research report the SeSKA (seamless structured knowledge acquisition) methodology, integrating phases of knowledge acquisition (KA) through seamless transformations between object-oriented (OO) models. This attacks the problem of disintegration, or the gap between phases. The methodology is accompanied by presentation of the SOOKAT (structured object-oriented knowledge acquisition) tool supporting it. SeSKA and SOOKAT extend the KA process to constructing knowledge bases by instantiating a series of models for inferencing. The models are constructed in SOOKAT utilizing metaobject protocols.
Inferences performed in instantiations of OO models are guided by control objects (CO). Messages are sent between COs and components of the inference structure. A specific CO, possibly using subordinate COs, can be specified for each inference strategy.
There exists a mutual CO for forward and backward chaining that can also be used when reasoning according to protocols. In addition, COs for problem-solving methods (PSMs), such as cover-and-differentiate or propose-and-revise, can be used. Three example applications are used to demonstrate the properties of the SeSKA methodology and SOOKAT: a mineral classification "toy application", Sisyphus III rock classification, and dietary management of multiple sclerosis. Mechanisms for importing PSMs over the Internet, as well as for generating specific COs for them, remain open to further development.
Päivikki Parpola (1965-2015) was a Ph.D. student at Aalto University. Her research interests concerned knowledge acquisition and presentation, and development and reasoning in expert systems for different application fields, using the object-oriented paradigm. She received her M.Sc. in 1988 and Lic.Phil. in 1995 from the Department of Computer Science at the University of Helsinki. Her M.Sc. thesis concerned forming a formal grammar based on text samples of natural language or unknown writing. Research presented in her Lic.Phil. thesis continued in her Ph.D. studies. She worked with Nokia Research Center from 1987 to 1993. In addition to her thesis, she published multiple international and domestic conference papers and articles, as well as contributing to European Union research project publications.
Using Argumentation in a French Agrifood Chain Application: Technical Report
Evaluating food quality is a complex process since it relies on numerous criteria historically grouped into four main types: nutritional, sensorial, practical and hygienic qualities. They may be complemented by other emerging preoccupations such as environmental impact, economic phenomena, etc. However, all these aspects of quality and their various components are not always compatible, and their simultaneous improvement is a problem that sometimes has no obvious solution, which raises a real issue for decision making. This paper proposes a decision support method guided by the objectives defined for the end products of an agrifood chain. It is materialized by a backward chaining approach based on argumentation.
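The backward chaining idea can be sketched at a purely propositional level. This is our simplification with hypothetical rule names; the paper's method additionally weighs arguments for and against each objective, which this sketch omits.

```python
def prove(goal, rules, facts, seen=frozenset()):
    """Goal-directed (backward chaining) proof search over ground
    rules of the form (head, [subgoals]); `seen` blocks cyclic
    rule chains."""
    if goal in facts:
        return True
    if goal in seen:
        return False
    seen = seen | {goal}
    return any(head == goal and all(prove(g, rules, facts, seen)
                                    for g in body)
               for head, body in rules)

# Hypothetical agrifood-style rules: an end-product objective is
# reduced to sub-goals until established facts are reached.
rules = [("good_bread", ["good_crumb", "safe_process"]),
         ("good_crumb", ["good_flour"])]
facts = {"good_flour", "safe_process"}
# prove("good_bread", rules, facts) succeeds via good_crumb <- good_flour
```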
Managing Knowledge as Business Rules
In today’s business environment, it is certain that the organizations most likely to survive are those striving to adapt quickly, and at low cost, to the new demands of market competition. Knowledge represented by the internal business rules of an organization can help crystallize its orientation in order to ensure a competitive advantage in the market. In this context, and in a relatively short time, a new trend in software development has arisen, extending current methods and putting a strong emphasis on business rules. This article outlines the importance of managing business rules in an organized manner using dedicated software products, and furthermore presents a general prototype for a business rules repository.
Keywords: Business Rules, Management, Knowledge, Rule Engine, Repository Prototype
Information for decision-making is ubiquitous: Revisiting the reverse engineering mode in breadmaking technology
This paper deals with the process of decision making in the reverse engineering mode and highlights the need for polyvalent information. Three aspects are considered. 1) Reverse engineering implies a preliminary assumption: a desired outcome of the decision process has been defined. Defining goals on the possible outcomes is a complex, multi-actor process based on ubiquitous information. Once these goals are identified as well as possible, several alternative scenarios may lead to the desired outcome; the first issue consists in evaluating these alternative scenarios. 2) While taking into consideration the positive consequences that the different alternatives will generate, the decision process has to allow for possible negative impacts, which are not explicitly expressed in the defined goals. We thus consider that the reverse engineering process has to be bipolar and take rejections into account. 3) Finally, the simultaneous achievement (respectively, avoidance) of several goals (respectively, rejections) is not always possible and depends, in particular, on whether the actions leading to each of these goals (respectively, avoiding these rejections) are compatible or not. We thus seek the "best" compatible set of actions and propose to define it as the one optimizing the bipolar preferences expressed on the outcomes. The approach is both graphical and logical and is focused on a case study in breadmaking technology.