Information, knowledge and the context of interaction
Representing knowledge as information content alone is insufficient in providing us with an understanding of the world around us. A combination of context and reasoning over the information content is fundamental to representing knowledge within information-based systems. The field of knowledge representation and knowledge management has thus far been concerned with providing structures and theories that can lead to some form of qualified intelligent reasoning and contextualised information. By drawing upon previous research and applying and extending concepts of Semiotics and Symbiosis from the interaction design school of thought, this paper presents a conceptual framework for establishing the interplay between knowledge and users of knowledge via information systems constructs. Subsequently, drawing upon notions of interfaces to knowledge, a conceptual framework is developed that describes the relationship between the semiotic, the symbiotic and the interface to knowledge, along with a discussion of contemporary issues common to the field of knowledge management.
A Diagram Is Worth A Dozen Images
Diagrams are common tools for representing complex concepts, relationships
and events, often when it would be difficult to portray the same information
with natural images. Understanding natural images has been extensively studied
in computer vision, while diagram understanding has received little attention.
In this paper, we study the problem of diagram interpretation and reasoning,
the challenging task of identifying the structure of a diagram and the
semantics of its constituents and their relationships. We introduce Diagram
Parse Graphs (DPG) as our representation to model the structure of diagrams. We
define syntactic parsing of diagrams as learning to infer DPGs for diagrams and
study semantic interpretation and reasoning of diagrams in the context of
diagram question answering. We devise an LSTM-based method for syntactic
parsing of diagrams and introduce a DPG-based attention model for diagram
question answering. We compile a new dataset of diagrams with exhaustive
annotations of constituents and relationships for over 5,000 diagrams and
15,000 questions and answers. Our results show the significance of our models
for syntactic parsing and question answering in diagrams using DPGs.
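The abstract describes Diagram Parse Graphs as a structure over diagram constituents and their relationships. A minimal sketch of such a graph, with illustrative names that are not the authors' actual API, might look like this:

```python
# Minimal sketch of a Diagram Parse Graph (DPG): nodes are diagram
# constituents, edges are semantic relationships between them.
# Names and structure here are illustrative, not the paper's implementation.
from dataclasses import dataclass, field

@dataclass
class DPG:
    nodes: dict = field(default_factory=dict)   # node_id -> constituent label
    edges: list = field(default_factory=list)   # (src, relation, dst) triples

    def add_node(self, node_id, label):
        self.nodes[node_id] = label

    def add_edge(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def neighbors(self, node_id):
        """Constituents directly related to node_id."""
        return [(rel, dst) for src, rel, dst in self.edges if src == node_id]

# A food-web diagram fragment: grass -> rabbit -> fox
dpg = DPG()
dpg.add_node("n1", "grass")
dpg.add_node("n2", "rabbit")
dpg.add_node("n3", "fox")
dpg.add_edge("n1", "eaten_by", "n2")
dpg.add_edge("n2", "eaten_by", "n3")
print(dpg.neighbors("n2"))  # [('eaten_by', 'n3')]
```

A question-answering model over such a graph would attend to nodes and edges rather than raw pixels, which is the motivation the abstract gives for the DPG representation.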
Resolving conflicts in clinical guidelines using argumentation
Automatically reasoning with conflicting generic clinical guidelines is a burning issue in patient-centric medical reasoning where patient-specific conditions and goals need to be taken into account. It is even more challenging in the presence of preferences such as patient's wishes and clinician's priorities over goals. We advance a structured argumentation formalism for reasoning with conflicting clinical guidelines, patient-specific information and preferences. Our formalism integrates assumption-based reasoning and goal-driven selection among reasoning outcomes. Specifically, we assume applicability of guideline recommendations concerning the generic goal of patient well-being, resolve conflicts among recommendations using patient's conditions and preferences, and then consider prioritised patient-centered goals to yield non-conflicting, goal-maximising and preference-respecting recommendations. We rely on the state-of-the-art Transition-based Medical Recommendation model for representing guideline recommendations and augment it with context given by the patient's conditions, goals, as well as preferences over recommendations and goals. We establish desirable properties of our approach in terms of sensitivity to recommendation conflicts and patient context.
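The core idea of resolving conflicts among recommendations by preferences can be illustrated with a toy sketch; this is not the paper's assumption-based argumentation formalism, only a simplified stand-in for the selection step:

```python
# Illustrative sketch (not the paper's formalism): resolve pairwise
# conflicts between guideline recommendations using patient-specific
# priorities, keeping the preferred recommendation from each conflict.
def resolve(recommendations, conflicts, priority):
    """
    recommendations: set of recommendation names
    conflicts: pairs of recommendations that cannot be jointly applied
    priority: dict mapping recommendation -> numeric priority (higher wins)
    """
    kept = set(recommendations)
    for a, b in conflicts:
        if a in kept and b in kept:
            # drop the lower-priority member of the conflicting pair
            kept.discard(a if priority[a] < priority[b] else b)
    return kept

recs = {"aspirin", "ibuprofen", "gastroprotection"}
conflicts = [("aspirin", "ibuprofen")]   # hypothetical conflict: duplicate NSAID therapy
prefs = {"aspirin": 2, "ibuprofen": 1, "gastroprotection": 1}
print(sorted(resolve(recs, conflicts, prefs)))
# ['aspirin', 'gastroprotection']
```

The paper's formalism additionally handles goals and establishes formal properties of the outcome; the sketch shows only the preference-respecting conflict resolution in miniature.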
Towards Correctness of Program Transformations Through Unification and Critical Pair Computation
Correctness of program transformations in extended lambda calculi with a
contextual semantics is usually based on reasoning about the operational
semantics which is a rewrite semantics. A successful approach to proving
correctness is the combination of a context lemma with the computation of
overlaps between program transformations and the reduction rules, and then of
so-called complete sets of diagrams. The method is similar to the computation
of critical pairs for the completion of term rewriting systems. We explore
cases where the computation of these overlaps can be done in a first order way
by variants of critical pair computation that use unification algorithms. As a
case study we apply the method to a lambda calculus with recursive
let-expressions and describe an effective unification algorithm to determine
all overlaps of a set of transformations with all reduction rules. The
unification algorithm employs many-sorted terms, the equational theory of
left-commutativity modelling multi-sets, context variables of different kinds
and a mechanism for compactly representing binding chains in recursive
let-expressions.
Comment: In Proceedings UNIF 2010, arXiv:1012.455
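The paper's unification algorithm handles many-sorted terms, left-commutativity and context variables; the plain first-order syntactic unification that critical-pair computation builds on can be sketched as follows (with the occurs check omitted for brevity):

```python
# Sketch of plain first-order syntactic unification, the core operation
# underlying critical-pair computation. The paper's algorithm is far
# richer (sorts, context variables, left-commutativity); this is only
# the textbook base case. Occurs check omitted for brevity.
# Terms: variables are strings, applications are tuples ('f', arg1, ...).
def walk(t, subst):
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(s, t, subst=None):
    subst = dict(subst or {})
    stack = [(s, t)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if isinstance(a, str):                      # a is a variable
            subst[a] = b
        elif isinstance(b, str):                    # b is a variable
            subst[b] = a
        elif a[0] == b[0] and len(a) == len(b):     # same function symbol
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None                             # symbol clash: not unifiable
    return subst

# unify f(x, g(y)) with f(g(z), x)
print(unify(('f', 'x', ('g', 'y')), ('f', ('g', 'z'), 'x')))
```

Overlaps between a transformation and a reduction rule are found by unifying a subterm of one rule's left-hand side with the other's, exactly as in critical-pair computation for term rewriting.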
Defining and Modeling Context in a Multi-Agent Systems Architecture for Decision-Making
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.143.652
Ambient intelligence involves the convergence of several computing areas: ubiquitous computing, intelligent systems and context-awareness. Developing context-aware applications requires facilities for recognizing and representing context, reasoning on it and adapting to it accordingly. Regarding context representation, the newest and most challenging approach is the ontological one. The problem is that current ontologies for context do not provide a standard for representing complex context attributes. In this paper, we propose a context definition and representation used to construct a context-based agent architecture. The representation we propose combines the generality provided by ontologies with the complexity inspired by object-oriented models. The goal of the proposed architecture is to support the deployment of context-aware agents able to learn how to recognize the context of their decisions and to adapt to it. The use of this architecture is illustrated on a test MAS for agenda management, using the JADE-LEAP platform on PCs and PDAs.
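The proposed combination of an ontological concept hierarchy with object-oriented complex attributes can be sketched in miniature; all names here are hypothetical, chosen only to illustrate the idea of the representation:

```python
# Illustrative sketch (hypothetical names): a context attribute that
# combines an ontology-style concept hierarchy (is_a) with
# object-oriented structured values, in the spirit of the paper's
# proposed representation.
class Concept:
    """A node in a simple ontology: a name plus an optional parent concept."""
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

    def is_a(self, other):
        c = self
        while c is not None:
            if c.name == other:
                return True
            c = c.parent
        return False

class ContextEntry:
    """A context attribute: an ontology concept plus complex OO attributes."""
    def __init__(self, concept, **attributes):
        self.concept = concept
        self.attributes = attributes   # values may themselves be structured

entity = Concept("Entity")
meeting = Concept("Meeting", parent=Concept("Event", parent=entity))
ctx = ContextEntry(meeting, location="Room 12",
                   time={"start": "09:00", "end": "10:00"})
print(ctx.concept.is_a("Event"), ctx.attributes["time"]["start"])
# True 09:00
```

An agenda-management agent could query such entries both ontologically (is this context an Event?) and structurally (when does it start?), which is the combination the abstract argues current context ontologies lack.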
Computing overlappings by unification in the deterministic lambda calculus LR with letrec, case, constructors, seq and variable chains
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We describe an effective unification algorithm to determine all overlaps of transformations with reduction rules for the lambda calculus LR, which comprises recursive let-expressions, constructor applications, case expressions and a seq construct for strict evaluation. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modeling multi-sets, context variables of different kinds and a mechanism for compactly representing binding chains in recursive let-expressions. As a result the algorithm computes a finite set of overlappings for the reduction rules of the calculus LR that serve as a starting point for the automation of the analysis of program transformations.
On Reasoning with RDF Statements about Statements using Singleton Property Triples
The Singleton Property (SP) approach has been proposed for representing and
querying metadata about RDF triples such as provenance, time, location, and
evidence. In this approach, one singleton property is created to uniquely
represent a relationship in a particular context, and in general, generates a
large property hierarchy in the schema. It has become the subject of important
questions from Semantic Web practitioners. Can an existing reasoner recognize
the singleton property triples? And how? If the singleton property triples
describe a data triple, then how can a reasoner infer this data triple from the
singleton property triples? Or would the large property hierarchy affect the
reasoners in some way? We address these questions in this paper and present our
study about the reasoning aspects of the singleton properties. We propose a
simple mechanism to enable existing reasoners to recognize the singleton
property triples, as well as to infer the data triples described by the
singleton property triples. We evaluate the effect of the singleton property
triples in the reasoning processes by comparing the performance on RDF datasets
with and without singleton properties. Our evaluation uses as benchmark the
LUBM datasets and the LUBM-SP datasets derived from LUBM with temporal
information added through singleton properties.
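The inference the abstract asks about, recovering a data triple from its singleton-property triples, can be shown with plain tuples standing in for RDF triples; the property names are illustrative:

```python
# Sketch of the singleton-property inference discussed in the abstract:
# from (s, p1, o) and (p1, singletonPropertyOf, p), infer the data
# triple (s, p, o). Plain Python tuples stand in for RDF triples;
# the ex: names are illustrative.
SPO = "rdf:singletonPropertyOf"

def infer_data_triples(triples):
    # map each singleton property p1 to its generic property p
    generic = {s: o for s, p, o in triples if p == SPO}
    # rewrite every triple that uses a singleton property as predicate
    return {(s, generic[p], o) for s, p, o in triples if p in generic}

triples = [
    ("ex:bob", "ex:hasSpouse#1", "ex:alice"),        # singleton-property triple
    ("ex:hasSpouse#1", SPO, "ex:hasSpouse"),         # links it to the generic property
    ("ex:hasSpouse#1", "ex:validFrom", "1994"),      # metadata about the statement
]
print(infer_data_triples(triples))
# {('ex:bob', 'ex:hasSpouse', 'ex:alice')}
```

A standard reasoner can be made to perform the same derivation by axiomatizing `singletonPropertyOf` appropriately, which is the mechanism the paper proposes and evaluates on the LUBM-SP datasets.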
Revisiting Chase Termination for Existential Rules and their Extension to Nonmonotonic Negation
Existential rules have been proposed for representing ontological knowledge,
specifically in the context of Ontology-Based Data Access. Entailment with
existential rules is undecidable. We focus in this paper on conditions that
ensure the termination of a breadth-first forward chaining algorithm known as
the chase. Several variants of the chase have been proposed. In the first part
of this paper, we propose a new tool that allows us to extend existing acyclicity
conditions ensuring chase termination, while keeping good complexity
properties. In the second part, we study the extension to existential rules
with nonmonotonic negation under stable model semantics, discuss the relevancy
of the chase variants for these rules and further extend acyclicity results
obtained in the positive case.
Comment: This paper appears in the Proceedings of the 15th International Workshop on Non-Monotonic Reasoning (NMR 2014).
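The chase procedure the abstract studies can be illustrated with a toy saturation loop; this is not the paper's algorithm, only a sketch of one existential rule (human(x) -> exists y. parent(x, y), human(y)) showing why termination conditions matter:

```python
# Toy chase sketch (not the paper's algorithm) for the existential rule
#   human(x) -> exists y. parent(x, y), human(y)
# Each application invents a fresh labelled null for the existential
# variable. Without an acyclicity condition this never terminates
# (every new null is itself human), so we cap the number of steps.
def chase(facts, max_steps=2):
    facts = set(facts)
    fresh = 0
    for _ in range(max_steps):
        new = set()
        for pred, *args in facts:
            if pred == "human":
                x = args[0]
                # apply the rule only if it is not already satisfied for x
                if not any(f[0] == "parent" and f[1] == x for f in facts):
                    fresh += 1
                    null = f"_:n{fresh}"
                    new |= {("parent", x, null), ("human", null)}
        if not new - facts:
            break                      # fixpoint reached: chase terminates
        facts |= new
    return facts

result = chase({("human", "alice")})
print(sorted(result))
```

Acyclicity conditions of the kind the paper extends guarantee that such a saturation reaches a fixpoint without an artificial step cap; the nonmonotonic-negation extension additionally has to respect stable model semantics when deciding rule applicability.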