Approaches to Semantic Web Services: An Overview and Comparison
Abstract. The next Web generation promises to deliver Semantic Web Services (SWS): services that are self-described and amenable to automated discovery, composition and invocation. A prerequisite, however, is the emergence and evolution of the Semantic Web, which provides the infrastructure for the semantic interoperability of Web Services. Web Services will be augmented with rich formal descriptions of their capabilities, such that they can be utilized by applications or other services without human assistance or highly constrained agreements on interfaces or protocols. Thus, Semantic Web Services have the potential to change the way knowledge and business services are consumed and provided on the Web. In this paper, we survey the state of the art of enabling technologies for Semantic Web Services. In addition, we characterize the infrastructure of Semantic Web Services along three orthogonal dimensions: activities, architecture and service ontology. Finally, we examine and contrast three current approaches to SWS according to the proposed dimensions.
Higher-order Representation and Reasoning for Automated Ontology Evolution
Abstract: The GALILEO system aims at realising automated ontology evolution. This is necessary to enable intelligent agents to manipulate their own knowledge autonomously and thus reason and communicate effectively in open, dynamic digital environments characterised by the heterogeneity of data and of representation languages. Our approach is based on patterns of diagnosis of faults detected across multiple ontologies. Such patterns make it possible to identify the type of repair required when conflicting ontologies yield erroneous inferences. We assume that each ontology is locally consistent, i.e. inconsistency arises only across ontologies when they are merged together. Local consistency avoids the derivation of uninteresting theorems, so the formula for diagnosis can essentially be seen as an open theorem over the ontologies. The system’s application domain is physics; we have adopted a modular formalisation of physics, structured by means of locales in Isabelle, to perform modular higher-order reasoning, and visualised by means of development graphs.
Language, logic and ontology: uncovering the structure of commonsense knowledge
The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering the structure of commonsense knowledge. In addition to suggesting a systematic method for the discovery of the structure of commonsense knowledge, the method we propose also seems to provide an explanation for a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is quite ambitious: it is no less than the systematic ‘discovery’ of a well-typed ontology of commonsense knowledge, and the subsequent formulation of the long-awaited goal of a meaning algebra.
The Higher-Order Prover Leo-II.
Leo-II is an automated theorem prover for classical higher-order logic. The prover has pioneered cooperative higher-order/first-order proof automation, it has influenced the development of the TPTP THF infrastructure for higher-order logic, and it has been applied to a wide array of problems. Leo-II may also be called from proof assistants as an external tool to save user effort. For this it is crucial that Leo-II returns proof information in a standardised syntax, so that these proofs can eventually be transformed and verified within proof assistants. Recent progress in this direction is reported for the Isabelle/HOL system. The Leo-II project has been supported by the following grants: EPSRC grant EP/D070511/1 and DFG grants BE/2501 6-1, 8-1 and 9-1. This is the final version of the article. It first appeared from Springer via http://dx.doi.org/10.1007/s10817-015-9348-y
An information assistant system for the prevention of tunnel vision in crisis management
In the crisis management environment, tunnel vision is a set of biases in decision makers’ cognitive processes which often leads to an incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence of both the challenges in the task and the natural limitations of human cognitive processing. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the on-going crisis event. All information goes through the system before it arrives at the user. The system enhances data quality, reduces data quantity and presents the crisis information in a manner that prevents or repairs the user’s cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, stay open-minded to possibilities, and make proper decisions.
A Computational-Hermeneutic Approach for Conceptual Explicitation
We present a computer-supported approach for the logical analysis and conceptual explicitation of argumentative discourse. Computational hermeneutics harnesses recent progress in automated reasoning for higher-order logics and aims at formalizing natural-language argumentative discourse using flexible combinations of expressive non-classical logics. In doing so, it allows us to render explicit the tacit conceptualizations implicit in argumentative discursive practices. Our approach operates on networks of structured arguments and is iterative and two-layered. At one layer we search for logically correct formalizations of each of the individual arguments. At the next layer we select among those correct formalizations the ones which honor the argument's dialectic role, i.e. attacking or supporting other arguments as intended. We operate at these two layers in parallel and continuously rate sentences' formalizations using, primarily, inferential adequacy criteria. An interpretive, logical theory thus gradually evolves. This theory is composed of meaning postulates serving as explications for concepts playing a role in the analyzed arguments. Such a recursive, iterative approach to interpretation does justice to the inherent circularity of understanding: the whole is understood compositionally on the basis of its parts, while each part is understood only in the context of the whole (the hermeneutic circle). We summarily discuss previous work on exemplary applications of human-in-the-loop computational hermeneutics in metaphysical discourse. We also discuss some of the main challenges involved in fully automating our approach. By sketching some design ideas and reviewing relevant technologies, we argue for the technological feasibility of a highly automated computational hermeneutics.
Comment: 29 pages, 9 figures, to appear in A. Nepomuceno, L. Magnani, F. Salguero, C. Barés, M. Fontaine (eds.), Model-Based Reasoning in Science and Technology. Inferential Models for Logic, Language, Cognition and Computation, Series "Sapere", Springer.
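The two-layered procedure this abstract describes (search for logically correct formalizations, then select those that honor each argument's dialectic role, rated by adequacy) can be sketched as a simple search-and-select loop. All names and data structures below are illustrative assumptions for exposition, not the authors' actual implementation or any prover's API:

```python
# Hypothetical sketch of the two-layer interpretive loop.
# Layer 1: enumerate logically correct candidate formalizations per argument.
# Layer 2: keep candidates that honor the argument's dialectic role,
# then pick the best one by an inferential-adequacy score.

def candidate_formalizations(argument):
    # Placeholder: in practice this would enumerate formalizations and
    # check their logical correctness with an automated theorem prover.
    return argument["candidates"]

def honors_role(formalization, argument):
    # Placeholder: check that the formalization attacks/supports
    # other arguments as intended within the argument network.
    return formalization["role"] == argument["intended_role"]

def adequacy_score(formalization):
    # Placeholder inferential-adequacy rating.
    return formalization["score"]

def interpret(network):
    """Return a best formalization per argument (the evolving theory)."""
    theory = {}
    for arg_id, argument in network.items():
        correct = candidate_formalizations(argument)                 # layer 1
        dialectic = [f for f in correct if honors_role(f, argument)] # layer 2
        if dialectic:
            theory[arg_id] = max(dialectic, key=adequacy_score)
    return theory
```

In the full approach this loop is iterated: the selected meaning postulates feed back into the next round of formalization, mirroring the hermeneutic circle.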
Practical applications of multi-agent systems in electric power systems
The transformation of energy networks from passive to active systems requires the embedding of intelligence within the network. One suitable approach to integrating distributed intelligent systems is multi-agent systems technology, in which components of functionality run as autonomous agents capable of interacting through messaging. This provides loose coupling between components, which can benefit the complex systems envisioned for the smart grid. This paper reviews the key milestones of demonstrated agent systems in the power industry and considers which aspects of agent design must still be addressed for widespread application of agent technology to occur.
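The loose coupling via messaging that this abstract highlights can be illustrated with a minimal sketch: each agent owns an inbox and knows nothing about other agents' internals, only how to post messages. The agent names and message fields below are invented for illustration; real power-system agent platforms (e.g. FIPA-compliant frameworks) are far richer:

```python
# Minimal sketch of loosely coupled agents communicating via messages.
# Each agent encapsulates its own state; interaction happens only
# through messages placed in the receiver's inbox.
import queue

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()

    def send(self, other, message):
        # Post a (sender, payload) pair to the other agent's inbox.
        other.inbox.put((self.name, message))

    def receive(self):
        # Retrieve the next pending message, if any.
        return self.inbox.get_nowait()

# Hypothetical smart-grid roles: a fault-monitoring agent notifies
# a restoration agent, without either knowing the other's internals.
monitor = Agent("fault-monitor")
restorer = Agent("restoration")
monitor.send(restorer, {"event": "fault", "feeder": 12})
sender, msg = restorer.receive()
```

Because agents interact only through such messages, one agent can be replaced or upgraded without modifying the others, which is the loose coupling the smart-grid vision relies on.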