
    Grouping axioms for more coherent ontology descriptions

    Ontologies and datasets for the Semantic Web are encoded in OWL formalisms that are not easily comprehended by people. To make ontologies accessible to human domain experts, several research groups have developed ontology verbalisers using Natural Language Generation. In practice, ontologies are usually composed of simple axioms, so realising them separately is relatively easy; there remains, however, the problem of producing texts that are coherent and efficient. In this paper we describe methods for producing sentences that aggregate over sets of axioms sharing the same logical structure. Because these methods are based on logical structure rather than domain-specific concepts or language-specific syntax, they are generic with regard to both domain and language.
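
    The aggregation idea can be illustrated with a small sketch: group axioms by their shared logical pattern and verbalise each group as a single sentence. The Python below is illustrative only, not the authors' verbaliser; the axiom tuples and wording templates are assumptions.

```python
# Illustrative sketch (not the paper's implementation): group SubClassOf
# axioms that share the same superclass and verbalise them as one sentence.
from collections import defaultdict

axioms = [
    ("SubClassOf", "Tiger", "Mammal"),
    ("SubClassOf", "Lion", "Mammal"),
    ("SubClassOf", "Whale", "Mammal"),
    ("SubClassOf", "Trout", "Fish"),
]

def aggregate(axioms):
    """Group axioms by (axiom type, shared argument) and emit one sentence per group."""
    groups = defaultdict(list)
    for kind, sub, sup in axioms:
        groups[(kind, sup)].append(sub)
    sentences = []
    for (kind, sup), subs in groups.items():
        if kind == "SubClassOf":
            if len(subs) == 1:
                sentences.append(f"{subs[0]} is a kind of {sup.lower()}.")
            else:
                listed = ", ".join(subs[:-1]) + f" and {subs[-1]}"
                sentences.append(f"{listed} are kinds of {sup.lower()}.")
    return sentences

print("\n".join(aggregate(axioms)))
# Tiger, Lion and Whale are kinds of mammal.
# Trout is a kind of fish.
```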

    Dynamic integration of context model constraints in web service processes

    Autonomic Web service composition has been a challenging topic for some years. The context in which composition takes place determines essential aspects of the composition. A context model can provide meaningful composition information for service process composition. An ontology-based approach to context information integration forms the basis of a constraint approach that dynamically integrates context validation into service processes. The dynamic integration of context constraints into an orchestrated service process is a necessary step towards autonomic service composition.
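
    As a rough illustration of the constraint idea, the sketch below validates context constraints before each step of an orchestrated process. It is a hypothetical Python sketch, not the paper's approach; the constraint model, service names, and context keys are invented for illustration.

```python
# Hypothetical sketch of weaving context-constraint checks into a service process.
from dataclasses import dataclass
from typing import Callable, Dict, List

Context = Dict[str, object]

@dataclass
class ContextConstraint:
    description: str
    predicate: Callable[[Context], bool]   # evaluated against the current context model

@dataclass
class ServiceStep:
    name: str
    invoke: Callable[[Context], None]
    constraints: List[ContextConstraint]

def run_process(steps: List[ServiceStep], context: Context) -> None:
    """Execute an orchestrated process, validating context constraints before each step."""
    for step in steps:
        violated = [c.description for c in step.constraints if not c.predicate(context)]
        if violated:
            raise RuntimeError(f"Step '{step.name}' blocked by context constraints: {violated}")
        step.invoke(context)

# Example: a payment step that is only valid for an authenticated user in the EU region.
steps = [
    ServiceStep(
        name="payment",
        invoke=lambda ctx: print("invoking payment service"),
        constraints=[
            ContextConstraint("user is authenticated", lambda ctx: ctx.get("authenticated") is True),
            ContextConstraint("region is EU", lambda ctx: ctx.get("region") == "EU"),
        ],
    )
]
run_process(steps, {"authenticated": True, "region": "EU"})
```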

    A meta level to LAG for adaptation language re-use

    Recently, a growing body of research has targeted the authoring of content and adaptation strategies for adaptive systems. The driving force behind it is semantics-based reuse: the same adaptation strategy can be used for various domains, and vice versa. For example, a Java course can be taught via a strategy that differentiates between beginner and advanced users, or between visual and verbal users. Whilst using an Adaptation Language (LAG) to express reusable adaptation strategies, we noticed, however, that: a) the created strategies have common patterns that could themselves be reused; b) templates based on these patterns could reduce the designers' work; c) there is a strong preference towards XML-based processing and interfacing. This has led us to define a new meta-language for the LAG Adaptation Language, facilitating the extraction of common design patterns. This paper provides more insight into the LAG language, describes this meta-language, and shows how introducing it can overcome some redundancy issues.
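
    To make the notion of a reusable pattern concrete, here is a conceptual Python sketch (not LAG or its meta-language syntax) of a beginner/advanced adaptation template instantiated for two different domains; all names, levels, and concepts are illustrative assumptions.

```python
# Conceptual sketch of a reusable adaptation pattern: show a concept only if
# the user's level is high enough. The same template works across domains.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class LevelTemplate:
    levels: List[str]                # ordered, e.g. ["beginner", "advanced"]
    concept_levels: Dict[str, str]   # concept label -> minimum level required

    def select(self, user_level: str) -> List[str]:
        rank = {lvl: i for i, lvl in enumerate(self.levels)}
        return [c for c, lvl in self.concept_levels.items()
                if rank[user_level] >= rank[lvl]]

# The pattern instantiated for a Java course...
java_course = LevelTemplate(
    levels=["beginner", "advanced"],
    concept_levels={"variables": "beginner", "generics": "advanced"},
)
print(java_course.select("beginner"))     # ['variables']

# ...and reused, unchanged, for a different domain.
cooking_course = LevelTemplate(
    levels=["beginner", "advanced"],
    concept_levels={"boiling": "beginner", "sous-vide": "advanced"},
)
print(cooking_course.select("advanced"))  # ['boiling', 'sous-vide']
```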

    Towards OpenMath Content Dictionaries as Linked Data

    "The term 'Linked Data' refers to a set of best practices for publishing and connecting structured data on the web". Linked Data make the Semantic Web work practically, which means that information can be retrieved without complicated lookup mechanisms, that a lightweight semantics enables scalable reasoning, and that the decentral nature of the Web is respected. OpenMath Content Dictionaries (CDs) have the same characteristics - in principle, but not yet in practice. The Linking Open Data movement has made a considerable practical impact: Governments, broadcasting stations, scientific publishers, and many more actors are already contributing to the "Web of Data". Queries can be answered in a distributed way, and services aggregating data from different sources are replacing hard-coded mashups. However, these services are currently entirely lacking mathematical functionality. I will discuss real-world scenarios, where today's RDF-based Linked Data do not quite get their job done, but where an integration of OpenMath would help - were it not for certain conceptual and practical restrictions. I will point out conceptual shortcomings in the OpenMath 2 specification and common bad practices in publishing CDs and then propose concrete steps to overcome them and to contribute OpenMath CDs to the Web of Data.Comment: Presented at the OpenMath Workshop 2010, http://cicm2010.cnam.fr/om

    WSDL-S: Adding Semantics to WSDL

    Web services have primarily been designed for providing interoperability between business applications. Current technologies assume a large amount of human interaction for integrating two applications. This is primarily because business process integration requires an understanding of the data and functions of the involved entities. Semantic Web technologies, powered by description-logic-based languages like OWL [1], aim to add greater meaning to Web content by annotating the data with ontologies. Ontologies provide shared conceptualizations of domains. This allows agents to gain an understanding of users’ Web content and greatly reduces the human interaction needed for meaningful Web searches. A similar approach can be used to add greater meaning to Web service descriptions, which will, in turn, allow greater automation by reducing the human involvement needed to understand the data and functions of the services.
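
    As a rough sketch of the annotation idea, the Python snippet below attaches ontology concept references to parts of a service description. The annotation namespace, attribute name, message, and ontology URIs are illustrative assumptions, not the WSDL-S syntax defined in the paper.

```python
# Conceptual sketch only: attach ontology concept references to elements of a
# WSDL service description. Namespaces and URIs are invented for illustration.
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
SEM_NS = "http://example.org/semantics#"        # hypothetical annotation namespace
ET.register_namespace("wsdl", WSDL_NS)
ET.register_namespace("sem", SEM_NS)

wsdl = ET.fromstring(f"""
<wsdl:definitions xmlns:wsdl="{WSDL_NS}">
  <wsdl:message name="GetQuoteRequest">
    <wsdl:part name="ticker" type="xsd:string"/>
  </wsdl:message>
</wsdl:definitions>
""")

# Map message parts to concepts of a (hypothetical) finance ontology.
annotations = {"ticker": "http://example.org/finance#TickerSymbol"}

for part in wsdl.iter(f"{{{WSDL_NS}}}part"):
    concept = annotations.get(part.get("name"))
    if concept:
        part.set(f"{{{SEM_NS}}}modelReference", concept)

print(ET.tostring(wsdl, encoding="unicode"))
```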

    S-RDF: A New RDF Serialization Format for Better Storage Without Losing Human Readability

    Nowadays, RDF data is becoming more and more popular on the Web due to the advances of the Semantic Web and the Linked Open Data initiatives. Several works focus on transforming relational databases to RDF by storing the related data in the N-Triples serialization format. However, these approaches do not take into account the existing normalization of the databases, since the N-Triples format allows data redundancy and does not enforce any normalization by itself. Moreover, the most widely used and recommended serialization formats, such as RDF/XML, Turtle, and HDT, either offer high human readability but waste storage capacity, or focus on storage capacity while providing low human readability. To overcome these limitations, we propose a new serialization format, called S-RDF. By considering the structure (graph) and the values of the RDF data separately, S-RDF reduces the duplication of values by using unique identifiers. Results show an important improvement over the existing serialization formats in terms of storage (up to 71.66% w.r.t. N-Triples) and human readability.
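
    The core idea of keeping structure and values apart can be sketched in a few lines of Python; this toy encoder is not the S-RDF format itself, only an illustration of how unique identifiers remove repeated values.

```python
# Toy sketch of separating graph structure from values (not the actual S-RDF syntax).
triples = [
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/knows", "http://example.org/bob"),
    ("http://example.org/bob",   "http://xmlns.com/foaf/0.1/name", "Bob"),
]

def encode(triples):
    """Replace every distinct term with a short identifier and keep the graph
    structure as identifier triples, so repeated values are stored only once."""
    dictionary, ids, structure = [], {}, []
    for triple in triples:
        encoded = []
        for term in triple:
            if term not in ids:
                ids[term] = len(dictionary)
                dictionary.append(term)
            encoded.append(ids[term])
        structure.append(tuple(encoded))
    return dictionary, structure

values, structure = encode(triples)
print(values)      # each URI or literal appears exactly once
print(structure)   # [(0, 1, 2), (0, 3, 4), (4, 1, 5)]
```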

    Ontological interpretation of network monitoring data

    Interpreting measurement and monitoring data from networks in general and the Internet in particular is a challenge. The motivation for this work has been to investigate new ways to bridge the gap between the kind of data which are available and the more developed information which is needed by network stakeholders to support decision making and network management. Specific problems of syntax, semantics, conflicting data and modeling domain-specific knowledge have been identified. The methods developed and tested have used the Resource Description Framework (RDF) and the ontology languages of the Semantic Web to bring together data from disparate sources into unified knowledgebases in two discrete case studies, both using real network data. Those knowledgebases have then been demonstrated to be usable and valuable sources of information about the networks concerned. Some success has been achieved in overcoming each of the identified problems using these techniques, proving the thesis that taking an ontological approach to the processing of network monitoring data can be a very useful technique for overcoming problems of interpretation and for making information available to those who need it.
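
    A minimal sketch of the kind of lifting involved, assuming rdflib and a hypothetical measurement vocabulary: a raw monitoring record is turned into RDF triples so it can be merged into a knowledgebase with data from other sources.

```python
# Minimal sketch: lift a raw monitoring record into RDF with rdflib.
# The mon: vocabulary and URIs are invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

MON = Namespace("http://example.org/monitoring#")   # hypothetical vocabulary
g = Graph()
g.bind("mon", MON)

# A raw measurement as it might arrive from a monitoring tool.
record = {"src": "192.0.2.1", "dst": "198.51.100.7", "rtt_ms": 23.4, "time": "2010-06-01T12:00:00Z"}

m = URIRef("http://example.org/measurement/0001")
g.add((m, RDF.type, MON.RttMeasurement))
g.add((m, MON.source, Literal(record["src"])))
g.add((m, MON.destination, Literal(record["dst"])))
g.add((m, MON.roundTripTime, Literal(record["rtt_ms"], datatype=XSD.double)))
g.add((m, MON.timestamp, Literal(record["time"], datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```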