5,676 research outputs found

    Using Description Logics for RDF Constraint Checking and Closed-World Recognition

    RDF and Description Logics work in an open-world setting where absence of information is not information about absence. Nevertheless, Description Logic axioms can be interpreted in a closed-world setting, and in this setting they can be used for both constraint checking and closed-world recognition against information sources. When the information sources are expressed in well-behaved RDF or RDFS (i.e., RDF graphs interpreted under the RDF or RDFS semantics), this constraint checking and closed-world recognition is simple to describe. Further, this constraint checking can be implemented as SPARQL querying and thus performed effectively.
    Comment: Extended version of a paper of the same name that will appear in AAAI-2015.
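
    A minimal sketch of the idea, assuming rdflib and invented example data: under a closed-world reading, a cardinality constraint such as "every Person has a name" becomes a SPARQL query that reports the instances for which the required information is absent.

        import rdflib

        g = rdflib.Graph()
        # Hypothetical example data: two people, only one has a name.
        g.parse(data="""
        @prefix ex: <http://example.org/> .
        ex:alice a ex:Person ; ex:name "Alice" .
        ex:bob   a ex:Person .
        """, format="turtle")

        # Closed-world reading of "every Person has a name": anything
        # not asserted is taken as false, so ex:bob is a violation.
        violations = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?p WHERE {
          ?p a ex:Person .
          FILTER NOT EXISTS { ?p ex:name ?n }
        }
        """)
        for row in violations:
            print("constraint violated by", row.p)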

    Interoperability in IoT through the semantic profiling of objects

    The emergence of smarter and broader people-oriented IoT applications and services requires interoperability at both the data and knowledge levels. However, although some semantic IoT architectures have been proposed, achieving a high degree of interoperability requires dealing with a sea of non-integrated data scattered across vertical silos. These architectures also do not fit machine-to-machine requirements, as data annotation carries no knowledge of the object interactions behind the arriving data. This paper presents a vision of how to overcome these issues. More specifically, the semantic profiling of objects through CoRE-related standards is envisaged as the key to data integration, allowing more powerful data annotation, validation, and reasoning. These are the key building blocks for the development of intelligent applications.
    Portuguese Science and Technology Foundation (FCT) [UID/MULTI/00631/2013]
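
    As a rough illustration of what such a profile might look like (the resource names and semantic types here are invented), a CoRE Link Format description (RFC 6690) can expose an object's resources together with machine-readable types that an annotator could map onto ontology terms:

        # Hypothetical CoRE Link Format description of a device's resources
        # (RFC 6690 syntax); rt= gives a semantic resource type that a data
        # annotator could map onto ontology terms.
        profile = '</sensors/temp>;rt="example.temperature";if="core.s",' \
                  '</actuators/valve>;rt="example.valve";if="core.a"'

        def parse_links(doc):
            """Very small parser for the toy link-value list above."""
            links = []
            for link in doc.split(","):
                target, *params = link.split(";")
                attrs = dict(p.split("=", 1) for p in params)
                attrs = {k: v.strip('"') for k, v in attrs.items()}
                links.append((target.strip("<>"), attrs))
            return links

        for target, attrs in parse_links(profile):
            print(target, "->", attrs.get("rt"))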

    Semantic security: specification and enforcement of semantic policies for security-driven collaborations

    Collaborative research often demands finer-grained security that goes beyond the authentication-only paradigm typified by many e-Infrastructure/Grid based solutions. Supporting finer-grained access control is essential for domains where authorization policies must be specified and subsequently enforced; the clinical domain is one such area. However, existing security authorization solutions are fragile, inflexible, and difficult to establish and maintain. As a result they often do not meet the needs of real-world collaborations, where robustness and flexibility of policy specification and enforcement, and ease of maintenance, are essential. In this paper we present results of the JISC-funded Advanced Grid Authorisation through Semantic Technologies (AGAST) project (www.nesc.ac.uk/hub/projects/agast) and show how semantic-based approaches to security policy specification and enforcement can address many of the limitations of existing security solutions. These are demonstrated in the clinical trials domain through the MRC-funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) project (www.nesc.ac.uk/hub/projects/votes) and in the epidemiological domain through the JISC-funded SeeGEO project (www.nesc.ac.uk/hub/projects/seegeo).
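
    The flavour of the approach can be conveyed with a toy, hand-rolled sketch (not the AGAST implementation; all role and resource names are invented): policies are declarative facts over roles and resources, and enforcement is one generic check over those facts rather than logic baked into each service.

        # Toy policy store: (role, action, resource-class) triples.
        # A real semantic approach would state these in an ontology and let
        # a reasoner derive permissions; this only shows the shape.
        POLICIES = {
            ("trial-investigator", "read", "clinical-record"),
            ("trial-statistician", "read", "aggregate-data"),
        }

        ROLE_HIERARCHY = {"chief-investigator": {"trial-investigator"}}

        def roles_of(role):
            """A role implies itself plus any roles it subsumes."""
            implied = {role}
            for r in ROLE_HIERARCHY.get(role, set()):
                implied |= roles_of(r)
            return implied

        def permitted(role, action, resource_class):
            return any((r, action, resource_class) in POLICIES
                       for r in roles_of(role))

        print(permitted("chief-investigator", "read", "clinical-record"))  # True
        print(permitted("trial-statistician", "read", "clinical-record"))  # False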

    The pragmatic proof: hypermedia API composition and execution

    Machine clients are increasingly making use of the Web to perform tasks. While Web services traditionally mimic remote procedure call interfaces, a new generation of so-called hypermedia APIs works through hyperlinks and forms, in a way similar to how people browse the Web. This means that existing composition techniques, which determine a procedural plan upfront, are not sufficient to consume hypermedia APIs, which need to be navigated at runtime. Clients instead need a more dynamic plan that allows them to follow hyperlinks and use forms with a preset goal. Therefore, in this paper, we show how compositions of hypermedia APIs can be created by generic Semantic Web reasoners. This is achieved through the generation of a proof based on semantic descriptions of the APIs' functionality. To pragmatically verify the applicability of compositions, we introduce the notion of pre-execution and post-execution proofs. The runtime interaction between a client and a server is guided by proofs but driven by hypermedia, allowing the client to react to the application's actual state as indicated by the server's response. We describe how to generate compositions from descriptions, discuss a computer-assisted process to generate descriptions, and verify reasoner performance on various composition tasks using a benchmark suite. The experimental results lead to the conclusion that proof-based consumption of hypermedia APIs is a feasible strategy at Web scale.
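
    A skeletal illustration of hypermedia-driven consumption (the resource names and goal predicate are invented; the paper's actual clients are guided by generated proofs): the client holds a goal, inspects the links the server offers at each step, and chooses the next action at runtime instead of replaying a fixed plan.

        # Toy in-memory "API": each state exposes links to follow next,
        # mirroring how a hypermedia client navigates rather than replaying
        # a precomputed call sequence.
        API = {
            "/orders":     {"links": {"create": "/orders/new"}},
            "/orders/new": {"links": {"submit": "/orders/42"}},
            "/orders/42":  {"links": {}, "data": {"status": "confirmed"}},
        }

        def goal_reached(state):
            return state.get("data", {}).get("status") == "confirmed"

        def consume(start):
            url, state = start, API[start]
            while not goal_reached(state):
                # Re-plan from the server's actual response: take an
                # offered link (a real client would consult its proof here).
                if not state["links"]:
                    raise RuntimeError("dead end at " + url)
                _, url = next(iter(state["links"].items()))
                state = API[url]
            return url

        print(consume("/orders"))  # -> /orders/42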

    Km4City Ontology Building vs Data Harvesting and Cleaning for Smart-city Services

    Presently, a very large number of public and private data sets are available from local governments. In most cases they are not semantically interoperable, and a huge human effort would be needed to create an integrated ontology and knowledge base for the smart city. A smart-city ontology is not yet standardized, and considerable research is needed to identify models that can easily support data reconciliation, manage complexity, and allow reasoning over the data. In this paper, a system is proposed for the ingestion and reconciliation of smart-city data such as the road graph, services available along the roads, and traffic sensors. The system manages a large volume of data coming from a variety of sources, covering both static and dynamic data. These data are mapped to a smart-city ontology called Km4City (Knowledge Model for City) and stored in an RDF store, where they are available to applications via SPARQL queries, providing new services to users through dedicated applications of public administrations and enterprises. The paper presents the process adopted to produce the ontology, the big-data architecture for feeding the knowledge base from open and private data, and the mechanisms adopted for data verification, reconciliation, and validation. Examples of possible uses of the resulting coherent big-data knowledge base, accessible from the RDF store and related services, are also offered. The article also presents the work performed on reconciliation algorithms and their comparative assessment and selection.
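
    For instance, once the ingested data sit in the RDF store, a service can amount to a plain SPARQL query. In the sketch below the endpoint URL and property names are illustrative, not the actual Km4City vocabulary, and SPARQLWrapper is assumed as the client library.

        from SPARQLWrapper import SPARQLWrapper, JSON

        # Hypothetical endpoint and vocabulary, for illustration only.
        sparql = SPARQLWrapper("http://example.org/km4city/sparql")
        sparql.setQuery("""
        PREFIX km4c: <http://example.org/km4city/schema#>
        SELECT ?service ?name WHERE {
          ?service a km4c:Service ;
                   km4c:onRoad ?road ;
                   km4c:name ?name .
          ?road km4c:roadName "Via Roma" .
        }
        LIMIT 10
        """)
        sparql.setReturnFormat(JSON)
        results = sparql.query().convert()
        for b in results["results"]["bindings"]:
            print(b["service"]["value"], b["name"]["value"])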

    Correcting Knowledge Base Assertions

    The usefulness and usability of knowledge bases (KBs) is often limited by quality issues. One common issue is the presence of erroneous assertions, often caused by lexical or semantic confusion. We study the problem of correcting such assertions and present a general correction framework which combines lexical matching, semantic embedding, soft constraint mining, and semantic consistency checking. The framework is evaluated using DBpedia and an enterprise medical KB.
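
    A much-simplified sketch of this kind of pipeline (the assertion, candidate list, scores, and weights are invented; the paper additionally uses soft constraint mining and consistency checking): lexical similarity proposes replacement entities for a suspect object, and an embedding-style plausibility score reranks them.

        import difflib

        # Suspect assertion: (subject, property, erroneous object string).
        assertion = ("Sergio_Aguero", "playsFor", "Mancester_City")

        # Candidate replacement entities from the KB (toy list).
        candidates = ["Manchester_City", "Manchester_United", "Real_Madrid"]

        def lexical_score(wrong, cand):
            """String similarity between the erroneous and candidate label."""
            return difflib.SequenceMatcher(None, wrong, cand).ratio()

        def embedding_score(subj, prop, cand):
            """Stand-in for a KB-embedding plausibility score (invented values)."""
            fake = {"Manchester_City": 0.9, "Manchester_United": 0.6,
                    "Real_Madrid": 0.2}
            return fake[cand]

        wrong = assertion[2]
        ranked = sorted(
            candidates,
            key=lambda c: 0.5 * lexical_score(wrong, c)
                        + 0.5 * embedding_score(*assertion[:2], c),
            reverse=True)
        print("best correction:", ranked[0])  # Manchester_City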

    Context Aware Computing for The Internet of Things: A Survey

    As we move towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown significant growth in sensor deployments over the past decade and predicts that the growth rate will increase further in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. The collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We begin with the necessary background, introducing the IoT paradigm and context-aware fundamentals. We then provide an in-depth analysis of the context life cycle. We evaluate 50 projects, representing the majority of research and commercial solutions proposed in the field of context-aware computing over the last decade (2001-2011), based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and the IoT. Our goal is not only to analyse, compare, and consolidate past research but also to appreciate its findings and discuss its applicability to the IoT.
    Comment: IEEE Communications Surveys & Tutorials Journal, 2014.
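
    The context life cycle the survey analyses (acquisition, modelling, reasoning, distribution) can be caricatured in a few lines; the sensor reading, the derivation rule, and the subscriber below are all invented.

        from dataclasses import dataclass

        @dataclass
        class Context:                 # modelling: attach meaning to raw data
            sensor: str
            value: float
            situation: str = "unknown"

        def acquire():                 # acquisition: a raw reading (invented)
            return Context(sensor="room-temp", value=31.5)

        def reason(ctx):               # reasoning: derive higher-level context
            ctx.situation = "overheating" if ctx.value > 30.0 else "normal"
            return ctx

        subscribers = [lambda c: print(f"{c.sensor}: {c.situation} ({c.value} C)")]

        def distribute(ctx):           # distribution: notify consumers
            for notify in subscribers:
                notify(ctx)

        distribute(reason(acquire()))  # room-temp: overheating (31.5 C)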