
    Leveraging Semantic Web Service Descriptions for Validation by Automated Functional Testing

    Recent years have seen the use of Semantic Web Service descriptions for automating a wide range of service-related activities, with a primary focus on service discovery, composition, execution and mediation. An important area which has so far received less attention is service validation, whereby advertised services are proven to conform to required behavioural specifications. This paper proposes a method for validating service-oriented systems through automated functional testing. The method leverages ontology-based and rule-based descriptions of service inputs, outputs, preconditions and effects (IOPE) to construct a stateful extended finite state machine (EFSM) specification. The specification is subsequently used for functional testing and validation with the proven Stream X-machine (SXM) testing methodology. Complete functional test sets are generated automatically at an abstract level and are then applied to concrete Web services, using test drivers created from the Web service descriptions. The testing method comes with completeness guarantees and provides a rigorous means of validating the behaviour of Web services.
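    The abstract-to-concrete testing idea can be pictured with a small sketch. The Python fragment below is only an illustrative approximation of an EFSM-style specification with guarded transitions and memory updates, not the paper's SXM tooling; the states, inputs and memory fields are invented for the example.

```python
# Minimal sketch (not the paper's implementation): an EFSM-style specification
# whose transitions carry a guard (precondition) and an update (effect),
# plus a trivial driver that replays an abstract test sequence.
# All names (states, inputs, memory fields) are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

Memory = Dict[str, int]

@dataclass
class Transition:
    source: str
    input_symbol: str
    target: str
    guard: Callable[[Memory], bool]        # precondition over the memory
    effect: Callable[[Memory], Memory]     # memory update (effect)

@dataclass
class EFSM:
    initial_state: str
    initial_memory: Memory
    transitions: Tuple[Transition, ...]

    def run(self, inputs):
        """Apply an abstract test sequence and return the visited states."""
        state, memory = self.initial_state, dict(self.initial_memory)
        trace = [state]
        for symbol in inputs:
            for t in self.transitions:
                if t.source == state and t.input_symbol == symbol and t.guard(memory):
                    state, memory = t.target, t.effect(memory)
                    break
            else:
                raise AssertionError(f"no enabled transition for {symbol!r} in {state}")
            trace.append(state)
        return trace

# Hypothetical shopping-cart service: add items, then check out only if the cart is non-empty.
spec = EFSM(
    initial_state="empty",
    initial_memory={"items": 0},
    transitions=(
        Transition("empty", "add", "filled", lambda m: True,
                   lambda m: {**m, "items": m["items"] + 1}),
        Transition("filled", "add", "filled", lambda m: True,
                   lambda m: {**m, "items": m["items"] + 1}),
        Transition("filled", "checkout", "done", lambda m: m["items"] > 0,
                   lambda m: {**m, "items": 0}),
    ),
)

print(spec.run(["add", "add", "checkout"]))  # ['empty', 'filled', 'filled', 'done']
```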

    ArCo: the Italian Cultural Heritage Knowledge Graph

    ArCo is the Italian Cultural Heritage knowledge graph, consisting of a network of seven vocabularies and 169 million triples about 820 thousand cultural entities. It is distributed together with a SPARQL endpoint, software for converting catalogue records to RDF, and a rich suite of documentation material (testing, evaluation, how-to, examples, etc.). ArCo is based on the official General Catalogue of the Italian Ministry of Cultural Heritage and Activities (MiBAC) - and its associated encoding regulations - which collects and validates the catalogue records of (ideally) all Italian Cultural Heritage properties (excluding libraries and archives), contributed by CH administrators from all over Italy. We present its structure, design methods and tools, and its growing community, and delineate its importance, quality, and impact.
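    As an illustration of how such a knowledge graph can be consumed, the following sketch queries a SPARQL endpoint with SPARQLWrapper. The endpoint URL, namespace and class name are assumptions made for the example; the actual IRIs should be taken from the ArCo documentation.

```python
# Illustrative sketch of querying a SPARQL endpoint such as ArCo's with SPARQLWrapper.
# The endpoint URL, namespace and class name below are assumptions for the example.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dati.beniculturali.it/sparql"  # assumed endpoint location

query = """
PREFIX arco: <https://w3id.org/arco/ontology/arco/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?entity ?label WHERE {
  ?entity a arco:CulturalProperty ;
          rdfs:label ?label .
}
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["entity"]["value"], "-", row["label"]["value"])
```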

    Semantic Support for Log Analysis of Safety-Critical Embedded Systems

    Testing is a central activity in the development life-cycle of safety-critical embedded systems. In particular, much effort is spent on the analysis and classification of test logs from SCADA subsystems, especially when failures occur. Human expertise is needed to understand the reasons for failures, to trace errors back, and to understand which requirements are affected by errors and which ones would be affected by possible changes in the system design. Semantic techniques and full-text search are used to support human experts in the analysis and classification of test logs, in order to speed up and improve the diagnosis phase. Moreover, retrieval of tests and requirements that can be related to the current failure is supported, in order to allow the discovery of available alternatives and solutions for a better and faster investigation of the problem. Comment: EDCC-2014, BIG4CIP-2014; keywords: embedded systems, testing, semantic discovery, ontology, big data
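    A toy sketch of the underlying idea, combining a full-text match over log lines with an ontology-like mapping from failure categories to requirements, might look as follows (all categories and requirement IDs are invented; this is not the paper's system):

```python
# Minimal sketch (hypothetical): pair a full-text keyword match over test log lines
# with a small "ontology-like" mapping from failure categories to related requirements,
# to suggest which requirements a failure may affect.
import re

# Toy concept mapping: failure category -> related requirement IDs (all invented).
FAILURE_ONTOLOGY = {
    "timeout":  ["REQ-COMM-12", "REQ-PERF-03"],
    "checksum": ["REQ-DATA-07"],
    "watchdog": ["REQ-SAFE-01", "REQ-PERF-03"],
}

PATTERNS = {cat: re.compile(cat, re.IGNORECASE) for cat in FAILURE_ONTOLOGY}

def classify_log(lines):
    """Return, for each matching line, the failure category and linked requirements."""
    findings = []
    for n, line in enumerate(lines, start=1):
        for category, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((n, category, FAILURE_ONTOLOGY[category]))
    return findings

log = [
    "12:00:01 heartbeat ok",
    "12:00:05 ERROR watchdog expired on node 3",
    "12:00:06 retrying... timeout on bus A",
]
for line_no, category, reqs in classify_log(log):
    print(f"line {line_no}: {category} -> affected requirements: {', '.join(reqs)}")
```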

    LiteMat: a scalable, cost-efficient inference encoding scheme for large RDF graphs

    The number of linked data sources and the size of the linked open data graph keep growing every day. As a consequence, semantic RDF services are increasingly confronted with various "big data" problems. Query processing in the presence of inferences is one of them. For instance, to complete the answer set of SPARQL queries, RDF database systems evaluate semantic RDFS relationships (subPropertyOf, subClassOf) through time-consuming query rewriting algorithms or space-consuming data materialization solutions. To reduce the memory footprint and ease the exchange of large datasets, these systems generally apply a dictionary approach for compressing triple data sizes, replacing resource identifiers (IRIs), blank nodes and literals with integer values. In this article, we present a structured resource identification scheme using a clever encoding of concept and property hierarchies for efficiently evaluating the main common RDFS entailment rules while minimizing triple materialization and query rewriting. We show how this encoding can be computed by a scalable parallel algorithm and implemented directly over the Apache Spark framework. The efficiency of our encoding scheme is demonstrated by an evaluation conducted over both synthetic and real-world datasets. Comment: 8 pages, 1 figure
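    The gist of such a hierarchy-aware encoding can be sketched as follows: each class identifier embeds its superclass's identifier as a binary prefix, so subClassOf checks reduce to bit operations rather than query rewriting or triple materialization. This is a simplified illustration in the spirit of the approach, not the actual LiteMat algorithm, and the class hierarchy is invented.

```python
# Simplified prefix-based hierarchy encoding (illustrative, not the LiteMat algorithm):
# a child's identifier extends its parent's bit pattern, so subsumption checks
# become shift-and-compare operations over integers.

def encode(hierarchy, root, prefix=1):
    """Assign (code, bit_length) pairs; a child's code extends its parent's bits."""
    ids = {root: (prefix, prefix.bit_length())}
    children = hierarchy.get(root, [])
    width = max(1, len(children).bit_length())  # bits needed for the local index
    for i, child in enumerate(children):
        child_prefix = (prefix << width) | i
        ids.update(encode(hierarchy, child, child_prefix))
    return ids

def is_subclass(ids, sub, sup):
    """sub is a subclass of sup iff sup's bit pattern is a prefix of sub's."""
    sub_code, sub_len = ids[sub]
    sup_code, sup_len = ids[sup]
    return sub_len >= sup_len and (sub_code >> (sub_len - sup_len)) == sup_code

# Toy class hierarchy, invented for the example.
hierarchy = {"Agent": ["Person", "Organization"], "Person": ["Student"]}
ids = encode(hierarchy, "Agent")
print(ids)
print(is_subclass(ids, "Student", "Agent"))         # True
print(is_subclass(ids, "Student", "Organization"))  # False
```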

    Data protection regulation ontology for compliance

    The GDPR is the current data protection regulation in Europe. A significant market demand has been created since the GDPR came into force, largely because its scope extends beyond European borders whenever the processed data belongs to European citizens. The number of companies that require some type of regulation or standard compliance is ever-increasing, and the need for cyber security and privacy specialists has never been greater. Moreover, the GDPR has inspired a series of similar regulations all over the world. This further increases the market demand and makes the work of companies that operate internationally more complicated and difficult to scale. The purpose of this thesis is to help consultancy companies automate their work by using semantic structures known as ontologies, allowing them to increase productivity and reduce costs. Ontologies can store data and their semantics (meaning) in a machine-readable format. In this thesis, an ontology has been designed to help consultants generate the checklists (or runbooks) which they are required to deliver to their clients. The ontology is designed to handle concepts such as security measures, company information, company architecture, data sensitivity, privacy mechanisms, the distinction between technical and organisational measures, and even conditionality. The ontology was evaluated using a litmus test composed of a collection of competency questions, collected based on the use cases of the ontology. These questions were then translated into SPARQL queries and run against a test ontology. The ontology successfully passed the given litmus test; thus, it can be concluded that the implemented functionality matches the proposed design.
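    A minimal sketch of this evaluation style, assuming rdflib and invented class and property names rather than the thesis ontology, could express a competency question as a SPARQL query over a small test graph:

```python
# Sketch of the evaluation idea: a competency question expressed as SPARQL and run
# with rdflib against a tiny test ontology. The class and property names
# (TechnicalMeasure, appliesTo, ...) are invented for illustration.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/gdpr#")

g = Graph()
g.add((EX.Encryption, RDF.type, EX.TechnicalMeasure))
g.add((EX.Encryption, EX.appliesTo, EX.PersonalData))
g.add((EX.StaffTraining, RDF.type, EX.OrganisationalMeasure))

# Competency question: "Which technical measures apply to personal data?"
cq = """
PREFIX ex: <http://example.org/gdpr#>
SELECT ?measure WHERE {
  ?measure a ex:TechnicalMeasure ;
           ex:appliesTo ex:PersonalData .
}
"""

for row in g.query(cq):
    print(row.measure)  # -> http://example.org/gdpr#Encryption
```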