
    Guidelines for a Dynamic Ontology - Integrating Tools of Evolution and Versioning in Ontology

    Ontologies model systems whose conceptualisation evolves over time, and the techniques and languages for building ontologies evolve as well. This has motivated numerous studies in the fields of ontology versioning and ontology evolution. This paper presents a new way to manage the lifecycle of an ontology that incorporates both versioning tools and an evolution process. The solution, called VersionGraph, is integrated into the source ontology from its creation so that the ontology can both evolve and be versioned. Change management is strongly tied to the model in which the ontology is represented; we therefore focus on the OWL language in order to account for the impact of changes on the logical consistency of the ontology, as specified in OWL DL.
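
    A minimal sketch of the idea, assuming nothing about VersionGraph's internal structures (which the abstract does not detail): version links and one change recorded directly inside an OWL ontology, here with Python's rdflib, so that a reasoner can re-check OWL DL consistency after each evolution step. All names and URIs are illustrative.

        # Illustrative only: version metadata and one change kept inside the ontology.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import OWL, RDF, RDFS

        EX = Namespace("http://example.org/onto/")
        g = Graph()
        g.bind("owl", OWL)

        v2 = URIRef("http://example.org/onto/sensors/2.0")
        v1 = URIRef("http://example.org/onto/sensors/1.0")

        # Declare the current ontology version and link it to its predecessor,
        # keeping the evolution history inside the source ontology itself.
        g.add((v2, RDF.type, OWL.Ontology))
        g.add((v2, OWL.versionInfo, Literal("2.0")))
        g.add((v2, OWL.priorVersion, v1))

        # A change introduced by this version; its effect on OWL DL consistency
        # would be checked by a reasoner after the evolution step.
        g.add((EX.Nitrate, RDFS.subClassOf, EX.ChemicalIndicator))

        print(g.serialize(format="turtle"))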

    A unified framework for building ontological theories with application and testing in the field of clinical trials

    The objective of this research programme is to contribute to the establishment of the emerging science of Formal Ontology in Information Systems via a collaborative project involving researchers from a range of disciplines including philosophy, logic, computer science, linguistics, and the medical sciences. The researchers will work together on the construction of a unified formal ontology, that is, a general framework for the construction of ontological theories in specific domains. The framework will be constructed using the axiomatic-deductive method of modern formal ontology. It will be tested via a series of applications relating to ongoing work in Leipzig on medical taxonomies and data dictionaries in the context of clinical trials. This will lead to the production of a domain-specific ontology which is designed to serve as a basis for applications in the medical field.

    An extended ontology-based context model and manipulation calculus for dynamic web service processes

    Services are offered in an execution context that is determined by how a provider provisions the service and how the user consumes it. The need for more flexibility requires the provisioning and consumption aspects to be addressed at runtime. We propose an ontology-based context model providing a framework for service provisioning and consumption aspects, together with techniques for managing context constraints for Web service processes, so that dynamic context concerns can be monitored and validated at service process run-time. Our main goal is the contextualization of the dynamically relevant aspects of Web service processes, i.e. capturing these aspects in an extended context model. The technical contributions of this paper are a context model ontology for dynamic service contexts and an operator calculus for integrated and coherent context manipulation, composition and reasoning. The context model ontology formalizes dynamic aspects of Web services and facilitates reasoning. We present the context ontology in terms of four core dimensions - functional, QoS, domain and platform - which are internally interconnected.
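
    A minimal sketch, taking only the four core dimensions named in the abstract as given; the class and property names below are illustrative, not the authors' ontology. It renders the dimensions as RDF classes and attaches a run-time QoS observation of the kind a context constraint would be validated against, using Python's rdflib.

        # Illustrative rendering of the four context dimensions named in the abstract.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS, XSD

        CTX = Namespace("http://example.org/service-context#")
        g = Graph()
        g.bind("ctx", CTX)

        # The four core dimensions as subclasses of a common Context class.
        for dim in ("FunctionalContext", "QoSContext", "DomainContext", "PlatformContext"):
            g.add((CTX[dim], RDFS.subClassOf, CTX.Context))

        # A run-time QoS observation attached to a concrete service, i.e. the kind
        # of dynamic fact that a context constraint would be monitored against.
        g.add((CTX.bookingService, RDF.type, CTX.Service))
        g.add((CTX.obs1, RDF.type, CTX.QoSContext))
        g.add((CTX.obs1, CTX.observedResponseTimeMs, Literal(120, datatype=XSD.integer)))
        g.add((CTX.bookingService, CTX.hasContext, CTX.obs1))

        print(g.serialize(format="turtle"))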

    On the Integration of Electrical/Electronic Product Data in the Automotive Domain

    The recent innovation of modern cars has mainly been driven by the development of new electrical and electronic (E/E) components, including sensors, actuators, and electronic control units, as well as the continuous improvement of existing ones. This trend has been accompanied by an increasing complexity of E/E components and their numerous interdependencies. In addition, external impact factors (e.g., changes of regulations, product innovations) call for more sophisticated E/E product data management (E/E-PDM). Since E/E product data is usually scattered over a large number of distributed, heterogeneous IT systems, application-spanning use cases are difficult to realize (e.g., ensuring the consistency of artifacts corresponding to different development phases, or the plausibility of logical connections between electronic control units). To tackle this challenge, the partial integration of E/E product data as well as the corresponding schemas becomes necessary. This paper presents the properties of a typical IT system landscape related to E/E-PDM, reveals challenges emerging in this context, and elicits requirements for E/E-PDM. Based on this, insights into our framework, which targets the partial integration of E/E product data, are given. Such an integration fosters consistent E/E product data management and hence contributes to improved E/E product quality.

    Towards ensuring Satisfiability of Merged Ontology

    The last decade has seen researchers developing efficient algorithms for the mapping and merging of ontologies to meet the demands of interoperability between heterogeneous and distributed information systems. However, state-of-the-art ontology mapping and merging systems are still semi-automatic: they reduce the burden of manual creation and maintenance of mappings, but need human intervention to validate them. The contribution presented in this paper reduces human intervention one step further by automatically identifying semantic inconsistencies in the early stages of ontology merging. Our methodology detects inconsistencies based on structural mismatches that arise from conflicts among the set of Generalized Concept Inclusions, and on disjointness conflicts due to differences between disjoint partitions in the local heterogeneous ontologies. We present novel methodologies to detect and repair semantic inconsistencies from the list of initial mappings. This results in a global merged ontology free from 'circulatory error in class/property hierarchy', 'common class/instance between disjoint classes', 'redundancy of subclass/subproperty relations', 'redundancy of disjoint relations' and other types of semantic inconsistency errors. In this way, our methodology saves the time and cost of traversing local ontologies to validate mappings, improves performance by producing only consistent, accurate mappings, and reduces dependence on the user for ensuring the satisfiability and consistency of the merged ontology. The experiments show that the new approach with automatic inconsistency detection yields significantly higher precision.
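
    A minimal sketch of two of the checks named above: a circularity error in the class hierarchy and a shared instance of two classes declared disjoint. The toy data structures are illustrative, not the paper's representation; a real merged ontology would be traversed through an OWL API instead.

        # Illustrative consistency checks over a toy merged class hierarchy.
        subclass_of = {              # child -> parent classes in the merged ontology
            "Sensor": {"Device"},
            "Device": {"Artifact"},
            "Artifact": {"Sensor"},  # introduces a circulatory error
        }
        disjoint_pairs = {frozenset({"Person", "Organization"})}
        instances = {"acme": {"Person", "Organization"}}   # conflicting membership

        def find_subclass_cycles(edges):
            """Return classes reachable from themselves via the subclass relation."""
            cycles = set()
            for start in edges:
                seen, frontier = set(), set(edges.get(start, ()))
                while frontier:
                    nxt = frontier.pop()
                    if nxt == start:
                        cycles.add(start)
                        break
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier |= set(edges.get(nxt, ()))
            return cycles

        def find_disjointness_conflicts(members, disjoint):
            """Return (individual, pair) where an individual falls in both disjoint classes."""
            return [(ind, pair) for ind, classes in members.items()
                    for pair in disjoint if pair <= classes]

        print(find_subclass_cycles(subclass_of))                  # all three classes
        print(find_disjointness_conflicts(instances, disjoint_pairs))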

    Expressing the tacit knowledge of a digital library system as linked data

    Library organizations have enthusiastically undertaken semantic web initiatives, in particular the publishing of data as linked data. Nevertheless, several surveys report the experimental nature of these initiatives and the difficulty consumers face in re-using the data. These barriers hinder the use of linked datasets as an infrastructure that enhances the library and related information services. This paper presents an approach for encoding, as a Linked Vocabulary, the "tacit" knowledge of the information system that manages the data source. The objective is to improve the process of interpreting the meaning of the published linked datasets. We analyzed a digital library system, as a case study, to prototype the "semantic data management" method, in which data and the knowledge about it are natively managed, taking into account the linked data pillars. The ultimate objective of semantic data management is to support consumers' correct interpretation of the data and to facilitate its proper re-use. The prototype defines the ontological entities representing the knowledge of the digital library system that is stored neither in the data source nor in the existing ontologies related to the system's semantics. We present the resulting local ontology and its matching with existing ontologies, Preservation Metadata Implementation Strategies (PREMIS) and Metadata Object Description Schema (MODS), and we discuss the linked data triples prototyped from the legacy relational database using the local ontology. We show how semantic data management can deal with inconsistencies in the system data, and we conclude that a specific change in the system developers' mindset is necessary for extracting and "codifying" the tacit knowledge needed to improve the data interpretation process.
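
    A minimal sketch of the triple-prototyping step, assuming a toy relational row and a hypothetical local ontology namespace; Dublin Core terms stand in for the external vocabularies here, since the paper's actual PREMIS and MODS mappings are not reproduced in the abstract.

        # Illustrative lifting of a legacy relational row into RDF via a local ontology.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import DCTERMS, OWL, RDF

        LIB = Namespace("http://example.org/dl-ontology#")
        rows = [  # stand-in for rows read from the digital library's relational database
            {"id": "obj-001", "title": "Field notebook, 1921", "ingest_batch": "B-17"},
        ]

        g = Graph()
        g.bind("lib", LIB)
        g.bind("dcterms", DCTERMS)

        for row in rows:
            item = URIRef(f"http://example.org/item/{row['id']}")
            g.add((item, RDF.type, LIB.DigitalObject))
            g.add((item, LIB.title, Literal(row["title"])))
            # "Tacit" system knowledge made explicit: the ingest batch exists only in
            # operational tables, not in any published metadata record.
            g.add((item, LIB.ingestBatch, Literal(row["ingest_batch"])))

        # Matching a local term against a term from an existing vocabulary.
        g.add((LIB.title, OWL.equivalentProperty, DCTERMS.title))

        print(g.serialize(format="turtle"))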

    Knowledge-infused and Consistent Complex Event Processing over Real-time and Persistent Streams

    Emerging applications in the Internet of Things (IoT) and Cyber-Physical Systems (CPS) present novel challenges to Big Data platforms for performing online analytics. Ubiquitous sensors from IoT deployments are able to generate data streams at high velocity that include information from a variety of domains and accumulate to large volumes on disk. Complex Event Processing (CEP) is recognized as an important real-time computing paradigm for analyzing continuous data streams. However, existing work on CEP is largely limited to relational query processing, exposing two distinctive gaps in query specification and execution: (1) infusing the relational query model with higher-level knowledge semantics, and (2) seamless query evaluation across temporal spaces that span past, present and future events. Closing these gaps allows accessible analytics over data streams having properties from different disciplines, and helps span the velocity (real-time) and volume (persistent) dimensions. In this article, we introduce a Knowledge-infused CEP (X-CEP) framework that provides domain-aware knowledge query constructs along with temporal operators that allow end-to-end queries to span real-time and persistent streams. We translate this query model to efficient query execution over online and offline data streams, proposing several optimizations to mitigate the overheads introduced by evaluating semantic predicates and by accessing high-volume historic data streams. The proposed X-CEP query model and execution approaches are implemented in our prototype semantic CEP engine, SCEPter. We validate our query model using domain-aware CEP queries from a real-world Smart Power Grid application, and experimentally analyze the benefits of our optimizations for executing these queries, using event streams from a campus-microgrid IoT deployment.
    Comment: 34 pages, 16 figures, accepted in Future Generation Computer Systems, October 27, 201
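
    A minimal sketch of the query pattern described above: a sliding-window event query whose filter is a semantic predicate (membership in a small concept hierarchy) rather than a plain attribute match. This is illustrative Python over an in-memory stream, not the SCEPter engine or its query language, and the power-grid event names are assumptions.

        # Illustrative window query with a semantic (hierarchy-aware) predicate.
        from collections import deque
        from dataclasses import dataclass

        # Toy concept hierarchy standing in for domain knowledge from an ontology.
        SUBCLASS = {"SmartMeterReading": "PowerEvent", "TransformerAlert": "PowerEvent"}

        def is_a(event_type, concept):
            """Semantic predicate: does event_type fall under concept in the hierarchy?"""
            while event_type is not None:
                if event_type == concept:
                    return True
                event_type = SUBCLASS.get(event_type)
            return False

        @dataclass
        class Event:
            etype: str
            kw: float
            ts: int

        def window_query(stream, concept, threshold, width):
            """Emit the matching events whenever all of them in the window exceed threshold."""
            window = deque(maxlen=width)
            for ev in stream:
                window.append(ev)
                relevant = [e for e in window if is_a(e.etype, concept)]
                if relevant and all(e.kw > threshold for e in relevant):
                    yield list(relevant)

        stream = [Event("SmartMeterReading", 5.2, 1), Event("TransformerAlert", 7.9, 2)]
        for match in window_query(stream, "PowerEvent", threshold=5.0, width=4):
            print([e.etype for e in match])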

    Engineering Agile Big-Data Systems

    To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design. Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals, and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.

    A semantic and agent-based approach to support information retrieval, interoperability and multi-lateral viewpoints for heterogeneous environmental databases

    Data stored in individual autonomous databases often needs to be combined and interrelated. For example, in the Inland Water (IW) environmental monitoring domain, the spatial and temporal variation of measurements of different water quality indicators stored in different databases are of interest. Data from multiple data sources is more complex to combine when there is a lack of metadata in a computable form and when the syntax and semantics of the stored data models are heterogeneous. The main types of information retrieval (IR) requirements are query transparency and data harmonisation for data interoperability, and support for multiple user views. A combined Semantic Web-based and agent-based distributed system framework has been developed to support these IR requirements. It has been implemented using the Jena ontology and JADE agent toolkits. The semantic part supports the interoperability of autonomous data sources by merging their intensional data, using a Global-As-View (GAV) approach, into a global semantic model represented in DAML+OIL and in OWL. This is used to mediate between different local database views. The agent part provides the semantic services to import, align and parse semantic metadata instances, to support data mediation, and to reason about data mappings during alignment. The framework has been applied to support information retrieval, interoperability and multi-lateral viewpoints for four European environmental agency databases. An extended GAV approach has been developed and applied to handle queries that can be reformulated over multiple user views of the stored data. This allows users to retrieve data in a conceptualisation that is better suited to them, rather than having to understand the entire detailed global view conceptualisation. User viewpoints are derived from the global ontology or from existing viewpoints of it. This has the advantage of reducing the number of potential conceptualisations and their associated mappings to something more computationally manageable. Whereas an ad hoc framework based upon a conventional distributed programming language and a rule framework could be used to support user views and adaptation to them, a more formal framework has the benefit that it can support reasoning about consistency, equivalence, containment and conflict resolution when traversing data models. A preliminary formulation of the formal model has been undertaken and is based upon extending a Datalog-type algebra with hierarchical, attribute and instance value operators. These operators can be applied to support compositional mapping and consistency checking of data views. The multiple viewpoint system was implemented as a Java-based application consisting of two sub-systems, one for viewpoint adaptation and management, the other for query processing and query result adjustment.
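
    A minimal sketch of the Global-As-View idea described above, under assumed source layouts: each global relation is defined as a query over the local sources, so a query posed against the global schema is answered by unfolding that definition. The agency names, fields and water-quality values are illustrative only.

        # Illustrative GAV mediation over two heterogeneous "agency" sources.
        agency_a = [("R1", "nitrate", 2.4, "2003-05-01")]        # (site, indicator, value, date)
        agency_b = [{"station": "R2", "param": "nitrate", "reading": 3.1, "when": "2003-05-02"}]

        def global_measurement():
            """GAV mapping: the global Measurement relation defined as a view over both sources."""
            for site, indicator, value, date in agency_a:
                yield {"site": site, "indicator": indicator, "value": value, "date": date}
            for row in agency_b:
                yield {"site": row["station"], "indicator": row["param"],
                       "value": row["reading"], "date": row["when"]}

        def query(indicator):
            """A user-view query, answered by unfolding the GAV definition above."""
            return [m for m in global_measurement() if m["indicator"] == indicator]

        print(query("nitrate"))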