    Completing the Is-a Structure in Description Logics Ontologies


    Reasoning-Supported Quality Assurance for Knowledge Bases

    The increasing application of ontology reuse and automated knowledge acquisition tools in ontology engineering brings about a shift of development effort from knowledge modeling towards quality assurance. Despite its high practical importance, support for ensuring semantic accuracy and conciseness has been substantially lacking. In this thesis, we make a significant step forward in ontology engineering by developing support for two such essential quality assurance activities.

    Automated model-based spreadsheet debugging

    Spreadsheets are interactive data organization and calculation programs that are developed in spreadsheet environments like Microsoft Excel or LibreOffice Calc. They are probably the most successful example of end-user developed software and are utilized in almost all industries and at all levels of companies. Although spreadsheets often support important decision-making processes, they are, like all software, prone to error. In several cases, faults in spreadsheets have caused severe financial losses. Spreadsheet developers are usually not educated in the practices of software development. Since they are therefore not familiar with quality control methods like systematic testing or debugging, the spreadsheet environment itself has to support them in searching for faults in their calculations, in order to ensure the correctness and the overall quality of the developed spreadsheets. This thesis by publication introduces several approaches to locate faults in spreadsheets. The presented approaches are based on the principles of Model-Based Diagnosis (MBD), a technique for finding the possible reasons why a system does not behave as expected. Several new algorithmic enhancements of the general MBD approach are combined in this thesis to allow spreadsheet users to debug their spreadsheets and to efficiently find the reasons for the observed unexpected output values. In order to ensure a seamless integration into the environment that is well known to spreadsheet developers, the presented approaches are implemented as an extension for Microsoft Excel. The first part of the thesis outlines the different algorithmic approaches that are introduced in this thesis and summarizes the improvements that were achieved over the general MBD approach. In the second part, the appendix, a selection of the author's publications is presented. These publications comprise (a) a survey of the research in the area of spreadsheet quality assurance, (b) a work describing how to adapt the general MBD approach to spreadsheets, (c) two new algorithmic improvements of the general technique to speed up the calculation of the possible reasons of an observed fault, (d) a new concept and algorithm to efficiently determine questions that a user can be asked during debugging in order to reduce the number of possible reasons for the observed unexpected output values, and (e) a new method to find faults in a set of spreadsheets and a new corpus of real-world spreadsheets containing faults that can be used to evaluate the proposed debugging approaches.
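    To make the MBD idea concrete, here is a minimal sketch, not the thesis's algorithms: conflicts are over-approximated by the formula-dependency cones of the wrong output cells, and diagnoses are computed as their minimal hitting sets. The FORMULAS table, cell names, and values are invented for illustration.

```python
from itertools import combinations

# Hypothetical toy sheet: each formula cell maps to (input cells, function).
FORMULAS = {
    "C1": (("A1", "B1"), lambda a, b: a + b),
    "C2": (("A1", "B1"), lambda a, b: a * b),
    "D1": (("C1", "C2"), lambda c1, c2: c1 - c2),
}

def evaluate(inputs):
    """Propagate input values through the (assumed acyclic) formula graph."""
    values = dict(inputs)
    while len(values) < len(inputs) + len(FORMULAS):
        for cell, (deps, fn) in FORMULAS.items():
            if cell not in values and all(d in values for d in deps):
                values[cell] = fn(*(values[d] for d in deps))
    return values

def cone(cell):
    """All formula cells the given cell transitively depends on, itself included."""
    if cell not in FORMULAS:
        return set()
    deps, _ = FORMULAS[cell]
    return {cell}.union(*(cone(d) for d in deps))

def minimal_diagnoses(inputs, expected):
    """Minimal sets of formula cells whose faultiness would explain every wrong
    output: minimal hitting sets of the dependency cones (the conflicts)."""
    values = evaluate(inputs)
    conflicts = [cone(c) for c, v in expected.items() if values[c] != v]
    cells = sorted(set().union(*conflicts)) if conflicts else []
    diagnoses = []
    for size in range(1, len(cells) + 1):          # smallest candidates first,
        for cand in combinations(cells, size):     # so supersets are pruned
            s = set(cand)
            if all(s & conf for conf in conflicts) and \
               not any(d <= s for d in diagnoses):
                diagnoses.append(s)
    return diagnoses

print(minimal_diagnoses({"A1": 2, "B1": 3}, {"D1": 99}))
```

    On the toy sheet this reports {C1}, {C2}, and {D1} as single-fault candidates, since any one of the three formulas could explain the wrong value observed in D1.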

    Integration of Ontology Alignment and Ontology Debugging for Taxonomy Networks


    Knowledge base ontological debugging guided by linguistic evidence

    As they grow in size, knowledge bases (KBs) tend to include sets of axioms which are intuitively absurd but nonetheless logically consistent. This is particularly true of data expressed in OWL, as part of the Semantic Web framework, which favors the aggregation of sets of statements from multiple sources of knowledge, with overlapping signatures. Identifying nonsense is essential if one wants to avoid undesired inferences, but the sparse usage of negation within these datasets generally prevents the detection of such cases on a strictly logical basis. And even if the KB is inconsistent, identifying the axioms responsible for the nonsense remains a non-trivial task. This thesis investigates the use of automatically gathered linguistic evidence to detect and repair violations of common sense within such datasets. The main intuition is to exploit distributional similarity between the named individuals of an input KB in order to identify consequences which are unlikely to hold if the rest of the KB does. The repair phase then consists in selecting axioms to be preferably discarded (or at least amended) in order to get rid of the nonsense. A second strategy is also presented, which consists in strengthening the input KB with a foundational ontology in order to obtain an inconsistency, before performing a form of knowledge base debugging/revision which incorporates this linguistic input. This last step may also be applied directly to an inconsistent input KB. These proposals are evaluated on different sets of statements drawn from the Linked Open Data cloud, as well as on datasets of higher quality which were automatically degraded for the evaluation. The results suggest that distributional evidence may indeed constitute a relevant common ground for deciding between conflicting axioms.
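    A hedged sketch of the core intuition: score entailed links between named individuals by the cosine similarity of their distributional vectors, and flag dissimilar pairs as candidates for repair. The vectors, names, and threshold below are all invented for illustration.

```python
import math

# Hypothetical distributional vectors for named individuals, e.g. built from
# corpus co-occurrence counts (the values here are made up).
VECTORS = {
    "Paris":   [0.9, 0.1, 0.0],
    "France":  [0.8, 0.2, 0.1],
    "Haskell": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Standard cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def suspicious_consequences(pairs, threshold=0.5):
    """Rank entailed links between individuals; a link between distributionally
    dissimilar individuals is unlikely to hold and is flagged for repair."""
    scored = [(cosine(VECTORS[a], VECTORS[b]), a, b) for a, b in pairs]
    return sorted((s, a, b) for s, a, b in scored if s < threshold)

# Suppose the KB entails both links; only the Paris-Haskell one is flagged.
print(suspicious_consequences([("Paris", "France"), ("Paris", "Haskell")]))
```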


    Current and Future Challenges in Knowledge Representation and Reasoning

    Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation to other areas, its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and its challenges, along with key priorities for the next decade.

    Knowledge-centric autonomic systems

    Autonomic computing revolutionised the commonplace understanding of proactiveness in the digital world by introducing self-managing systems. Built on top of IBM's structural and functional recommendations for implementing intelligent control, autonomic systems are meant to pursue high-level goals, while adequately responding to changes in the environment, with a minimum amount of human intervention. One of the leading challenges in implementing this type of behaviour in practical situations stems from the way autonomic systems manage their inner representation of the world. Specifically, all the components involved in the control loop have shared access to the system's knowledge, which, for seamless cooperation, needs to be kept consistent at all times. A possible solution lies with another popular technology of the 21st century, the Semantic Web, and the knowledge representation media it fosters, ontologies. These formal yet flexible descriptions of the problem domain are equipped with reasoners, inference tools that, among other functions, check knowledge consistency. The immediate application of reasoners in an autonomic context is to ensure that all components share and operate on a logically correct and coherent "view" of the world. At the same time, ontology change management is a difficult task to complete with semantic technologies alone, especially if little to no human supervision is available. This invites the idea of delegating change management to an autonomic manager, as the intelligent control loop it implements is engineered specifically for that purpose. Despite the inherent compatibility between autonomic computing and semantic technologies, their integration is non-trivial and insufficiently investigated in the literature. This gap represents the main motivation for this thesis. Moreover, existing attempts at provisioning autonomic architectures with semantic engines represent bespoke solutions for specific problems (load balancing in autonomic networking, deconflicting high-level policies, and informing the process of correlating diverse enterprise data are just a few examples). The main drawback of these efforts is that they provide only limited scope for reuse and cross-domain analysis: design guidelines, architectural models that would scale well across different applications, and modular components that could be integrated in other systems are poorly represented. This work proposes KAS (Knowledge-centric Autonomic System), a hybrid architecture combining semantic tools such as:
    • an ontology to capture domain knowledge,
    • a reasoner to keep domain knowledge consistent as well as infer new knowledge,
    • a semantic querying engine,
    • a tool for semantic annotation analysis,
    with a customised autonomic control loop featuring:
    • a novel algorithm for extracting knowledge authored by the domain expert,
    • "software sensors" to monitor user requests and environment changes,
    • a new algorithm for analysing the monitored changes, matching them against known patterns and producing plans for taking the necessary actions,
    • "software effectors" to implement the planned changes and modify the ontology accordingly.
    The purpose of KAS is to act as a blueprint for the implementation of autonomic systems harnessing semantic power to improve self-management. To this end, two KAS instances were built and deployed in two different problem domains, namely self-adaptive document rendering and autonomic decision support for career management. The former case study is intended as a desktop application, whereas the latter is a large-scale, web-based system built to capture and manage knowledge sourced by an entire (relevant) community. The two problems are representative of their own application classes (desktop tools required to respond in real time, and online decision-support platforms expected to process large volumes of continuously transforming data) and were therefore selected to demonstrate the cross-domain applicability, which state-of-the-art approaches tend to lack, of the proposed architecture. Moreover, analysing KAS behaviour in these two applications enabled the distillation of design guidelines and lessons learnt from practical implementation experience while building on and adapting state-of-the-art tools and methodologies from both fields. KAS is described and analysed from design through to implementation. The design is evaluated using ATAM (Architecture Tradeoff Analysis Method), whereas the performance of the two practical realisations is measured both globally and deconstructed in an attempt to isolate the impact of each autonomic and semantic component. This last type of evaluation employs state-of-the-art metrics for each of the two domains. The experimental findings show that both instances of the proposed hybrid architecture successfully meet the prescribed high-level goals and that the semantic components have a positive influence on the system's autonomic behaviour.
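    As a rough illustration of the kind of control loop KAS customises, the sketch below wires stubbed monitor/analyse/plan/execute stages around a shared knowledge model. In KAS itself the consistency check would be delegated to an OWL reasoner rather than the toy rule used here; all names, facts, and rules are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    """Stand-in for the shared ontology: facts plus a toy consistency rule."""
    facts: set = field(default_factory=set)
    forbidden: set = field(default_factory=lambda: {("cpu", "overloaded")})

    def consistent(self):
        # A real KAS instance would delegate this check to an OWL reasoner.
        return not (self.facts & self.forbidden)

def monitor(sensor_events):
    """'Software sensor': turn raw environment events into candidate facts."""
    return {("cpu", state) for state in sensor_events}

def analyse(knowledge, observed):
    """Match observed changes against known patterns; report violations."""
    return {f for f in observed if f in knowledge.forbidden}

def plan(violations):
    """Produce a repair action for each violation (illustrative pattern table)."""
    return [("shed_load", fact) for fact in violations]

def execute(knowledge, actions, observed):
    """'Software effector': apply the plans, then commit a consistent model."""
    for _action, fact in actions:
        observed.discard(fact)          # repair: drop the offending fact
    knowledge.facts |= observed
    assert knowledge.consistent()

kb = Knowledge()
observed = monitor(["idle", "overloaded"])
execute(kb, plan(analyse(kb, observed)), observed)
print(kb.facts)   # {('cpu', 'idle')}: the violating fact was repaired away
```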

    Ontological approach to derive product configurations from a Software Product Line Reference Architecture

    Software Product Lines (SPL), being based on reuse, claim to improve evolution and time to market and to decrease software development costs. Concrete software products or systems, members of the SPL family, are derived by instantiating a generic Reference Architecture (RA) holding common and variant components. The construction of an RA is a complex and costly task, as is its use for product derivation, due to the huge number of variants, essentially caused by the variability of non-functional requirements. In consequence, selecting an RA instance, or Feasible Solution (FS), that meets the RA constraints and customer requirements is not straightforward. In this work the RA is built by a bottom-up process from existing products; the RA and its instances are represented by an undirected connected graph. The HIS-RA Ontology also represents the RA and captures domain knowledge about Healthcare Integrated Information Systems (HIS). Moreover, an FS must be connected (the subgraph it induces in the RA has no isolated components), consistent (it verifies consistency rules among FS components), and working (it meets domain functional requirements (FR) and non-functional requirements (NFR)). The main goal of this paper is to define a semiautomatic process (FFSP) that derives consistency rules using the built-in reasoning capabilities of the HIS-RA Ontology in order to construct consistent, connected, and working FSs. Software quality is addressed by FFSP through the traceability between FR and NFR, specified according to ISO/IEC 25010, to guarantee RA evolution and the overall quality of the concrete product configuration. FFSP is validated on a case study in the HIS domain.
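    The connectedness and consistency conditions on an FS translate directly into graph checks. Below is a small sketch under assumed data: an illustrative RA edge set, toy "requires" consistency rules, and a breadth-first search over the induced subgraph. The real FFSP derives such rules from the HIS-RA Ontology instead.

```python
from collections import deque

# Hypothetical RA graph: components as nodes, allowed connections as edges.
RA_EDGES = {("ehr_core", "billing"), ("ehr_core", "lab"), ("lab", "imaging")}

# Illustrative consistency rules: selecting a component requires others.
REQUIRES = {"billing": {"ehr_core"}, "imaging": {"lab"}}

def induced_connected(nodes):
    """True iff the subgraph the RA induces on `nodes` has no isolated part."""
    nodes = set(nodes)
    if not nodes:
        return False
    adj = {n: set() for n in nodes}
    for a, b in RA_EDGES:
        if a in nodes and b in nodes:
            adj[a].add(b)
            adj[b].add(a)
    seen, queue = set(), deque([next(iter(nodes))])   # BFS from any node
    while queue:
        n = queue.popleft()
        if n not in seen:
            seen.add(n)
            queue.extend(adj[n] - seen)
    return seen == nodes

def feasible(nodes):
    """Connected + consistent; a full FFSP would also check FR/NFR coverage."""
    nodes = set(nodes)
    consistent = all(REQUIRES.get(n, set()) <= nodes for n in nodes)
    return consistent and induced_connected(nodes)

print(feasible({"ehr_core", "lab", "imaging"}))   # True
print(feasible({"ehr_core", "imaging"}))          # False: imaging needs lab
```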