
    On the Multiple Roles of Ontologies in Explainable AI

    This paper discusses the different roles that explicit knowledge, and ontologies in particular, can play in Explainable AI and in the development of human-centric explainable systems and intelligible explanations. We consider three main perspectives from which ontologies can contribute significantly: reference modelling, common-sense reasoning, and knowledge refinement and complexity management. We survey some of the existing approaches in the literature and position them according to these three perspectives. The paper concludes by discussing the challenges that still need to be addressed to enable ontology-based approaches to explanation and to evaluate their human-understandability and effectiveness.

    Structuring Abstraction to Achieve Ontology Modularisation

    Large and complex ontologies lead to usage difficulties, thereby hampering ontology developers' tasks. Ontology modules have been proposed as a possible solution, supported by some algorithms and tools. However, most types of modules, including those based on abstraction, still rely on manual methods for modularisation. Toward filling this gap in modularisation techniques, we systematised abstractions and selected five types of abstraction relevant to modularisation, for which we created novel algorithms, implemented them, and wrapped them in a GUI, called NOMSA, to facilitate their use by ontology developers. The algorithms were evaluated quantitatively by assessing the quality of the generated modules, measured against the benchmark metrics of an existing framework for ontology modularisation. The results show that module quality ranges from average to good, whilst eliminating manual intervention.
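
    To make the idea of abstraction-based modularisation concrete, here is a minimal sketch of one abstraction style: pruning a class hierarchy below a depth cutoff so that only the more general classes remain in the module. NOMSA's five algorithms are not reproduced here; the toy hierarchy and the depth-based criterion are assumptions made purely for illustration.

```python
from collections import deque

def depths(subclasses, root):
    """Breadth-first depth of every class reachable from the root."""
    depth, queue = {root: 0}, deque([root])
    while queue:
        cls = queue.popleft()
        for child in subclasses.get(cls, []):
            if child not in depth:
                depth[child] = depth[cls] + 1
                queue.append(child)
    return depth

def abstract_module(subclasses, root, max_depth):
    """Keep only classes within max_depth of the root, preserving edges."""
    keep = {c for c, d in depths(subclasses, root).items() if d <= max_depth}
    return {c: [s for s in subs if s in keep]
            for c, subs in subclasses.items() if c in keep}

# Toy hierarchy: Animal -> {Mammal, Bird}, Mammal -> {Dog, Cat}.
hierarchy = {"Animal": ["Mammal", "Bird"], "Mammal": ["Dog", "Cat"]}
print(abstract_module(hierarchy, "Animal", 1))
# {'Animal': ['Mammal', 'Bird'], 'Mammal': []}
```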

    Reasoning on Models: Detection and Isolation of Abnormalities in Diagnostic Systems

    In Model-Based Diagnosis, a set of inference rules is typically used to compute diagnoses, using a scientific and mathematical theory about the system under study together with a set of observations. Contrary to the classical hypothesis, these Models are often abnormal with respect to a set of required properties, which affects the quality of the computed diagnoses, with possibly large economic consequences, in particular at Airbus. A theory of reality, information, and cognition is first used to redefine the classical framework of model-based diagnosis from a formal, model-theoretic perspective. This, in turn, enables the formalisation of abnormalities and of their relation to the properties of diagnoses. With this material, and with the idea that an implemented diagnostic system can itself be seen as a real-world artefact to be diagnosed, a theory of meta-diagnosis is developed, enabling the detection and isolation of abnormalities in the Models of diagnostic systems. This theory is encoded in a tool, MEDITO, and successfully tested against real-world Airbus industrial problems. Moreover, as heterogeneous implemented Airbus diagnostic systems, suffering from distinct abnormalities, may compute different diagnoses, methods and tools are developed for: 1) checking the consistency between diagnoses and 2) validating and comparing the performance of these diagnostic systems. This work relies on an original bridge between the Airbus diagnosis framework and its academic counterpart. Finally, meta-diagnosis is generalised to handle meta-systems other than implemented diagnostic systems.
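
    The classical framework the thesis builds on can be illustrated with a small consistency-based diagnosis sketch: a diagnosis is a minimal set of components that, once assumed abnormal, makes the system model consistent with the observations. This is not the Airbus/MEDITO tooling, which is not public; the two-inverter circuit and the brute-force search are assumptions made for the example.

```python
from itertools import combinations

# Toy system: two inverters in series, a -> inv1 -> b -> inv2 -> c.
def consistent(abnormal, obs):
    """Check the observations against every component assumed normal."""
    a, b, c = obs["a"], obs["b"], obs["c"]
    if "inv1" not in abnormal and b != (not a):
        return False
    if "inv2" not in abnormal and c != (not b):
        return False
    return True

def diagnoses(components, obs):
    """Enumerate minimum-cardinality abnormality assumptions that
    restore consistency with the observations."""
    found = []
    for size in range(len(components) + 1):
        for abnormal in combinations(components, size):
            if consistent(set(abnormal), obs):
                found.append(set(abnormal))
        if found:  # stop at the smallest size admitting a diagnosis
            return found
    return found

# Observation: a=True, b=True (unexpected!), c=False.
print(diagnoses(["inv1", "inv2"], {"a": True, "b": True, "c": False}))
# [{'inv1'}] -- inv1 must be abnormal; inv2's behaviour fits b -> c.
```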

    A foundation for ontology modularisation

    There has been great interest in realising the Semantic Web. Ontologies are used to define Semantic Web applications. Ontologies have grown so large and complex that they cause cognitive overload for humans, in understanding and maintenance, and for machines, in processing and reasoning. Furthermore, building ontologies from scratch is time-consuming and not always necessary. Prospective ontology developers could consider reusing existing ontologies of good quality. However, an entire large ontology is not always required for a particular application; a subset of the knowledge may be relevant. Modularity deals with simplifying an ontology into smaller ontologies, for a particular context or by structure, thereby preserving the contextual knowledge. Modularising an ontology brings a number of benefits, including simplified maintenance and machine processing, as well as collaborative efforts whereby work can be shared among experts. Modularity has been successfully applied to a number of ontologies to improve usability and manage complexity. However, problems exist for modularity that have not been satisfactorily addressed. Currently, modularity tools generate large modules that do not exclusively represent the context, and partitioning tools, which ought to generate disjoint modules, sometimes create overlapping modules. These problems arise from several issues: different module types have not been clearly characterised, it is unclear what the properties of a 'good' module are, and it is unclear which evaluation criteria apply to specific module types. To solve these problems, a number of theoretical aspects have to be investigated: it is important to determine which ontology module types are the most widely used, to characterise each such type by distinguishing properties, and to identify the properties that a 'good' or 'usable' module meets. In this thesis, we investigate these problems systematically. We begin by identifying dimensions of modularity that define its foundation: use-case, technique, type, property, and evaluation metric. Each dimension is populated with sub-dimensions as fine-grained values. The dimensions are used to create an empirically based framework for modularity by classifying a set of ontologies with them, which yields dependencies among the dimensions. The formal framework can guide the user in modularising an ontology and serve as a starting point in the modularisation process. To address module quality, new and existing metrics were implemented in a novel tool, TOMM, and an experimental evaluation on a set of modules was performed, resulting in dependencies between the metrics and module types. These dependencies can be used to determine whether a module is of good quality. To address the shortcomings of existing modularity techniques, we created five new algorithms to improve on current tools and techniques and evaluated them experimentally. The algorithms of the resulting tool, NOMSA, perform as well as other tools on most performance criteria. Of the modules NOMSA generates, those of two of its algorithms are of good quality when compared to the expected dependencies of the framework; the modules of the remaining three algorithms correspond to some of the expected metric values for the ontology set in question.
    Solving these problems with modularity resulted in a formal foundation for modularity comprising: an exhaustive set of modularity dimensions with dependencies between them, a framework for guiding the modularisation process and annotating modules, a way to measure the quality of modules using the novel TOMM tool with its new and existing evaluation metrics, the SUGOI tool for module management, which has been investigated for module interchangeability, and an implementation of new algorithms to fill the gaps in existing tools and techniques.
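
    To give a feel for what module-quality metrics measure, here is a minimal sketch of two standard notions, module size and size relative to the source ontology. TOMM's actual metric suite is richer and is not reproduced here; the function name and the toy entity sets are assumptions made for the example.

```python
def module_metrics(module_entities, source_entities):
    """Size of a module and its size relative to the source ontology."""
    size = len(module_entities)
    relative_size = size / len(source_entities)
    return {"size": size, "relative_size": round(relative_size, 3)}

source = {"Animal", "Mammal", "Bird", "Dog", "Cat", "Fish", "hasHabitat"}
module = {"Animal", "Mammal", "Dog", "Cat"}
print(module_metrics(module, source))  # {'size': 4, 'relative_size': 0.571}
```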

    A Study of Knowledge in the Setting of Partial Observations: The Logic of Observation

    We focus on the study of knowledge obtained by performing partial observations of a system. The results of these observations can be structured by comparing their informational content, and this is how we define what we call representations. It is also possible to study the relationships that exist between different observation methods, which leads to functions between representations. With this formalism, we carry out a logical study of the way information behaves in such a context. The first point that arises is that one has to use intuitionistic logic, since propositions are about facts and not beliefs, and the addition of information does not change their veracity. This logic is extended by adding "modal" operators, which correspond to different ways of observing a system and express the fact that a piece of information is or is not accessible from a given point of view. Depending on the constraints applied to our structures, one gets several behaviours for these operators, and as many logics, all similar to the modal logic IS4. The basic postulate, namely that a system is studied by observing it, is extremely general. However, our study shows that it leads to a rather weak logic, since neither the excluded-middle principle nor the modal axiom 5 can be verified, even if extra hypotheses are added. This means that the only important elements are the results of observations, and all reasoning is done constructively from them. As a consequence, the non-observation of a fact cannot be used as a proof of its negation. In conclusion, the only elements which have to be taken into account when observing and studying nature are the observations that can be made of it, and any knowledge is obtained deductively from them.
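
    A worked sketch may help situate the logic named here. IS4 is commonly presented as intuitionistic propositional logic extended with a necessity operator satisfying the S4 axioms; the exact axiomatisation used in the thesis may differ, so the following is an assumption for illustration.

```latex
% Intuitionistic propositional logic plus, for each observation
% modality \Box (one per way of observing the system):
\begin{align*}
\text{(K)} \quad & \Box(A \to B) \to (\Box A \to \Box B) \\
\text{(T)} \quad & \Box A \to A \\
\text{(4)} \quad & \Box A \to \Box\Box A
\end{align*}
% What fails, per the abstract: the excluded middle A \lor \neg A and
% axiom 5, \Diamond A \to \Box\Diamond A. Constructively, not observing
% A never licenses concluding \neg A.
```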

    A Semantic Theory of Abstractions

    P. Pandurang Nayak (Recom Technologies, NASA Ames Research Center, MS 269-2, Moffett Field, CA 94035; [email protected]) and Alon Y. Levy (AT&T Bell Laboratories, AI Principles Research Department, 600 Mountain Avenue, Room 2C-406, Murray Hill, NJ 07974; [email protected])
    In this paper we present a semantic theory of abstractions based on viewing abstractions as model-level mappings. This theory captures important aspects of abstractions not captured in the syntactic theory of abstractions presented by Giunchiglia and Walsh [1992]. Instead of viewing abstractions as syntactic mappings, we view abstraction as a two-step process: first, the intended domain model is abstracted, and then a set of (abstract) formulas is constructed to capture the abstracted domain model. Viewing and justifying abstractions as model-level mappings is both natural and insightful. This basic theory yields abstractions that are weaker than the base theory. We show that abstractions that a..
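
    The abstract's two-step view can be written out compactly. The notation below is an assumption made for illustration; the paper's own definitions may differ.

```latex
% Two-step reading of abstraction as a model-level mapping:
\begin{align*}
\text{Step 1 (abstract the model):}\quad & \pi : M \longmapsto \pi(M)\\
\text{Step 2 (axiomatise the abstract model):}\quad & \pi(M) \models T'
\end{align*}
% "Weaker than the base theory": the abstract theory T' proves no more
% than what holds in the abstracted model \pi(M), so abstract inference
% is sound but need not be complete with respect to the base theory T.
```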