
    Explaining Deep Learning Hidden Neuron Activations using Concept Induction

    One of the current key challenges in Explainable AI is correctly interpreting the activations of hidden neurons. It seems evident that accurate interpretations would provide insight into what a deep learning system has internally detected as relevant in the input, thus lifting some of the black-box character of deep learning systems. The state of the art indicates that hidden node activations can, at least in some cases, be interpreted in a way that makes sense to humans. Yet systematic automated methods that would first hypothesize an interpretation of hidden neuron activations and then verify it are mostly missing. In this paper, we provide such a method and demonstrate that it yields meaningful interpretations. It is based on large-scale background knowledge -- a class hierarchy of approximately 2 million classes curated from the Wikipedia Concept Hierarchy -- together with a symbolic reasoning approach called concept induction, based on description logics, that was originally developed for applications in the Semantic Web field. Our results show that we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer of a Convolutional Neural Network through a hypothesis-and-verification process.
    Comment: Submitted to IJCAI-2
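    The hypothesis-and-verification idea can be illustrated with a minimal sketch. The toy hierarchy, thresholds, and function names below are hypothetical stand-ins (the actual method uses a ~2-million-class hierarchy and a description-logic concept induction reasoner, not this simplified common-ancestor search):

```python
# Sketch: label a neuron by the most specific background-knowledge class that
# covers the inputs strongly activating it, then verify that coverage.
# All data and names here are illustrative, not the paper's actual pipeline.

# Toy class hierarchy: child -> parent (stand-in for the Wikipedia-derived one)
HIERARCHY = {
    "beagle": "dog", "poodle": "dog", "dog": "animal",
    "oak": "tree", "tree": "plant", "animal": "thing", "plant": "thing",
}

def ancestors(cls):
    """Return cls followed by all its superclasses, most specific first."""
    chain = [cls]
    while cls in HIERARCHY:
        cls = HIERARCHY[cls]
        chain.append(cls)
    return chain

def hypothesize_label(positives):
    """Hypothesis step: most specific class subsuming every positive example."""
    common = set(ancestors(positives[0]))
    for c in positives[1:]:
        common &= set(ancestors(c))
    for c in ancestors(positives[0]):  # walk from specific to general
        if c in common:
            return c
    return None

def verify(label, positives, negatives):
    """Verification step: the label must cover all positives and exclude
    most negatives (threshold chosen arbitrarily for the sketch)."""
    covers = all(label in ancestors(c) for c in positives)
    excluded = sum(label not in ancestors(c) for c in negatives)
    return covers and excluded / max(len(negatives), 1) >= 0.8

positives = ["beagle", "poodle"]      # inputs that strongly activate the neuron
negatives = ["oak"]                   # inputs that do not
label = hypothesize_label(positives)  # -> "dog"
print(label, verify(label, positives, negatives))
```

In the paper's setting, the hypothesis step is performed by a concept induction reasoner over description-logic class expressions rather than a simple least-common-ancestor lookup, but the overall loop has this shape.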

    Modular ontology modeling meets upper ontologies: the upper ontology alignment tool

    Master of Science, Department of Computer Science, Pascal Hitzler. Ontology modeling has become a primary approach to schema generation for data integration and knowledge graphs in many application areas. The quest for efficient ways to model useful and reusable ontologies has led to a number of ontology creation proposals over the years. This project focuses on two major approaches: modeling with a top-level ontology, and modular ontology modeling. The traditional approach is based on a top-level ontology; the strategy is to use an ontology comprehensive enough to cover a broad spectrum of domains through its universal terminology. In this way, all domain ontologies share a common top-level formal ontology under which their respective root nodes can be defined, and consistency is thus assured across the knowledge graph. The more recent approach is quite different: a refinement of the eXtreme Ontology Design methodology based on ontology design patterns. The whole ontology is viewed as a collection of interconnected modules, each developed around a fundamental notion identified from experts' terminology or the use case. Because the modules are developed in a divide-and-conquer fashion, they can be shared and reused by other ontologies, which in turn justifies the ontology being FAIR (findable, accessible, interoperable, and reusable). Although it has been argued that either paradigm has its advantages, the two can also be combined, depending on the use case or the preferences of the ontology engineers. We provide an extension to the Protégé-based modular ontology engineering tool CoModIDE that makes it possible for ontology engineers to follow the traditional ontology modeling approach alongside more modern paradigms such as modular ontology engineering.
    The project targets domain-level ontology developers and organizations engaged in ontology development, whom the plugin may help by minimizing the tooling gap between paradigms and supporting the development of robust, flexible ontologies suited to their needs. As a bridge between the more recently proposed modular ontology modeling approach and more classical ones based on foundational ontologies, it enables a best-of-both-worlds approach to ontology engineering.
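    The combination the abstract describes, aligning each module's root class under a shared upper ontology, can be sketched in miniature. The class names, modules, and the check below are invented for illustration and are not CoModIDE's actual API:

```python
# Sketch: every ontology module's root class is asserted as a subclass of
# some upper-ontology class, and we check that all roots resolve to the
# upper ontology's common top class. All names here are hypothetical.

UPPER = {  # upper ontology: class -> superclass
    "PhysicalObject": "Entity", "Process": "Entity", "Agent": "PhysicalObject",
}

modules = {  # module name -> that module's root class
    "VehicleModule": "Vehicle",
    "TripModule": "Trip",
}

alignments = {  # asserted subClassOf links from module roots into UPPER
    "Vehicle": "PhysicalObject",
    "Trip": "Process",
}

def upper_ancestors(cls):
    """Superclasses of cls within the upper ontology, nearest first."""
    chain = []
    while cls in UPPER:
        cls = UPPER[cls]
        chain.append(cls)
    return chain

def check_alignment(modules, alignments):
    """Every module root must be aligned and reach the common top 'Entity'."""
    for mod, root in modules.items():
        if root not in alignments:
            return False, f"{mod}: root {root} is unaligned"
        target = alignments[root]
        if target != "Entity" and "Entity" not in upper_ancestors(target):
            return False, f"{mod}: {target} does not reach the top class"
    return True, "all module roots share the upper ontology's top class"

ok, msg = check_alignment(modules, alignments)
print(ok, msg)
```

In practice such alignments are OWL `subClassOf` axioms checked by a reasoner inside Protégé; the dictionary lookup above only mimics that subsumption check.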