
    Bridging the Semantic Gap between Sensor Data and Ontological Knowledge

    The rapid growth of sensor data can potentially give humans a better awareness of their environment. For this, the interpretation of the data needs to be human-understandable, which can be achieved by enriching numeric data with semantic annotations that capture its meaning. This thesis is about bridging the gap between quantitative data and qualitative knowledge in order to enrich the interpretation of data. A number of challenges make the automation of the interpretation process non-trivial, including the complexity of sensor data, the amount of available structured knowledge, and the inherent uncertainty in the data. Under the premise that high-level knowledge is contained in ontologies, this thesis investigates the use of current techniques in ontological knowledge representation and reasoning to confront these challenges. Our research is divided into three phases. The first phase focuses on the interpretation of data for domains that are semantically poor in terms of available structured knowledge. In the second phase, we study publicly available ontological knowledge for the task of annotating multivariate data; our contribution here is the application of a diagnostic reasoning algorithm to the available ontologies. The last phase focuses on the design and development of a domain-independent ontological representation model equipped with a non-monotonic reasoning approach for annotating time-series data; our final contribution is the coupling of an OWL-DL ontology with a non-monotonic reasoner. The experimental platforms used for validation consist of a network of sensors, including gas sensors whose generated data is complex, a secondary data set of time-series medical signals representing physiological data, and a number of publicly available ontologies, such as those in the NCBO BioPortal repository.

    Semantic Analysis Of Multi Meaning Words Using Machine Learning And Knowledge Representation

    The present thesis addresses machine learning in a domain of natural-language phrases that are names of universities. It describes two approaches to this problem and a software implementation that has made it possible to evaluate and compare them. In general terms, the system's task is to learn to 'understand' the significance of the various components of a university name, such as the city or region where the university is located, the scientific disciplines that are studied there, or the name of a famous person which may be part of the university name. A concrete test of whether the system has acquired this understanding is whether it is able to compose a plausible university name given some components that should occur in the name. In order to achieve this capability, our system learns the structure of the university names available in a given data set, i.e. it acquires a grammar for the microlanguage of university names. One of the challenges is that the system may encounter ambiguities due to multi-meaning words. This problem is addressed using a small ontology that is created during the training phase. Both domain knowledge and grammatical knowledge are represented using decision trees, an efficient method for concept learning. Besides inductive inference, their role is to partition the data set into a hierarchical structure which is used for resolving ambiguities. The present report also defines some modifications to the definitions of parameters, for example a parameter for entropy, which enable the system to deal with cognitive uncertainties. Our method for automatic syntax acquisition, ADIOS, is an unsupervised learning method; it is described and discussed here, including a report on the outcome of the tests using our data set. The software used in this project has been implemented in C
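
    The modified entropy parameter mentioned above is not specified in the abstract, but the standard quantities it builds on are well known. Below is a minimal sketch of Shannon entropy and information gain as used in decision-tree concept learning; the toy token attributes and role labels are hypothetical, for illustration only.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, labels, attribute):
    """Entropy reduction from splitting `examples` on `attribute`.

    `examples` is a list of dicts; `attribute` is a key in each dict.
    """
    base = entropy(labels)
    total = len(examples)
    remainder = 0.0
    for value in {ex[attribute] for ex in examples}:
        subset = [lab for ex, lab in zip(examples, labels) if ex[attribute] == value]
        remainder += (len(subset) / total) * entropy(subset)
    return base - remainder

# Hypothetical toy data: tokens from university names, labelled by their role.
examples = [{"capitalized": True}, {"capitalized": True}, {"capitalized": False}]
labels = ["place", "person", "discipline"]
print(information_gain(examples, labels, "capitalized"))
```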

    Perceiving and acting out of the box

    This paper discusses potential limitations in learning in autonomous robotic systems that integrate several specialized subsystems working at different levels of abstraction. If the designers have anticipated what the system may have to learn, then adding new knowledge boils down to adding new entries in a database and/or tuning parameters of some subsystem(s). But if this new knowledge does not fit in predefined structures, the system simply cannot acquire it; hence it cannot “think out of the box” designed by its creators. We show why learning out of the box may be difficult in integrated systems, hint at some existing potential approaches, and finally suggest that a better approach may come from looking at constructivist epistemology, with a focus on Piaget's schema theory.

    SmartEnv as a Network of Ontology Patterns

    In this article we outline the details of an ontology, called SmartEnv, proposed as a representational model to assist the development of smart (i.e., sensorized) environments. The SmartEnv ontology is described in terms of its modules, which represent different aspects of a smart environment, both physical and conceptual. We propose the use of the Ontology Design Pattern (ODP) paradigm to modularize the solution while avoiding strong dependencies between the modules, in order to manage the representational complexity of the ontology. The ODP paradigm and related methodologies enable the incremental construction of ontologies by first creating and then linking small modules. Most modules (patterns) of the SmartEnv ontology are inspired by, and aligned with, the Semantic Sensor Network (SSN) ontology, but with extra interlinks that provide further precision and cover more representational aspects. The result is a network of eight ontology patterns that together form a generic representation of a smart environment. The patterns have been submitted to the ODP portal and are available online at stable URIs.
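
    As a rough illustration of the pattern-network idea, the sketch below wires a set of ontology modules together with owl:imports using rdflib. The module names and namespace are hypothetical placeholders, not the published SmartEnv patterns, and the SSN import merely stands in for the alignment described above.

```python
from rdflib import Graph, URIRef
from rdflib.namespace import OWL, RDF

BASE = "http://example.org/smartenv/"  # hypothetical namespace, not the real URIs
module_names = ["Sensing", "Actuation", "Network", "Place", "Event", "Situation"]

g = Graph()
root = URIRef(BASE + "SmartEnv")
g.add((root, RDF.type, OWL.Ontology))

for name in module_names:
    module = URIRef(BASE + name)
    g.add((module, RDF.type, OWL.Ontology))
    # The root ontology imports each pattern; the patterns do not import
    # each other, which keeps the coupling between modules low.
    g.add((root, OWL.imports, module))

# An alignment with SSN could likewise be pulled in as an import:
g.add((root, OWL.imports, URIRef("http://www.w3.org/ns/ssn/")))

print(g.serialize(format="turtle"))
```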

    Metrics and Evaluations of Time Series Explanations: An Application in Affect Computing

    Explainable artificial intelligence (XAI) has shed light on numerous applications by clarifying why neural models make specific decisions. However, it remains challenging to measure how sensitive the explanations produced by XAI methods are. Although different evaluation metrics have been proposed to measure sensitivity, the main focus has been on visual and textual data; insufficient attention has been devoted to sensitivity metrics tailored for time series data. In this paper, we formulate several metrics, including max short-term sensitivity (MSS), max long-term sensitivity (MLS), average short-term sensitivity (ASS) and average long-term sensitivity (ALS), that target the sensitivity of XAI models with respect to generated and real time series. Our hypothesis is that for close series with the same labels, we obtain similar explanations. We evaluate three XAI models, LIME, Integrated Gradients (IG), and SmoothGrad (SG), on CN-Waterfall, a deep convolutional network that is a highly accurate time series classifier in affective computing. Our experiments rely on data-, metric- and XAI-hyperparameter-related settings on the WESAD and MAHNOB-HCI datasets. The results reveal that (i) IG and LIME provide a lower sensitivity scale than SG across all metrics and settings, potentially due to the lower scale of the importance scores they generate, (ii) the XAI models show higher sensitivities for a smaller window of data, (iii) the sensitivities of the XAI models fluctuate when the network parameters and data properties change, and (iv) the XAI models provide unstable sensitivities under different hyperparameter settings.
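
    The paper's formal definitions of MSS, MLS, ASS and ALS are not reproduced in the abstract. The sketch below shows one plausible reading under stated assumptions: the explanation of a real series is compared element-wise with the explanation of a close, same-label (generated) series, aggregated by max or mean over a short leading window versus the full series. The window size and the comparison itself are assumptions, not the published formulas.

```python
import numpy as np

def sensitivity_metrics(expl_real, expl_gen, short_window=10):
    """Illustrative (not the paper's exact) sensitivity metrics.

    `expl_real`, `expl_gen`: 1-D arrays of attribution scores for a real
    time series and a close, same-label (generated) series. Under the
    hypothesis that close series yield similar explanations, large
    differences indicate high sensitivity of the XAI method.
    """
    diff = np.abs(np.asarray(expl_real) - np.asarray(expl_gen))
    short = diff[:short_window]      # short-term: a small leading window
    return {
        "MSS": short.max(),          # max short-term sensitivity
        "ASS": short.mean(),         # average short-term sensitivity
        "MLS": diff.max(),           # max long-term sensitivity (full series)
        "ALS": diff.mean(),          # average long-term sensitivity
    }

# Example: attributions from an XAI method (e.g. IG) on two close series.
rng = np.random.default_rng(0)
e1 = rng.normal(size=100)
e2 = e1 + rng.normal(scale=0.05, size=100)  # a slightly perturbed neighbour
print(sensitivity_metrics(e1, e2))
```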

    Decision Explanation: Applying Contextual Importance and Contextual Utility in Affect Detection

    Explainable AI has recently paved the way to justify decisions made by black-box models in various areas, but the body of work in the field of affect detection is still limited. In this work, we evaluate a black-box outcome explanation for understanding humans’ affective states. We employ the two concepts of Contextual Importance (CI) and Contextual Utility (CU), emphasizing a context-aware explanation of the decisions of a non-linear model, in this case a neural network. The neural model is designed to detect individual mental states, measured by wearable sensors, in order to monitor the human user’s well-being. We conduct our experiments and outcome explanation on WESAD and MAHNOB-HCI, two multimodal affect computing datasets. The results reveal that, for a specific participant, the electrodermal activity, respiration and accelerometer signals in the first experiment, and the electrocardiogram and respiration signals in the second, contribute significantly to the classification of mental states. To the best of our knowledge, this is the first study leveraging the CI and CU concepts in the outcome explanation of an affect detection model.
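
    Contextual Importance and Contextual Utility follow Främling's general recipe: vary one feature of a fixed input across its range, observe the span of the model's output, and relate the actual output to that span. The sketch below is a minimal single-feature version; the model, the feature range and the assumption of a [0, 1] output scale are illustrative, not taken from the paper.

```python
import numpy as np

def ci_cu(model, x, j, feature_range, n_samples=50):
    """Contextual Importance (CI) and Contextual Utility (CU) for feature j.

    Varies feature j of input `x` over `feature_range` while holding the
    other features fixed, and observes how the model's output for the
    predicted class moves. `model` is any callable returning a scalar score.
    """
    lo, hi = feature_range
    outputs = []
    for v in np.linspace(lo, hi, n_samples):
        x_mod = np.array(x, dtype=float)
        x_mod[j] = v
        outputs.append(model(x_mod))
    outputs = np.array(outputs)
    cmin, cmax = outputs.min(), outputs.max()
    y = model(np.asarray(x, dtype=float))
    # Assumption: the output already lies in [0, 1] (e.g. a softmax
    # probability), so the absolute output range is 1.
    ci = (cmax - cmin) / 1.0
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.0
    return ci, cu

# Example with a toy scalar "model" over three features.
model = lambda x: 1.0 / (1.0 + np.exp(-(0.8 * x[0] - 0.2 * x[1])))
print(ci_cu(model, [0.5, 0.3, 0.9], j=0, feature_range=(0.0, 1.0)))
```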

    CN-Waterfall

    Affective computing solutions in the literature mainly rely on machine learning methods designed to accurately detect human affective states. Nevertheless, many of the proposed methods are based on handcrafted features, requiring sufficient expert knowledge in the realm of signal processing. With the advent of deep learning methods, attention has turned toward reduced feature engineering and more end-to-end machine learning. However, most of the proposed models rely on late fusion in a multimodal context, while addressing the interrelations between modalities at the intermediate level of data representation has been largely neglected. In this paper, we propose a novel deep convolutional neural network, called CN-Waterfall, consisting of two modules: Base and General. While the Base module focuses on the low-level representation of data from each single modality, the General module provides further information, indicating relations between modalities in the intermediate- and high-level data representations. The latter module has been designed based on theoretically grounded concepts in the Explainable AI (XAI) domain and consists of four different fusions, mainly tailored to correlation- and non-correlation-based modalities. To validate our model, we conduct an exhaustive experiment on WESAD and MAHNOB-HCI, two publicly and academically available datasets in the context of multimodal affective computing. We demonstrate that our proposed model significantly improves the performance of physiology-based multimodal affect detection.
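
    To make the Base/General split concrete, here is a toy PyTorch sketch: one convolutional Base module per modality, followed by a 'General' stage that fuses the intermediate representations. Only the simplest, concatenation-based fusion is shown; the published CN-Waterfall defines four fusions tailored to correlated and non-correlated modalities, and all layer sizes below are arbitrary.

```python
import torch
import torch.nn as nn

class BaseModule(nn.Module):
    """Per-modality low-level feature extractor (illustrative only)."""
    def __init__(self, in_channels, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):               # x: (batch, channels, time)
        return self.net(x).squeeze(-1)  # -> (batch, hidden)

class IntermediateFusionNet(nn.Module):
    """Toy two-module design: Base modules per modality, then a General
    stage fusing intermediate representations by concatenation."""
    def __init__(self, modality_channels, hidden=32, n_classes=3):
        super().__init__()
        self.bases = nn.ModuleList(BaseModule(c, hidden) for c in modality_channels)
        self.general = nn.Sequential(
            nn.Linear(hidden * len(modality_channels), 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, inputs):          # list of per-modality tensors
        feats = [base(x) for base, x in zip(self.bases, inputs)]
        return self.general(torch.cat(feats, dim=1))

# Example: three hypothetical modalities (e.g. EDA, ECG, respiration),
# each a 1-channel signal of 256 time steps, batch size 8.
net = IntermediateFusionNet([1, 1, 1])
batch = [torch.randn(8, 1, 256) for _ in range(3)]
print(net(batch).shape)  # torch.Size([8, 3])
```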