
    Interpolative and extrapolative reasoning in propositional theories using qualitative knowledge about conceptual spaces

    Many logical theories are incomplete, in the sense that non-trivial conclusions about particular situations cannot be derived from them using classical deduction. In this paper, we show how the ideas of interpolation and extrapolation, which are of crucial importance in many numerical domains, can be applied in symbolic settings to alleviate this issue in the case of propositional categorization rules. Our method is based on (mainly) qualitative descriptions of how different properties are conceptually related, where we identify conceptual relations between properties with spatial relations between regions in Gärdenfors conceptual spaces. The approach is centred around the view that categorization rules can often be seen as approximations of linear (or at least monotonic) mappings between conceptual spaces. We use this assumption to justify that whenever the antecedents of a number of rules stand in a relationship that is invariant under linear (or monotonic) transformations, their consequents should also stand in that relationship. A form of interpolative and extrapolative reasoning can then be obtained by applying this idea to the relations of betweenness and parallelism respectively. After discussing these ideas at the semantic level, we introduce a number of inference rules to characterize interpolative and extrapolative reasoning at the syntactic level, and show their soundness and completeness w.r.t. the proposed semantics. Finally, we show that the considered inference problems are PSPACE-hard in general, while implementations in polynomial time are possible under some relatively mild assumptions.
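
    To make the core inference pattern concrete, here is a minimal sketch (not the paper's calculus): it assumes, purely for illustration, that properties are represented by prototype points in a vector space, that rules map antecedent properties to consequent properties, and that betweenness is tested geometrically; all names and values below are invented.

```python
import numpy as np

# Toy prototypes for antecedent properties in a 1-d conceptual space
# (illustrative values; real conceptual spaces are higher-dimensional).
antecedent = {"cold": np.array([0.0]), "mild": np.array([0.5]), "hot": np.array([1.0])}

# Known categorization rules: antecedent property -> consequent property.
rules = {"cold": "low_yield", "hot": "high_yield"}

def between(a, b, c, tol=1e-6):
    """True if point b lies on the segment from a to c (within a tolerance)."""
    ac = c - a
    denom = float(np.dot(ac, ac))
    if denom < tol:
        return bool(np.allclose(a, b, atol=tol))
    t = float(np.dot(b - a, ac)) / denom
    return -tol <= t <= 1.0 + tol and np.linalg.norm(b - (a + t * ac)) < tol

def interpolate(query):
    """If the query antecedent lies between two rule antecedents, conclude that
    its consequent lies between the consequents of those rules."""
    for p1, c1 in rules.items():
        for p2, c2 in rules.items():
            if p1 != p2 and between(antecedent[p1], antecedent[query], antecedent[p2]):
                return f"the consequent of '{query}' is between '{c1}' and '{c2}'"
    return "no interpolative conclusion"

print(interpolate("mild"))  # -> the consequent of 'mild' is between 'low_yield' and 'high_yield'
```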

    Completing rule bases in symbolic domains by analogy making

    The paper considers the problem of completing a set of parallel if-then rules that provides a partial description of how a conclusion variable depends on the values of condition variables, where each variable takes its value among a finite ordered set of labels. The proposed approach does not require the use of fuzzy sets for the interpretation of these labels or for defining similarity measures, but rather relies on the extrapolation of missing rules on the basis of analogical proportions that hold for each variable between the labels of several parallel rules. The analogical proportions are evaluated for binary and multiple-valued variables on the basis of a logical expression involving Łukasiewicz implication. The underlying assumption is that the mapping partially specified by the given rules is as regular as suggested by these rules. A comparative discussion with other approaches is presented.
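
    As a rough illustration of rule completion by analogical proportion, the sketch below encodes ordered labels as values in [0, 1] and scores a : b :: c : d with a Łukasiewicz-style expression; both the encoding and the exact formula are assumptions made for this example, not necessarily those used in the paper.

```python
def proportion(a, b, c, d):
    """Degree to which a : b :: c : d holds for label values in [0, 1]
    (illustrative Lukasiewicz-style scoring: perfect when a differs from b
    exactly as c differs from d, and in the same direction)."""
    if (a - b) * (c - d) >= 0:
        return 1.0 - abs((a - b) - (c - d))
    return 1.0 - max(abs(a - b), abs(c - d))

def solve(a, b, c):
    """Value x in [0, 1] (if any) such that a : b :: c : x is a perfect proportion."""
    x = c + (b - a)
    return x if 0.0 <= x <= 1.0 else None

# Three parallel rules over condition variables (x, y) and conclusion z,
# with labels encoded on a [0, 1] scale, e.g. small=0.0, medium=0.5, large=1.0.
r1 = ((0.0, 0.0), 0.0)   # if x is small  and y is small  then z is small
r2 = ((0.5, 0.0), 0.5)   # if x is medium and y is small  then z is medium
r3 = ((0.0, 0.5), 0.5)   # if x is small  and y is medium then z is medium

def complete(query):
    """Conclusion for a condition not covered by any rule, obtained by
    extrapolating the analogical pattern r1 : r2 :: r3 : query."""
    (x1, y1), z1 = r1
    (x2, y2), z2 = r2
    (x3, y3), z3 = r3
    x4, y4 = query
    if min(proportion(x1, x2, x3, x4), proportion(y1, y2, y3, y4)) < 1.0:
        return None                # the conditions do not form a perfect proportion
    return solve(z1, z2, z3)       # conclusion that completes the proportion on z

print(complete((0.5, 0.5)))  # -> 1.0, i.e. "if x is medium and y is medium then z is large"
```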

    CBR and MBR techniques: review for an application in the emergencies domain

    The purpose of this document is to provide an in-depth analysis of current reasoning engine practice and of the integration strategies of Case-Based Reasoning (CBR) and Model-Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to: (a) provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions, and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning and Model-Based Reasoning technology to the management of emergency situations. This document is part of a deliverable for the RIMSAT project, and although it has been written in close contact with the requirements of the project, it provides an overview broad enough to serve as a state of the art in integration strategies between CBR and MBR technologies.

    Interpolative reasoning with default rules

    Default reasoning and interpolation are two important forms of commonsense rule-based reasoning. The former allows us to draw conclusions from incompletely specified states, by making assumptions on normality, whereas the latter allows us to draw conclusions from states that are not explicitly covered by any of the available rules. Although both approaches have received considerable attention in the literature, it is at present not well understood how they can be combined to draw reasonable conclusions from incompletely specified states and incomplete rule bases. In this paper, we introduce an inference system for interpolating default rules, based on a geometric semantics in which normality is related to spatial density and interpolation is related to geometric betweenness. We view default rules and information on the betweenness of natural categories as particular types of constraints on qualitative representations of Gärdenfors conceptual spaces. We propose an axiomatization, extending the well-known System P, and show its soundness and completeness w.r.t. the proposed semantics. Subsequently, we explore how our extension of preferential reasoning can be further refined by adapting two classical approaches for handling the irrelevance problem in default reasoning: rational closure and conditional entailment.
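
    At the symbolic level, the interpolation step can be pictured with a small sketch (an illustrative reading, not the paper's axiomatization): given default rules and qualitative betweenness facts over labels, a default is derived for an antecedent lying between two covered antecedents, with a consequent weakened to a disjunction of everything between the two known consequents; all labels below are invented.

```python
from itertools import product

# Illustrative defaults ("espressos are normally served in small cups", ...)
# and qualitative betweenness facts over antecedent and consequent labels.
defaults = {"espresso": "small_cup", "lungo": "large_cup"}
betweenness = {("espresso", "doppio", "lungo"),
               ("small_cup", "medium_cup", "large_cup")}

def between(x, y, z):
    return (x, y, z) in betweenness or (z, y, x) in betweenness

def interpolated_defaults():
    """From a |~ b, c |~ d and between(a, a2, c), derive a default for a2 whose
    consequent is the disjunction of b, d and everything between them."""
    labels = {lab for triple in betweenness for lab in triple}
    derived = {}
    for (a, b), (c, d) in product(defaults.items(), repeat=2):
        for a2 in labels - set(defaults):
            if between(a, a2, c):
                derived[a2] = sorted({b, d} | {y for y in labels if between(b, y, d)})
    return derived

print(interpolated_defaults())
# -> {'doppio': ['large_cup', 'medium_cup', 'small_cup']}
```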

    Dynamic Fuzzy Rule Interpolation


    Integrating ontologies and vector space embeddings using conceptual spaces

    Ontologies and vector space embeddings are among the most popular frameworks for encoding conceptual knowledge. Ontologies excel at capturing the logical dependencies between concepts in a precise and clearly defined way. Vector space embeddings excel at modelling similarity and analogy. Given these complementary strengths, there is a clear need for frameworks that can combine the best of both worlds. In this paper, we present an overview of our recent work in this area. We first discuss the theory of conceptual spaces, which was proposed in the 1990s by Gärdenfors as an intermediate representation layer in between embeddings and symbolic knowledge bases. We particularly focus on a number of recent strategies for learning conceptual space representations from data. Next, building on the idea of conceptual spaces, we discuss approaches where relational knowledge is modelled in terms of geometric constraints. Such approaches aim at a tight integration of symbolic and geometric representations, which unfortunately comes with a number of limitations. For this reason, we finally also discuss methods in which similarity, and other forms of conceptual relatedness, are derived from vector space embeddings and subsequently used to support flexible forms of reasoning with ontologies, thus enabling a looser integration between embeddings and symbolic knowledge.
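
    The last idea, using similarity derived from embeddings to support flexible reasoning with an ontology, can be sketched roughly as follows; the embeddings, the concept names and the "borrow the superclass of the nearest known concept" heuristic are all placeholders chosen for the example, not the methods surveyed in the paper.

```python
import numpy as np

# Toy concept embeddings (in practice these would come from a pre-trained model).
embeddings = {
    "espresso":   np.array([0.9, 0.1, 0.0]),
    "cappuccino": np.array([0.8, 0.3, 0.1]),
    "green_tea":  np.array([0.1, 0.2, 0.9]),
}

# A small ontology fragment: asserted subsumptions (concept -> superclass).
ontology = {"espresso": "coffee_drink", "green_tea": "tea_drink"}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def plausible_superclass(concept):
    """For a concept with no asserted superclass, borrow the superclass of its
    most similar known concept and report the similarity as a confidence score."""
    if concept in ontology:
        return ontology[concept], 1.0
    best = max(ontology, key=lambda known: cosine(embeddings[concept], embeddings[known]))
    return ontology[best], round(cosine(embeddings[concept], embeddings[best]), 2)

print(plausible_superclass("cappuccino"))  # -> ('coffee_drink', 0.96)
```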

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. This position paper gives examples of neurocognitive inspirations and promising directions in this area.

    Data-Driven Representation Learning in Multimodal Feature Fusion

    Modern machine learning systems leverage data and features from multiple modalities to gain more predictive power. In most scenarios, the modalities are vastly different and the acquired data are heterogeneous in nature. Consequently, building highly effective fusion algorithms is central to achieving improved model robustness and inferencing performance. This dissertation focuses on representation learning approaches as the fusion strategy. Specifically, the objective is to learn a shared latent representation that jointly exploits the structural information encoded in all modalities, such that a straightforward learning model can be adopted to obtain the prediction. We first consider sensor fusion, a typical multimodal fusion problem critical to building a pervasive computing platform. A systematic fusion technique is described to support both multiple sensors and descriptors for activity recognition. Targeted at learning the optimal combination of kernels, Multiple Kernel Learning (MKL) algorithms have been successfully applied to numerous fusion problems in computer vision and related fields. Utilizing the MKL formulation, we next describe an auto-context algorithm for learning image context via fusion with low-level descriptors. Furthermore, a principled fusion algorithm using deep learning to optimize kernel machines is developed. By bridging deep architectures with kernel optimization, this approach leverages the benefits of both paradigms and is applied to a wide variety of fusion problems. In many real-world applications, the modalities exhibit highly specific data structures, such as time sequences and graphs, and consequently special design of the learning architecture is needed. In order to improve the temporal modeling of multivariate sequences, we develop two architectures centered around attention models. A novel clinical time series analysis model is proposed for several critical problems in healthcare. Another model, coupled with a triplet ranking loss as a metric learning framework, is described to better solve speaker diarization. Compared to state-of-the-art recurrent networks, these attention-based multivariate analysis tools achieve improved performance while having lower computational complexity. Finally, in order to perform community detection on multilayer graphs, a fusion algorithm is described that derives node embeddings from word embedding techniques and also exploits the complementary relational information contained in each layer of the graph.
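
    As a very small illustration of the kernel-combination idea behind MKL-style fusion, the sketch below fixes the per-modality kernel weights by hand; the dissertation's algorithms learn such weights (and, later, deep kernel parameters) from data, and the modality names and feature shapes used here are arbitrary.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-wise sample sets X and Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def fused_kernel(X_a, X_b, weights=(0.5, 0.5), gammas=(1.0, 0.1)):
    """Convex combination of per-modality kernels; MKL would learn the weights."""
    return (weights[0] * rbf_kernel(X_a, X_a, gammas[0])
            + weights[1] * rbf_kernel(X_b, X_b, gammas[1]))

# Toy data: 5 samples described by a 3-d "audio" view and an 8-d "video" view.
rng = np.random.default_rng(0)
audio, video = rng.normal(size=(5, 3)), rng.normal(size=(5, 8))
K = fused_kernel(audio, video)
print(K.shape)  # (5, 5): a single kernel matrix usable by any kernel machine, e.g. an SVM
```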
    • 

    corecore