
    Fuzzy ontology representation using OWL 2

    The need to deal with vague information in Semantic Web languages is rising in importance and thus calls for a standard way to represent such information. We may address this issue either by extending current Semantic Web languages to cope with vagueness, or by providing a procedure to represent such information within current standard languages and tools. In this work, we follow the latter approach, by identifying the syntactic differences that a fuzzy ontology language has to cope with, and by proposing a concrete methodology to represent fuzzy ontologies using OWL 2 annotation properties. We also report on some prototypical implementations: a plug-in to edit fuzzy ontologies using OWL 2 annotations and some parsers that translate fuzzy ontologies represented using our methodology into the languages supported by some reasoners.
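    The core idea above is to attach a truth degree to an otherwise crisp axiom via an annotation, so standard OWL 2 tooling can store it unchanged. A minimal sketch of this pairing, where the annotation payload format is an illustrative assumption rather than the exact Fuzzy OWL 2 vocabulary from the paper:

    ```python
    # Sketch: pairing a crisp OWL 2-style axiom with an annotation that
    # carries its truth degree. The "fuzzyLabel" payload is illustrative,
    # not the actual annotation vocabulary proposed in the paper.

    def annotate_axiom(axiom: str, degree: float) -> tuple[str, str]:
        """Return the crisp axiom together with a degree annotation."""
        if not 0.0 <= degree <= 1.0:
            raise ValueError("degree must be in [0, 1]")
        annotation = f'fuzzyLabel "degree={degree}"'
        return axiom, annotation

    axiom, ann = annotate_axiom("ClassAssertion(:Tall :john)", 0.8)
    print(axiom)  # ClassAssertion(:Tall :john)
    print(ann)    # fuzzyLabel "degree=0.8"
    ```

    A parser in the spirit of the paper's prototypes could then strip the annotation to hand the crisp axiom to a classical reasoner, or read the degree to feed a fuzzy reasoner.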

    A GPFCSP-Based Fuzzy XQuery Interpreter

    Nowadays, XQuery has become the strongest standard for querying XML data. However, most real-world information takes the form of imprecise, vague, ambiguous, uncertain, and incomplete values. That is why there is a need for a flexible query language in which users can formulate queries that arise from their own criteria. In this paper, we propose an implementation of Fuzzy XQuery, an extension of the XQuery query language based on fuzzy set theory. In particular, we provide priority, threshold, and fuzzy expressions for handling flexible queries. In addition, we have implemented an interpreter for this language using the GPFCSP concept in a Java and eXist-db environment.
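    To make the priority and threshold mechanics concrete, here is a toy sketch of prioritized fuzzy condition evaluation in the spirit of GPFCSP: each condition yields a satisfaction degree in [0, 1], a priority weakens less important conditions, and a threshold filters results. The function names and the max-based weakening are assumptions for illustration, not the interpreter's actual semantics.

    ```python
    # Toy sketch of prioritized fuzzy constraint evaluation. A condition
    # with low priority is easy to satisfy; the overall degree combines
    # conditions with the minimum t-norm, then a threshold filters rows.

    def weaken(degree: float, priority: float) -> float:
        # priority near 0 makes the condition nearly irrelevant
        return max(degree, 1.0 - priority)

    def evaluate(conditions, threshold: float) -> bool:
        # prioritized conjunction via the minimum t-norm
        overall = min(weaken(d, p) for d, p in conditions)
        return overall >= threshold

    # Query: price is "cheap" (degree 0.9, priority 1.0) and
    # rating is "high" (degree 0.4, priority 0.5).
    print(evaluate([(0.9, 1.0), (0.4, 0.5)], threshold=0.5))  # True
    ```

    The low-priority rating condition is weakened to 0.5, so the row still clears the threshold even though its raw degree (0.4) would not.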

    Differentiable Logics for Neural Network Training and Verification

    The rising popularity of neural networks (NNs) in recent years and their increasing prevalence in real-world applications have drawn attention to the importance of their verification. While verification is known to be computationally difficult in theory, many techniques have been proposed for solving it in practice. It has been observed in the literature that, by default, neural networks rarely satisfy the logical constraints that we want to verify. A good course of action is to train the given NN to satisfy said constraints prior to verifying it. This idea is sometimes referred to as continuous verification, referring to the loop between training and verification. Usually, training with constraints is implemented by specifying a translation from a given formal logic language into loss functions, which are then used to train neural networks. Because these functions need to be differentiable for training purposes, these translations are called differentiable logics (DLs). This raises several research questions. What kinds of differentiable logics are possible? What difference does a specific choice of DL make in the context of continuous verification? What are the desirable criteria for a DL viewed from the point of view of the resulting loss function? In this extended abstract we will discuss and answer these questions. Comment: FOMLAS'22 paper
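    The translation the abstract describes maps logical connectives to differentiable operations on truth degrees, so a formula's "degree of truth" becomes a loss to minimize. A minimal sketch under the product t-norm and Goguen implication; the choice of operators and the loss shape are illustrative assumptions, since the point of the paper is precisely that many such choices exist:

    ```python
    # Sketch of a differentiable-logic translation: connectives become
    # smooth operations on truth degrees in [0, 1], and the loss is the
    # distance from full truth. Operator choices here are one of many
    # possible DLs, not a canonical one.

    def and_prod(a: float, b: float) -> float:
        return a * b                                # product t-norm

    def implies(a: float, b: float) -> float:
        return min(1.0, b / a) if a > 0 else 1.0    # Goguen implication

    def loss(truth_degree: float) -> float:
        return 1.0 - truth_degree                   # zero iff fully satisfied

    # Constraint: "condition A holds AND condition B holds",
    # where the network currently satisfies A to 0.5 and B fully.
    print(loss(and_prod(0.5, 1.0)))  # 0.5
    ```

    Different DLs (e.g. Łukasiewicz vs. product vs. Gödel operators) yield different gradients for the same formula, which is exactly the design space the paper's questions probe.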

    Reduced Implication-bias Logic Loss for Neuro-Symbolic Learning

    Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in Neuro-Symbolic systems. However, some differentiable operators can introduce a significant bias during backpropagation and degrade the performance of Neuro-Symbolic learning. In this paper, we reveal that this bias, named Implication Bias, is common in loss functions derived from fuzzy logic operators. Furthermore, we propose a simple yet effective method to transform the biased loss functions into a Reduced Implication-bias Logic Loss (RILL) to address this problem. An empirical study shows that RILL achieves significant improvements over the biased logic loss functions, especially when the knowledge base is incomplete, and remains more robust than the compared methods when labelled data is insufficient. Comment: ACML'2023 Journal Track (accepted by Machine Learning Journal)
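    The bias the abstract names can be seen in a two-line example: with a fuzzy implication such as Łukasiewicz's I(a, b) = min(1, 1 - a + b), the derived loss 1 - I(a, b) can be driven to zero simply by pushing the antecedent a toward 0, regardless of the consequent b. The sketch below is a toy illustration of that shortcut, not the RILL method itself:

    ```python
    # Toy illustration of implication bias: the loss of a fuzzy rule
    # "a implies b" under the Lukasiewicz implication can be minimized by
    # falsifying the antecedent, so gradient descent may learn nothing
    # about the consequent.

    def lukasiewicz_implies(a: float, b: float) -> float:
        return min(1.0, 1.0 - a + b)

    def rule_loss(a: float, b: float) -> float:
        return 1.0 - lukasiewicz_implies(a, b)

    print(rule_loss(0.0, 0.0))  # 0.0 -> rule "satisfied" vacuously
    print(rule_loss(1.0, 0.3))  # 0.7 -> antecedent true, consequent weak
    ```

    Because the vacuous solution (a = 0) has zero loss, the optimizer is biased toward it; RILL's contribution, per the abstract, is to reshape such losses so this shortcut is reduced.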

    Minimalistic fuzzy ontology reasoning: An application to Building Information Modeling

    This paper presents a minimalistic reasoning algorithm to solve imprecise instance retrieval in fuzzy ontologies, with an application to querying Building Information Models (BIMs), a knowledge representation formalism used in the construction industry. Our proposal is based on a novel lossless reduction of fuzzy to crisp reasoning tasks, which can be processed by any Description Logics reasoner. We implemented the minimalistic reasoning algorithm and performed an empirical evaluation of its performance on several tasks: interoperation with classical reasoners (HermiT and TrOWL), initialization time (comparing TrOWL and a SPARQL engine), and the use of different data structures (hash tables, databases, and programming interfaces). We show that our software can efficiently solve very expressive queries not currently available in regular or semantic BIM tools.
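    A common way to reduce fuzzy instance retrieval to crisp reasoning, which the sketch below illustrates, is the alpha-cut idea: a fuzzy assertion C(a) >= d is recast as membership of a in a crisp class such as "C_geq_d", so a classical reasoner can answer threshold queries. The class-naming scheme and data structures here are assumptions for illustration, not the paper's exact reduction:

    ```python
    # Sketch of a fuzzy-to-crisp reduction for imprecise instance
    # retrieval. Fuzzy assertions (individual, concept) -> degree are
    # compiled into crisp classes per cut level, which any classical
    # Description Logics reasoner could then work with.

    fuzzy_abox = {("john", "Tall"): 0.8, ("mary", "Tall"): 0.4}

    def crisp_classes(abox, cuts=(0.25, 0.5, 0.75)):
        crisp = {}
        for (ind, concept), degree in abox.items():
            for alpha in cuts:
                if degree >= alpha:
                    crisp.setdefault(f"{concept}_geq_{alpha}", set()).add(ind)
        return crisp

    def retrieve(crisp, concept, alpha):
        # "Which individuals belong to concept with degree >= alpha?"
        return crisp.get(f"{concept}_geq_{alpha}", set())

    crisp = crisp_classes(fuzzy_abox)
    print(retrieve(crisp, "Tall", 0.5))   # {'john'}
    print(retrieve(crisp, "Tall", 0.25))  # both individuals qualify
    ```

    Because the compilation only materializes classes for the cut levels that actually occur in queries, the crisp ontology stays small, which is in keeping with the "minimalistic" framing of the paper.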