448 research outputs found
Learning in Description Logics with Fuzzy Concrete Domains
Description Logics (DLs) are a family of logic-based Knowledge Representation (KR) formalisms, which are particularly suitable for representing incomplete yet precise structured knowledge.
Several fuzzy extensions of DLs have been proposed in the KR field in order to handle imprecise knowledge, which is particularly pervasive in those domains where entities are better described in natural language. Among the many approaches to fuzzification in DLs, a simple yet interesting one involves the use of fuzzy concrete domains. In this paper, we present a method for learning within the KR framework of fuzzy DLs. The method induces fuzzy DL inclusion axioms from any crisp DL knowledge base. Notably, the induced axioms may contain fuzzy concepts automatically generated from numerical concrete domains during the learning process. We discuss the results obtained on a popular learning problem in comparison with state-of-the-art DL learning algorithms, and on a test bed in order to evaluate the classification performance.
Inductive Logic Programming in Databases: from Datalog to DL+log
In this paper we address an issue that has been brought to the attention of
the database community with the advent of the Semantic Web, i.e. the issue of
how ontologies (and the semantics conveyed by them) can help solve typical
database problems, through a better understanding of KR aspects related to
databases. In particular, we investigate this issue from the ILP perspective by
considering two database problems, (i) the definition of views and (ii) the
definition of constraints, for a database whose schema is represented also by
means of an ontology. Both can be reformulated as ILP problems and can benefit
from the expressive and deductive power of the KR framework DL+log. We
illustrate the application scenarios by means of examples. Keywords: Inductive
Logic Programming, Relational Databases, Ontologies, Description Logics, Hybrid
Knowledge Representation and Reasoning Systems. Note: To appear in Theory and
Practice of Logic Programming (TPLP). Comment: 30 pages, 3 figures, 2 tables
Towards unsupervised ontology learning from data
Data-driven elicitation of ontologies from structured data is a well-recognized knowledge acquisition bottleneck. The development of efficient techniques for (semi-)automating this task is therefore practically vital, yet hindered by the lack of robust theoretical foundations. In this paper, we study the problem of learning Description Logic TBoxes from interpretations, which naturally translates to the task of ontology learning from data. In the presented framework, the learner is provided with a set of positive interpretations (i.e., logical models) of the TBox adopted by the teacher. The goal is to correctly identify the TBox given this input. We characterize the key constraints on the models that warrant finite learnability of TBoxes expressed in selected fragments of the Description Logic EL and define corresponding learning algorithms. This work was funded in part by the National Research Foundation under Grant no. 85482
Empowering Knowledge Bases: a Machine Learning Perspective
The construction of Knowledge Bases quite often requires the intervention
of knowledge engineers and domain experts, resulting in a time-consuming
task. Alternative approaches have been developed for building knowledge
bases from existing sources of information such as web pages and
crowdsourcing; seminal examples are NELL, DBpedia, YAGO and several
others. With the goal of building very large sources of knowledge, as
recently for the case of Knowledge Graphs, even more complex integration
processes have been set up, involving multiple sources of information,
human expert intervention, and crowdsourcing. Despite significant efforts
for making Knowledge Graphs as comprehensive and reliable as possible,
they tend to suffer from incompleteness and noise, due to the complex
building process. Nevertheless, even in highly human-curated knowledge
bases, cases of incompleteness can be found; for instance, disjointness
axioms are quite often missing. Machine learning methods have been
proposed with the purpose of refining, enriching, completing and possibly
raising potential issues in existing knowledge bases, while showing the
ability to cope with noise. The talk will concentrate on classes of
mostly symbol-based machine learning methods, specifically focusing on
concept learning, rule learning and disjointness axiom learning problems,
showing how the developed methods can be exploited for enriching existing
knowledge bases. During the talk it will be highlighted that a key
element of the illustrated solutions is the integration of background
knowledge, deductive reasoning and the evidence coming from the mass of
the data. The last part of the talk will be devoted to the presentation
of an approach for injecting background knowledge into numeric-based
embedding models to be used for predictive tasks on Knowledge Graphs.