Repairing Ontologies via Axiom Weakening
Ontology engineering is a hard and error-prone task, in which small changes may lead to errors, or even produce an inconsistent ontology. As ontologies grow in size, so does the need for automated methods that repair inconsistencies while preserving as much of the original knowledge as possible. Most previous approaches to this task are based on removing a few axioms from the ontology to regain consistency. We propose a new method based on weakening these axioms to make them less restrictive, employing refinement operators. We introduce a theoretical framework for weakening DL ontologies, propose algorithms to repair ontologies based on this framework, and analyse their computational complexity. Through an empirical analysis of real-life ontologies, we show that our approach preserves significantly more of the original knowledge of the ontology than removing axioms.
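The core contrast between the two repair strategies can be illustrated with a minimal sketch. This is a toy model, not the papers' algorithm: axioms are represented as sets of admissible interpretations, the ontology is "consistent" when those sets have a non-empty intersection, and "weakening" enlarges an axiom's set rather than deleting the axiom outright (a stand-in for applying a refinement operator to a DL axiom).

```python
def consistent(axioms):
    """A toy ontology is consistent iff some interpretation satisfies every axiom."""
    if not axioms:
        return True
    return bool(set.intersection(*axioms))

def weaken(axiom, universe):
    # Admit one extra interpretation; a stand-in for a generalisation
    # refinement step that makes the axiom less restrictive.
    extra = next(iter(universe - axiom), None)
    return axiom | {extra} if extra is not None else axiom

def repair_by_weakening(axioms, universe, max_steps=100):
    """Weaken axioms until consistency is regained, instead of removing them."""
    axioms = [set(a) for a in axioms]
    for _ in range(max_steps):
        if consistent(axioms):
            break
        # Heuristic: weaken the most restrictive (smallest) axiom first.
        i = min(range(len(axioms)), key=lambda j: len(axioms[j]))
        axioms[i] = weaken(axioms[i], universe)
    return axioms
```

Note that every repaired axiom is a superset of the original, so no axiom is lost entirely; this is the sense in which weakening preserves more knowledge than removal.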
A Toothful of Concepts: Towards a Theory of Weighted Concept Combination
We introduce a family of operators to combine Description Logic concepts. They aim to characterise complex concepts that apply to instances satisfying "enough" of the given concept descriptions. For instance, an individual might not have any tusks, but still be considered an elephant. To formalise the meaning of "enough", the operators take a list of weighted concepts as arguments, together with a threshold to be met. We commence a study of the formal properties of these operators, and study some variations. The intended applications concern the representation of cognitive aspects of classification tasks: the interdependencies among the attributes that define a concept, the prototype of a concept, and the typicality of its instances.
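The threshold mechanism described above can be sketched concretely. In this hypothetical rendering (the names and weights are illustrative, not from the papers), a combined concept accepts an individual when the summed weights of the component concepts it satisfies meet the threshold, which is how a tuskless individual can still count as an elephant.

```python
def weighted_combination(weighted_concepts, threshold):
    """weighted_concepts: list of (weight, predicate) pairs.
    Returns a membership predicate for the combined concept."""
    def member(individual):
        score = sum(w for w, concept in weighted_concepts if concept(individual))
        return score >= threshold
    return member

# Illustrative elephant-like concept: tusks are typical but not mandatory.
elephant = weighted_combination(
    [(0.4, lambda x: x.get("trunk", False)),
     (0.4, lambda x: x.get("big_ears", False)),
     (0.2, lambda x: x.get("tusks", False))],
    threshold=0.6,
)
```

With these (made-up) weights, an individual with a trunk and big ears but no tusks scores 0.8 and is classified as an elephant, while tusks alone (0.2) are not enough.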
Repairing Socially Aggregated Ontologies Using Axiom Weakening
Ontologies represent principled, formalised descriptions of agents’ conceptualisations of a domain. For a community of agents, these descriptions may differ among agents. We propose an aggregative view of the integration of ontologies based on Judgement Aggregation (JA). Agents may vote on statements of the ontologies, and we aim at constructing a collective, integrated ontology that reflects the individual conceptualisations as much as possible. As several results in JA show, many attractive and widely used aggregation procedures are prone to return inconsistent collective ontologies. We propose to solve the possible inconsistencies in the collective ontology by applying suitable weakenings of the axioms that cause them.
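The aggregation step can be sketched with a minimal, hypothetical example: each agent submits a set of accepted axioms, and axiom-wise majority voting keeps those accepted by a strict majority. (The point made above is that the resulting collective set may nonetheless be inconsistent, which is where axiom weakening comes in; the axiom strings here are purely illustrative.)

```python
from collections import Counter

def majority_aggregate(ballots):
    """ballots: one set of accepted axioms per agent.
    Returns the axioms accepted by a strict majority of agents."""
    counts = Counter(axiom for ballot in ballots for axiom in ballot)
    n = len(ballots)
    return {axiom for axiom, c in counts.items() if c > n / 2}
```

For instance, with three agents voting on toy subsumption statements, only the statements backed by at least two agents survive into the collective ontology.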
Pink panthers and toothless tigers: three problems in classification
Many aspects of how humans form and combine concepts are notoriously difficult to capture formally. In this paper, we focus on the representation of three particular such aspects, namely overextension, underextension, and dominance. Inspired in part by the work of Hampton, we consider concepts as given through a prototype view, and by considering the interdependencies between the attributes that define a concept. To approach this formally, we employ a recently introduced family of operators that enrich Description Logic languages. These operators aim to characterise complex concepts by collecting those instances that apply, in a finely controlled way, to ‘enough’ of the concept’s defining attributes. Here, the meaning of ‘enough’ is technically realised by accumulating weights of satisfied attributes and comparing with a given threshold that needs to be met.
Coherence, similarity, and concept generalisation
We address the problem of analysing the joint coherence of a number of concepts with respect to a background ontology. To address this problem, we explore the applicability of Paul Thagard's computational theory of coherence, in combination with semantic similarity between concepts based on a generalisation operator. In particular, given the input concepts, our approach computes maximally coherent subsets of these concepts following Thagard's partitioning approach, whilst returning a number of possible generalisations of these concepts as justification of why these concepts cohere.
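A brute-force sketch conveys the partitioning idea, under simplifying assumptions that are not the papers' method: pairwise scores stand in for ontology-based similarity (positive for cohering pairs, negative for incohering ones), and the maximally coherent subset is the one maximising total internal coherence.

```python
from itertools import combinations

def most_coherent_subset(concepts, score):
    """score(a, b) > 0 for cohering pairs, < 0 for incohering ones.
    Exhaustively searches subsets for the one with maximal internal coherence."""
    best, best_value = set(), 0.0
    for k in range(1, len(concepts) + 1):
        for subset in combinations(concepts, k):
            value = sum(score(a, b) for a, b in combinations(subset, 2))
            if value > best_value:
                best, best_value = set(subset), value
    return best
```

Exhaustive search is exponential in the number of concepts, so it only illustrates the objective being optimised; Thagard's own proposals include more scalable approximations.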