Termination of Narrowing: Automated Proofs and Modularity Properties
In 1936, Alan Turing proved that the halting problem, that is, the problem of deciding whether or not a program terminates, is undecidable for the vast majority of programming languages. Despite this, termination is such a relevant problem that, over the last decades, a large number of techniques have been developed to prove termination automatically for as many programs as possible. Term rewriting systems provide a perfect abstract theoretical framework for the study of program termination. In this framework, the evaluation of a term consists of the non-deterministic application of a set of rewrite rules.
Narrowing is a generalization of rewriting that provides a mechanism for automated reasoning. For example, given a set of rules defining addition and multiplication, rewriting makes it possible to compute arithmetic expressions, whereas narrowing makes it possible to solve equations containing variables. This thesis constitutes the first in-depth study of the termination properties of narrowing. Its contributions are the following.
First, we identify classes of systems in which narrowing is well behaved, in the sense that it always terminates. Many automated reasoning methods, such as the analysis of programming language semantics by means of fixpoint operators, benefit from this characterization.
Second, we introduce an automatic method, based on the dependency pair framework, for proving the termination of narrowing in a particular system. Our method is, for the first time, applicable to any class of systems.
Third, we propose a new method for studying the termination of narrowing starting from a particular term, which enables the termination analysis of programming languages. The new method generalizes the …
Iborra López, J. (2010). Termination of Narrowing: Automated Proofs and Modularity Properties [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/19251
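The contrast the abstract draws between rewriting (computing) and narrowing (equation solving) can be sketched on Peano arithmetic. This is a hand-rolled illustration, not the thesis's formalism: the term encoding and the helper names `rewrite_add` and `narrow_add` are invented for this example, and `narrow_add` hard-codes the goal shape `add(X, s(zero)) = goal`.

```python
# Minimal sketch: rewriting evaluates ground terms deterministically,
# while narrowing instantiates a logic variable X to solve an equation.
# Peano numerals are nested tuples: ('zero',), ('s', t), ...

ZERO = ('zero',)

def s(t):          # successor constructor
    return ('s', t)

def num(n):        # build the Peano numeral for n
    return ZERO if n == 0 else s(num(n - 1))

def rewrite_add(x, y):
    """Rewriting: evaluate add(x, y) with the two rules
       add(zero, y) -> y   and   add(s(x), y) -> s(add(x, y))."""
    if x == ZERO:
        return y
    return s(rewrite_add(x[1], y))

def narrow_add(goal):
    """Narrowing: solve add(X, s(zero)) = goal for X by unifying X
       against each rule's left-hand-side pattern and continuing."""
    solutions = []
    # Rule add(zero, y) -> y with X = zero yields s(zero); keep X if it matches.
    if goal == s(ZERO):
        solutions.append(ZERO)
    # Rule add(s(X1), y) -> s(add(X1, y)): goal must be s(goal1); recurse on X1.
    if goal[0] == 's':
        solutions += [s(x1) for x1 in narrow_add(goal[1])]
    return solutions

print(rewrite_add(num(2), num(3)) == num(5))   # rewriting computes 2 + 3 = 5
print(narrow_add(num(3)) == [num(2)])          # narrowing solves X + 1 = 3, X = 2
```

Rewriting only ever reduces a ground term; narrowing additionally guesses an instantiation for `X` at each rule application, which is exactly why its termination behaviour is harder to analyse.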
Unification in the Description Logic EL
The Description Logic EL has recently drawn considerable attention since, on
the one hand, important inference problems such as the subsumption problem are
polynomial. On the other hand, EL is used to define large biomedical
ontologies. Unification in Description Logics has been proposed as a novel
inference service that can, for example, be used to detect redundancies in
ontologies. The main result of this paper is that unification in EL is
decidable. More precisely, EL-unification is NP-complete, and thus has the same
complexity as EL-matching. We also show that, w.r.t. the unification type, EL
is less well-behaved: it is of type zero, which in particular implies that
there are unification problems that have no finite complete set of unifiers.
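The redundancy-detection use of unification mentioned above can be illustrated with a toy example. This sketch is heavily simplified: EL concepts are modeled as sets of atoms, the concept names (`Injury`, `Head`, role `location`) are invented, and equivalence is checked only up to set structure, not full EL equivalence modulo subsumption, which is what makes real EL-unification NP-complete.

```python
# Toy EL-style concepts: a concept is a frozenset of atoms, where an atom
# is either a concept name (string) or an existential restriction
# ('exists', role, concept). Names starting with '?' are variables.

def substitute(concept, sigma):
    """Apply a substitution sigma (variable -> concept) to a concept."""
    out = set()
    for atom in concept:
        if isinstance(atom, str) and atom.startswith('?'):
            out |= sigma.get(atom, frozenset({atom}))   # splice in the image
        elif isinstance(atom, tuple):                   # recurse under 'exists'
            out.add(('exists', atom[1], substitute(atom[2], sigma)))
        else:
            out.add(atom)
    return frozenset(out)

# Two definitions that look different but unify, hinting at a redundancy:
head_injury    = frozenset({'Injury', ('exists', 'location', frozenset({'Head'}))})
injury_to_head = frozenset({'Injury', ('exists', 'location', frozenset({'?X'}))})

sigma = {'?X': frozenset({'Head'})}   # a unifier, found here by inspection
print(substitute(injury_to_head, sigma) == head_injury)   # True
```

A unification algorithm would search for `sigma` automatically; the paper's result is that deciding whether such a substitution exists is NP-complete for EL.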
Pseudo-contractions as Gentle Repairs
Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed in the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
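The weaken-rather-than-delete idea can be made concrete with a propositional toy example. The knowledge base, the unwanted consequence, and the chosen weaker replacement below are all invented for illustration; the paper works with ontologies and general entailment, not this brute-force propositional check.

```python
from itertools import product

# Toy "gentle repair": instead of deleting a sentence that leads to an
# unwanted consequence, replace it with a strictly weaker one.
# Formulas are functions from valuations (dicts) to bool.

ATOMS = ('p', 'q', 'r')

def entails(kb, goal):
    """Brute-force entailment over the fixed atom set."""
    for vals in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(f(v) for f in kb) and not goal(v):
            return False
    return True

p       = lambda v: v['p']
q       = lambda v: v['q']
p_imp_q = lambda v: (not v['p']) or v['q']                # p -> q
weaker  = lambda v: (not (v['p'] and v['r'])) or v['q']   # (p & r) -> q

kb = [p, p_imp_q]
print(entails(kb, q))              # True: q is the unwanted consequence
# Classical contraction would drop p_imp_q entirely; a gentle repair
# swaps it for the weaker sentence, keeping more information.
repaired = [p, weaker]
print(entails(repaired, q))        # False: q is no longer entailed
print(entails([p_imp_q], weaker))  # True: the replacement really is weaker
```

The last check is the key invariant shared by pseudo-contractions and gentle repairs: every replacement sentence must be entailed by the sentence it replaces.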
Role-Value Maps and General Concept Inclusions in the Description Logic FL₀
We investigate the impact that general concept inclusions and role-value maps have on the complexity and decidability of reasoning in the Description Logic FL₀. On the one hand, we give a more direct proof of ExpTime-hardness of subsumption w.r.t. general concept inclusions in FL₀. On the other hand, we determine restrictions on role-value maps that ensure decidability of subsumption, but we also show undecidability for the cases where these restrictions are not satisfied.
Fuzzy Description Logics with General Concept Inclusions
Description logics (DLs) are used to represent knowledge of an application domain and provide standard reasoning services to infer consequences of this knowledge. However, classical DLs are not suited to represent vagueness in the description of the knowledge. We consider a combination of DLs and Fuzzy Logics to address this task. In particular, we consider the t-norm-based semantics for fuzzy DLs introduced by Hájek in 2005. Since then, many tableau algorithms have been developed for reasoning in fuzzy DLs. Another popular approach is to reduce fuzzy ontologies to classical ones and use existing highly optimized classical reasoners to deal with them. However, a systematic study of the computational complexity of the different reasoning problems is so far missing from the literature on fuzzy DLs. Recently, some of the developed tableau algorithms have been shown to be incorrect in the presence of general concept inclusion axioms (GCIs). In some fuzzy DLs, reasoning with GCIs has even turned out to be undecidable. This work provides a rigorous analysis of the boundary between decidable and undecidable reasoning problems in t-norm-based fuzzy DLs, in particular for GCIs. Existing undecidability proofs are extended to cover large classes of fuzzy DLs, and decidability is shown for most of the remaining logics considered here. Additionally, the computational complexity of reasoning in fuzzy DLs with semantics based on finite lattices is analyzed. For most decidability results, tight complexity bounds can be derived.
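The t-norm-based semantics mentioned above interprets conjunction by a continuous t-norm on [0, 1] and implication by its residuum. As a sketch, the three fundamental continuous t-norms (Gödel, product, Łukasiewicz) and a grid approximation of the residuum look as follows; the grid-based `residuum` is an illustrative shortcut, since all three residua have simple closed forms.

```python
# The three fundamental continuous t-norms used in Hájek-style fuzzy
# semantics; truth degrees live in [0, 1].

def goedel(x, y):      return min(x, y)
def product_t(x, y):   return x * y
def lukasiewicz(x, y): return max(0.0, x + y - 1.0)

def residuum(tnorm, x, y):
    """The residuum sup {z | tnorm(x, z) <= y}, interpreting x -> y.
       Approximated on a 0.001 grid here purely for illustration."""
    return max(z / 1000 for z in range(1001) if tnorm(x, z / 1000) <= y + 1e-9)

# Under Goedel semantics, x -> y is 1 when x <= y (and y otherwise):
print(residuum(goedel, 0.3, 0.7))       # 1.0
# Under Lukasiewicz semantics, x -> y is min(1, 1 - x + y):
print(residuum(lukasiewicz, 0.9, 0.4))  # 0.5
```

The choice of t-norm is exactly what separates the decidable from the undecidable cases analysed in the work: the same GCI can behave very differently under Gödel, product, and Łukasiewicz semantics.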
Design and Implementation of the Andromeda Proof Assistant
Andromeda is an LCF-style proof assistant where the user builds derivable judgments by writing code in a meta-level programming language AML. The only trusted component of Andromeda is a minimalist nucleus (an implementation of the inference rules of an object-level type theory), which controls construction and decomposition of type-theoretic judgments.
Since the nucleus does not perform complex tasks like equality checking beyond syntactic equality, this responsibility is delegated to the user, who implements one or more equality checking procedures in the meta-language. The AML interpreter requests witnesses of equality from user code using the mechanism of algebraic operations and handlers. Dynamic checks in the nucleus guarantee that no invalid object-level derivations can be constructed.
To demonstrate the flexibility of this system structure, we implemented a nucleus consisting of dependent type theory with equality reflection. Equality reflection provides a very high level of expressiveness, as it allows the user to add new judgmental equalities, but it also destroys desirable meta-theoretic properties of type theory (such as decidability and strong normalization).
The power of effects and handlers in AML is demonstrated by a standard library that provides default algorithms for equality checking, computation of normal forms, and implicit argument filling. Users can extend these default algorithms by providing local "hints" or replace them entirely for particular developments. We demonstrate the resulting system by showing how to axiomatize and compute with natural numbers, by axiomatizing the untyped lambda-calculus, and by implementing a simple automated system for managing a universe of types.
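The LCF-style architecture described above can be sketched in a few lines: the defining property is that values of the trusted judgment type can only be produced by the nucleus's inference rules, so untrusted user code can combine derivations but never forge one. The toy equational rules and the capability-token trick below are illustrative and have nothing to do with Andromeda's actual type theory or its OCaml implementation.

```python
# Minimal LCF-style nucleus sketch: Theorem values for a toy equational
# theory 'lhs = rhs' are constructible only via the inference rules,
# which hold a private capability token.

_NUCLEUS = object()   # private capability; user code never sees it

class Theorem:
    """Trusted judgment 'lhs = rhs'; buildable only inside the nucleus."""
    def __init__(self, lhs, rhs, _token=None):
        if _token is not _NUCLEUS:
            raise ValueError("theorems can only be built by inference rules")
        self.lhs, self.rhs = lhs, rhs
    def __repr__(self):
        return f"|- {self.lhs} = {self.rhs}"

# The nucleus: reflexivity, symmetry, transitivity.
def refl(t):
    return Theorem(t, t, _token=_NUCLEUS)

def sym(th):
    return Theorem(th.rhs, th.lhs, _token=_NUCLEUS)

def trans(th1, th2):
    if th1.rhs != th2.lhs:                     # dynamic check, as in Andromeda
        raise ValueError("conclusions do not compose")
    return Theorem(th1.lhs, th2.rhs, _token=_NUCLEUS)

# Untrusted "user code" (the analogue of AML programs) can only chain rules:
print(trans(sym(refl("a")), refl("a")))        # |- a = a
```

Equality checking in this picture is deliberately outside the nucleus: user code decides *which* rule applications to attempt, while the dynamic checks inside the rules guarantee that whatever it builds is a valid derivation.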