Inference Rules in some temporal multi-epistemic propositional logics
Multi-modal logics are among the best tools developed so far to analyse
human reasoning and agents’ interactions. Recently multi-modal
logics have found several applications in Artificial Intelligence (AI) and
Computer Science (CS) in the attempt to formalise reasoning about
the behavior of programs. Modal logics deal with sentences that are
qualified by modalities. A modality is any word that could be added to
a statement p to modify its mode of truth. Temporal logics are obtained
by adding tense operators to the classical propositional calculus, giving
rise to a language that is very effective at describing the flow of time.
Epistemic logics are suited to formalising reasoning about agents possessing
certain knowledge. Combinations of temporal and epistemic logics are
particularly effective in describing the interaction of agents through the
flow of time. Although not yet fully investigated, this approach has
found many fruitful applications, concerned with the development
of systems modelling reasoning about knowledge and space,
reasoning under uncertainty, multi-agent reasoning, etc.
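To make the semantic setting concrete, the following sketch evaluates formulas of a small temporal-epistemic language over a finite Kripke model. The operator names and model encoding are our own illustration, not the systems studied in the thesis, and the temporal relation is assumed to be already transitive.

```python
# Toy Kripke-model evaluation for a language with an epistemic operator
# 'K' ("the agent knows") and a temporal operator 'F' ("at some future
# moment"). Formulas are nested tuples; propositional letters are strings.

def holds(model, world, formula):
    """Evaluate `formula` at `world` in `model`.

    model: dict with 'R_time' and 'R_know' as sets of (w, v) pairs and
    'val' mapping each propositional letter to the set of worlds where
    it is true. Assumes R_time is transitive, so one step reaches all
    future moments.
    """
    if isinstance(formula, str):                       # propositional letter
        return world in model['val'].get(formula, set())
    op = formula[0]
    if op == 'not':
        return not holds(model, world, formula[1])
    if op == 'and':
        return holds(model, world, formula[1]) and holds(model, world, formula[2])
    if op == 'F':       # true at some temporal successor
        return any(holds(model, v, formula[1])
                   for (w, v) in model['R_time'] if w == world)
    if op == 'K':       # true at every epistemically related world
        return all(holds(model, v, formula[1])
                   for (w, v) in model['R_know'] if w == world)
    raise ValueError('unknown operator: %r' % (op,))
```

With a two-world model where p holds only at the later world, the formula ('K', ('F', 'p')) ("the agent knows that p will eventually hold") is true at the earlier world, illustrating how the tense and knowledge modalities interact over one frame.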
Despite their power, multi-modal languages cannot handle a changing
environment, yet this is exactly what is required in the case of human
reasoning, computation and multi-agent environments. For this purpose,
inference rules are a core instrument. So far, research in this
field has investigated many modal and superintuitionistic logics; however,
in the case of multi-modal logics, not much is known about
admissible inference rules.
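A classical illustration of the gap between derivable and admissible rules, drawn here from intuitionistic rather than multi-modal logic purely as an example, is Harrop's rule:

```latex
\frac{\neg p \rightarrow (q \vee r)}{(\neg p \rightarrow q) \vee (\neg p \rightarrow r)}
```

This rule is admissible in the intuitionistic propositional calculus: every substitution making the premiss provable also makes the conclusion provable. It is not derivable, however, since the corresponding implication is not an intuitionistic theorem.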
In our research we extend the investigation to some multi-modal propositional logics which combine tense and knowledge modalities. To the best
of our knowledge, these systems have never been investigated before.
In particular, we start by defining our systems semantically; we then
prove that these systems enjoy the effective finite model property and
are decidable with respect to their admissible inference rules. We next
turn our attention to the syntactic side and provide sound and
complete axiomatic systems. We conclude our dissertation by introducing
the reader to the research we are currently working on.
Our original results can be found in [9, 4, 11] (see Appendix A). They
have also been presented by the author at some international conferences
and schools (see [8, 10, 5, 7, 6] and refer to Appendix B for more
details).
Our project concerns philosophy, mathematics, AI and CS. Modern
applications of logic in CS and AI often require languages able to represent
knowledge about dynamic systems. Multi-modal logics serve
these applications in a very efficient way, and we aim to absorb and
develop some of these techniques to represent logical consequence in
artificial intelligence and computation.
Decidability of Multi-modal Logic LTK of Linear Time and Knowledge
The paper investigates modal (temporal and epistemic) logics based on a semantic approach, with models combining knowledge and time. We introduce multi-modal logics LTKr and LTKir, containing modalities for knowledge and time, as the sets of all LTKr-valid and LTKir-valid formulae for classes of special LTKr-frames and LTKir-frames, respectively. The main results of the paper are theorems stating that LTKr and LTKir enjoy the effective finite model property and are therefore decidable; we also give an explicit deciding algorithm.
Unification in Linear Modal Logic on Non-transitive Time with the Universal Modality
We investigate the question of unification in the linear modal logic on non-transitive time with the universal modality. A semantic construction of the logic on linear non-transitive Kripke frames is proposed, and effective definability and projectivity of the unifiable formulas are proved. An algorithm for constructing a most general unifier is given.
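Unification in modal logics is considerably harder than the first-order case, but the notion of a most general unifier can be illustrated with classical syntactic unification. The following sketch is illustrative only and not from the paper; it treats bare strings as variables and tuples ('f', arg1, ..., argn) as compound terms.

```python
# Syntactic first-order unification with occurs check.
# Variables: strings. Compound terms: (function_symbol, arg1, ..., argn).

def substitute(term, sigma):
    """Apply the (triangular) substitution sigma to a term, chasing bindings."""
    if isinstance(term, str):
        return substitute(sigma[term], sigma) if term in sigma else term
    return (term[0],) + tuple(substitute(a, sigma) for a in term[1:])

def occurs(var, term, sigma):
    """Does var occur in term after applying sigma? (Prevents cyclic bindings.)"""
    term = substitute(term, sigma)
    if isinstance(term, str):
        return term == var
    return any(occurs(var, a, sigma) for a in term[1:])

def unify(s, t, sigma=None):
    """Return a most general unifier of s and t, or None if none exists."""
    sigma = dict(sigma or {})
    s, t = substitute(s, sigma), substitute(t, sigma)
    if s == t:
        return sigma
    if isinstance(s, str):
        if occurs(s, t, sigma):
            return None                    # occurs check fails: no finite unifier
        sigma[s] = t
        return sigma
    if isinstance(t, str):
        return unify(t, s, sigma)
    if s[0] != t[0] or len(s) != len(t):   # clash of function symbols or arities
        return None
    for a, b in zip(s[1:], t[1:]):
        sigma = unify(a, b, sigma)
        if sigma is None:
            return None
    return sigma
```

For example, unifying f(x, g(y)) with f(g(z), x) yields the mgu {x -> g(z), y -> z}; any other unifier is an instance of it.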
Unification and Finite Model Property for Linear Step-Like Temporal Multi-Agent Logic with the Universal Modality
This paper proposes a semantic description of the linear step-like temporal multi-agent logic with the universal modality, based on the idea of the non-reflexive, non-transitive nature of time. We prove the finite model property and projective unification for this logic.
Combining reinforcement learning and optimal control for the control of nonlinear dynamical systems
This thesis presents a novel hierarchical learning framework, Reinforcement Learning Optimal Control,
for controlling nonlinear dynamical systems with continuous states and actions. The adopted approach
mimics the neural computations that allow our brain to bridge the divide between symbolic
action-selection and low-level actuation control by operating at two levels of abstraction. First, current
findings demonstrate that at the level of limb coordination human behaviour is explained by linear
optimal feedback control theory, where cost functions match energy and timing constraints of tasks.
Second, humans learn cognitive tasks involving learning symbolic level action selection, in terms of
both model-free and model-based reinforcement learning algorithms. We postulate that the ease with
which humans learn complex nonlinear tasks arises from combining these two levels of abstraction.
The Reinforcement Learning Optimal Control framework learns the local task dynamics from naive
experience using an expectation maximization algorithm for estimation of linear dynamical systems
and forms locally optimal Linear Quadratic Regulators, producing continuous low-level control. A
high-level reinforcement learning agent uses these available controllers as actions and learns how to
combine them in state space while maximizing a long-term reward. The optimal control costs form
training signals for the high-level symbolic learner. The algorithm demonstrates that a small number of
locally optimal linear controllers can be combined in a smart way to solve global nonlinear control
problems, and forms a proof of principle of how the brain may bridge the divide between low-level
continuous control and high-level symbolic action selection. It competes in terms of computational
cost and solution quality with state-of-the-art control, which is illustrated with solutions to benchmark
problems.
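The low-level layer of such a framework rests on standard LQR machinery. As an illustrative sketch only (scalar, hand-written dynamics rather than the learned multivariate models of the thesis), a finite-horizon discrete-time Riccati recursion looks like this:

```python
# Finite-horizon LQR for a scalar system x_{t+1} = a*x_t + b*u_t
# with stage cost q*x^2 + r*u^2. Illustrative sketch; the thesis
# estimates multivariate dynamics from data before forming controllers.

def lqr_gains(a, b, q, r, horizon):
    """Backward Riccati recursion; returns gains k_t with u_t = -k_t * x_t."""
    p = q                                   # terminal cost-to-go P_T = q
    gains = []
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)   # optimal gain at this step
        p = q + a * p * (a - b * k)         # Riccati update for cost-to-go
        gains.append(k)
    gains.reverse()                         # recursion runs backward in time
    return gains

def rollout(a, b, x0, gains):
    """Simulate the closed loop from x0 using the time-varying gains."""
    xs = [x0]
    for k in gains:
        u = -k * xs[-1]
        xs.append(a * xs[-1] + b * u)
    return xs
```

Even for an open-loop unstable system (a > 1), the computed gains drive the state toward zero; a library of such locally valid controllers is what the high-level reinforcement learner then selects among.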
Non-normal modal logics, quantification, and deontic dilemmas. A study in multi-relational semantics
This dissertation is devoted to the study of non-normal (modal) systems for deontic logics, both at the propositional level and at the first-order one. In particular, we develop our study in the multi-relational setting, which generalises standard Kripke semantics. We present new completeness results concerning the semantic setting of several systems which are able to handle normative dilemmas and conflicts. Although primarily driven by issues related to the legal and moral fields, these results are also relevant for the more theoretical field of modal logic itself, as we propose a syntactic and semantic study of intermediate systems between the classical propositional calculus CPC and the minimal normal modal logic K.
Metasemantics and fuzzy mathematics
The present thesis is an inquiry into the metasemantics of natural languages, with a particular focus on the philosophical motivations for countenancing degreed formal frameworks for both psychosemantics and truth-conditional semantics. Chapter 1 sets out to offer a bird's eye view of our overall research project and the key questions that we set out to address. Chapter 2 provides a self-contained overview of the main empirical findings in the cognitive science of concepts and categorisation. This scientific background is offered in light of the fact that most variants of psychologically-informed semantics see our network of concepts as providing the raw materials on which lexical and sentential meanings supervene. Consequently, the metaphysical study of internalistically-construed meanings and the empirical study of our mental categories are overlapping research projects. Chapter 3 closely investigates a selection of species of conceptual semantics, together with reasons for adopting or disavowing them. We note that our ultimate aim is not to defend these perspectives on the study of meaning, but to argue that the project of making them formally precise naturally invites the adoption of degreed mathematical frameworks (e.g. probabilistic or fuzzy). In Chapter 4, we switch to the orthodox framework of truth-conditional semantics, and we present the limitations of a philosophical position that we call "classicism about vagueness". In the process, we come up with an empirical hypothesis for the psychological pull of the inductive soritical premiss and we make an original objection against the epistemicist position, based on computability theory. Chapter 5 makes a different case for the adoption of degreed semantic frameworks, based on their (quasi-)superior treatments of the paradoxes of vagueness. Hence, the adoption of tools that allow for graded membership are well-motivated under both semantic internalism and semantic externalism. 
At the end of this chapter, we defend an unexplored view of vagueness that we call "practical fuzzicism". Chapter 6, the final chapter, is a metamathematical enquiry into both the fuzzy model-theoretic semantics and the fuzzy Davidsonian semantics for formal languages of type-free truth in which precise truth-predications can be expressed.
Admissible inference rules in the linear logic of knowledge and time LTK
The paper investigates admissible inference rules for the multi-modal logic LTK, which describes a combination of linear time and knowledge. This logic is defined semantically as the set of all LTK-valid formulae, where LTK-frames are multi-modal Kripke frames combining a linear and discrete representation of the flow of time with special S5-like modalities, defined at each time cluster and representing knowledge. We start by revisiting the effective finite model property in this particular case, while the central part of the paper is devoted to constructing special n-characterising models for LTK. Such structures allow us to find an algorithm determining admissible inference rules in LTK; the main result of this work is that LTK is decidable with respect to inference rules. © 2006 Oxford University Press
Term rewriting and explicit substitutions (Reescritura de términos y sustituciones explícitas)
Substitution spans many areas in programming language theory.
It plays a central role in the lambda calculus (hence functional programming), in first- and higher-order unification (hence logic programming), in parameter-passing methods (hence imperative programming), etc. Recently, researchers became interested in shifting from the usual atomic, coarse-grained view of substitution to a more refined, fine-grained one. Substitution is promoted from the metalevel (our language of discourse) to the object level (our language of study). This is interesting when studying the operational interpretation of the formalisms in question. Calculi of object-level, or explicit, substitution are the concern of this thesis. The following three study axes are developed. First, we consider perpetual rewrite strategies in lambda calculi of explicit substitutions. These are rewrite strategies that preserve the possibility of infinite derivations. Also, we study how to characterize inductively the set of terms that do not possess infinite derivations (the strongly normalizing terms). The polymorphic lambda calculus with explicit substitutions receives our attention too, including properties such as subject reduction and strong normalization. Secondly, we put the ς-calculus of M. Abadi and L. Cardelli augmented with explicit substitutions under the microscope. This calculus is at the level of the lambda calculus but is based on objects instead of functions. Properties such as simulation of the lambda calculus, confluence and preservation of strong normalization (terms which are strongly normalizing in ς are also strongly normalizing in ς with explicit substitutions) are considered. Finally, we address the task of reducing higher-order rewriting to first-order rewriting. We fix a variant of Z. Khasidashvili's ERS (dubbed SERSdb) as our departing formalism and provide a conversion procedure to encode any SERSdb as a first-order rewrite system in which a rewrite step takes place modulo an equational theory determined by a calculus of explicit substitutions.
The latter is achieved with the aid of a macro-based presentation of calculi of explicit substitutions, thus parametrizing the conversion procedure over any calculus of explicit substitutions in compliance with the aforementioned presentation. The conversion procedure is in charge of encoding higher-order pattern matching and substitution in the first-order framework. Properties relating the rewrite relation in the higher-order framework to that of the resulting first-order system are studied in detail. We then identify a class of SERSdb for which the resulting first-order system does not require the equational theory to implement higher-order pattern matching, thus contenting itself with syntactic matching. It is argued that this class of systems is appropriate for transferring results from the first-order framework to the higher-order one. As a non-trivial example, we study the transfer of the (strong) standardization theorem. Author: Eduardo Bonelli. Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Argentina.
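As a minimal, self-contained illustration of the meta-level operation that explicit-substitution calculi internalise as rewrite rules, the following sketch implements capture-avoiding substitution on de Bruijn terms. The encoding is our own for illustration, not the thesis's formalism: integers are variable indices, ('lam', body) is abstraction, ('app', f, a) is application.

```python
# Meta-level capture-avoiding substitution on de Bruijn terms.
# Explicit-substitution calculi turn these operations into object-level rules.

def shift(term, amount, cutoff=0):
    """Add `amount` to every free variable (index >= cutoff)."""
    if isinstance(term, int):
        return term + amount if term >= cutoff else term
    if term[0] == 'lam':
        return ('lam', shift(term[1], amount, cutoff + 1))
    return ('app', shift(term[1], amount, cutoff), shift(term[2], amount, cutoff))

def subst(term, j, s):
    """Replace variable j by s in term, adjusting indices under binders."""
    if isinstance(term, int):
        return s if term == j else term
    if term[0] == 'lam':
        return ('lam', subst(term[1], j + 1, shift(s, 1)))
    return ('app', subst(term[1], j, s), subst(term[2], j, s))

def beta(term):
    """One beta step at the root: (lam body) arg -> body[0 := arg]."""
    assert term[0] == 'app' and term[1][0] == 'lam'
    return shift(subst(term[1][1], 0, shift(term[2], 1)), -1)
```

For instance, reducing (λ.λ.1) applied to the free variable 5 yields λ.6: the argument's index is shifted as it moves under the remaining binder, which is exactly the bookkeeping that explicit-substitution rules make step-by-step and observable.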