Information sharing among ideal agents
Multi-agent systems operating in complex domains crucially require agents to interact with each other. An important result of this interaction is that some of the agents' private knowledge comes to be shared within the group. This thesis investigates the theme of knowledge sharing from a theoretical point of view, by means of the formal tools provided by modal logic.
More specifically this thesis addresses the following three points.
First, the case of hypercube systems, a special class of interpreted systems as defined by Halpern and colleagues, is analysed in full detail. It is proven here that the logic S5WDn constitutes a sound and complete axiomatisation for hypercube systems. This logic, an extension of the modal system S5n commonly used to represent knowledge in a multi-agent system, regulates how knowledge is shared among agents modelled by hypercube systems. The logic S5WDn is proven to be decidable. In separate joint work with Ron van der Meyden, not fully reported in this thesis, hypercube systems are proven to correspond to synchronous agents with perfect recall that communicate only by broadcasting.
Second, it is argued that a full spectrum of degrees of knowledge sharing can be present in any multi-agent system, with no sharing and full sharing at the extremes. This theme is investigated axiomatically, and a range of logics, each representing a particular class of knowledge sharing between two agents, is presented. All but two of the logics in this spectrum are proven complete by standard canonicity proofs. We conjecture that the two remaining logics are not canonical, and it is an open problem whether or not they are complete.
Third, following an influential position paper by Halpern and Moses, the idea of refining and checking knowledge structures in multi-agent systems is investigated. It is shown that Kripke models, the standard semantic tools for this analysis, are not adequate, and an alternative notion, Kripke trees, is put forward. An algorithm for refining and checking Kripke trees is presented and its major properties are investigated. The algorithm succeeds in solving the famous muddy-children puzzle, in which agents communicate and reason about each other's knowledge.
The thesis concludes by discussing the extent to which combining logics, a promising new area in pure logic, can provide a significant boost to research on epistemic and other theories for multi-agent systems.
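The muddy-children puzzle mentioned above can be simulated by iterated elimination of possible worlds after each public announcement. The sketch below is an illustrative brute-force possible-worlds model in Python (the function names and structure are my own, not the Kripke-tree algorithm of the thesis):

```python
from itertools import product

def muddy_children(n_muddy, n_total):
    """Simulate the muddy-children puzzle by iterated elimination of
    possible worlds. A world assigns muddy/clean to each child; each
    child sees every forehead but their own. After the parent announces
    "at least one of you is muddy", each round of public silence
    ("I don't know") eliminates worlds, until every muddy child knows
    their own state. Returns that round number (1-indexed)."""
    actual = tuple(i < n_muddy for i in range(n_total))
    # Worlds compatible with the parent's announcement: someone is muddy.
    worlds = [w for w in product([False, True], repeat=n_total) if any(w)]

    for rnd in range(1, n_total + 1):
        def knows(child, world):
            # A child knows their state iff all remaining worlds that
            # agree with what they see (everyone else) agree on them too.
            candidates = [w for w in worlds
                          if all(w[j] == world[j]
                                 for j in range(n_total) if j != child)]
            return len(candidates) == 1

        if all(knows(i, actual) for i in range(n_total) if actual[i]):
            return rnd
        # Public announcement "nobody knew": drop every world in which
        # some child would already have known.
        worlds = [w for w in worlds
                  if not any(knows(i, w) for i in range(n_total))]
    return None
```

With k muddy children among n, the simulation confirms the classical result that the muddy children first come to know their own state in round k.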
The Proscriptive Principle and Logics of Analytic Implication
The analogy between inference and mereological containment goes back at least to Aristotle, whose discussion in the Prior Analytics motivates the validity of the syllogism by way of talk of parts and wholes. On this picture, the application of syllogistic is merely the analysis of concepts, a term that presupposes, through the root ἀνά + λύσις, a mereological background.
In the 1930s, such considerations led William T. Parry to attempt to codify this notion of logical containment in his system of analytic implication AI. Parry's original system AI was later expanded to the system PAI. The hallmark of Parry's systems, and of what may be thought of as containment logics or Parry systems in general, is a strong relevance property called the "Proscriptive Principle" (PP), described by Parry as the thesis that no formula with analytic implication as main relation holds universally if it has a free variable occurring in the consequent but not the antecedent.
This type of proscription is on its face justified, as the presence of a novel parameter in the consequent corresponds to the introduction of new subject matter. The plausibility of the thesis that the content of a statement is related to its subject matter thus appears also to support the validity of the formal principle.
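Because the Proscriptive Principle is a purely syntactic condition on variable occurrence, a candidate implication can be screened mechanically. The following minimal sketch (the tuple-based formula representation and function names are my own, for illustration) checks that the consequent introduces no new variable:

```python
def variables(formula):
    """Collect the propositional variables of a formula given as nested
    tuples, e.g. ('and', 'p', ('or', 'q', 'r')); atoms are strings."""
    if isinstance(formula, str):
        return {formula}
    connective, *subformulas = formula
    return set().union(*(variables(f) for f in subformulas))

def satisfies_pp(antecedent, consequent):
    """Parry's Proscriptive Principle as a filter: an implication may
    hold universally only if every variable of the consequent already
    occurs in the antecedent."""
    return variables(consequent) <= variables(antecedent)

# Addition p -> (p or q) introduces the new variable q, so it is
# proscribed, while (p and q) -> (p or q) passes the filter.
```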
Primarily due to the perception that Parry's formal systems were intended to accurately model Kant's notion of an analytic judgment, Parry's deductive systems, and the suitability of the Proscriptive Principle in general, were met with severe criticism. While Anderson and Belnap argued that Parry's criterion failed to account for a number of prima facie analytic judgments, others, such as Sylvan and Brady, argued that the utility of the criterion was impeded by its reliance on a "syntactical" device.
But these arguments are restricted to Parry's work qua exegesis of Kant and fail to take into account the breadth of applications in which the Proscriptive Principle emerges. The goal of the present work is to explore themes related to deductive systems satisfying one form of the Proscriptive Principle or another, with a special emphasis on rehabilitating their study to some degree. The structure of the dissertation is as follows: In Chapter 2, we identify and develop the relationship between Parry-type deductive systems and the field of "logics of nonsense." Of particular importance is Dmitri Bochvar's "internal" nonsense logic Σ0, and we observe that two ⊢-Parry subsystems of Σ0 (Harry Deutsch's Sfde and Frederick Johnson's RC) can be considered the products of particular "strategies" for eliminating problematic inferences from Bochvar's system. Chapter 3 considers Kit Fine's program of state space semantics in the context of Parry logics. Recently, Fine, who had already provided the first intuitive semantics for Parry's PAI, has offered a formal model of truthmaking (and falsemaking) that provides one of the first natural semantics for Richard B. Angell's logic of analytic containment AC, itself a ⊢-Parry system. After discussing the relationship between state space semantics and nonsense, we observe that Fabrice Correia's weaker framework, introduced as a semantics for a containment logic weaker than AC, tacitly endorses an implausible feature by allowing hypernonsensical statements. By modelling Correia's containment logic within the stronger setting of Fine's semantics, we are able to retain Correia's intuitions about factual equivalence without such a commitment. As a further application, we observe that Fine's setting can resolve some ambiguities in Greg Restall's own truthmaker semantics.
In Chapter 4, we consider interpretations of disjunction that accord with the characteristic failure of Addition, in which the evaluation of a disjunction A ∨ B requires not only the truth of one disjunct, but also that both disjuncts satisfy some further property. In the setting of computation, such an analysis requires the existence of some procedure tasked with ensuring the satisfaction of this property by both disjuncts. This observation leads to a computational analysis of the relationship between Parry logics and logics of nonsense, in which the semantic category of "nonsense" is associated with catastrophic faults in computer programs. In this spirit, we examine semantics for several ⊢-Parry logics in terms of the successful execution of certain types of programs, and the consequences of extending this analysis to dynamic logic and constructive logic. Chapter 5 considers these faults in the particular case in which Nuel Belnap's "artificial reasoner" is unable to retrieve the value assigned to a variable. This leads not only to a natural interpretation of Graham Priest's semantics for the ⊢-Parry system S∗fde but also to a novel, many-valued semantics for Angell's AC, the completeness of which is proven by establishing a correspondence with Correia's semantics for AC. These many-valued semantics have the additional benefit of allowing us to apply the material in Chapter 2 to the case of AC, defining intensional extensions of AC in the spirit of Parry's PAI. One particular instance of the type of disjunction central to Chapter 4 is Melvin Fitting's cut-down disjunction. Chapter 6 examines cut-down operations in more detail and provides bilattice and trilattice semantics for the ⊢-Parry systems Sfde and AC in the style of Ofer Arieli and Arnon Avron's logical bilattices.
The elegant connection between these systems and logical multilattices supports the fundamentality and naturalness of these logics and, additionally, allows us to extend the epistemic interpretation of bilattices in the tradition of artificial intelligence to these systems. Finally, the correspondence between the present many-valued semantics for AC and those of Correia is revisited in Chapter 7. The technique that plays an essential role in Chapter 4 is used to characterize a wide class of first-degree calculi intermediate between AC and classical logic in Correia's setting. This correspondence allows the correction of an incorrect characterization of classical logic given by Correia and leads to the question of how to characterize hybrid systems extending Angell's AC∗. We conclude by considering whether this correspondence aids in providing an interpretation of Correia's first semantics for AC.
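The failure of Addition discussed in Chapter 4 can be made concrete in the standard three-valued "nonsense" setting: under weak Kleene (Bochvar-internal) tables, the nonsense value is infectious, so a disjunction is nonsense whenever either disjunct is. A minimal sketch (the value names are my own; only truth is taken as designated here, an assumption for illustration):

```python
# Weak Kleene (Bochvar-internal) truth tables: the third value N
# ("nonsense") is infectious -- any compound with an N part is N.
T, N, F = 't', 'n', 'f'

def w_or(a, b):
    """Weak Kleene disjunction."""
    if N in (a, b):
        return N
    return T if T in (a, b) else F

def w_and(a, b):
    """Weak Kleene conjunction."""
    if N in (a, b):
        return N
    return F if F in (a, b) else T

# Addition A |= A v B fails: with A true and B nonsense, the premise A
# is designated while w_or(T, N) is N, which is not designated.
```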
Existence Assumptions and Logical Principles: Choice Operators in Intuitionistic Logic
Hilbert's choice operators τ and ε, when added to intuitionistic logic, strengthen it. In the presence of certain extensionality axioms they produce classical logic, while in the presence of weaker decidability conditions for terms they produce various superintuitionistic intermediate logics. In this thesis, I argue that there are important philosophical lessons to be learned from these results. To make the case, I begin with a historical discussion situating the development of Hilbert's operators in relation to his evolving program in the foundations of mathematics, and in relation to the philosophical motivations leading to the development of intuitionistic logic. This sets the stage for a brief description of the relevant part of Dummett's program to recast debates in metaphysics, and in particular disputes about realism and anti-realism, as closely intertwined with issues in philosophical logic, with the acceptance of classical logic for a domain reflecting a commitment to realism for that domain. Then I review extant results about what is and is not provable when one adds epsilon to intuitionistic logic, largely due to Bell and DeVidi, and I give several new proofs of intermediate logics from intuitionistic logic + ε without identity. With all this in hand, I turn to a discussion of the philosophical significance of choice operators.
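The route from ε plus extensionality to classical logic can be illustrated by a standard Diaconescu-style argument (in the form associated with Bell); the sketch below is illustrative and not a reproduction of the thesis's own proofs:

```latex
% Sketch: intuitionistic logic + epsilon + extensionality proves LEM.
a := \varepsilon x\,(x = 0 \lor P), \qquad
b := \varepsilon x\,(x = 1 \lor P) \\
\text{(epsilon axiom)}\quad (a = 0 \lor P) \land (b = 1 \lor P)
  \;\Rightarrow\; P \lor (a = 0 \land b = 1) \\
\text{(extensionality)}\quad
  P \;\Rightarrow\; \forall x\,\big((x = 0 \lor P) \leftrightarrow (x = 1 \lor P)\big)
    \;\Rightarrow\; a = b \\
\text{so } a = 0 \land b = 1 \;\Rightarrow\; a \neq b \;\Rightarrow\; \neg P,
  \quad\text{whence}\quad P \lor \neg P.
```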
Among the conclusions I defend are that these results provide a finer-grained basis for Dummett's contention that commitment to classically valid but intuitionistically invalid principles reflects metaphysical commitments, by showing those principles to be derivable from certain existence assumptions; that Dummett's framework is improved by these results, as they show that questions of realism and anti-realism are not an "all or nothing" matter, but that there are plausibly metaphysical stances between the poles of anti-realism (corresponding to acceptance just of intuitionistic logic) and realism (corresponding to acceptance of classical logic), because different sorts of ontological assumptions yield intermediate rather than classical logic; and that these intermediate positions link up in interesting ways with our intuitions about issues of objectivity and reality, usefully connecting to intriguing everyday concepts such as "is smart," which I suggest involve a number of distinct dimensions that might themselves be objective but, because of their multivalent structure, are intermediate between being objective and not. Finally, I discuss the implications of these results for ongoing debates about the status of arbitrary and ideal objects in the foundations of logic, showing among other things that much of the discussion is flawed because it does not recognize the degree to which the claims being made depend on the presumption that one is working with a very strong (i.e., classical) logic.
Proceedings of the IJCAI-09 Workshop on Nonmonotonic Reasoning, Action and Change
Copyright in each article is held by the authors.
Please contact the authors directly for permission to reprint or use this material in any form for any purpose.

The biennial workshop on Nonmonotonic Reasoning, Action and Change (NRAC) has an active and loyal community. Since its inception in 1995, the workshop has been held seven times in conjunction with IJCAI, and has experienced growing success. We hope to build on this success again in this eighth edition with an interesting and fruitful day of discussion.

The areas of reasoning about action, non-monotonic reasoning and belief revision are among the most active research areas in Knowledge Representation, with rich inter-connections and practical applications including robotics, agent systems, commonsense reasoning and the semantic web. This workshop provides a unique opportunity for researchers from all three fields to be brought together at a single forum, with the prime objectives of communicating important recent advances in each field and exchanging ideas. As these fundamental areas mature, it is vital that researchers maintain a dialogue through which they can cooperatively explore common links. The goal of this workshop is to work against the natural tendency of such rapidly advancing fields to drift apart into isolated islands of specialization.

This year, we have accepted ten papers authored by a diverse international community. Each paper has been subject to careful peer review on the basis of innovation, significance and relevance to NRAC. This high-quality selection of work could not have been achieved without the invaluable help of the international Program Committee.

A highlight of the workshop will be our invited speaker, Professor Hector Geffner from ICREA and UPF in Barcelona, Spain, discussing representation and inference in modern planning. Hector Geffner is a world leader in planning, reasoning, and knowledge representation; in addition to his many important publications, he is a Fellow of the AAAI, an associate editor of the Journal of Artificial Intelligence Research, and won an ACM Distinguished Dissertation Award in 1990.
Logic-Based Explainability in Machine Learning
The last decade witnessed an ever-increasing stream of successes in Machine Learning (ML). These successes offer clear evidence that ML is bound to become pervasive in a wide range of practical uses, including many that directly affect humans. Unfortunately, the operation of the most successful ML models is incomprehensible to human decision makers. As a result, the use of ML models, especially in high-risk and safety-critical settings, is not without concern. In recent years, there have been efforts to devise approaches for explaining ML models. Most of these efforts have focused on so-called model-agnostic approaches. However, model-agnostic and related approaches offer no guarantees of rigor, and hence are referred to as non-formal. For example, such non-formal explanations can be consistent with different predictions, which renders them useless in practice. This paper overviews the ongoing research efforts on computing rigorous model-based explanations of ML models, referred to as formal explanations. These efforts encompass a variety of topics that include the actual definitions of explanations, the characterization of the complexity of computing explanations, the currently best logical encodings for reasoning about different ML models, and how to make explanations interpretable for human decision makers, among others.
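The rigor offered by formal explanations can be illustrated on a toy scale: a sufficient reason for a prediction is a subset of features whose fixed values force that prediction on every completion. The sketch below finds a cardinality-minimal one by brute force (names are my own; practical systems query SAT/SMT or MILP oracles instead of enumerating):

```python
from itertools import combinations, product

def sufficient_reason(model, instance):
    """Return a smallest tuple S of feature indices such that fixing
    instance's values on S forces model's prediction on every
    completion over binary features -- a cardinality-minimal formal
    ("abductive") explanation. Exponential brute force, for
    illustration only."""
    n = len(instance)
    target = model(instance)
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            fixed = set(subset)
            if all(model(tuple(instance[i] if i in fixed else v[i]
                               for i in range(n))) == target
                   for v in product([0, 1], repeat=n)):
                return subset
    return tuple(range(n))

# Toy classifier: predicts 1 iff features 0 and 1 are both 1
# (feature 2 is irrelevant, so it never appears in an explanation).
clf = lambda x: int(x[0] == 1 and x[1] == 1)
```

For the instance (1, 1, 0) this yields the feature set {0, 1}: unlike a non-formal explanation, no completion of those two fixed values can flip the prediction.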
Introduction to Discrete Mathematics: An OER for MA-471
The first objective of this book is to define and discuss the meaning of truth in mathematics. We explore logics, both propositional and first-order, and the construction of proofs, both formal and human-targeted. Using these proof tools, the book then explores some very fundamental definitions of mathematics through set theory. This theory is then put into practice in several applications. The particular (but quite widespread) case of equivalence and order relations is studied in detail. We then introduce sequences and proofs by induction, followed by number theory. Finally, a small introduction to combinatorics is given.
Foundations of Software Science and Computation Structures
This open access book constitutes the proceedings of the 25th International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2022, which was held during April 4-6, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 23 regular papers presented in this volume were carefully reviewed and selected from 77 submissions. They deal with research on theories and methods to support the analysis, integration, synthesis, transformation, and verification of programs and software systems.