Multi-Agent Only Knowing
Levesque introduced a notion of "only knowing", with the goal of capturing
certain types of nonmonotonic reasoning. Levesque's logic dealt with only the
case of a single agent. Recently, both Halpern and Lakemeyer independently
attempted to extend Levesque's logic to the multi-agent case. Although there
are a number of similarities in their approaches, there are some significant
differences. In this paper, we reexamine the notion of only knowing, going back
to first principles. In the process, we simplify Levesque's completeness proof,
and point out some problems with the earlier definitions. This leads us to
reconsider what the properties of only knowing ought to be. We provide an axiom
system that captures our desiderata, and show that it has a semantics that
corresponds to it. The axiom system has an added feature of interest: it
includes a modal operator for satisfiability, and thus provides a complete
axiomatization for satisfiability in the logic K45.
Comment: To appear, Journal of Logic and Computation
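For context, K45 is the modal logic of belief without the truth axiom. Its standard axiomatization (textbook background, not part of the abstract itself) is:

```latex
\begin{align*}
\textbf{(Taut)} &\quad \text{all propositional tautologies}\\
\textbf{(K)}    &\quad \Box(\varphi \to \psi) \to (\Box\varphi \to \Box\psi)\\
\textbf{(4)}    &\quad \Box\varphi \to \Box\Box\varphi\\
\textbf{(5)}    &\quad \neg\Box\varphi \to \Box\neg\Box\varphi
\end{align*}
```

together with the rules of modus ponens and necessitation (from \(\varphi\) infer \(\Box\varphi\)).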
Multi-Agent Only-Knowing Revisited
Levesque introduced the notion of only-knowing to precisely capture the
beliefs of a knowledge base. He also showed how only-knowing can be used to
formalize non-monotonic behavior within a monotonic logic. Despite its appeal,
all attempts to extend only-knowing to the many agent case have undesirable
properties. A belief model by Halpern and Lakemeyer, for instance, appeals to
proof-theoretic constructs in the semantics and needs to axiomatize validity as
part of the logic. It is also not clear how to generalize their ideas to a
first-order case. In this paper, we propose a new account of multi-agent
only-knowing which, for the first time, has a natural possible-world semantics
for a quantified language with equality. We then provide, for the propositional
fragment, a sound and complete axiomatization that faithfully lifts Levesque's
proof theory to the many agent case. We also discuss comparisons to the earlier
approach by Halpern and Lakemeyer.
Comment: Appears in Principles of Knowledge Representation and Reasoning 201
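As background for "faithfully lifting Levesque's proof theory": in the single-agent logic, an epistemic state is a set \(e\) of worlds, and belief and only-knowing are interpreted as follows (standard definitions from the literature, not taken from this abstract):

```latex
\begin{align*}
e, w \models \mathbf{B}\alpha &\quad\text{iff}\quad e, w' \models \alpha \text{ for all } w' \in e\\
e, w \models \mathbf{O}\alpha &\quad\text{iff}\quad \text{for all } w':\; w' \in e \iff e, w' \models \alpha
\end{align*}
```

Note that \(\mathbf{O}\) strengthens \(\mathbf{B}\): the worlds considered possible are exactly those satisfying \(\alpha\), so nothing beyond \(\alpha\) is believed.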
Multi-Agent Only Knowing on Planet Kripke
The idea of only knowing is a natural and intuitive notion to precisely capture the beliefs of a knowledge base. However, an extension to the many agent case, as would be needed in many applications, has been shown to be far from straightforward. For example, previous Kripke frame-based accounts appeal to proof-theoretic constructions like canonical models, while more recent works in the area abandoned Kripke semantics entirely. We propose a new account based on Moss' characteristic formulas, formulated for the usual Kripke semantics. This is shown to come with other benefits: the logic admits a group version of only knowing, and an operator for assessing the epistemic entrenchment of what an agent or a group only knows is definable. Finally, the multi-agent only knowing operator is shown to be expressible with the cover modality of classical modal logic, which then allows us to obtain a completeness result for a fragment of the logic.
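For reference, the cover modality mentioned above has a standard definition in terms of the usual box and diamond (textbook material, not from the paper): for a finite set of formulas \(\Gamma\),

```latex
\nabla \Gamma \;:=\; \Box \bigvee_{\gamma \in \Gamma} \gamma \;\wedge\; \bigwedge_{\gamma \in \Gamma} \Diamond \gamma
```

read: every successor satisfies some member of \(\Gamma\), and every member of \(\Gamma\) holds at some successor. Conversely, \(\Box\varphi \equiv \nabla\emptyset \vee \nabla\{\varphi\}\) and \(\Diamond\varphi \equiv \nabla\{\varphi, \top\}\), so \(\nabla\) is expressively equivalent to the classical modalities.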
Motion Coordination Problems with Collision Avoidance for Multi-Agent Systems
This chapter studies the collision avoidance problem in the motion coordination control strategies for multi-agent systems. The proposed control strategies are decentralised, since agents have no global knowledge of the goal to achieve, knowing only the position and velocity of some agents. These control strategies allow a set of mobile agents to achieve formations, formation tracking and containment. For the collision avoidance, we add a repulsive vector field of the unstable focus type to the motion coordination control strategies. We use formation graphs to represent interactions between agents. The results are presented for the front points of differential-drive mobile robots. The theoretical results are verified by numerical simulation.
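To make the "repulsive vector field of the unstable focus type" concrete, here is a minimal planar sketch. All names, gains, and the activation rule are illustrative assumptions, not the chapter's actual control law: the field is the linear system \(\dot p = Rp\) with \(R = [[a, -b], [b, a]]\), whose eigenvalues \(a \pm ib\) with \(a > 0\) make trajectories spiral outwards, switched on only inside a safety radius.

```python
import math

def unstable_focus_repulsion(p_i, p_j, a=1.0, b=1.0, r_safe=1.0):
    """Repulsive velocity induced on agent i by agent j (2-D points).

    Hedged sketch: an unstable-focus field centred at agent j,
    dp/dt = R p with R = [[a, -b], [b, a]] (a > 0, so the flow
    spirals away from j), active only when the agents are closer
    than the safety radius r_safe.
    """
    px, py = p_i[0] - p_j[0], p_i[1] - p_j[1]
    d = math.hypot(px, py)
    if d >= r_safe or d == 0.0:
        return (0.0, 0.0)                 # outside the danger zone: no repulsion
    gain = (r_safe - d) / d               # grows as the agents close in
    return (gain * (a * px - b * py), gain * (b * px + a * py))
```

In a full controller this term would simply be added to the nominal formation-tracking velocity; the rotational component \(b\) helps agents swerve around each other rather than push head-on.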
Learnability with PAC Semantics for Multi-agent Beliefs
The tension between deduction and induction is perhaps the most fundamental
issue in areas such as philosophy, cognition and artificial intelligence. In an
influential paper, Valiant recognised that the challenge of learning should be
integrated with deduction. In particular, he proposed a semantics to capture
the quality possessed by the output of Probably Approximately Correct (PAC)
learning algorithms when formulated in a logic. Although weaker than classical
entailment, it allows for a powerful model-theoretic framework for answering
queries. In this paper, we provide a new technical foundation to demonstrate
PAC learning with multi-agent epistemic logics. To circumvent the negative
results in the literature on the difficulty of robust learning with the PAC
semantics, we consider so-called implicit learning, where we are able to
incorporate observations into the background theory in service of deciding the
entailment of an epistemic query. We prove correctness of the learning
procedure and discuss results on the sample complexity, that is, how many
observations we will need to provably assert that the query is entailed given a
user-specified error bound. Finally, we investigate under what circumstances
this algorithm can be made efficient. On the last point, given that reasoning
in epistemic logics, especially multi-agent epistemic logics, is
PSPACE-complete, it might seem like there is no hope for this problem. We
leverage some recent results on the so-called Representation Theorem explored
for single-agent and multi-agent epistemic logics with the only knowing
operator to reduce modal reasoning to propositional reasoning.
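The core of PAC semantics can be sketched very compactly. The snippet below is a hedged illustration of deciding \((1-\epsilon)\)-validity by sampling, together with a standard Hoeffding-style sample bound; it is not the paper's implicit-learning procedure, and all names are invented for the example:

```python
import math

def pac_decide(samples, query, epsilon):
    """Accept iff `query` holds on at least a (1 - epsilon) fraction
    of the sampled assignments.  Each sample is a dict from variable
    names to booleans; `query` is any predicate over such a dict.
    A hedged sketch of (1 - epsilon)-validity under PAC semantics."""
    hits = sum(1 for s in samples if query(s))
    return hits / len(samples) >= 1 - epsilon

def sample_bound(epsilon, delta):
    """Hoeffding-style sufficient sample size for estimating the
    validity proportion to within epsilon, with confidence 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
```

For instance, with nine positive and one negative observation of a proposition, the query is accepted at \(\epsilon = 0.1\) but rejected at \(\epsilon = 0.05\); the bound tells us how many observations make such a verdict reliable.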
Adaptive, Doubly Optimal No-Regret Learning in Strongly Monotone and Exp-Concave Games with Gradient Feedback
Online gradient descent (OGD) is well known to be doubly optimal under strong
convexity or monotonicity assumptions: (1) in the single-agent setting, it
achieves an optimal regret of $\Theta(\log T)$ for strongly convex cost
functions; and (2) in the multi-agent setting of strongly monotone games, with
each agent employing OGD, we obtain last-iterate convergence of the joint
action to a unique Nash equilibrium at an optimal rate of
$\Theta(1/T)$. While these finite-time guarantees highlight its merits,
OGD has the drawback that it requires knowing the strong convexity/monotonicity
parameters. In this paper, we design a fully adaptive OGD algorithm,
\textsf{AdaOGD}, that does not require a priori knowledge of these parameters.
In the single-agent setting, our algorithm achieves a regret under
strong convexity that is optimal up to a log factor. Further, if each agent
employs \textsf{AdaOGD} in strongly monotone games, the joint action converges
in a last-iterate sense to a unique Nash equilibrium at a rate that is
again optimal up to log factors. We illustrate our
algorithms in a learning version of the classical newsvendor problem, where due
to lost sales, only (noisy) gradient feedback can be observed. Our results
immediately yield the first feasible and near-optimal algorithm for both the
single-retailer and multi-retailer settings. We also extend our results to the
more general setting of exp-concave cost functions and games, using the online
Newton step (ONS) algorithm.
Comment: Accepted by Operations Research; 47 pages
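The newsvendor setting with gradient-only feedback can be sketched in a few lines. The code below is plain OGD with a fixed \(O(1/t)\) step size and illustrative constants; it is explicitly *not* the paper's AdaOGD, which adapts without knowing the strong-convexity parameter:

```python
import random

def ogd_newsvendor(h=1.0, b=3.0, T=20000, q_max=10.0, seed=0):
    """Plain OGD for a learning newsvendor -- a hedged sketch with
    invented constants, not the paper's adaptive AdaOGD.

    Per-round cost at order quantity q with realised demand d:
        h * max(q - d, 0) + b * max(d - q, 0)
    whose noisy subgradient is h if q > d, else -b.  Demand is
    Uniform(0, q_max), so the optimal order is the b/(b+h) demand
    quantile: here 0.75 * 10 = 7.5.
    """
    random.seed(seed)
    q = 0.0
    for t in range(1, T + 1):
        d = random.uniform(0.0, q_max)
        g = h if q > d else -b           # noisy subgradient from lost sales
        q -= (5.0 / t) * g               # O(1/t) decaying step size
        q = min(max(q, 0.0), q_max)      # project back onto [0, q_max]
    return q
```

With only the sign of the over/under-stocking observable each round, the iterate still drifts to the critical-fractile quantity; the point of the paper's adaptive scheme is to get comparable rates without hand-tuning the step-size constant used here.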