Multi-agent verification and control with probabilistic model checking
Probabilistic model checking is a technique for formal automated reasoning about software or hardware systems that operate in the
context of uncertainty or stochasticity. It builds upon ideas and techniques from a diverse range of fields, from logic, automata and graph
theory, to optimisation, numerical methods and control. In recent years,
probabilistic model checking has also been extended to integrate ideas
from game theory, notably using models such as stochastic games and
solution concepts such as equilibria, to formally verify the interaction of
multiple rational agents with distinct objectives. This provides a means
to reason flexibly about agents acting in either an adversarial or a collaborative fashion, and opens up opportunities to tackle new problems
within, for example, artificial intelligence, robotics and autonomous systems. In this paper, we summarise some of the advances in this area,
and highlight applications for which they have already been used. We
discuss how the strengths of probabilistic model checking apply, or have
the potential to apply, to the multi-agent setting and outline some of the
key challenges that must be addressed to make further progress in this field.
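The numerical core of probabilistic model checking, as summarised above, can be illustrated with a small example. The following sketch computes the maximum probability of reaching a target set in a Markov decision process by value iteration; the 3-state model is a made-up illustration, not one taken from the paper.

```python
def max_reach_prob(states, actions, trans, target, eps=1e-10):
    """Maximum reachability probability in an MDP by value iteration.

    trans[(s, a)] is a list of (next_state, probability) pairs;
    actions(s) returns the actions available in state s.
    """
    v = {s: (1.0 if s in target else 0.0) for s in states}
    while True:
        new_v = {}
        for s in states:
            if s in target:
                new_v[s] = 1.0
                continue
            # Optimal scheduler: pick the action maximising expected value.
            new_v[s] = max(
                sum(p * v[t] for t, p in trans[(s, a)])
                for a in actions(s)
            )
        if max(abs(new_v[s] - v[s]) for s in states) < eps:
            return new_v
        v = new_v

# Hypothetical model: from s0, action 'a' reaches the goal with
# probability 0.5 and otherwise stays; action 'b' moves to a sink.
states = ["s0", "goal", "sink"]
trans = {
    ("s0", "a"): [("goal", 0.5), ("s0", 0.5)],
    ("s0", "b"): [("sink", 1.0)],
    ("goal", "a"): [("goal", 1.0)],
    ("sink", "a"): [("sink", 1.0)],
}
acts = lambda s: ["a", "b"] if s == "s0" else ["a"]
probs = max_reach_prob(states, acts, trans, {"goal"})
# The optimal scheduler always plays 'a', so probs["s0"] approaches 1.
```

Game-theoretic extensions replace the single maximisation with alternating or concurrent choices by several players, but the fixed-point structure of the computation stays the same.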
Fifty years of Hoare's Logic
We present a history of Hoare's logic. Comment: 79 pages. To appear in Formal Aspects of Computing.
Causality and Temporal Dependencies in the Design of Fault Management Systems
Reasoning about causes and effects naturally arises in the engineering of
safety-critical systems. A classical example is Fault Tree Analysis, a
deductive technique used for system safety assessment, whereby an undesired
state is reduced to the set of its immediate causes. The design of fault
management systems also requires reasoning on causality relationships. In
particular, a fail-operational system needs to ensure timely detection and
identification of faults, i.e. recognize the occurrence of run-time faults
through their observable effects on the system. Even more complex scenarios
arise when multiple faults are involved and may interact in subtle ways.
In this work, we propose a formal approach to fault management for complex
systems. We first introduce the notions of fault tree and minimal cut sets. We
then present a formal framework for the specification and analysis of
diagnosability, and for the design of fault detection and identification (FDI)
components. Finally, we review recent advances in fault propagation analysis,
based on the Timed Failure Propagation Graphs (TFPG) formalism. Comment: In Proceedings CREST 2017, arXiv:1710.0277
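The minimal cut sets mentioned above can be computed directly from an AND/OR fault tree. The sketch below does so for a small, made-up tree: an OR gate collects the cut sets of its children, an AND gate combines one cut set from each child, and a final pass removes non-minimal sets.

```python
def cut_sets(node):
    """node is ('basic', name), ('and', children) or ('or', children)."""
    kind = node[0]
    if kind == "basic":
        return [frozenset([node[1]])]
    child_sets = [cut_sets(c) for c in node[1]]
    if kind == "or":
        # Any child's cut set already causes the gate to fail.
        return [cs for sets in child_sets for cs in sets]
    # AND gate: every combination of one cut set per child.
    result = [frozenset()]
    for sets in child_sets:
        result = [a | b for a in result for b in sets]
    return result

def minimal(sets):
    """Keep only cut sets with no strict subset among the others."""
    return {s for s in sets if not any(t < s for t in sets)}

# Hypothetical top event: fails if fault F1 occurs, or both F2 and F3.
tree = ("or", [("basic", "F1"),
               ("and", [("basic", "F2"), ("basic", "F3")])])
mcs = minimal(cut_sets(tree))
# mcs contains {F1} and {F2, F3}.
```

This enumerative expansion is exponential in general; practical tools use BDD-based or SAT-based methods, but the result they compute is the same set of minimal cut sets.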
Multi-Valued Verification of Strategic Ability
Some multi-agent scenarios call for the possibility of evaluating
specifications in a richer domain of truth values. Examples include runtime
monitoring of a temporal property over a growing prefix of an infinite path,
inconsistency analysis in distributed databases, and verification methods that
use incomplete anytime algorithms, such as bounded model checking. In this
paper, we present multi-valued alternating-time temporal logic (mv-ATL*), an
expressive logic to specify strategic abilities in multi-agent systems. It is
well known that, for branching-time logics, a general method for
model-independent translation from multi-valued to two-valued model checking
exists. We show that the method cannot be directly extended to mv-ATL*. We also
propose two ways of overcoming the problem. Firstly, we identify constraints on
formulas for which the model-independent translation can be suitably adapted.
Secondly, we present a model-dependent reduction that can be applied to all
formulas of mv-ATL*. We show that, in all cases, the complexity of verification
increases only linearly when new truth values are added to the evaluation
domain. We also consider several examples that show possible applications of
mv-ATL* and motivate its use for model checking multi-agent systems
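The multi-valued idea underlying mv-ATL* can be sketched without the strategic machinery: formulas are evaluated over a lattice of truth values rather than {false, true}, with conjunction as meet and disjunction as join. The three-valued domain and the tiny formula language below are illustrative only, not the logic defined in the paper.

```python
# A totally ordered 3-valued lattice, as used e.g. in runtime
# monitoring over a growing prefix of an infinite path.
FALSE, UNKNOWN, TRUE = 0, 1, 2

def value(formula, valuation):
    """Evaluate a formula over the lattice.

    formula is an atom name (string), or a tuple
    ('and'|'or', left, right) or ('not', sub).
    """
    if not isinstance(formula, tuple):
        return valuation[formula]
    op = formula[0]
    if op == "and":   # conjunction = meet (min)
        return min(value(formula[1], valuation), value(formula[2], valuation))
    if op == "or":    # disjunction = join (max)
        return max(value(formula[1], valuation), value(formula[2], valuation))
    if op == "not":   # order-reversing negation
        return TRUE - value(formula[1], valuation)
    raise ValueError(op)

v = {"p": TRUE, "q": UNKNOWN}
conj = value(("and", "p", "q"), v)   # only as true as the least conjunct
```

The translations discussed in the paper reduce such multi-valued questions to a family of two-valued model checking runs, one per relevant truth value, which is why the verification cost grows only linearly in the size of the domain.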
Model Checking Trust-based Multi-Agent Systems
Trust has been the focus of many research projects, both theoretical and practical, in recent years, particularly in domains where open multi-agent technologies are applied
(e.g., Internet-based markets, Information retrieval, etc.). The importance of trust in such
domains arises mainly because it provides a social control that regulates the relationships
and interactions among agents. Despite the growing number of multi-agent applications, such systems still pose many challenges for formal modeling and for the verification of agents' behaviors. Many formalisms and approaches that facilitate the specification of
trust in Multi-Agent Systems (MASs) can be found in the literature. However, most of these
approaches focus on the cognitive side of trust where the trusting entity is normally capable
of exhibiting properties about beliefs, desires, and intentions. Hence, trust is considered a belief of an agent (the truster) about the ability and willingness of the trustee to perform some actions for the truster. Nevertheless, in open MASs, entities can join and leave
the interactions at any time. This means MASs provide no guarantee about the behavior of their agents, which makes the ability to reason about trust and to check for untrusted computations highly desirable.
This thesis aims to address the problem of modeling and verifying at design time
trust in MASs by (1) considering a cognitive-independent view of trust where trust ingredients are seen from a non-epistemic angle, (2) introducing a logical language named Trust
Computation Tree Logic (TCTL), which extends CTL with preconditional, conditional, and graded trust operators along with a set of reasoning postulates in order to explore its capabilities, (3) proposing a new accessibility relation which is needed to define the semantics
of the trust modal operators. This accessibility relation is defined so that it captures the
intuition of trust while being easily computable, (4) investigating the most intuitive and
efficient algorithm for computing the trust set by developing, implementing, and experimenting with different model checking techniques in order to compare them in terms of
memory consumption, efficiency, and scalability with regard to the number of considered
agents, (5) evaluating the performance of the model checking techniques by analyzing the
time and space complexity.
The approach has been applied to different application domains to evaluate its computational performance and scalability. The obtained results reveal the effectiveness of the
proposed approach, making it a promising methodology in practice
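The semantic idea behind a trust modality interpreted via an accessibility relation can be sketched very roughly: a state satisfies "i trusts j for phi" when phi holds in every state the trust accessibility relation links it to. The 4-state model and the nonemptiness requirement below are our own simplifications, not the thesis's actual definition.

```python
def sat_trust(states, access, sat_phi):
    """States satisfying a trust formula over an accessibility relation.

    access[s] is the set of states the trust relation links s to;
    sat_phi is the set of states where the content formula phi holds.
    A state with no accessible states is taken not to satisfy the
    formula (a simplifying assumption made here).
    """
    return {s for s in states
            if access.get(s) and all(t in sat_phi for t in access[s])}

states = {0, 1, 2, 3}
access = {0: {1, 2}, 1: {3}}   # no trust links from states 2 and 3
sat_phi = {1, 2, 3}            # states where phi holds
trusting = sat_trust(states, access, sat_phi)
# States 0 and 1 satisfy the trust formula.
```

Computing the satisfaction set state-by-state like this is what lets such operators be added to a CTL model checking loop without changing its overall fixed-point structure.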
Towards the Verification of Pervasive Systems
Pervasive systems, that is, roughly speaking, systems that can interact with their environment, are increasingly common. In such systems, there are many dimensions to assess: security and reliability, safety and liveness, real-time response, etc. So far, modelling and formalisation attempts have been piecemeal. This paper describes our analysis of a pervasive case study (MATCH, a homecare application) and our proposal for formal (particularly verification) approaches. Our goal is to see to what extent the current state of the art in formal methods can cope with the verification demands introduced by pervasive systems, and to point out its limitations.
Strategic Abilities of Forgetful Agents in Stochastic Environments
In this paper, we investigate the probabilistic variants of the strategy
logics ATL and ATL* under imperfect information. Specifically, we present novel
decidability and complexity results when the model transitions are stochastic
and agents play uniform strategies. That is, the semantics of the logics are
based on multi-agent, stochastic transition systems with imperfect information,
which combine two sources of uncertainty, namely, the partial observability
agents have on the environment, and the likelihood of transitions to occur from
a system state. Since the model checking problem is undecidable in general in
this setting, we restrict our attention to agents with memoryless (positional)
strategies. The resulting setting captures the situation in which agents have
qualitative uncertainty about the local state and quantitative uncertainty about
the occurrence of future events. We illustrate the usefulness of this setting
with meaningful examples
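The restriction to memoryless (positional) strategies that restores decidability can be illustrated concretely: each agent has only finitely many state-to-action maps, so verification can enumerate them and analyse the Markov chain each choice induces. The single-agent, 2-state model below is a made-up illustration.

```python
from itertools import product

def chain_reach(trans, start, target, steps=200):
    """Approximate reachability probability by unrolling the chain."""
    dist = {start: 1.0}
    reached = 0.0
    for _ in range(steps):
        new = {}
        for s, p in dist.items():
            if s in target:
                reached += p      # target states absorb probability
                continue
            for t, q in trans[s]:
                new[t] = new.get(t, 0.0) + p * q
        dist = new
    return reached

# Hypothetical stochastic model: states s0/s1, two actions each.
mdp = {
    ("s0", "a"): [("goal", 0.9), ("s1", 0.1)],
    ("s0", "b"): [("s1", 1.0)],
    ("s1", "a"): [("s0", 1.0)],
    ("s1", "b"): [("s1", 1.0)],
}
best = 0.0
for choice in product("ab", repeat=2):   # one action per state
    strat = {"s0": choice[0], "s1": choice[1]}
    trans = {s: mdp[(s, strat[s])] for s in ("s0", "s1")}
    trans["goal"] = [("goal", 1.0)]
    best = max(best, chain_reach(trans, "s0", {"goal"}))
# Playing 'a' in both states reaches the goal almost surely.
```

Under imperfect information the enumeration would range over observation-to-action maps instead, so indistinguishable states are forced to share an action, which is exactly the uniformity constraint on strategies.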
Towards Assume-Guarantee Verification of Strategic Ability
Formal verification of strategic abilities is a hard problem. We propose to
use the methodology of assume-guarantee reasoning in order to facilitate model
checking of alternating-time temporal logic with imperfect information and
imperfect recall
Probabilistic and Epistemic Model Checking for Multi-Agent Systems
Model checking is a formal technique widely used to verify security and communication protocols in epistemic multi-agent systems against given properties. Qualitative
properties such as safety and liveness have been widely analysed in the literature. However, systems also have quantitative and uncertain (i.e., probabilistic) properties, such as the degree of reliability and reachability, which still need further attention from the model checking perspective. In this dissertation, we analyse such properties and present a new method for probabilistic model checking of epistemic multi-agent
systems specified by a new probabilistic-epistemic logic, PCTLK. We model the distributed knowledge bases of multi-agent systems using probabilistic interpreted systems. We also define transformations from those interpreted systems into discrete-time Markov chains, and from PCTLK formulae to PCTL formulae, an existing extension of CTL with probabilities. By doing so, we are able to convert the PCTLK model checking problem into the PCTL one. We address the problem of verifying probabilistic properties
and epistemic properties in concurrent probabilistic systems as well. We then prove that model checking a formula of PCTLK in concurrent probabilistic systems is
PSPACE-complete. Furthermore, we represent models associated with PCTLK logic symbolically with Multi-Terminal Binary Decision Diagrams (MTBDDs).
Finally, we make use of PRISM, the model checker for PCTL, without adding new computation cost. The dining cryptographers protocol is implemented to show the applicability of the proposed technique, along with a performance analysis and a comparison, in terms of execution time and state-space scalability, with MCK, an existing epistemic-probabilistic model checker, and with MCMAS, a model checker for multi-agent systems. Another example, the NetBill protocol, is also implemented with PRISM to verify probabilistic epistemic properties and to evaluate the complexity of this verification.
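The kind of computation the PCTLK-to-PCTL reduction ultimately delegates to a tool such as PRISM can be sketched on a discrete-time Markov chain: the probability of a reachability property "F target" satisfies the linear equations x_s = sum_t P(s, t) * x_t, with x = 1 on target states and x = 0 on states that cannot reach the target. The 4-state chain below is a made-up example.

```python
def reach_prob(P, target, sinks):
    """Reachability probabilities in a DTMC by solving (I - A) x = b.

    P[s] is a dict of successor probabilities; 'sinks' are states
    assumed unable to reach the target (probability 0 there).
    """
    unknown = [s for s in P if s not in target and s not in sinks]
    n = len(unknown)
    idx = {s: i for i, s in enumerate(unknown)}
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    b = [0.0] * n
    for s in unknown:
        i = idx[s]
        for t, p in P[s].items():
            if t in target:
                b[i] += p            # one-step probability into target
            elif t in idx:
                A[i][idx[t]] -= p    # move P-entries to the left side
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return {s: x[idx[s]] for s in unknown}

# Hypothetical chain: s0 splits to goal/s1; s1 splits to goal/sink.
P = {
    "s0": {"goal": 0.5, "s1": 0.5},
    "s1": {"goal": 0.5, "sink": 0.5},
    "goal": {"goal": 1.0},
    "sink": {"sink": 1.0},
}
probs = reach_prob(P, target={"goal"}, sinks={"sink"})
# probs["s1"] = 0.5 and probs["s0"] = 0.5 + 0.5 * 0.5 = 0.75
```

Symbolic tools represent the same system with MTBDDs rather than explicit matrices, but the underlying linear-algebraic problem is the same.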