20 research outputs found
Pretty Good Strategies and Where to Find Them
Synthesis of bulletproof strategies in imperfect information scenarios is a
notoriously hard problem. In this paper, we suggest that it is sometimes a
viable alternative to aim at "reasonably good" strategies instead. This makes
sense not only when an ideal strategy cannot be found due to the complexity of
the problem, but also when no winning strategy exists at all. We propose an
algorithm for synthesis of such "pretty good" strategies. The idea is to first
generate a surely winning strategy with perfect information, and then
iteratively improve it with respect to two criteria of dominance: one based on
the number of conflicting decisions in the strategy, and the other related to
the tightness of its outcome set. We focus on reachability goals and evaluate
the algorithm experimentally, with very promising results.
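The first dominance criterion described above can be illustrated with a small sketch. This is a hypothetical toy, not the authors' implementation: a perfect-information strategy maps states to actions, but under imperfect information all states sharing an observation must agree, so we count "conflicting" observation classes and greedily repair each class toward its majority action.

```python
from collections import defaultdict

def conflicts(strategy, observation):
    """Count observation classes whose states disagree on the chosen action."""
    by_obs = defaultdict(set)
    for state, action in strategy.items():
        by_obs[observation[state]].add(action)
    return sum(1 for actions in by_obs.values() if len(actions) > 1)

def repair(strategy, observation):
    """One greedy improvement pass: force each class to its majority action."""
    by_obs = defaultdict(list)
    for state, action in strategy.items():
        by_obs[observation[state]].append((state, action))
    repaired = dict(strategy)
    for _, pairs in by_obs.items():
        actions = [a for _, a in pairs]
        majority = max(set(actions), key=actions.count)
        for state, _ in pairs:
            repaired[state] = majority
    return repaired

# Toy model: states s0..s3; s0, s1, s2 share observation o1, s3 has o2.
obs = {"s0": "o1", "s1": "o1", "s2": "o1", "s3": "o2"}
strat = {"s0": "a", "s1": "a", "s2": "b", "s3": "a"}
assert conflicts(strat, obs) == 1
assert conflicts(repair(strat, obs), obs) == 0
```

In the actual algorithm the repair step must also preserve (or tighten) the outcome set of the strategy; this sketch shows only the conflict-counting side of the iteration.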
Towards Assume-Guarantee Verification of Strategic Ability
Formal verification of strategic abilities is a hard problem. We propose to
use the methodology of assume-guarantee reasoning in order to facilitate model
checking of alternating-time temporal logic with imperfect information and
imperfect recall.
Towards Modelling and Verification of Social Explainable AI
Social Explainable AI (SAI) is a new direction in artificial intelligence
that emphasises decentralisation, transparency, social context, and a focus on
human users. SAI research is still at an early stage. Consequently, it
concentrates on delivering the intended functionalities, but largely ignores
the possibility of unwelcome behaviours due to malicious or erroneous activity.
We propose that, in order to capture the breadth of relevant aspects, one can
use models and logics of strategic ability that have been developed for
multi-agent systems. Using the STV model checker, we take the first step
towards the formal modelling and verification of SAI environments, in
particular of their resistance to various types of attacks by compromised AI
modules.
Assume-Guarantee Verification of Strategic Ability
Model checking of strategic abilities is a notoriously hard problem, even
more so in the realistic case of agents with imperfect information.
Assume-guarantee reasoning can be of great help here, providing a way to
decompose the complex problem into a small set of exponentially easier
subproblems. In this paper, we propose two schemes for assume-guarantee
verification of alternating-time temporal logic with imperfect information. We
prove the soundness of both schemes, and discuss their completeness. We
illustrate the method by examples based on known benchmarks, and show
experimental results that demonstrate the practical benefits of the approach.
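The decomposition idea behind assume-guarantee reasoning can be sketched on a toy example (this is an illustration of the general principle, not the paper's verification schemes): a local invariant checked on each module separately also holds of the synchronous product, while the product state space grows multiplicatively.

```python
def reachable(initial, transitions):
    """BFS over an explicit transition relation {state: [successors]}."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        for succ in transitions.get(state, []):
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return seen

def check_invariant(initial, transitions, invariant):
    """True iff every reachable state satisfies the invariant."""
    return all(invariant(s) for s in reachable(initial, transitions))

# Two small modules; their product has 2 * 3 states, but each local
# check explores only one module (2 + 3 states in total).
m1 = {"p0": ["p1"], "p1": ["p0"]}
m2 = {"q0": ["q1"], "q1": ["q2"], "q2": ["q0"]}
safe1 = check_invariant("p0", m1, lambda s: s != "p_err")
safe2 = check_invariant("q0", m2, lambda s: s != "q_err")
assert safe1 and safe2
```

The actual schemes in the paper handle strategic (ATL) properties rather than plain invariants, where soundness of the decomposition is the main technical contribution; the point here is only the additive-versus-multiplicative cost of analysing components separately.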
STV+Reductions: Towards Practical Verification of Strategic Ability Using Model Reductions
We present a substantially expanded version of our tool STV for strategy
synthesis and verification of strategic abilities. The new version adds
user-definable models and support for model reduction through partial order
reduction and bisimulation checking.