Schulze and Ranked-Pairs Voting are Fixed-Parameter Tractable to Bribe, Manipulate, and Control
Schulze and ranked-pairs elections have received much attention recently, and
the former has quickly become a widely used election system. In many
cases these systems have been proven resistant to bribery, control, or
manipulation, with ranked pairs being particularly praised for being NP-hard
for all three of those. Nonetheless, the present paper shows that with respect
to the number of candidates, Schulze and ranked-pairs elections are
fixed-parameter tractable to bribe, control, and manipulate: we obtain uniform,
polynomial-time algorithms whose degree does not depend on the number of
candidates. We also provide such algorithms for some weighted variants of these
problems.
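As background on the election system itself (this is not the paper's fixed-parameter bribery algorithm), a Schulze winner can be computed from ballots via widest "beatpaths" over the pairwise preference matrix, using a Floyd-Warshall-style pass with (max, min) in place of (min, +). The candidates and ballots below are hypothetical:

```python
# Sketch of Schulze winner determination via widest beatpaths.
# Candidates and ballots are illustrative examples, not from the paper.

from itertools import permutations

def schulze_winners(candidates, ballots):
    """ballots: list of rankings, each a list of all candidates, best first."""
    # d[x][y] = number of voters preferring x to y
    d = {x: {y: 0 for y in candidates} for x in candidates}
    for ranking in ballots:
        pos = {c: i for i, c in enumerate(ranking)}
        for x, y in permutations(candidates, 2):
            if pos[x] < pos[y]:
                d[x][y] += 1
    # p[x][y] = strength of the strongest beatpath from x to y;
    # only pairwise victories seed a path.
    p = {x: {y: d[x][y] if d[x][y] > d[y][x] else 0 for y in candidates}
         for x in candidates}
    for k in candidates:  # Floyd-Warshall over (max, min)
        for i in candidates:
            for j in candidates:
                if i != j != k != i:
                    p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    # A winner beats or ties every other candidate in beatpath strength.
    return [x for x in candidates
            if all(p[x][y] >= p[y][x] for y in candidates if y != x)]

cands = ["A", "B", "C"]
ballots = 5 * [["A", "B", "C"]] + 4 * [["B", "C", "A"]] + 2 * [["C", "A", "B"]]
print(schulze_winners(cands, ballots))  # ['A']
```

The cubic loop over candidates hints at why parameterizing by the number of candidates is natural for these systems.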
Playing to Learn, or to Keep Secret: Alternating-Time Logic Meets Information Theory
Many important properties of multi-agent systems refer to the participants'
ability to achieve a given goal, or to prevent the system from reaching an
undesirable event. Among intelligent agents, the goals are often of an
epistemic nature, i.e., they concern the ability to obtain knowledge about an
important fact \phi. Such properties can be expressed, e.g., in ATLK, that is,
alternating-time temporal logic ATL extended with epistemic operators. In many
realistic scenarios,
however, players do not need to fully learn the truth value of \phi. They may
be almost as well off by gaining some knowledge; in other words, by reducing
their uncertainty about \phi. Similarly, in order to keep \phi secret, it is
often insufficient that the intruder never fully learns its truth value.
Instead, one needs to require that his uncertainty about \phi never drops below
a reasonable threshold.
With this motivation in mind, we introduce the logic ATLH, extending ATL with
quantitative modalities based on the Hartley measure of uncertainty. The new
logic makes it possible to specify agents' abilities w.r.t. the uncertainty of a given
player about a given set of statements. It turns out that ATLH has the same
expressivity and model checking complexity as ATLK. However, the new logic is
exponentially more succinct than ATLK, which is the main technical result of
this paper.
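As background on the measure underlying ATLH's quantitative modalities (this is not the paper's formalism): the Hartley measure of a set S of indistinguishable options is H(S) = log2 |S|, so full knowledge means 0 bits of uncertainty. A minimal illustration:

```python
# Hartley measure of uncertainty: log2 of the number of options an
# agent cannot tell apart. Purely illustrative.

import math

def hartley(options):
    """Hartley measure over a non-empty collection of options."""
    return math.log2(len(set(options)))

# An observer who cannot distinguish 8 candidate truth assignments has
# 3 bits of uncertainty; a single remaining option means certainty.
print(hartley(range(8)))   # 3.0
print(hartley(["phi"]))    # 0.0
```

Requiring that an intruder's uncertainty about \phi stay above a threshold then amounts to a lower bound on this quantity along all plays.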
Asimovian Adaptive Agents
The goal of this research is to develop agents that are adaptive,
predictable, and timely. At first blush, these three requirements seem
contradictory. For example, adaptation risks introducing undesirable side
effects, thereby making agents' behavior less predictable. Furthermore,
although formal verification can assist in ensuring behavioral predictability,
it is known to be time-consuming. Our solution to the challenge of satisfying
all three requirements is the following. Agents have finite-state automaton
plans, which are adapted online via evolutionary learning (perturbation)
operators. To ensure that critical behavioral constraints are always satisfied,
agents' plans are first formally verified. They are then reverified after every
adaptation. If reverification concludes that constraints are violated, the
plans are repaired. The main objective of this paper is to improve the
efficiency of reverification after learning, so that agents have a sufficiently
rapid response time. We present two solutions: positive results that certain
learning operators are a priori guaranteed to preserve useful classes of
behavioral assurance constraints (which implies that no reverification is
needed for these operators), and efficient incremental reverification
algorithms for those learning operators that have negative a priori results.
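The verify/adapt/reverify loop described above can be sketched with a deliberately simplified model (all names hypothetical): a plan is a finite-state transition table, and a safety constraint requires that a set of forbidden states is unreachable.

```python
# Illustrative verify / adapt / reverify loop for finite-state plans.
# The plan encoding and constraint notion are simplified stand-ins.

def reachable(transitions, start):
    """All states reachable from `start` in a table state -> {action: next}."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for nxt in transitions.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def verify(transitions, start, forbidden):
    """Safety check: no forbidden state is reachable."""
    return reachable(transitions, start).isdisjoint(forbidden)

plan = {"s0": {"a": "s1"}, "s1": {"b": "s0"}}
assert verify(plan, "s0", {"bad"})      # initial plan passes verification

plan["s1"]["c"] = "bad"                 # a learning operator mutates the plan
assert not verify(plan, "s0", {"bad"})  # reverification catches the violation
```

The paper's concern is making the second check cheap, e.g., by rechecking only the parts of the plan a learning operator touched, or proving that an operator can never break the constraint at all.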
Natural Strategic Ability
On the Complexity of Rational Verification
Rational verification refers to the problem of checking which temporal logic
properties hold of a concurrent multiagent system, under the assumption that
agents in the system choose strategies that form a game-theoretic equilibrium.
Rational verification can be understood as a counterpart to model checking for
multiagent systems, but while classical model checking can be done in
polynomial time for some temporal logic specification languages such as CTL,
and polynomial space with LTL specifications, rational verification is much
harder: the key decision problems for rational verification are
2EXPTIME-complete with LTL specifications, even when using explicit-state
system representations. Against this background, our contributions in this
paper are threefold. First, we show that the complexity of rational
verification can be greatly reduced by restricting specifications to GR(1), a
fragment of LTL that can represent a broad and practically useful class of
response properties of reactive systems. In particular, we show that for a
number of relevant settings, rational verification can be done in polynomial
space and even in polynomial time. Second, we provide improved complexity
results for rational verification when considering players' goals given by
mean-payoff utility functions, arguably the most widely used approach for
quantitative objectives in concurrent and multiagent systems. Finally, we
consider the problem of computing outcomes that satisfy social welfare
constraints. To this end, we consider both utilitarian and egalitarian social
welfare and show that computing such outcomes is either PSPACE-complete or
NP-complete.
Comment: Preprint submitted to Annals of Mathematics and Artificial
Intelligence.
Relentful Strategic Reasoning in Alternating-Time Temporal Logic
Temporal logics are a well-investigated formalism for the specification, verification, and synthesis of reactive systems.
Within this family, Alternating-Time Temporal Logic (ATL*, for short) has been introduced as a useful generalization
of classical linear- and branching-time temporal logics, by allowing temporal operators to be indexed by coalitions of
agents. Classically, temporal logics are memoryless: once a path in the computation tree is quantified at a given node,
the computation that has led to that node is forgotten. Recently, mCTL* has been defined as a memoryful variant
of CTL*, where path quantification is memoryful. In the context of multi-agent planning, memoryful quantification
enables agents to “relent” and change their goals and strategies depending on their history.
In this paper, we define mATL*, a memoryful extension of ATL*, in which a formula is satisfied at a certain
node of a path by taking into account both the future and the past. We study the expressive power of mATL*,
its succinctness, as well as related decision problems. We also investigate the relationship between memoryful
quantification and past modalities and show their equivalence. We show that both the memoryful and the past
extensions come without any computational price; indeed, we prove that both the satisfiability and the model-checking
problems are 2EXPTIME-complete, as they are for ATL*.