7,478 research outputs found

    Comparing approaches for model-checking strategies under imperfect information and fairness constraints

    Starting from Alternating-time Temporal Logic, many logics for reasoning about strategies in a system of agents have been proposed. Some of them consider the strategies that agents can play when they have only partial information about the state of the system. ATLKirF is such a logic for reasoning about uniform strategies under unconditional fairness constraints. While this kind of logic has been extensively studied, practical approaches for solving its model-checking problem have appeared only recently. This paper considers three approaches for model checking strategies under partial observability of the agents, applied to ATLKirF. These three approaches have been implemented in PyNuSMV, a Python library based on the state-of-the-art model checker NuSMV. Thanks to the experimental results obtained with this library and to the comparison of the relative performance of the approaches, this paper provides indications and guidelines for the use of these verification techniques, showing that different approaches are needed in different situations.
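
    How unconditional fairness enters this kind of model checking can be illustrated with a small, self-contained sketch. This is not the paper's PyNuSMV implementation; the explicit-state transition relation and the fairness sets below are made up for illustration. The states that admit a fair path, i.e. a path visiting every fairness constraint infinitely often, are computed with the standard nested fixpoint used for fair CTL, nu Z. AND_i EX E[Z U (Z and F_i)].

```python
# Minimal explicit-state sketch (illustrative only, not the paper's PyNuSMV code):
# compute the states from which some path visits every fairness constraint
# infinitely often, via the fixpoint  nu Z. AND_i EX E[ Z U (Z and F_i) ].

def ex(succ, target):
    """States with at least one successor in `target`."""
    return {s for s, nxt in succ.items() if nxt & target}

def eu(succ, phi, psi):
    """States satisfying E[ phi U psi ], computed as a least fixpoint."""
    result = set(psi)
    while True:
        new = result | (phi & ex(succ, result))
        if new == result:
            return result
        result = new

def fair_states(succ, fairness):
    """Greatest fixpoint: states admitting a path that is fair w.r.t. `fairness`."""
    z = set(succ)                              # start from all states
    while True:
        new_z = set(succ)
        for f in fairness:                     # one conjunct per fairness constraint
            new_z &= ex(succ, eu(succ, z, z & f))
        if new_z == z:
            return z
        z = new_z

# Toy model: s0 -> s1 <-> s2, plus a sink s3 with a self-loop.
succ = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s1"}, "s3": {"s3"}}
fairness = [{"s1"}, {"s2"}]                    # both must be visited infinitely often
print(fair_states(succ, fairness))             # {'s0', 's1', 's2'} (set order may vary)
```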

    Robust Monotonic Optimization Framework for Multicell MISO Systems

    The performance of multiuser systems is difficult both to measure fairly and to optimize. Most resource allocation problems are non-convex and NP-hard, even under simplifying assumptions such as perfect channel knowledge, homogeneous channel properties among users, and simple power constraints. We establish a general optimization framework that systematically solves these problems to global optimality. The proposed branch-reduce-and-bound (BRB) algorithm handles general multicell downlink systems with single-antenna users, multiantenna transmitters, arbitrary quadratic power constraints, and robustness to channel uncertainty. A robust fairness-profile optimization (RFO) problem, a quasi-convex problem and a novel generalization of max-min fairness, is solved at each iteration. The BRB algorithm is computationally costly, but it shows better convergence than the previously proposed outer polyblock approximation algorithm. Our framework is suitable for computing benchmarks in general multicell systems with or without channel uncertainty. We illustrate this by deriving and evaluating a zero-forcing solution to the general problem. (Published in IEEE Transactions on Signal Processing; 16 pages, 9 figures, 2 tables.)
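
    As a rough illustration of the branch-and-bound idea behind such monotonic optimization, the toy sketch below maximizes a monotonically increasing objective over a downward-closed ("normal") feasible set by splitting boxes, pruning a box whenever its lower corner is infeasible and bounding it by the objective at its upper corner. It is a generic sketch only: it omits the reduction step and the RFO subproblem of the paper's BRB algorithm, and the objective, feasibility test and tolerance are placeholders.

```python
import heapq

def branch_and_bound(f, feasible, lower, upper, eps=1e-3):
    """Maximize a monotonically increasing f over a downward-closed set inside
    the box [lower, upper] (generic sketch; no reduction step, no RFO)."""
    best_val, best_pt = float("-inf"), None
    # Max-heap on the optimistic bound f(upper corner of the box).
    heap = [(-f(upper), lower, upper)]
    while heap:
        neg_ub, lo, hi = heapq.heappop(heap)
        if -neg_ub <= best_val + eps:
            break                      # no remaining box can improve the incumbent
        if not feasible(lo):
            continue                   # downward-closedness: the whole box is infeasible
        if f(lo) > best_val:           # the feasible lower corner gives a valid lower bound
            best_val, best_pt = f(lo), lo
        # Branch: split the box along its longest edge.
        k = max(range(len(lo)), key=lambda i: hi[i] - lo[i])
        if hi[k] - lo[k] < eps:
            continue                   # box too small to split further
        mid = 0.5 * (lo[k] + hi[k])
        left_hi = tuple(mid if i == k else hi[i] for i in range(len(hi)))
        right_lo = tuple(mid if i == k else lo[i] for i in range(len(lo)))
        heapq.heappush(heap, (-f(left_hi), lo, left_hi))
        heapq.heappush(heap, (-f(hi), right_lo, hi))
    return best_val, best_pt

# Toy instance: maximize x + y over {(x, y) >= 0 : x^2 + y^2 <= 1}.
f = lambda p: p[0] + p[1]
feasible = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
print(branch_and_bound(f, feasible, (0.0, 0.0), (1.0, 1.0)))
# Approaches (sqrt(2), (0.707..., 0.707...)) as eps shrinks.
```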

    Reasoning about memoryless strategies under partial observability and unconditional fairness constraints

    Alternating-time Temporal Logic is a logic for reasoning about the strategies that agents can adopt to achieve a specified collective goal. A number of extensions of this logic exist; some combine strategies and partial observability, others include fairness constraints, but to the best of our knowledge no work provides a unified framework for strategies, partial observability and fairness constraints. Integrating these three concepts is important when reasoning about the capabilities of agents that lack full knowledge of a system, for instance when the agents can assume that the environment behaves in a fair way. We present ATLKirF, a logic combining strategies under partial observability in a system with fairness constraints on states. We introduce a model-checking algorithm for ATLKirF by extending the algorithm for a full-observability variant of the logic, and we investigate its complexity. We validate our proposal with an experimental evaluation.
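
    The central notion of a uniform memoryless strategy under partial observability can be illustrated with a small brute-force sketch. This is an illustration of the concept only, not the paper's symbolic algorithm: fairness constraints are omitted, transitions are deterministic for simplicity, and the model, observations and goal below are made up. The point is that a uniform strategy must pick the same action in states the agent cannot distinguish.

```python
from itertools import product

# Toy deterministic model: transitions[state][action] = successor state.
transitions = {
    "s0": {"a": "s1", "b": "s2"},
    "s1": {"a": "s1", "b": "s1"},
    "s2": {"a": "s2", "b": "s2"},
}
obs = {"s0": "o0", "s1": "o1", "s2": "o1"}   # s1 and s2 are indistinguishable
goal = {"s1"}                                 # the agent wants to reach s1

def reaches_goal(strategy, start, goal, horizon=10):
    """Follow the strategy from `start` and check whether `goal` is reached."""
    s = start
    for _ in range(horizon):
        if s in goal:
            return True
        s = transitions[s][strategy[obs[s]]]
    return s in goal

# Uniform strategies map *observations* (not states) to actions.
observations = sorted(set(obs.values()))
actions = ["a", "b"]
winning = []
for choice in product(actions, repeat=len(observations)):
    strategy = dict(zip(observations, choice))
    if reaches_goal(strategy, "s0", goal):
        winning.append(strategy)
print(winning)   # [{'o0': 'a', 'o1': 'a'}, {'o0': 'a', 'o1': 'b'}]
```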

    Multi-Valued Verification of Strategic Ability

    Some multi-agent scenarios call for the possibility of evaluating specifications in a richer domain of truth values. Examples include runtime monitoring of a temporal property over a growing prefix of an infinite path, inconsistency analysis in distributed databases, and verification methods that use incomplete anytime algorithms, such as bounded model checking. In this paper, we present multi-valued alternating-time temporal logic (mv-ATL*), an expressive logic for specifying strategic abilities in multi-agent systems. It is well known that, for branching-time logics, a general method exists for model-independent translation from multi-valued to two-valued model checking. We show that this method cannot be directly extended to mv-ATL*. We also propose two ways of overcoming the problem. First, we identify constraints on formulas for which the model-independent translation can be suitably adapted. Second, we present a model-dependent reduction that can be applied to all formulas of mv-ATL*. We show that, in all cases, the complexity of verification increases only linearly when new truth values are added to the evaluation domain. We also consider several examples that show possible applications of mv-ATL* and motivate its use for model checking multi-agent systems.
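
    The idea behind reducing multi-valued checking to two-valued checking can be sketched at the propositional level; this is a drastic simplification of the lattice-based setting of mv-ATL*, and the truth values and labels below are purely illustrative. Over a finite linear order of truth values, conjunction and disjunction become meet and join, and each "cut" at a threshold yields an ordinary Boolean question, from which the multi-valued answer can be reassembled.

```python
# Illustrative three-valued domain 0 < 1/2 < 1 (a linear order, so meet and join
# are simply min and max); mv-ATL* allows more general lattices.
from fractions import Fraction

HALF = Fraction(1, 2)
values = [Fraction(0), HALF, Fraction(1)]

def mv_and(x, y):   # meet
    return min(x, y)

def mv_or(x, y):    # join
    return max(x, y)

def cut(labeling, threshold):
    """Two-valued 'cut': a proposition holds iff its value reaches the threshold."""
    return {prop: value >= threshold for prop, value in labeling.items()}

# A multi-valued labeling of two propositions at some state.
labeling = {"p": Fraction(1), "q": HALF}

# Multi-valued evaluation of p AND q, and its reconstruction from the cuts.
direct = mv_and(labeling["p"], labeling["q"])
from_cuts = max(t for t in values
                if cut(labeling, t)["p"] and cut(labeling, t)["q"])
print(direct, from_cuts)   # 1/2 1/2 -- the cuts determine the multi-valued answer
```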

    Fairea: A Model Behaviour Mutation Approach to Benchmarking Bias Mitigation Methods

    The increasingly wide uptake of Machine Learning (ML) has raised the significance of the problem of tackling bias (i.e., unfairness), making it a primary software engineering concern. In this paper, we introduce Fairea, a model behaviour mutation approach to benchmarking ML bias mitigation methods. We also report on a large-scale empirical study to test the effectiveness of 12 widely-studied bias mitigation methods. Our results reveal that, surprisingly, bias mitigation methods show poor effectiveness in 49% of the cases. In particular, 15% of the mitigation cases have worse fairness-accuracy trade-offs than the baseline established by Fairea, and 34% of the cases show a decrease in accuracy together with an increase in bias. Fairea has been made publicly available for software engineers and researchers to evaluate their bias mitigation methods.
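
    The behaviour-mutation idea described above can be sketched as follows; this is a minimal illustration, not the released Fairea tool, and the bias metric, mutation rule and synthetic data are placeholder choices. A growing fraction of a trained model's predictions is replaced with a constant label, and the resulting (accuracy, bias) points form the baseline trade-off against which a mitigation method is judged.

```python
import numpy as np

def statistical_parity_difference(pred, group):
    """|P(pred=1 | group=0) - P(pred=1 | group=1)| -- one common bias measure."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def mutation_baseline(y_true, y_pred, group, fractions, mutant_label=0, seed=0):
    """(fraction, accuracy, bias) points obtained by replacing a fraction of the
    model's predictions with a constant label (the behaviour-mutation idea)."""
    rng = np.random.default_rng(seed)
    points = []
    for frac in fractions:
        mutated = y_pred.copy()
        idx = rng.choice(len(y_pred), size=int(frac * len(y_pred)), replace=False)
        mutated[idx] = mutant_label
        points.append((frac,
                       (mutated == y_true).mean(),
                       statistical_parity_difference(mutated, group)))
    return points

# Placeholder data: labels, an 80%-accurate model, and a binary protected attribute.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = np.where(rng.random(1000) < 0.8, y_true, 1 - y_true)

for frac, acc, bias in mutation_baseline(y_true, y_pred, group, [0.0, 0.25, 0.5, 1.0]):
    print(f"mutated {frac:4.0%}: accuracy = {acc:.3f}, bias = {bias:.3f}")
# A mitigation method is then compared against these (accuracy, bias) points:
# a method whose trade-off is no better than naive mutation is flagged as poor.
```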