    Robust Exponential Worst Cases for Divide-et-Impera Algorithms for Parity Games

    The McNaughton-Zielonka divide et impera algorithm is the simplest and most flexible approach available in the literature for determining the winner of a parity game. Despite its exponential worst-case complexity and its reputation as a poorly performing algorithm, in practice it has been shown to rank among the best techniques for solving such games. It also proved more resistant to lower-bound attacks than the strategy improvement approaches: only recently did Friedmann provide a family of games on which the algorithm requires exponential time. An easy analysis of this family shows that a simple memoization technique lets the algorithm solve it in polynomial time. The same result can also be achieved by exploiting an approach based on the dominion-decomposition techniques proposed in the literature. These observations raise the question of whether a suitable combination of dynamic programming and game-decomposition techniques can improve on the exponential worst case of the original algorithm. In this paper, we answer this question negatively by providing a robustly exponential worst case, showing that no intertwining of the above-mentioned techniques can mitigate the exponential nature of the divide et impera approaches.
    Comment: In Proceedings GandALF 2017, arXiv:1709.0176
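
    To make the divide et impera recursion concrete, the following is a minimal sketch of the McNaughton-Zielonka algorithm, not the paper's code; the dict-based game encoding (owner, priority, succ) is an assumption for illustration, and the game graph is assumed total (every vertex has a successor).

        # Hypothetical game encoding (an assumption, not the paper's format):
        #   game["owner"][v]    -> 0 or 1, the player who moves at vertex v
        #   game["priority"][v] -> non-negative integer priority of v
        #   game["succ"][v]     -> set of successors of v

        def attractor(game, nodes, player, target):
            """Vertices of `nodes` from which `player` can force the play into `target`."""
            attr = set(target)
            changed = True
            while changed:
                changed = False
                for v in nodes - attr:
                    succs = game["succ"][v] & nodes
                    if game["owner"][v] == player:
                        can_enter = bool(succs & attr)             # one move into attr suffices
                    else:
                        can_enter = bool(succs) and succs <= attr  # every move leads into attr
                    if can_enter:
                        attr.add(v)
                        changed = True
            return attr

        def zielonka(game, nodes):
            """Return (W0, W1), the winning regions of players 0 and 1 on `nodes`."""
            if not nodes:
                return set(), set()
            p = max(game["priority"][v] for v in nodes)
            player = p % 2                                         # player favoured by the top priority
            target = {v for v in nodes if game["priority"][v] == p}
            a = attractor(game, nodes, player, target)
            sub = zielonka(game, nodes - a)                        # first recursive subgame
            if not sub[1 - player]:                                # opponent wins nothing below
                win = [set(), set()]
                win[player] = set(nodes)
                return tuple(win)
            b = attractor(game, nodes, 1 - player, sub[1 - player])
            rest = zielonka(game, nodes - b)                       # second recursive subgame
            win = [set(), set()]
            win[player] = rest[player]
            win[1 - player] = rest[1 - player] | b
            return tuple(win)

    The memoization mentioned in the abstract could, for instance, cache zielonka results keyed by frozenset(nodes); that keying is an illustrative choice, not the paper's construction.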

    A Delayed Promotion Policy for Parity Games

    Parity games are two-player infinite-duration games on graphs that play a crucial role in various fields of theoretical computer science. Finding efficient algorithms to solve these games in practice is widely acknowledged as a core problem in formal verification, as it leads to efficient solutions of the model-checking and satisfiability problems of expressive temporal logics, e.g., the modal mu-calculus. Their solution can be reduced to the problem of identifying sets of positions of the game, called dominions, in each of which a player can force a win by remaining in the set forever. Recently, a novel technique to compute dominions, called priority promotion, has been proposed, based on the notions of quasi dominion, a relaxed form of dominion, and dominion space. The underlying framework is general enough to accommodate different instantiations of the solution procedure, whose correctness is ensured by the nature of the space itself. In this paper, we propose a new such instantiation, called delayed promotion, that tries to avoid the exponential behaviours the original method can exhibit in the worst case. The resulting procedure not only often outperforms the original priority promotion approach, but also has no known exponential worst case so far.
    Comment: In Proceedings GandALF 2016, arXiv:1609.0364
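
    As an illustration of the dominion notion defined above, here is a hedged sketch of a membership check, reusing the hypothetical game encoding and the zielonka sketch from the first abstract; it is not the priority promotion procedure itself.

        def is_dominion(game, nodes, d, player):
            """Check whether `d` is a dominion for `player` within `nodes`: the player
            can keep every play inside `d` forever and wins the subgame induced by `d`."""
            d = set(d)
            for v in d:
                succs = game["succ"][v] & nodes
                if game["owner"][v] == player:
                    if not (succs & d):      # the player needs a move that stays in d
                        return False
                elif not succs <= d:         # the opponent must be unable to escape d
                    return False
            regions = zielonka(game, d)      # solve the subgame induced by d
            return d <= regions[player]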

    A Parity Game Tale of Two Counters

    Parity games are simple infinite games played on finite graphs with a winning condition that is expressive enough to capture nested least and greatest fixpoints. Through their tight relationship to the modal mu-calculus, they are used in practice for the model-checking and synthesis problems of the mu-calculus and related temporal logics like LTL and CTL. Solving parity games is a compelling complexity-theoretic problem, as the problem lies in the intersection of UP and co-UP and is believed to admit a polynomial-time solution, motivating researchers either to find such a solution or to establish superpolynomial lower bounds for existing algorithms, thereby improving our understanding of parity games. We present a parameterized parity game called the Two Counters game, which provides an exponential lower bound for a wide range of attractor-based parity game solving algorithms. We are the first to provide an exponential lower bound for priority promotion with the delayed promotion policy, and the first to provide such a lower bound for tangle learning.
    Comment: In Proceedings GandALF 2019, arXiv:1909.0597

    Priority Promotion with Parysian Flair

    We develop an algorithm that combines the advantages of priority promotion - one of the leading approaches to solving large parity games in practice - with the quasi-polynomial time guarantees offered by Parys' algorithm. Hybridising these algorithms is both natural and difficult, as they generalise the classic recursive algorithm in different ways that appear to be irreconcilable: while the promotion transcends the call structure, the guarantees change on each level. We show that an interface that respects both is not only effective, but also efficient.

    Register Games

    The complexity of parity games is a long-standing open problem that saw a major breakthrough in 2017 when two quasi-polynomial algorithms were published. This article presents a third, independent approach to solving parity games in quasi-polynomial time, based on the notion of a register game, a parameterised variant of a parity game. The analysis of register games leads to a quasi-polynomial algorithm for parity games, a polynomial algorithm for restricted classes of parity games, and a novel measure of complexity, the register index, which aims to capture the combined complexity of the priority assignment and the underlying game graph. We further present a translation of alternating parity word automata into alternating weak automata with only a quasi-polynomial increase in size, based on register games; this improves on the previous exponential translation. We also use register games to investigate the parity index hierarchy: while for words the index hierarchy of alternating parity automata collapses to the weak level, and for trees it is strict, for structures between trees and words it collapses logarithmically, in the sense that any parity tree automaton of size n is equivalent, on these particular classes of structures, to an automaton with a number of priorities logarithmic in n.

    Data Collection and Quality Challenges in Deep Learning: A Data-Centric AI Perspective

    Data-centric AI is at the center of a fundamental shift in software engineering where machine learning becomes the new software, powered by big data and computing infrastructure. Here, software engineering needs to be rethought, with data becoming a first-class citizen on par with code. One striking observation is that a significant portion of the machine learning process is spent on data preparation. Without good data, even the best machine learning algorithms cannot perform well. As a result, data-centric AI practices are now becoming mainstream. Unfortunately, many datasets in the real world are small, dirty, biased, and even poisoned. In this survey, we study the research landscape for data collection and data quality, primarily for deep learning applications. Data collection is important because recent deep learning approaches need less feature engineering but instead require large amounts of data. For data quality, we study data validation, cleaning, and integration techniques. Even if the data cannot be fully cleaned, we can still cope with imperfect data during model training by using robust training techniques. In addition, while bias and fairness have been less studied in traditional data management research, these issues have become essential topics in modern machine learning applications. We thus study fairness measures and unfairness mitigation techniques that can be applied before, during, or after model training. We believe that the data management community is well poised to solve these problems.
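
    As a concrete instance of the fairness measures the survey covers, here is a minimal sketch of the demographic parity difference; the function name and toy data are assumptions for illustration, not the survey's API.

        import numpy as np

        def demographic_parity_difference(y_pred, group):
            """Absolute gap in positive-prediction rates between two groups (0 means parity)."""
            y_pred, group = np.asarray(y_pred), np.asarray(group)
            return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

        # Toy example: binary predictions for eight people, four per (hypothetical) group.
        y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
        group = [0, 0, 0, 0, 1, 1, 1, 1]
        print(demographic_parity_difference(y_pred, group))  # |0.75 - 0.25| = 0.5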

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, and CPS, hardware, industrial applications.