
    Complexity of Bradley-Manna-Sipma Lexicographic Ranking Functions

    In this paper we turn the spotlight on a class of lexicographic ranking functions introduced by Bradley, Manna and Sipma in a seminal CAV 2005 paper, and establish for the first time the complexity of some problems involving the inference of such functions for linear-constraint loops (without precondition). We show that finding such a function, if one exists, can be done in polynomial time in a way which is sound and complete when the variables range over the rationals (or reals). We show that when the variables range over the integers, the problem is harder: deciding the existence of a ranking function is coNP-complete. Next, we study the problem of minimizing the number of components in the ranking function (a.k.a. the dimension). This number is interesting in contexts like computing iteration bounds and loop parallelization. Surprisingly, and unlike the situation for some other classes of lexicographic ranking functions, we find that even deciding whether a two-component ranking function exists is harder than the unrestricted problem: NP-complete over the rationals and Sigma^P_2-complete over the integers. Comment: Technical report for a corresponding CAV'15 paper.
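
    To make the objects of study concrete, the sketch below (an illustration only, not the paper's polynomial-time procedure) asks an SMT solver for the coefficients of a single linear ranking component for a made-up loop over the rationals; a Bradley-Manna-Sipma lexicographic ranking function is a tuple of such components. The z3-solver Python package and the particular loop are assumptions of the example.

    from z3 import Reals, Solver, ForAll, Implies, And, sat

    # Toy loop over the rationals:  while x >= 0 and y >= 1:  x := x - y
    x, y, xp, yp = Reals('x y xp yp')
    a, b, c = Reals('a b c')                  # template f(x, y) = a*x + b*y + c

    trans = And(x >= 0, y >= 1, xp == x - y, yp == y)   # guard + update
    f, fp = a * x + b * y + c, a * xp + b * yp + c

    s = Solver()
    # A single ranking component must be bounded below on the guard and
    # decrease by at least 1 on every transition of the loop.
    s.add(ForAll([x, y, xp, yp], Implies(trans, And(f >= 0, f - fp >= 1))))

    if s.check() == sat:
        print("coefficients:", s.model())     # e.g. a = 1, b = 0, c = 0
    else:
        print("no single linear component; a lexicographic tuple is needed")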

    Logics for Unranked Trees: An Overview

    Labeled unranked trees are used as a model of XML documents, and logical languages for them have been studied actively over the past several years. Such logics have different purposes: some are better suited for extracting data, some for expressing navigational properties, and some make it easy to relate complex properties of trees to the existence of tree automata for those properties. Furthermore, logics differ significantly in their model-checking properties, their automata models, and their behavior on ordered and unordered trees. In this paper we present a survey of logics for unranked trees.
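
    As a small illustration of the data model (not taken from the survey), the sketch below represents labeled unranked trees, in which a node may have any number of children, and checks one navigational property of the kind such logics can express; the labels and the property are invented for the example.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        label: str
        children: List["Node"] = field(default_factory=list)  # unranked: any arity

    def every_item_has_price(t: Node) -> bool:
        # "Every node labeled 'item' has at least one child labeled 'price'."
        ok_here = t.label != "item" or any(c.label == "price" for c in t.children)
        return ok_here and all(every_item_has_price(c) for c in t.children)

    doc = Node("catalog", [
        Node("item", [Node("name"), Node("price")]),
        Node("item", [Node("name")]),          # violates the property
    ])
    print(every_item_has_price(doc))           # False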

    Improving Strategies via SMT Solving

    We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations, and (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. For that we extend the strategy improvement algorithm of [Gawlitza and Seidl, 2007]. If we applied their method directly, we would have to solve an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). For evaluating strategies we use linear programming. Our algorithm has low polynomial space complexity and, on contrived worst-case examples, performs exponentially many strategy improvement steps; this is unsurprising, since we show that the associated abstract reachability problem is Pi^p_2-complete.
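
    For background on what "inductive invariant" means here, the sketch below (an illustration only, not the authors' strategy-improvement algorithm) uses SMT over real linear arithmetic to check whether a candidate bound from an interval-style template is inductive for a one-variable loop; the z3-solver package and the loop are assumptions of the example, and the paper's contribution is to compute the least such invariant without enumerating candidates.

    from z3 import Real, Solver, And, Implies, Not, unsat

    x, xp = Real('x'), Real('xp')

    def inductive(bound):
        """Is the template invariant x <= bound inductive for
           x := 0; while x <= 9: x := x + 1 ?"""
        inv, invp = x <= bound, xp <= bound
        init = x == 0
        step = And(x <= 9, xp == x + 1)
        s = Solver()
        # Inductive iff (init => inv) and (inv and step => inv') are valid,
        # i.e. the negation of their conjunction is unsatisfiable.
        s.add(Not(And(Implies(init, inv), Implies(And(inv, step), invp))))
        return s.check() == unsat

    print(inductive(9))    # False: from x = 9 the loop still steps to x = 10
    print(inductive(10))   # True: 10 is the least inductive bound in this template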

    The Complexity of Computing Minimal Unidirectional Covering Sets

    Given a binary dominance relation on a set of alternatives, a common thread in the social sciences is to identify subsets of alternatives that satisfy certain notions of stability. Examples can be found in areas as diverse as voting theory, game theory, and argumentation theory. Brandt and Fischer [BF08] proved that it is NP-hard to decide whether an alternative is contained in some inclusion-minimal upward or downward covering set. For both problems, we raise this lower bound to the Theta_{2}^{p} level of the polynomial hierarchy and provide a Sigma_{2}^{p} upper bound. Relatedly, we show that a variety of other natural problems regarding minimal or minimum-size covering sets are hard or complete for NP, coNP, or Theta_{2}^{p}. An important consequence of our results is that neither minimal upward nor minimal downward covering sets (even when guaranteed to exist) can be computed in polynomial time unless P=NP. This sharply contrasts with Brandt and Fischer's result that minimal bidirectional covering sets (i.e., sets that are both minimal upward and minimal downward covering sets) are polynomial-time computable. Comment: 27 pages, 7 figures.
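
    The covering notions are taken as given in the abstract; the brute-force sketch below illustrates them on a toy dominance relation. The definitions in the code are paraphrased from the covering-set literature rather than quoted from the paper, so treat them, like the example relation itself, as assumptions.

    from itertools import product

    alternatives = ["a", "b", "c", "d"]
    dominates = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}  # toy relation

    def dominators(x): return {z for z in alternatives if (z, x) in dominates}
    def dominated(x):  return {z for z in alternatives if (x, z) in dominates}

    def upward_covers(x, y):
        # x dominates y and every alternative dominating x also dominates y
        return (x, y) in dominates and dominators(x) <= dominators(y)

    def downward_covers(x, y):
        # x dominates y and every alternative dominated by y is dominated by x
        return (x, y) in dominates and dominated(y) <= dominated(x)

    for x, y in product(alternatives, repeat=2):
        if x != y and upward_covers(x, y):
            print(f"{x} upward covers {y}")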

    Biabduction (and related problems) in array separation logic

    We investigate array separation logic (ASL), a variant of symbolic-heap separation logic in which the data structures are either pointers or arrays, i.e., contiguous blocks of memory. This logic provides a language for compositional memory safety proofs of array programs. We focus on the biabduction problem for this logic, which has been established as the key to automatic specification inference at the industrial scale. We present an NP decision procedure for biabduction in ASL, and we also show that the problem of finding a consistent solution is NP-hard. Along the way, we study satisfiability and entailment in ASL, giving decision procedures and complexity bounds for both problems. We show satisfiability to be NP-complete, and entailment to be decidable with high complexity. The surprising fact that biabduction is simpler than entailment stems, as we show, from the element of choice over biabduction solutions, which enables us to dramatically reduce the search space.
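
    To give a flavour of the arithmetic reasoning involved, the sketch below (not the paper's decision procedure) checks satisfiability of a tiny symbolic heap of two array blocks by encoding non-emptiness of the blocks and their separation as linear integer constraints; the z3-solver package and the array(lo, hi) reading of blocks are assumptions of the example.

    from z3 import Ints, Solver, And, Or, sat

    # Symbolic heap  array(x, x + n - 1) * array(y, y + m - 1)  with pure part
    # n >= 1 and m >= 1: each block is a nonempty contiguous region, and the
    # separating conjunction forces the two regions to be disjoint.
    x, y, n, m = Ints('x y n m')

    pure = And(n >= 1, m >= 1)
    disjoint = Or(x + n - 1 < y, y + m - 1 < x)  # one block lies entirely before the other

    s = Solver()
    s.add(pure, disjoint)
    print(s.check() == sat)   # True, e.g. x = 0, n = 1, y = 1, m = 1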

    Opacity Issues in Games with Imperfect Information

    We study in depth the class of games with opacity condition, which are two-player games with imperfect information in which only one of the players has imperfect information, and where the winning condition relies on the information he has along the play. These games are relevant to security aspects of computing systems: a play is opaque whenever the player who has imperfect information never "knows" for sure that the current position is one of the distinguished "secret" positions. We study the problems of deciding the existence of a winning strategy for each player, and call them the opacity-violate problem and the opacity-guarantee problem. Focusing on the player with perfect information is new in the field of imperfect-information games, because under classical winning conditions it amounts to solving the underlying perfect-information game. We establish the EXPTIME-completeness of both above-mentioned problems, showing that our winning condition introduces a complexity gap for the player with perfect information, and we exhibit the relevant opacity-verify problem, which noticeably generalizes approaches considered in the literature for opacity analysis in discrete-event systems. In the case of blindfold games, this problem relates to the two initial ones, yielding the determinacy of blindfold games with opacity condition and the PSPACE-completeness of the three problems. Comment: In Proceedings GandALF 2011, arXiv:1106.081
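
    The central object behind the opacity condition is the knowledge set of the imperfectly informed player. The sketch below (an illustration, not the paper's EXPTIME construction) tracks that set in a blindfold setting, where the player observes only the actions played, and reports an opacity violation as soon as every position he considers possible is secret; the arena is invented for the example.

    from typing import Dict, FrozenSet, Tuple

    positions = {0, 1, 2, 3}
    secret    = {2, 3}
    # transitions[(position, action)] -> set of possible successor positions
    transitions: Dict[Tuple[int, str], FrozenSet[int]] = {
        (0, "a"): frozenset({1, 2}), (0, "b"): frozenset({3}),
        (1, "a"): frozenset({2}),    (2, "a"): frozenset({2}),
        (3, "a"): frozenset({3}),
    }

    def belief_after(play, initial=frozenset({0})):
        # Knowledge set: all positions consistent with the actions played so far.
        belief = initial
        for action in play:
            belief = frozenset(q for p in belief
                                 for q in transitions.get((p, action), frozenset()))
        return belief

    for play in (["a"], ["b"], ["a", "a"]):
        b = belief_after(play)
        print(play, set(b), "opaque" if not b <= secret else "opacity violated")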

    On the Complexity of Scheduling in Wireless Networks

    We consider the problem of throughput-optimal scheduling in wireless networks subject to interference constraints. We model the interference using a family of K-hop interference models, under which no two links within a K-hop distance can successfully transmit at the same time. For a given K, we can obtain a throughput-optimal scheduling policy by solving the well-known maximum weighted matching problem. We show that for K > 1 the resulting problems are NP-hard and cannot be approximated within a factor that grows polynomially with the number of nodes. Interestingly, for geometric unit-disk graphs, which can be used to describe a wide range of wireless networks, the problems admit polynomial-time approximation schemes within a factor arbitrarily close to 1. In these network settings, we also show that a simple greedy algorithm can provide a 49-approximation, and the maximal matching scheduling policy, which can be easily implemented in a distributed fashion, achieves a guaranteed fraction of the capacity region for all K. The geometric constraints are crucial to obtain these throughput guarantees. These results are encouraging as they suggest that one can develop low-complexity distributed algorithms to achieve near-optimal throughput for a wide range of wireless networks.
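
    The greedy and maximal-matching policies mentioned above are easy to state in code. The sketch below (illustrative only, restricted to the 1-hop, node-exclusive interference model, whereas the paper's guarantees concern geometric graphs and general K) schedules the heaviest links whose endpoints are still free, which yields a maximal weighted matching of the link graph; the example topology and weights are made up.

    def greedy_schedule(links):
        """links: list of (u, v, weight); returns a conflict-free schedule
        under 1-hop interference (no two scheduled links share a node)."""
        scheduled, used_nodes = [], set()
        for u, v, w in sorted(links, key=lambda l: l[2], reverse=True):
            if u not in used_nodes and v not in used_nodes:
                scheduled.append((u, v, w))
                used_nodes.update((u, v))
        return scheduled

    links = [("A", "B", 5.0), ("B", "C", 4.0), ("C", "D", 3.0), ("A", "D", 2.0)]
    print(greedy_schedule(links))   # [('A', 'B', 5.0), ('C', 'D', 3.0)]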

    Natural Killer Cell Mediated Cytotoxic Responses in the Tasmanian Devil

    The Tasmanian devil (Sarcophilus harrisii), the world's largest marsupial carnivore, is under threat of extinction following the emergence of an infectious cancer. Devil facial tumour disease (DFTD) is spread between Tasmanian devils during biting. The disease is consistently fatal and devils succumb without developing a protective immune response. The aim of this study was to determine if Tasmanian devils were capable of forming cytotoxic antitumour responses and developing antibodies against DFTD cells and foreign tumour cells. The two Tasmanian devils immunised with irradiated DFTD cells did not form cytotoxic or humoral responses against DFTD cells, even after multiple immunisations. However, following immunisation with xenogeneic K562 cells, devils did produce cytotoxic responses and antibodies against this foreign tumour cell line. The cytotoxicity appeared to occur through the activity of natural killer (NK) cells in an antibody-dependent manner. Classical NK cell responses, such as innate killing of DFTD and foreign cancer cells, were not observed. Cells with an NK-like phenotype comprised approximately 4 percent of peripheral blood mononuclear cells. The results of this study suggest that Tasmanian devils have NK cells with functional cytotoxic pathways. Although devil NK cells do not directly recognise DFTD cancer cells, the development of antibody-dependent cell-mediated cytotoxicity presents a potential pathway to induce cytotoxic responses against the disease. These findings have positive implications for future DFTD vaccine research.