45 research outputs found

    Parameter Compilation

    In resolving instances of a computational problem, if multiple instances of interest share a feature in common, it may be fruitful to compile this feature into a format that allows for more efficient resolution, even if the compilation is relatively expensive. In this article, we introduce a formal framework for classifying problems according to their compilability. The basic object in our framework is that of a parameterized problem, which here is a language along with a parameterization---a map which provides, for each instance, a so-called parameter on which compilation may be performed. Our framework is positioned within the paradigm of parameterized complexity, and our notions are relatable to established concepts in the theory of parameterized complexity. Indeed, we view our framework as playing a unifying role, integrating parameterized complexity and compilability theory.
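
    As a minimal sketch of the central definition above, in standard parameterized-complexity notation (the computability requirement on the map is an assumption not stated in the abstract):

        % A parameterized problem is a language together with a parameterization.
        \[
          (Q, \kappa), \qquad Q \subseteq \Sigma^{*}, \qquad
          \kappa : \Sigma^{*} \to \Sigma^{*} \ \text{polynomial-time computable},
        \]
        % where \kappa(x) is the parameter of instance x, i.e., the part of the
        % input on which (possibly expensive) compilation may be performed.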

    On the Complexity of Existential Positive Queries

    We systematically investigate the complexity of model checking the existential positive fragment of first-order logic. In particular, for a set of existential positive sentences, we consider model checking where the sentence is restricted to fall into the set; a natural question is then to classify which sentence sets are tractable and which are intractable. With respect to fixed-parameter tractability, we give a general theorem that reduces this classification question to the corresponding question for primitive positive logic, for a variety of representations of structures. This general theorem allows us to deduce that an existential positive sentence set having bounded arity is fixed-parameter tractable if and only if each sentence is equivalent to one in bounded-variable logic. We then use the lens of classical complexity to study these fixed-parameter tractable sentence sets. We show that such a set can be NP-complete, and consider the length needed by a translation from sentences in such a set to bounded-variable logic; we prove superpolynomial lower bounds on this length using the theory of compilability, obtaining an interesting type of formula size lower bound. Overall, the tools, concepts, and results of this article set the stage for the future consideration of the complexity of model checking on more expressive logics.
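
    To illustrate the bounded-variable condition in this classification, a standard example (not taken from the article): a length-3 path query naively uses four variables, but is equivalent to a sentence that reuses only two of them.

        % Naive form, four variables:
        \[ \exists x_{1} \exists x_{2} \exists x_{3} \exists x_{4}\,
           \bigl( E(x_{1},x_{2}) \wedge E(x_{2},x_{3}) \wedge E(x_{3},x_{4}) \bigr) \]
        % Equivalent two-variable form, requantifying x and y:
        \[ \exists x \exists y\, \bigl( E(x,y) \wedge
           \exists x\, ( E(y,x) \wedge \exists y\, E(x,y) ) \bigr) \]
        % By the theorem summarized above, a bounded-arity sentence set is
        % fixed-parameter tractable exactly when each of its sentences admits
        % such a bounded-variable rewriting.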

    Parameterized Compilation Lower Bounds for Restricted CNF-formulas

    We show unconditional parameterized lower bounds in the area of knowledge compilation, more specifically on the size of circuits in decomposable negation normal form (DNNF) that encode CNF-formulas restricted by several graph width measures. In particular, we show that there are CNF formulas of size $n$ and modular incidence treewidth $k$ whose smallest DNNF-encoding has size $n^{\Omega(k)}$, and there are CNF formulas of size $n$ and incidence neighborhood diversity $k$ whose smallest DNNF-encoding has size $n^{\Omega(\sqrt{k})}$. These results complement recent upper bounds for compiling CNF into DNNF and strengthen---quantitatively and qualitatively---known conditional lower bounds for cliquewidth. Moreover, they show that, unlike for many graph problems, the parameters considered here behave significantly differently from treewidth.
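
    For context, the decomposability restriction that defines DNNF (the standard definition, not specific to this article) is:

        % A circuit in negation normal form is decomposable (DNNF) if, for every
        % \wedge-gate, its children mention pairwise disjoint sets of variables:
        \[ \mathrm{vars}(g_{i}) \cap \mathrm{vars}(g_{j}) = \emptyset
           \quad \text{for all children } g_{i} \neq g_{j} \text{ of an } \wedge\text{-gate}. \]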

    Quantum utility -- definition and assessment of a practical quantum advantage

    Several benchmarks have been proposed to holistically measure quantum computing performance. While some have focused on the end user's perspective (e.g., in application-oriented benchmarks), the real industrial value, which takes into account the physical footprint of the quantum processor, is not discussed. Different use cases come with different requirements for size, weight, power consumption, or data privacy, while demanding that certain thresholds of fidelity, speed, problem size, or precision be surpassed. This paper aims to incorporate these characteristics into a concept coined quantum utility, which demonstrates the effectiveness and practicality of quantum computers for various applications where quantum advantage -- defined as being faster, more accurate, or demanding less energy -- is achieved over a classical machine of similar size, weight, and cost. To successively pursue quantum utility, a level-based classification scheme -- constituted as application readiness levels (ARLs) -- as well as extended classification labels are introduced. These are applied, by way of demonstration, to different quantum applications from the fields of quantum chemistry, quantum simulation, quantum machine learning, and data analysis, followed by a brief discussion.

    Automatic Refactoring for Renamed Clones in Test Code

    Unit testing plays an essential role in software development and maintenance, especially in Test-Driven Development. Conventional unit tests, which have no input parameters, often exercise similar scenarios with small variations to achieve acceptable coverage; this often results in duplicated code in test suites. Test code duplication hinders comprehension of test cases and maintenance of test suites. Test refactoring is a potential tool for developers to control the technical debt that arises from test cloning. In this thesis, we present a novel tool, JTestParametrizer, for automatically refactoring method-scope renamed clones in test suites. We propose three levels of refactoring to parameterize type, data, and behaviour differences in clone pairs. Our technique works at the Abstract Syntax Tree level by extracting a parameterized template utility method and instantiating it with appropriate parameter values. We applied our technique to 5 open-source Java benchmark projects and conducted an empirical study on our results. Our technique examined 14,431 test methods in our benchmark projects and identified 415 renamed clone pairs as effective candidates for refactoring. On average, 65% of the effective candidates (268 clone pairs) in our test suites are refactorable using our technique. All of the refactored test methods are compilable, and 94% of them pass when executed as tests. We believe that our proposed refactorings generally improve code conciseness, reduce the amount of duplication, and make test suites easier to maintain and extend.
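
    As a rough sketch of the kind of refactoring described above (the class, method, and test names here are hypothetical, not output of JTestParametrizer): two test methods that are renamed clones differing only in their data are collapsed into one parameterized template utility method, which each original test then calls.

        import static org.junit.Assert.assertEquals;

        import java.util.ArrayDeque;
        import java.util.Deque;
        import org.junit.Test;

        public class StackTest {

            // Before: two renamed clones that differ only in the pushed value.
            @Test
            public void pushIntegerIncreasesSize() {
                Deque<Object> stack = new ArrayDeque<>();
                stack.push(42);
                assertEquals(1, stack.size());
            }

            @Test
            public void pushStringIncreasesSize() {
                Deque<Object> stack = new ArrayDeque<>();
                stack.push("hello");
                assertEquals(1, stack.size());
            }

            // After: the data difference becomes a parameter of a template
            // utility method; each original test is a one-line instantiation.
            private void assertPushIncreasesSize(Object element) {
                Deque<Object> stack = new ArrayDeque<>();
                stack.push(element);
                assertEquals(1, stack.size());
            }

            @Test
            public void pushIntegerIncreasesSizeRefactored() {
                assertPushIncreasesSize(42);
            }

            @Test
            public void pushStringIncreasesSizeRefactored() {
                assertPushIncreasesSize("hello");
            }
        }

    This sketch shows a data-level difference; the type- and behaviour-level refactorings mentioned above would lift a type or a code fragment, rather than a value, into the template's parameter.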

    Timed Transition Automata as Numerical Planning Domain

    A general technique for transforming a timed finite state automaton into an equivalent automated planning domain based on a numerical parameter model is introduced. Timed transition automata have many applications in control systems and agent models; they are used to describe sequential processes, where actions label automaton transitions subject to temporal constraints. The language of timed words accepted by a timed automaton, i.e., the possible sequences of system or agent behaviour, can be described in terms of an appropriate planning domain encapsulating the timed action patterns and constraints. The timed-word recognition problem is then posed as a planning problem where the goal is to reach a final state by a sequence of actions, which correspond to the timed symbols labelling the automaton transitions. The transformation is proved to be correct and complete, and it is linear in space and time in the automaton size. Experimental results show that the performance of the planning domain obtained by the transformation is scalable for real-world applications. A major advantage of the planning-based approach, besides solving the parsing problem, is that it represents plan recognition, plan synthesis, and plan optimisation problems in a single automated reasoning framework.
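
    As a rough, hypothetical illustration of the transition-to-action reading described above (the types, fields, and handling of time below are illustrative assumptions, not the paper's actual encoding), a timed transition can be treated as a planning action whose numeric precondition constrains a clock and whose effect moves the automaton to the target location:

        // Hypothetical sketch: a timed transition viewed as a numeric planning action.
        // The passage of time between actions is omitted for brevity.
        public class TimedTransitionDemo {

            // One automaton transition: read `symbol` from `from` to `to`,
            // allowed while the clock lies in [minTime, maxTime].
            record Transition(String from, String to, char symbol,
                              double minTime, double maxTime, boolean resetClock) {}

            // Planning state: current location plus the clock value.
            record State(String location, double clock) {}

            // Numeric precondition of the corresponding planning action.
            static boolean applicable(Transition t, State s) {
                return s.location().equals(t.from())
                        && s.clock() >= t.minTime()
                        && s.clock() <= t.maxTime();
            }

            // Effect of the corresponding planning action.
            static State apply(Transition t, State s) {
                return new State(t.to(), t.resetClock() ? 0.0 : s.clock());
            }

            public static void main(String[] args) {
                Transition readA = new Transition("q0", "q1", 'a', 0.0, 2.0, true);
                State s = new State("q0", 1.5);
                if (applicable(readA, s)) {
                    System.out.println("after reading 'a': " + apply(readA, s));
                }
            }
        }

    Recognizing a timed word then amounts to finding a plan, i.e., an applicable action sequence that ends in a final location, mirroring the goal condition described in the abstract.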