
    Quasi-interpretations a way to control resources

    This paper presents, in a reasoned way, our work on resource analysis by quasi-interpretations. The controlled resources are typically the runtime, the runspace, or the size of a result in a program execution. Quasi-interpretations make it possible to analyze a system's complexity. A quasi-interpretation is a numerical assignment which provides an upper bound on the computed functions and which is compatible with the program's operational semantics. The quasi-interpretation method offers several advantages: (i) it provides hints for optimizing an execution, (ii) it gives resource certificates, and (iii) finding quasi-interpretations is decidable for a broad class which is relevant for feasible computations. By combining the quasi-interpretation method with termination tools (here, term orderings), we obtained several characterizations of complexity classes, starting from Ptime and Pspace.
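    As a minimal illustrative sketch (not taken from the paper), consider addition on unary numbers; the assignment below is a quasi-interpretation in the sense described above, under the usual conventions that interpretations are monotone polynomials with the subterm property:

        % Hypothetical toy program (rewrite rules), not from the paper:
        %   add(0, y)    -> y
        %   add(s(x), y) -> s(add(x, y))
        \[
          \llbracket 0 \rrbracket = 0, \qquad
          \llbracket s \rrbracket(X) = X + 1, \qquad
          \llbracket \mathit{add} \rrbracket(X, Y) = X + Y
        \]
        % Compatibility with each rule l -> r, i.e. [[l]] >= [[r]]:
        %   [[add(s(x), y)]] = (X + 1) + Y >= (X + Y) + 1 = [[s(add(x, y))]]
        %   [[add(0, y)]]    = 0 + Y       >= Y           = [[y]]
        % Since each assignment is a monotone polynomial bounding its arguments,
        % the size of any value computed by add is bounded by a polynomial
        % (here X + Y) in the sizes of the inputs.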

    Quasi-friendly sup-interpretations

    In a previous paper, the sup-interpretation method was proposed as a new tool to control, by static analysis, the memory resources of first-order functional programs with pattern matching. Basically, a sup-interpretation provides an upper bound on the size of function outputs. In that earlier work, a criterion applicable to terminating as well as non-terminating programs was developed in order to bound the stack-frame size polynomially. In this paper, we suggest a new criterion which captures more algorithms computing values polynomially bounded in the size of the inputs. Since this work is related to quasi-interpretations, we compare the two notions and obtain two main results. The first is that, given a program, we have heuristics for finding a sup-interpretation when we consider polynomials of bounded degree. The second is a characterization of the sets of functions computable in polynomial time and in polynomial space.
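    As a hedged, minimal illustration (not from the paper): for the toy rules double(0) -> 0 and double(s(x)) -> s(s(double(x))) over unary numbers, the assignment below is a sup-interpretation in the above sense, since it bounds the size of every computed output:

        \[
          \theta(0) = 0, \qquad
          \theta(s)(X) = X + 1, \qquad
          \theta(\mathit{double})(X) = 2X
        \]
        % Whenever double(v) evaluates to a value w, the size of w satisfies
        %   |w| <= 2|v| <= theta(double)(|v|),
        % so theta(double) is a polynomial upper bound on the output size,
        % independently of whether other parts of the program terminate.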

    Resource control of object-oriented programs

    A sup-interpretation is a tool which provides an upper bound on the size of a value computed by some symbol of a program. Sup-interpretations have proved useful for dealing with the complexity of first-order functional programs; for instance, they make it possible to characterize all the functions bitwise computable in Alogtime. This paper is an attempt to adapt the framework of sup-interpretations to a fragment of object-oriented programs, including distinct encodings of numbers through the use of constructor symbols, loop and while constructs, and non-recursive methods with side effects. We give a criterion, called the brotherly criterion, which ensures that each brotherly program computes objects whose size is polynomially bounded by the sizes of the inputs.
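    Schematically (a hedged restatement of the guarantee above, not a new result), writing |o| for the size of an object o:

        \[
          |m(o_1, \dots, o_n)| \;\le\; P(|o_1|, \dots, |o_n|)
        \]
        % for every method m of a brotherly program, every tuple of input objects
        % o_1, ..., o_n, and some polynomial P depending only on the program.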

    Resource Control for Synchronous Cooperative Threads

    We develop new methods to statically bound the resources needed for the execution of systems of concurrent, interactive threads. Our study is concerned with a synchronous model of interaction based on cooperative threads whose execution proceeds in synchronous rounds called instants. Our contribution is a system of compositional static analyses that guarantees that each instant terminates and bounds the size of the values computed by the system as a function of the sizes of its parameters at the beginning of the instant. Our method generalises an approach designed for first-order functional languages that relies on a combination of standard termination techniques for term rewriting systems and an analysis of the size of the computed values based on the notion of quasi-interpretation. We show that these two methods can be combined to obtain an explicit polynomial bound on the resources needed for the execution of the system during an instant. As a second contribution, we introduce a virtual machine and a related bytecode, thus producing a precise description of the resources needed for the execution of a system. In this context, we present a suitable control-flow analysis that allows the static analyses for resource control to be formulated at the bytecode level.
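    Stated schematically (a hedged restatement of the bound described above, with hypothetical notation): writing p_1, ..., p_k for the parameters of the system at the beginning of an instant and |.| for size, the analysis yields a polynomial P such that every value v computed during that instant satisfies

        \[
          |v| \le P(|p_1|, \dots, |p_k|)
        \]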

    Synthesis of sup-interpretations: a survey

    In this paper, we survey the complexity of distinct methods that allow the programmer to synthesize a sup-interpretation, a function providing an upper bound on the size of the output values computed by a program. It is a static space-analysis tool that does not take time consumption into account. Although clearly related, sup-interpretation is independent of termination, since it only provides an upper bound on the terminating computations. First, we study some undecidable properties of sup-interpretations from a theoretical point of view. Next, we fix term rewriting systems as our computational model and show that a sup-interpretation can be obtained through the use of a well-known termination technique, polynomial interpretations. The drawback is that such a method only applies to total functions (strongly normalizing programs). To overcome this problem, we also study sup-interpretations through the notion of quasi-interpretation. Quasi-interpretations also suffer from a drawback, which lies in the subterm property; this property drastically restricts the shape of the considered functions. Again we overcome this problem, by introducing a new notion of interpretation mainly based on the dependency pair method. We study the decidability and complexity of the sup-interpretation synthesis problem for these three tools over sets of polynomials. Finally, we build on previous work on termination and runtime complexity to infer sup-interpretations.
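    A minimal worked example of the polynomial-interpretation route mentioned above (illustrative, not from the paper): for the rules double(0) -> 0 and double(s(x)) -> s(s(double(x))), the strictly monotone interpretation

        \[
          [0] = 1, \qquad [s](X) = X + 1, \qquad [\mathit{double}](X) = 3X
        \]
        % proves termination, since each rule strictly decreases the interpretation:
        %   [double(0)]    = 3      > 1       = [0]
        %   [double(s(x))] = 3X + 3 > 3X + 2  = [s(s(double(x)))]
        % Because constructor interpretations bound term size and interpretations
        % only decrease along reductions, theta(double)(X) = 3X also serves as a
        % sup-interpretation for double on this (strongly normalizing) program.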

    On Quasi-Interpretations, Blind Abstractions and Implicit Complexity

    Quasi-interpretations are a technique for guaranteeing complexity bounds on first-order functional programs: together with termination orderings, they give in particular a sufficient condition for a program to be executable in polynomial time, called here the P-criterion. We study properties of the programs satisfying the P-criterion in order to better understand its intensional expressive power. Given a program on binary lists, its blind abstraction is the nondeterministic program obtained by replacing lists by their lengths (natural numbers). A program is blindly polynomial if its blind abstraction terminates in polynomial time. We show that all programs satisfying a variant of the P-criterion are in fact blindly polynomial. We then give two extensions of the P-criterion: one obtained by relaxing the termination-ordering condition, and the other (the bounded-value property) giving a necessary and sufficient condition for a program to be executable in polynomial time with memoisation.
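    A hedged toy example of the blind-abstraction construction (not taken from the paper): consider a program counting the 1-bits of a binary list, and its abstraction on list lengths.

        % Original rules on binary lists:
        %   ones(nil)        -> 0
        %   ones(cons(0, l)) -> ones(l)
        %   ones(cons(1, l)) -> s(ones(l))
        % Blind abstraction: lists are replaced by their lengths, so nil becomes 0
        % and both cons(0, _) and cons(1, _) collapse to the successor s(_):
        \begin{align*}
          \mathit{Ones}(0)    &\to 0\\
          \mathit{Ones}(s(n)) &\to \mathit{Ones}(n)\\
          \mathit{Ones}(s(n)) &\to s(\mathit{Ones}(n))
        \end{align*}
        % The two rules sharing a left-hand side make the abstraction
        % nondeterministic, yet every reduction takes linearly many steps,
        % so ones is blindly polynomial in the sense defined above.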

    Formalizing Termination Proofs under Polynomial Quasi-interpretations

    Usual termination proofs for a functional program require checking all possible reduction paths. Due to an exponential gap between the height and the size of such a reduction tree, no naive formalization of termination proofs yields a connection to the polynomial complexity of the given program. We solve this problem by employing the notion of a minimal function graph, a set of pairs of a term and its normal form, which is defined as the least fixed point of a monotone operator. We show that termination proofs for programs that reduce under lexicographic path orders (LPOs for short) and are polynomially quasi-interpretable can be performed optimally in a weak fragment of Peano arithmetic. This yields an alternative proof of the fact that every function computed by an LPO-terminating, polynomially quasi-interpretable program is computable in polynomial space. The formalization is indeed optimal, since every polynomial-space computable function can be computed by such a program. The crucial observation is that inductive definitions of minimal function graphs under LPO-terminating programs can be approximated with transfinite induction along LPOs. (In Proceedings FICS 2015, arXiv:1509.0282)
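    As an illustrative instance of the class covered by this result (a standard-style example, not taken from the paper): the program below is LPO-terminating and admits a polynomial (max-based) quasi-interpretation, so by the result above it runs in polynomial space, even though it performs exponentially many recursive calls.

        \begin{align*}
          f(0, y)    &\to y\\
          f(s(x), y) &\to f(x, f(x, y))
        \end{align*}
        % LPO: the first argument strictly decreases, so f(s(x), y) >_lpo f(x, f(x, y)).
        % Quasi-interpretation (monotone, with the subterm property):
        %   [[0]] = 0,  [[s]](X) = X + 1,  [[f]](X, Y) = max(X, Y)
        % Compatibility: [[f(s(x), y)]] = max(X + 1, Y)
        %             >= max(X, max(X, Y)) = [[f(x, f(x, y))]].
        % The quasi-interpretation bounds value sizes polynomially, and together
        % with LPO termination this places f within polynomial space.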

    On Dollars and Deference: Agencies, Spending, and Economic Rights

    Agencies can change society not just by prescribing conduct, but also by spending money. The Obama administration gave us two powerful examples of this phenomenon. To secure widespread access to affordable health insurance and affordable higher education, the administration took actions that were not required by statutory text. These entitlements are built upon a scaffolding of aggressive agency statutory interpretations, not upon clear legislative commands. This Article uses these two examples as case studies for evaluating the institutional competence of the executive branch to underwrite large-scale positive economic entitlements on the basis of ambiguous statutory authority. Such agency-initiated schemes may help improve the economic wellbeing and enhance the economic opportunity of millions of Americans. But, as these case studies reflect, the risks of such agency action are considerable. First, when the executive branch gives money away, Article III standing requirements will weaken the check of judicial review on administrative action. Second, agency creation of schemes for protecting economic entitlements may result in political and even legal entrenchment that could complicate or obstruct future lawmakers’ ability to undo those agency decisions. Third, the initiation of broad-scale government spending programs entails society-wide redistributive trade-offs that neither individual agencies, nor the executive branch as a whole, can properly make. In sum, this form of executive-branch action may advance important interests—interests in health, education, and economic equality and opportunity. But it may also corrode values that are at least equally important—most notably, the power of Congress to control the current and future financial obligations of the United States.
