
    A general technique for automatically optimizing programs through the use of proof plans

    The use of {\em proof plans} -- formal patterns of reasoning for theorem proving -- to control the (automatic) synthesis of efficient programs from standard definitional equations is described. A general framework for synthesizing efficient programs, using tools such as higher-order unification, has been developed and holds promise for encapsulating an otherwise diverse, and often ad hoc, range of transformation techniques. A prototype system has been implemented. We illustrate the methodology by a novel means of effecting {\em constraint-based} program optimization through the use of proof plans for mathematical induction. Proof plans are used to control the (automatic) synthesis of functional programs, specified in a standard equational form ${\cal E}$, by using the proofs as programs principle. The goal is that the program extracted from a constructive proof of the specification is an optimization of that defined solely by ${\cal E}$. Thus the theorem-proving process is a form of program optimization, allowing for the construction of an efficient {\em target} program from the definition of an inefficient {\em source} program. The general technique for controlling the synthesis of efficient programs involves using ${\cal E}$ to specify the target program and then introducing a new sub-goal into the proof of that specification. Different optimizations are achieved by placing different characterizing restrictions on the form of this new sub-goal and hence on the subsequent proof. Meta-variables and higher-order unification are used in a technique called {\em middle-out reasoning} to circumvent eureka steps concerning, amongst other things, the identification of recursive data-types and unknown constraint functions. Such problems typically require user intervention.
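
    As a concrete illustration of the source/target relationship described above, consider the classic list-reversal example (our choice; the abstract names no particular program). In this minimal Python sketch, the definitional equations ${\cal E}$ give the quadratic source program, while the target a successful synthesis proof would extract uses an accumulator -- the kind of unknown constraint function that middle-out reasoning identifies without a user-supplied eureka step.

        # Source: defined solely by the equations E.
        #   rev([])    = []
        #   rev([h]+t) = rev(t) + [h]
        def rev(xs):
            if not xs:
                return []
            return rev(xs[1:]) + [xs[0]]   # O(n^2): repeated list concatenation

        # Target: the accumulator version extracted from the transformed
        # proof; the accumulator plays the role of the constraint function
        # found by middle-out reasoning.
        def rev_acc(xs, acc=None):
            if acc is None:
                acc = []
            if not xs:
                return acc
            return rev_acc(xs[1:], [xs[0]] + acc)   # O(n)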

    Recursive Program Optimization Through Inductive Synthesis Proof Transformation

    The research described in this paper involved developing transformation techniques which increase the efficiency of the original program, the source, by transforming its synthesis proof into one, the target, which yields a computationally more efficient algorithm. We describe a working proof transformation system which, by exploiting the duality between mathematical induction and recursion, employs the novel strategy of optimizing recursive programs by transforming inductive proofs. We compare and contrast this approach with the more traditional approaches to program transformation, and highlight the benefits of proof transformation with regard to search, correctness, automatability and generality.
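
    The induction/recursion duality the authors exploit can be seen in a small example (ours, not one taken from the paper): a program whose recursion mirrors an induction using two hypotheses is transformed, by strengthening the induction statement, into one whose recursion matches simple induction and runs in linear time. A hedged Python sketch:

        # Source: mirrors an induction that uses the hypotheses for n-1
        # and n-2, and runs in exponential time.
        def fib(n):
            if n < 2:
                return n
            return fib(n - 1) + fib(n - 2)

        # Target: proving the strengthened statement "compute the pair
        # (fib(n), fib(n+1))" corresponds to simple induction on n and
        # yields a linear-time program.
        def fib_pair(n):
            if n == 0:
                return (0, 1)
            a, b = fib_pair(n - 1)
            return (b, a + b)

        def fib_fast(n):
            return fib_pair(n)[0]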

    Optimizing compilation with preservation of structural code coverage metrics to support software testing

    Code-coverage-based testing is a widely used testing strategy with the aim of providing a meaningful decision criterion for the adequacy of a test suite. Code-coverage-based testing is also mandated for the development of safety-critical applications; for example, the DO-178B document requires the application of modified condition/decision coverage. One critical issue of code-coverage testing is that structural code coverage criteria are typically applied to source code, whereas the generated machine code may have a different code structure because of code optimizations performed by a compiler. In this work, we present the automatic calculation of coverage profiles describing which structural code-coverage criteria are preserved by which code optimization, independently of the concrete test suite. These coverage profiles make it possible to extend a compiler so that it preserves any given code-coverage criterion, by enabling only those code optimizations that preserve it. Furthermore, we describe the integration of these coverage profiles into the compiler GCC. With these coverage profiles, we answer the question of how much code optimization is possible without compromising the error-detection likelihood of a given test suite. Experimental results show that the performance cost of preserving structural code coverage in GCC is rather low.
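
    To make the coverage-profile mechanism concrete, here is a minimal Python sketch of the idea (the profile contents are illustrative placeholders, not the paper's measured results): each optimization carries the set of structural coverage criteria it preserves, and the driver enables only those optimizations whose profile contains the criterion the test suite relies on.

        # Hypothetical coverage profiles; which optimizations preserve which
        # criteria is exactly what the paper computes, so these entries are
        # placeholders for illustration only.
        COVERAGE_PROFILES = {
            "constant-folding": {"statement", "branch", "mcdc"},
            "dead-code-elimination": {"statement"},
            "loop-unswitching": {"statement", "branch"},
        }

        def optimizations_preserving(criterion):
            """Optimizations safe to enable for a given coverage criterion."""
            return [opt for opt, kept in COVERAGE_PROFILES.items()
                    if criterion in kept]

        print(optimizations_preserving("mcdc"))    # ['constant-folding']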

    Computational complexity of μ calculation

    The structured singular value μ measures the robustness of uncertain systems. Numerous researchers over the last decade have worked on developing efficient methods for computing μ. This paper considers the complexity of calculating μ with general mixed real/complex uncertainty in the framework of combinatorial complexity theory. In particular, it is proved that the μ recognition problem with either pure real or mixed real/complex uncertainty is NP-hard. This strongly suggests that it is futile to pursue exact methods for calculating μ of general systems with pure real or mixed uncertainty for other than small problems.
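
    For reference, the structured singular value the abstract refers to is standardly defined as follows (this definition is supplied here, not stated in the abstract), where ${\bf \Delta}$ denotes the prescribed block structure of the uncertainty and $\bar{\sigma}$ the maximum singular value:

        \mu_{\bf \Delta}(M) = \Bigl( \min \{ \bar{\sigma}(\Delta) :
            \Delta \in {\bf \Delta},\ \det(I - M\Delta) = 0 \} \Bigr)^{-1}

    with $\mu_{\bf \Delta}(M) = 0$ if no $\Delta \in {\bf \Delta}$ makes $I - M\Delta$ singular. The NP-hardness result means that no polynomial-time algorithm computes this quantity exactly for general real or mixed block structures unless P = NP.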