11 research outputs found

    Towards Energy Consumption Verification via Static Analysis

    Full text link
    In this paper we leverage an existing general framework for resource usage verification and specialize it for verifying energy consumption specifications of embedded programs. Such specifications can include both lower and upper bounds on energy usage, and they can express intervals within which energy usage is to be certified to be within such bounds. The bounds of the intervals can be given in general as functions on input data sizes. Our verification system can prove whether such energy usage specifications are met or not. It can also infer the particular conditions under which the specifications hold. To this end, these conditions are also expressed as intervals of functions of input data sizes, such that a given specification can be proved for some intervals but disproved for others. The specifications themselves can also include preconditions expressing intervals for input data sizes. We report on a prototype implementation of our approach within the CiaoPP system for the XC language and XS1-L architecture, and illustrate with an example how embedded software developers can use this tool, in particular for determining values for program parameters that ensure meeting a given energy budget while minimizing the loss in quality of service. Comment: Presented at HIP3ES, 2015 (arXiv:1501.03064).
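    A rough sketch of the kind of check such a verifier performs (this is not the CiaoPP implementation; the bound functions and numbers below are invented for illustration): the specification and the analysis results are both intervals of functions of the input data size n, and the specification is proved for a given size when the inferred interval is contained in the specified one, disproved when the two intervals are disjoint, and left undetermined otherwise.

```python
# Hypothetical sketch: checking an energy-usage specification whose lower and
# upper bounds are functions of the input data size n, against inferred
# lower/upper energy bound functions, over an interval of data sizes.

def spec_lower(n):      # specified minimum energy (made-up closed form)
    return 40 * n

def spec_upper(n):      # specified energy budget (made-up closed form)
    return 100 * n + 500

def inferred_lower(n):  # lower bound "inferred by the analysis" (made up)
    return 45 * n + 10

def inferred_upper(n):  # upper bound "inferred by the analysis" (made up)
    return 110 * n + 100

def spec_holds(n):
    """Proved if the inferred interval lies inside the specified one,
    disproved if the two intervals do not overlap, unknown otherwise."""
    if spec_lower(n) <= inferred_lower(n) and inferred_upper(n) <= spec_upper(n):
        return "proved"
    if inferred_upper(n) < spec_lower(n) or inferred_lower(n) > spec_upper(n):
        return "disproved"
    return "unknown"

def classify_sizes(sizes):
    """Split a precondition interval of data sizes by verification outcome."""
    result = {}
    for n in sizes:
        result.setdefault(spec_holds(n), []).append(n)
    return result

summary = {k: (min(v), max(v)) for k, v in classify_sizes(range(1, 200)).items()}
print(summary)   # with these made-up bounds: {'proved': (1, 40), 'unknown': (41, 199)}
```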

    Inferring Parametric Energy Consumption Functions at Different Software Levels: ISA vs. LLVM IR

    Get PDF
    The static estimation of the energy consumed by program executions is an important challenge, which has applications in program optimization and verification, and is instrumental in energy-aware software development. Our objective is to estimate such energy consumption in the form of functions on the input data sizes of programs. We have developed a tool for experimentation with static analysis which infers such energy functions at two levels, the instruction set architecture (ISA) and the intermediate code (LLVM IR) levels, and reflects them upwards to the higher source code level. This required the development of a translation from LLVM IR to an intermediate representation and its integration with existing components, a translation from ISA to the same representation, a resource analyzer, an ISA-level energy model, and a mapping from this model to LLVM IR. The approach has been applied to programs written in the XC language running on XCore architectures, but is general enough to be applied to other languages. Experimental results show that our LLVM IR level analysis is reasonably accurate (less than 6.4% average error vs. hardware measurements) and more powerful than analysis at the ISA level. This paper provides insights into the trade-off of precision versus analyzability at these levels.
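    As a hedged illustration of the shape of such output (the closed-form coefficients and the measurements below are made up, not the paper's data or tooling): an inferred energy function maps the input data size to an energy estimate, and accuracy can be summarized as the average relative error against hardware measurements.

```python
# Hypothetical sketch: a parametric energy function inferred at some program
# level (e.g. LLVM IR), expressed on the input data size n, and the average
# relative error of its predictions against (made-up) hardware measurements.

def inferred_energy_nj(n):
    # Invented closed form of the kind such an analysis produces:
    # energy in nanojoules as a function of the input data size n.
    return 3.2 * n * n + 41.0 * n + 850.0

# Made-up (input size, measured energy) pairs standing in for hardware readings.
measurements = [(8, 1460.0), (32, 5620.0), (128, 58900.0), (512, 880000.0)]

def average_relative_error(predict, samples):
    errors = [abs(predict(n) - measured) / measured for n, measured in samples]
    return sum(errors) / len(errors)

print(f"average error: {100 * average_relative_error(inferred_energy_nj, measurements):.1f}%")
```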

    Incremental and Modular Context-sensitive Analysis

    Full text link
    Context-sensitive global analysis of large code bases can be expensive, which can make its use impractical during software development. However, there are many situations in which modifications are small and isolated within a few components, and it is desirable to reuse previous analysis results as much as possible. This has been achieved to date through incremental global analysis fixpoint algorithms that achieve cost reductions at fine levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs, nor are they designed to take advantage of modular structures. This paper describes, implements, and evaluates an algorithm that performs efficient context-sensitive analysis incrementally on modular partitions of programs. The experimental results show that the proposed modular algorithm achieves significant improvements, in both time and memory consumption, when compared to existing non-modular, fine-grain incremental analysis techniques. Furthermore, thanks to the proposed inter-modular propagation of analysis information, our algorithm also outperforms traditional modular analysis even when analyzing from scratch. Comment: 56 pages, 27 figures. To be published in Theory and Practice of Logic Programming. v3 corresponds to the extended version of the ICLP2018 Technical Communication. v4 is the revised version submitted to Theory and Practice of Logic Programming. v5 (this one) is the final author version to be published in TPLP.
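    A toy sketch of the incremental, modular idea (invented module names and a stand-in "analysis"; this is not the paper's fixpoint algorithm): per-module summaries are cached and, after an edit, re-analysis is driven by a worklist that only revisits a module when one of its imported summaries has actually changed.

```python
# Toy sketch: cached per-module summaries plus change-driven re-analysis.
# The "analysis" just hashes source text together with imported summaries,
# standing in for a real context-sensitive abstract interpretation.

deps = {                      # module -> modules it imports (invented)
    "main":   ["parser", "engine"],
    "engine": ["utils"],
    "parser": ["utils"],
    "utils":  [],
}
source = {m: f"original code of {m}" for m in deps}
summaries = {}                # cached per-module analysis results

def analyze(module):
    imported = tuple(sorted(str(summaries.get(d)) for d in deps[module]))
    return hash((source[module], imported))

def reanalyze(changed):
    importers = {m: [n for n in deps if m in deps[n]] for m in deps}
    worklist = list(changed)
    while worklist:
        m = worklist.pop()
        print("analyzing", m)
        new = analyze(m)
        if summaries.get(m) != new:      # summary changed: importers may be stale
            summaries[m] = new
            worklist.extend(importers[m])

reanalyze(deps)                    # initial analysis from scratch
source["parser"] = "edited code"   # small, isolated change
reanalyze(["parser"])              # only parser and (transitively) main rerun
```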

    Useful Open Call-By-Need

    Get PDF
    This paper studies useful sharing, a sophisticated optimization for λ-calculi, in the context of call-by-need evaluation in the presence of open terms. Useful sharing turns out to be harder in call-by-need than in call-by-name or call-by-value, because call-by-need evaluates inside environments, making it harder to specify when a substitution step is useful. We isolate the key concepts involved and prove the correctness and the completeness of useful sharing in this setting.
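    A very loose illustration of the general usefulness criterion the abstract alludes to (simplified far beyond the paper's call-by-need setting; the term representation and the test below are invented): a substitution step counts as useful only when it can contribute to creating a beta-redex, for instance when the occurrence being replaced sits in applied position and its environment entry is an abstraction.

```python
# Naive illustration (not the paper's calculus): decide whether replacing a
# variable occurrence by its environment entry is "useful", i.e. whether the
# replacement can create a beta-redex.

from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

def useful_to_substitute(occurrence_applied: bool, definiens) -> bool:
    """Simplified usefulness test: substituting is useful only if the
    occurrence is in applied position and the environment binds it to an
    abstraction, so that substitution creates a beta-redex."""
    return occurrence_applied and isinstance(definiens, Lam)

env = {"x": Lam("y", Var("y")), "z": Var("w")}

print(useful_to_substitute(True, env["x"]))   # True:  (x a) with x bound to \y.y
print(useful_to_substitute(True, env["z"]))   # False: (z a) with z bound to a variable
print(useful_to_substitute(False, env["x"]))  # False: x in argument position
```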

    The Verified CakeML Compiler Backend

    Get PDF
    The CakeML compiler is, to the best of our knowledge, the most realistic verified compiler for a functional programming language to date. The architecture of the compiler, a sequence of intermediate languages through which high-level features are compiled away incrementally, enables verification of each compilation pass at an appropriate level of semantic detail. Parts of the compiler's implementation resemble mainstream (unverified) compilers for strict functional languages, and it supports several important features and optimisations. These include efficient curried multi-argument functions, configurable data representations, efficient exceptions, register allocation, and more. The compiler produces machine code for five architectures: x86-64, ARMv6, ARMv8, MIPS-64, and RISC-V. The generated machine code contains the verified runtime system, which includes a verified generational copying garbage collector and a verified arbitrary-precision arithmetic (bignum) library. In this paper we present the overall design of the compiler backend, including its 12 intermediate languages. We explain how the semantics and proofs fit together, and provide detail on how the compiler has been bootstrapped inside the logic of a theorem prover. The entire development has been carried out within the HOL4 theorem prover.
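    A toy sketch of the backend's overall shape (nothing like the real HOL4 development; the mini-language and passes are invented): the compiler is a chain of small passes between intermediate representations, and each pass comes with a semantics-preservation guarantee. In CakeML that guarantee is a machine-checked theorem; the sketch below can only test preservation dynamically on sample inputs.

```python
# Toy sketch: a backend as a chain of small passes, each checked (not proved)
# to preserve the semantics of a tiny accumulator language.

# A "program" is a list of (op, constant) pairs acting on one accumulator.
def run(program, start=0):
    acc = start
    for op, k in program:
        acc = acc + k if op == "add" else acc * k
    return acc

def constant_fold(program):
    """Pass 1: merge adjacent 'add' instructions."""
    out = []
    for op, k in program:
        if out and op == "add" and out[-1][0] == "add":
            out[-1] = ("add", out[-1][1] + k)
        else:
            out.append((op, k))
    return out

def strip_identities(program):
    """Pass 2: drop 'add 0' and 'mul 1' instructions."""
    return [(op, k) for op, k in program
            if not (op == "add" and k == 0) and not (op == "mul" and k == 1)]

PASSES = [constant_fold, strip_identities]

def compile_program(program):
    for compile_pass in PASSES:                  # passes compose left to right
        before, program = program, compile_pass(program)
        # dynamic stand-in for a per-pass semantics-preservation theorem
        assert all(run(before, s) == run(program, s) for s in (0, 1, 7))
    return program

src = [("add", 2), ("add", 0), ("add", 3), ("mul", 1), ("mul", 4)]
print(compile_program(src))              # [('add', 5), ('mul', 4)]
print(run(src, 5), run(compile_program(src), 5))   # 40 40
```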

    Análisis de recursos de programas enteros y abstractos

    Get PDF
    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Sistemas Informáticos y de Computación, defended on 27-05-2022. Since the beginning of automated computing in the middle of the last century, the development of computer science has been tied to its growing importance in all areas of today's society. The inclusion of computing processes in everyday life and, in particular, in critical situations, cannot rest only on the production of hardware and software; it also requires the analysis and verification of all their components. While hardware analysis is crucial for building and maintaining the computation infrastructure, since it can detect or predict components that may behave incorrectly, software analysis focuses on the behavior of computer programs, addressing properties such as security, correctness, or optimality. Depending on the type of analysis applied to the software, we can detect potential vulnerabilities in the code, find incorrect specifications, apply optimizations based on the maximum and minimum cost of the programs, or calculate the resource consumption of a program.

    Probabilistic program analysis

    Get PDF