
    Incremental and Modular Context-sensitive Analysis

    Context-sensitive global analysis of large code bases can be expensive, which can make its use impractical during software development. However, there are many situations in which modifications are small and isolated within a few components, and it is desirable to reuse as much of the previous analysis results as possible. This has been achieved to date through incremental global analysis fixpoint algorithms that achieve cost reductions at fine levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs, nor are they designed to take advantage of modular structure. This paper describes, implements, and evaluates an algorithm that performs efficient context-sensitive analysis incrementally on modular partitions of programs. The experimental results show that the proposed modular algorithm achieves significant improvements, in both time and memory consumption, over existing non-modular, fine-grain incremental analysis techniques. Furthermore, thanks to the proposed inter-modular propagation of analysis information, our algorithm also outperforms traditional modular analysis even when analyzing from scratch.
    Comment: 56 pages, 27 figures. To be published in Theory and Practice of Logic Programming (TPLP). v3 corresponds to the extended version of the ICLP2018 Technical Communication. v4 is the revised version submitted to TPLP. v5 (this one) is the final author version to be published in TPLP.
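    The scheduling idea behind such incremental, module-aware analysis lends itself to a small worklist sketch. The following is purely illustrative, assuming a hypothetical analyze_module callback, a per-module summary, and a dependency map; it is not the paper's actual CiaoPP algorithm, only the core re-analysis policy: re-run analysis for edited modules, and for importers only when an exported summary actually changed.

```python
# Minimal sketch of inter-modular incremental re-analysis (illustrative;
# the callback, summaries, and dependency graph are assumptions, not the
# paper's implementation).
from collections import deque

def incremental_analysis(deps, analyze_module, summaries, changed):
    """deps: module -> set of modules that import it;
    analyze_module(m, summaries): recompute m's exported summary;
    summaries: results from the previous run, reused when unchanged;
    changed: modules edited since the last run."""
    worklist, queued = deque(changed), set(changed)
    while worklist:
        m = worklist.popleft()
        queued.discard(m)
        new_summary = analyze_module(m, summaries)
        if new_summary != summaries.get(m):   # interface info changed,
            summaries[m] = new_summary        # so importers must be revisited
            for imp in deps.get(m, ()):
                if imp not in queued:
                    queued.add(imp)
                    worklist.append(imp)
    return summaries
```

    Modules untouched by the edit and unaffected by changed summaries are never re-analyzed, which is where the reported time and memory savings come from.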

    A General Framework for Static Profiling of Parametric Resource Usage

    Traditional static resource analyses estimate the total resource usage of a program, without executing it. In this paper we present a novel resource analysis whose aim is instead the static profiling of accumulated cost, i.e., to discover, for selected parts of the program, an estimate or bound of the resource usage accumulated in each of those parts. Traditional resource analyses are parametric in the sense that the results can be functions on input data sizes. Our static profiling is also parametric, i.e., our accumulated cost estimates are also parameterized by input data sizes. Our proposal is based on the concept of cost centers and a program transformation that allows the static inference of functions that return bounds on these accumulated costs depending on input data sizes, for each cost center of interest. Such information is much more useful to the software developer than the traditional resource usage functions, as it allows identifying the parts of a program that should be optimized, because of their greater impact on the total cost of program executions. We also report on our implementation of the proposed technique using the CiaoPP program analysis framework, and provide some experimental results. This paper is under consideration for acceptance in TPLP.
    Comment: Paper presented at the 32nd International Conference on Logic Programming (ICLP 2016), New York City, USA, 16-21 October 2016, 22 pages, LaTeX
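    As a hedged illustration of how such output could be consumed (the bound functions and cost-center names below are hand-written assumptions; in the paper they are inferred statically by CiaoPP), one can view the analysis result as one cost function per cost center and rank centers by their share of the total cost to find optimization targets:

```python
# Illustrative use of per-cost-center bound functions (hypothetical values;
# the real bounds are inferred statically, not written by hand).
def rank_cost_centers(bounds, n):
    """Evaluate each cost center's upper bound at input size n and rank
    centers by their share of the total accumulated cost."""
    usage = {cc: f(n) for cc, f in bounds.items()}
    total = sum(usage.values())
    return sorted(((cc, c, c / total) for cc, c in usage.items()),
                  key=lambda t: -t[1])

# Assumed example: two cost centers of a quadratic sorting program.
bounds = {
    "partition": lambda n: n * n,  # steps accumulated in partition/3
    "append":    lambda n: 2 * n,  # steps accumulated in append/3
}
for cc, cost, share in rank_cost_centers(bounds, 1000):
    print(f"{cc}: <= {cost} steps ({share:.1%} of total)")
```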

    Towards Incremental and Modular Context-Sensitive Analysis

    This is an extended abstract of [I. Garcia-Contreras et al., 2018].

    Towards Energy Consumption Verification via Static Analysis

    In this paper we leverage an existing general framework for resource usage verification and specialize it for verifying energy consumption specifications of embedded programs. Such specifications can include both lower and upper bounds on energy usage, and they can express intervals within which energy usage is to be certified to be within such bounds. The bounds of the intervals can be given in general as functions on input data sizes. Our verification system can prove whether such energy usage specifications are met or not. It can also infer the particular conditions under which the specifications hold. To this end, these conditions are also expressed as intervals of functions of input data sizes, such that a given specification can be proved for some intervals but disproved for others. The specifications themselves can also include preconditions expressing intervals for input data sizes. We report on a prototype implementation of our approach within the CiaoPP system for the XC language and XS1-L architecture, and illustrate with an example how embedded software developers can use this tool, in particular for determining values for program parameters that ensure meeting a given energy budget while minimizing the loss in quality of service.
    Comment: Presented at HIP3ES, 2015 (arXiv:1501.03064)
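    The underlying interval logic can be sketched as follows. This is a toy numeric sweep over a few sizes, purely for illustration (the system reasons symbolically over functions of data sizes, and all names here are assumptions): a spec is proved at a size when the inferred energy interval fits inside the budget interval, and disproved when the two intervals are disjoint.

```python
# Hedged sketch of interval-based energy-specification checking.
def check_spec(inferred_lb, inferred_ub, spec_lb, spec_ub, sizes):
    """Classify each input size: spec proved, disproved, or unknown."""
    verdicts = {}
    for n in sizes:
        if spec_lb(n) <= inferred_lb(n) and inferred_ub(n) <= spec_ub(n):
            verdicts[n] = "proved"      # all executions within the budget
        elif inferred_ub(n) < spec_lb(n) or inferred_lb(n) > spec_ub(n):
            verdicts[n] = "disproved"   # every execution violates the spec
        else:
            verdicts[n] = "unknown"     # bounds straddle the spec interval
    return verdicts

# Assumed example: inferred energy in [3n, 5n] nJ; budget interval [0, 400] nJ.
print(check_spec(lambda n: 3 * n, lambda n: 5 * n,
                 lambda n: 0,     lambda n: 400, range(10, 120, 20)))
```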

    An Approach to Static Performance Guarantees for Programs with Run-time Checks

    Instrumenting programs for performing run-time checking of properties, such as regular shapes, is a common and useful technique that helps programmers detect incorrect program behaviors. This is especially true in dynamic languages such as Prolog. However, such run-time checks inevitably introduce run-time overhead (in execution time, memory, energy, etc.). Several approaches have been proposed for reducing such overhead, such as eliminating the checks that can statically be proved to always succeed, and/or optimizing the way in which the (remaining) checks are performed. However, there are cases in which it is not possible to remove all checks statically (e.g., open libraries which must check their interfaces, complex properties, unknown code, etc.) and in which, even after optimizations, these remaining checks still may introduce an unacceptable level of overhead. It is thus important for programmers to be able to determine the additional cost due to the run-time checks and compare it to some notion of admissible cost. The common practice used for estimating run-time checking overhead is profiling, which is not exhaustive by nature. Instead, we propose a method that uses static analysis to estimate such overhead, with the advantage that the estimations are functions parameterized by input data sizes. Unlike profiling, this approach can provide guarantees for all possible execution traces, and allows assessing how the overhead grows as the size of the input grows. Our method also extends an existing assertion verification framework to express "admissible" overheads, and statically and automatically checks whether the instrumented program conforms with such specifications. Finally, we present an experimental evaluation of our approach that suggests that our method is feasible and promising.
    Comment: 15 pages, 3 tables; submitted to ICLP'18, accepted as technical communication
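    A toy rendition of checking an "admissible overhead" specification follows. The cost functions are hand-written assumptions here, whereas the framework infers them statically; the check itself compares the instrumented program's cost against an admissible multiple of the uninstrumented cost across input sizes.

```python
# Illustrative admissible-overhead check (hypothetical names; the paper
# performs this statically over symbolic cost functions, not by sampling).
def overhead_within_budget(cost_plain, cost_checked, max_ratio, sizes):
    """True if run-time-check overhead stays within max_ratio at all sizes."""
    return all(cost_checked(n) <= max_ratio * cost_plain(n) for n in sizes)

# Assumed example: checks add a linear pass over the input on each call.
cost_plain   = lambda n: n * n          # steps of the uninstrumented program
cost_checked = lambda n: n * n + 4 * n  # steps with run-time checks enabled
print(overhead_within_budget(cost_plain, cost_checked, 1.10,
                             range(50, 500, 50)))
```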

    Tridimensional N, P-Codoped Carbon Sponges as Highly Selective Catalysts for Aerobic Oxidative Coupling of Benzylamine

    Two tridimensional N-doped porous carbon sponges (3DC-X) have been prepared by using cetyltrimethylammonium chloride (CTAC) and cetyltrimethylammonium bromide (CTAB) as soft templates and alginate to replicate the liquid crystals formed by CTA+ in water. Alginate is a filmogenic polysaccharide of natural origin with the ability to form nanometric, defect-free films around objects. Subsequent pyrolysis at 900 °C under an Ar flow of the resulting CTA+-polysaccharide assemblies yields 3DC-1 and 3DC-2, with N percentages of 0.48 and 0.36 wt % for the materials derived from CTAC and CTAB, respectively. Four further 3DC materials were obtained via pyrolysis of the adduct of phytic acid and chitosan, rendering amorphous, N- and P-codoped carbon samples (3DC-3 to 3DC-6). The six 3DC samples exhibit a large surface area (>650 m²·g⁻¹) and porosity, as determined by Ar adsorption. The catalytic activity of these materials in promoting the aerobic oxidation of benzylamine increases with specific surface area and doping, and is largest for 3DC-4, which achieves 73% benzylamine conversion to N-benzylidene benzylamine under solventless conditions at 70 °C in 5 h. Quenching studies and hot filtration tests indicate that 3DC-4 acts as a heterogeneous catalyst rather than an initiator, triggering the formation of hydroperoxyl and hydroxyl radicals as the main reactive oxygen species involved in the aerobic oxidation.
    Financial support by the Spanish Ministry of Science and Innovation (Severo Ochoa 2016 and RTI2018-890237-CO2-R1) and Generalitat Valenciana is gratefully acknowledged. A.P. thanks the Spanish Ministry for a Ramón y Cajal research associate contract. A.D. is thankful to the University Grants Commission, New Delhi, for awarding an Assistant Professorship through the Faculty Recharge Programme.
    Peng, L.; Garcia-Baldovi, H.; Dhakshinamoorthy, A.; Primo Arnau, A. M.; García Gómez, H. (2022). Tridimensional N, P-Codoped Carbon Sponges as Highly Selective Catalysts for Aerobic Oxidative Coupling of Benzylamine. ACS Omega. 7(13):11092-11100. https://doi.org/10.1021/acsomega.1c07179

    Inferring Energy Bounds via Static Program Analysis and Evolutionary Modeling of Basic Blocks

    The ever-increasing number and complexity of energy-bound devices (such as those used in Internet of Things applications, smart phones, and mission-critical systems) pose an important challenge for techniques to optimize their energy consumption and to verify that they will perform their function within the available energy budget. In this work we address this challenge from the software point of view and propose a novel parametric approach to estimating tight bounds on the energy consumed by program executions, practical enough for application to energy verification and optimization. Our approach divides a program into basic (branchless) blocks and estimates the maximal and minimal energy consumption of each block using an evolutionary algorithm. It then combines the obtained values according to the program control flow, using static analysis, to infer functions that give both upper and lower bounds on the energy consumption of the whole program and its procedures as functions on input data sizes. We have tested our approach on (C-like) embedded programs running on the XMOS hardware platform. However, our method is general enough to be applied to other microprocessor architectures and programming languages. The bounds obtained by our prototype implementation can be tight while remaining on the safe side of budgets in practice, as shown by our experimental evaluation.
    Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854). Improved version of the one presented at the HIP3ES 2016 workshop (v1): more experimental results (added benchmark to Table 1, added figure for new benchmark, added Table 3), improved Fig. 1, added Fig.
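    The two phases can be sketched conceptually as below. Everything here is a toy stand-in: measure_energy is a hypothetical per-block energy model (the real work measures blocks on XMOS hardware), the (1+1)-style evolutionary search is one plausible instance of "evolutionary algorithm", and the final combination assumes a single loop as the control flow.

```python
# Conceptual sketch: (1) evolutionary search for a block's maximal energy,
# (2) static combination of block values along the control flow.
import random

def evolve_block_bound(measure_energy, n_bits=16, generations=200, seed=0):
    """(1+1)-EA: mutate a candidate block input, keep it if energy grows."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(n_bits)]
    best_e = measure_energy(best)
    for _ in range(generations):
        cand = [b ^ (rng.random() < 1 / n_bits) for b in best]  # bit flips
        e = measure_energy(cand)
        if e >= best_e:
            best, best_e = cand, e
    return best_e

# Toy energy model: cost grows with the number of set input bits (assumption).
block_ub = evolve_block_bound(lambda bits: 2.0 + 0.3 * sum(bits))

# Combine along a loop that executes the block n times, plus a fixed epilogue:
energy_ub = lambda n: n * block_ub + 5.0   # upper bound as a function of n
print(energy_ub(100))
```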

    Improving the Efficiency of Nondeterministic Independent And-Parallel Systems

    We present the design and implementation of the and-parallel component of ACE. ACE is a computational model for the full Prolog language that simultaneously exploits both or-parallelism and independent and-parallelism. A high-performance implementation of the ACE model has been realized and its performance is reported in this paper. We discuss how some of the standard problems that appear when implementing and-parallel systems are solved in ACE. We then propose a number of optimizations aimed at reducing the overheads and the increased memory consumption that occur in such systems when using previously proposed solutions. Finally, we present results from an implementation of ACE that includes the proposed optimizations. The results show that ACE exploits and-parallelism with high efficiency and high speedups. Furthermore, they also show that the proposed optimizations, which are applicable to many other and-parallel systems, significantly decrease memory consumption and increase speedups and absolute performance, both in forward execution and during backtracking.
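    As a rough, deterministic illustration of the independent and-parallelism being exploited (Python threads stand in for ACE's agents, and backtracking, a central complication the paper addresses, is ignored here), subgoals that share no unbound variables can be evaluated concurrently, with the conjunction succeeding only if every subgoal does:

```python
# Toy model of independent and-parallel conjunction (illustrative only).
from concurrent.futures import ThreadPoolExecutor

def solve_conjunction(goals):
    """Run goals that share no unbound variables in parallel; the conjunction
    succeeds only if every goal succeeds (returns a non-None binding)."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda g: g(), goals))
    return results if all(r is not None for r in results) else None

# Assumed example: two independent subgoals of `main :- p(X), q(Y).`
p = lambda: {"X": 1}   # p(X) succeeds, binding X
q = lambda: {"Y": 2}   # q(Y) succeeds, binding Y
print(solve_conjunction([p, q]))   # -> [{'X': 1}, {'Y': 2}]
```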