Cost analysis of object-oriented bytecode programs
Cost analysis statically approximates the cost of programs in terms of their input data size. This paper presents, to the best of our knowledge, the first approach to the automatic cost analysis of object-oriented bytecode programs. In languages such as Java and C#, analyzing bytecode has a much wider application area than analyzing source code, since the latter is often not available. Cost analysis in this context has to consider, among other features, dynamic dispatch, jumps, the operand stack, and the heap. Our method takes a bytecode program and a cost model specifying the resource of interest, and generates cost relations which approximate the execution cost of the program with respect to that resource. We report on COSTA, an implementation for Java bytecode which can obtain upper bounds on cost for a large class of programs and complexity classes. Our basic techniques can be directly applied to infer cost relations for other object-oriented imperative languages, not necessarily in bytecode form.
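As a purely illustrative example (ours, not taken from the paper), a method whose body is a loop counting an input n down to zero would typically give rise to cost relations of roughly the following shape, where the constants come from the chosen cost model:

```latex
% Hypothetical cost relations (our illustration) for a method m(n)
% whose body counts n down to 0; c_1, c_2, c_3 are supplied by the
% cost model (e.g., number of bytecode instructions executed).
\[
  C_{m}(n) = c_1 + C_{\mathit{loop}}(n)
\]
\[
  C_{\mathit{loop}}(n) = c_2 \quad \text{if } n \le 0
  \qquad\qquad
  C_{\mathit{loop}}(n) = c_3 + C_{\mathit{loop}}(n-1) \quad \text{if } n > 0
\]
% A bound solver can then derive the closed-form upper bound
% C_m(n) <= c_1 + c_2 + c_3 * max(n, 0).
```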
Termination and Cost Analysis with COSTA and its User Interfaces
COSTA is a static analyzer for Java bytecode which is able to infer cost and termination information for large classes of programs. The analyzer takes as input a program and a resource of interest, in the form of a cost model, and aims at obtaining an upper bound on the execution cost with respect to that resource and at proving program termination. The COSTA system has reached a considerable degree of maturity in that (1) it includes state-of-the-art techniques for statically estimating the resource consumption and the termination behavior of programs, plus a number of specialized techniques which are required for achieving accurate results in the context of object-oriented programs, such as handling numeric fields in value analysis; (2) it provides several nontrivial notions of cost (resource consumption) including, in addition to the number of execution steps, the amount of memory allocated in the heap or the number of calls to some user-specified method; (3) it provides several user interfaces: a classical command line, a Web interface which allows experimenting remotely with the system without the need to install it locally, and a recently developed Eclipse plugin which facilitates the usage of the analyzer, even during the development phase; and (4) it can deal with both the Standard and Micro editions of Java. In the tool demonstration, we will show that COSTA is able to produce meaningful results for non-trivial programs, possibly using Java libraries. Such results can then be used in many applications, including program development, resource usage certification, program optimization, etc.
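To give a flavor of how the same code is charged differently under different cost models, here is a small Java fragment together with the kind of bounds such an analysis could report; the example and the bounds are our own illustration, not actual COSTA output:

```java
// Illustrative example (ours, not tool output): the same method yields
// different bounds depending on the selected cost model.
public class Buffers {
    // For an input of size n, an analysis of this kind could report:
    //  - execution steps:  a bound linear in n
    //  - heap allocation:  on the order of n byte[16] objects
    //  - calls to log():   at most n
    static void fill(int n) {
        for (int i = 0; i < n; i++) {
            byte[] chunk = new byte[16]; // charged by the heap-space model
            log(chunk);                  // charged by a "calls to log" model
        }
    }

    static void log(byte[] chunk) { /* ... */ }
}
```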
Live Heap Space Analysis for Languages with Garbage Collection
The peak heap consumption of a program is the maximum size of the live data on the heap during the execution of the program, i.e., the minimum amount of heap space needed to run the program without exhausting the memory. It is well known that garbage collection (GC) makes the problem of predicting the memory required to run a program difficult. This paper presents, to the best of our knowledge, the first live heap space analysis for garbage-collected languages which infers accurate upper bounds on the peak heap usage of a program’s execution that are not restricted to any complexity class, i.e., we can infer exponential, logarithmic, polynomial, etc., bounds. Our analysis is developed for a (sequential) object-oriented bytecode language with a scoped-memory manager that reclaims unreachable memory when methods return. We also show how our analysis can accommodate other GC schemes which are closer to the ideal GC which collects objects as soon as they become unreachable. The practicality of our approach is experimentally evaluated on a prototype implementation. We demonstrate that it is fully automatic, reasonably accurate and efficient by inferring live heap space bounds for a standardized set of benchmarks, the JOlden suite.
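To see why peak live heap can be much smaller than total allocation under such a scoped-memory scheme, consider the following Java fragment (our own illustration, not from the paper):

```java
// Illustrative only: total allocation vs. peak live heap.
public class Peaks {
    // Each call to summarize() allocates an m-element temporary array,
    // which becomes unreachable when the method returns. Over n calls the
    // total allocation is on the order of n * m, but with a scoped manager
    // that reclaims memory at method return, the peak live heap stays on
    // the order of m (plus the n-element results array).
    static int[] run(int n, int m) {
        int[] results = new int[n];
        for (int i = 0; i < n; i++) {
            results[i] = summarize(m);
        }
        return results;
    }

    static int summarize(int m) {
        int[] tmp = new int[m]; // temporary, dead after return
        int sum = 0;
        for (int v : tmp) sum += v;
        return sum;
    }
}
```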
Verification of Java Bytecode using Analysis and Transformation of Logic Programs
State-of-the-art analyzers in the Logic Programming (LP) paradigm are nowadays mature and sophisticated. They allow inferring a wide variety of global properties including termination, bounds on resource consumption, etc. The aim of this work is to automatically transfer the power of such analysis tools for LP to the analysis and verification of Java bytecode (JVML). In order to achieve our goal, we rely on well-known techniques for meta-programming and program specialization. More precisely, we propose to partially evaluate a JVML interpreter implemented in LP together with (an LP representation of) a JVML program and then analyze the residual program. Interestingly, at least for the examples we have studied, our approach produces very simple LP representations of the original JVML programs. This can be seen as a decompilation from JVML to high-level LP source. By reasoning about such residual programs, we can automatically prove in the CiaoPP system some non-trivial properties of JVML programs, such as termination and run-time error freeness, and infer bounds on their resource consumption. We are not aware of any other system which is able to verify such advanced properties of Java bytecode.
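Schematically (our notation, not the paper's), the approach is essentially the first Futamura projection: specializing an LP interpreter for JVML with respect to a concrete program yields a residual LP program that is analyzed in place of the bytecode:

```latex
% Schematic view (our notation): mix is a partial evaluator,
% int_JVML a JVML interpreter written in LP, P_JVML a bytecode program.
\[
  P_{LP} \;=\; \mathit{mix}(\mathit{int}_{\mathrm{JVML}},\, P_{\mathrm{JVML}})
  \qquad \text{such that} \qquad
  \forall d:\ \mathit{int}_{\mathrm{JVML}}(P_{\mathrm{JVML}}, d) \;=\; P_{LP}(d)
\]
% Properties proved of P_LP (termination, error freeness, cost bounds)
% then transfer to P_JVML.
```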
Test Case Generation for Object-Oriented Imperative Languages in CLP
Testing is a vital part of the software development process. Test Case Generation (TCG) is the process of automatically generating a collection of test cases which are applied to a system under test. White-box TCG is usually performed by means of symbolic execution, i.e., instead of executing the program on normal values (e.g., numbers), the program is executed on symbolic values representing arbitrary values. When dealing with an object-oriented (OO) imperative language, symbolic execution becomes challenging as, among other things, it must be able to backtrack, complex heap-allocated data structures have to be created during the TCG process, and features like inheritance, virtual invocations and exceptions have to be taken into account. Due to its inherent symbolic execution mechanism, we argue in this paper that Constraint Logic Programming (CLP) has a promising, as yet unexploited, application field in TCG. We support our claim by developing a fully CLP-based framework for TCG of an OO imperative language, and by assessing a corresponding implementation on a set of challenging Java programs. A unique characteristic of our approach is that it handles all language features using only CLP, without the need to develop specific constraint operators (e.g., to model the heap).
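As a hedged illustration (our own example) of what symbolic execution has to cope with in an OO setting, consider the following Java method; each branch outcome contributes a path constraint from which concrete test inputs can be derived:

```java
// Illustrative example (ours): the kind of method a CLP-based symbolic
// execution would explore, with one path constraint per branch outcome.
public class Account {
    Account next;   // heap-allocated, possibly unbounded structure
    int balance;

    // Symbolic execution over amt and a lazily built heap yields paths such as:
    //   amt <= 0                              -> e.g. test input amt = 0
    //   amt > 0, balance >= amt               -> e.g. amt = 1, balance = 1
    //   amt > 0, balance < amt, next == null  -> exceptional path
    //   amt > 0, balance < amt, next != null  -> recurses on the next account
    int withdraw(int amt) {
        if (amt <= 0) return balance;
        if (balance >= amt) { balance -= amt; return balance; }
        if (next == null) throw new IllegalStateException("insufficient funds");
        return next.withdraw(amt);
    }
}
```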
Test Data Generation of Bytecode by CLP Partial Evaluation
We employ existing partial evaluation (PE) techniques developed for Constraint Logic Programming (CLP) in order to automatically generate test-case generators for glass-box testing of bytecode. Our approach consists of two independent CLP PE phases. (1) First, the bytecode is transformed into an equivalent (decompiled) CLP program. This is already a well-studied transformation which can be done either by using an ad-hoc decompiler or by specializing a bytecode interpreter by means of existing PE techniques. (2) A second PE is performed in order to supervise the generation of test cases by execution of the CLP decompiled program. Interestingly, we employ control strategies previously defined in the context of CLP PE in order to capture coverage criteria for glass-box testing of bytecode. A unique feature of our approach is that this second PE phase allows generating not only test cases but also test-case generators. To the best of our knowledge, this is the first time that (CLP) PE techniques have been applied to test-case generation as well as to the generation of test-case generators.
Algorithm Diversity for Resilient Systems
Diversity can significantly increase the resilience of systems by reducing the prevalence of shared vulnerabilities and making vulnerabilities harder to exploit. Work on software diversity for security typically creates variants of a program using low-level code transformations. This paper is the first to study algorithm diversity for resilience. We first describe how a method based on high-level invariants and systematic incrementalization can be used to create algorithm variants. Executing multiple variants in parallel and comparing their outputs provides greater resilience than executing one variant. To prevent different parallel schedules from causing variants' behaviors to diverge, we present a synchronized execution algorithm for DistAlgo, an extension of Python for high-level, precise, executable specifications of distributed algorithms. We propose static and dynamic metrics for measuring diversity. An experimental evaluation of algorithm diversity combined with implementation-level diversity for several sequential algorithms and distributed algorithms shows the benefits of algorithm diversity.
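The core idea of running variants in parallel and cross-checking their outputs can be sketched in a few lines; the following is our own minimal sketch in plain Java, not the paper's DistAlgo mechanism:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.*;
import java.util.function.Function;

// Minimal sketch (ours): run algorithm variants on the same input in
// parallel and flag any divergence among their outputs.
public class DiverseRunner {
    static <I, O> O runAll(List<Function<I, O>> variants, I input)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(variants.size());
        try {
            List<Future<O>> results = new ArrayList<>();
            for (Function<I, O> v : variants) {
                results.add(pool.submit(() -> v.apply(input)));
            }
            O reference = results.get(0).get();   // first variant's output
            for (Future<O> f : results) {
                if (!Objects.equals(reference, f.get())) {
                    throw new IllegalStateException("variants diverged");
                }
            }
            return reference;
        } finally {
            pool.shutdown();
        }
    }
}
```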