Node coarsening calculi for program slicing
Several approaches to reverse and re-engineering are based upon program slicing. Unfortunately, for large systems, such as those which typically form the subject of reverse engineering activities, the space and time requirements of slicing can be a barrier to successful application. Faced with this problem, several authors have found it helpful to merge control flow graph (CFG) nodes, thereby improving the space and time requirements of standard slicing algorithms. The node-merging process essentially creates a 'coarser' version of the original CFG. The paper introduces a theory for defining control flow graph node coarsening calculi. The theory formalizes properties of interest when coarsening is used as a precursor to program slicing. The theory is illustrated with a case study of a coarsening calculus, which is proved to have the desired properties of sharpness and consistency.
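The node-merging idea can be illustrated with a small sketch, assuming a CFG represented as an adjacency dict. The chain-merging rule below (and the name `merge_chains`) is an illustrative coarsening, not the calculus defined in the paper: it collapses maximal straight-line chains, where each node has one successor and that successor has one predecessor, into a single coarse node.

```python
# Illustrative CFG coarsening: merge straight-line chains of nodes.
# The CFG is a dict {node: [successor, ...]}. This is a toy example
# of the general node-merging idea, not the paper's calculus.

def merge_chains(cfg):
    # Compute the predecessor map.
    preds = {n: [] for n in cfg}
    for n, succs in cfg.items():
        for s in succs:
            preds[s].append(n)

    merged = {}  # chain head -> list of original nodes in the chain
    rep = {}     # original node -> its chain head (representative)
    for n in cfg:
        # Skip nodes that sit in the middle of a chain; they will be
        # picked up when we extend the chain from its head.
        if len(preds[n]) == 1 and len(cfg[preds[n][0]]) == 1:
            continue
        chain, cur = [n], n
        while len(cfg[cur]) == 1:
            nxt = cfg[cur][0]
            if len(preds[nxt]) != 1 or nxt == n:
                break
            chain.append(nxt)
            cur = nxt
        for m in chain:
            rep[m] = n
        merged[n] = chain

    # Build the coarse CFG over the representatives.
    coarse = {r: sorted({rep[s] for m in chain for s in cfg[m] if rep[s] != r})
              for r, chain in merged.items()}
    return merged, coarse
```

On a diamond-shaped CFG such as `a -> b -> c -> {d, e} -> f`, the chain `a, b, c` collapses to a single node, shrinking the graph a standard slicer must traverse.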
Common Subexpression Elimination in a Lazy Functional Language
Common subexpression elimination is a well-known compiler optimisation that saves time by avoiding the repetition of the same computation. To our knowledge it has not yet been applied to lazy functional programming languages, although it offers several advantages there. First, the referential transparency of these languages makes the identification of common subexpressions very simple. Second, more common subexpressions can be recognised because they can be of arbitrary type, whereas standard common subexpression elimination only shares primitive values. However, because lazy functional languages decouple program structure from data space allocation and control flow, analysing the transformation's effects and deciding under which conditions the elimination of a common subexpression is beneficial proves to be quite difficult. We developed and implemented the transformation for the language Haskell by extending the Glasgow Haskell Compiler and measured its effectiveness on real-world programs.
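The core idea of sharing repeated subterms can be sketched independently of any particular compiler. The toy transformation below (the function name `cse` and the tuple encoding of expressions are assumptions for illustration, not GHC's representation) counts syntactically identical subtrees and binds each repeated one to a fresh let-style variable:

```python
# Toy common subexpression elimination over expression trees.
# An expression is a variable name (str) or a tuple (op, arg1, arg2, ...).

def cse(expr):
    """Return (bindings, rewritten_expr), where bindings maps fresh
    names to subexpressions that occurred more than once."""
    counts = {}

    def count(e):
        if isinstance(e, tuple):
            counts[e] = counts.get(e, 0) + 1
            for arg in e[1:]:
                count(arg)

    count(expr)
    bindings, names = {}, {}

    def rewrite(e):
        if not isinstance(e, tuple):
            return e
        rewritten = (e[0],) + tuple(rewrite(a) for a in e[1:])
        if counts.get(e, 0) > 1:
            # Repeated subtree: bind it once and refer to it by name.
            if e not in names:
                names[e] = f"t{len(names)}"
                bindings[names[e]] = rewritten
            return names[e]
        return rewritten

    return bindings, rewrite(expr)
```

In a lazy language this rewrite is semantics-preserving by referential transparency, but, as the abstract notes, sharing an arbitrary-type subexpression can change space behaviour, which is why deciding when to apply it is the hard part.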
Program simplification as a means of approximating undecidable propositions
We describe an approach which mixes testing, slicing, transformation and formal verification to investigate speculative hypotheses concerning a program, formulated during program comprehension activity. Our philosophy is that such hypotheses (which are typically undecidable) can, in some sense, be `answered' by a partly automated system which returns neither `true' nor `false' but a program (the `test program') which computes the answer. The motivation for this philosophy is the way in which, as we demonstrate, static analysis and manipulation technology can be applied to ensure that the resulting test program is significantly simpler than the original program, thereby simplifying the process of investigating the original hypothesis.
Robust Hyperproperty Preservation for Secure Compilation (Extended Abstract)
We map the space of soundness criteria for secure compilation based on the preservation of hyperproperties in arbitrary adversarial contexts, which we call robust hyperproperty preservation. For this, we study the preservation of several classes of hyperproperties and for each class we propose an equivalent "property-free" characterization of secure compilation that is generally better tailored for proofs. Even the strongest of our soundness criteria, the robust preservation of all hyperproperties, seems achievable for simple transformations and provable using context back-translation techniques previously developed for showing fully abstract compilation. While proving the robust preservation of hyperproperties that are not safety requires such powerful context back-translation techniques, for preserving safety hyperproperties robustly, translating each finite trace prefix back to a source context seems to suffice. Comment: PriSC'18 final version
Safe Concurrency Introduction through Slicing
Traditional refactoring is about modifying the structure of existing code without changing its behaviour, but with the aim of making code easier to understand, modify, or reuse. In this paper, we introduce three novel refactorings for retrofitting concurrency to Erlang applications, and demonstrate how the use of program slicing makes the automation of these refactorings possible.
Evaluation of Kermeta for Solving Graph-based Problems
Kermeta is a meta-language for specifying the structure and behavior of graphs of interconnected objects called models. In this paper, we show that Kermeta is relatively suitable for solving three graph-based problems. First, Kermeta allows the specification of generic model transformations such as refactorings that we apply to different metamodels including Ecore, Java, and UML. Second, we demonstrate the extensibility of Kermeta to the formal language Alloy using an inter-language model transformation. Kermeta uses Alloy to generate recommendations for completing partially specified models. Third, we show that the Kermeta compiler achieves better execution time and memory performance compared to similar graph-based approaches using a common case study. The three solutions proposed for those graph-based problems and their evaluation with Kermeta according to the criteria of genericity, extensibility, and performance are the main contribution of the paper. Another contribution is the comparison of these solutions with those proposed by other graph-based tools.
Proving Non-Termination via Loop Acceleration
We present the first approach to prove non-termination of integer programs that is based on loop acceleration. If our technique cannot show non-termination of a loop, it tries to accelerate it instead in order to find paths to other non-terminating loops automatically. The prerequisites for our novel loop acceleration technique generalize a simple yet effective non-termination criterion. Thus, we can use the same program transformations to facilitate both non-termination proving and loop acceleration. In particular, we present a novel invariant inference technique that is tailored to our approach. An extensive evaluation of our fully automated tool LoAT shows that it is competitive with the state of the art.
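One "simple yet effective non-termination criterion" of the kind the abstract alludes to is a recurrent-set check: if every state satisfying a loop's guard is mapped by the loop body to another state satisfying the guard, the loop never terminates from any guard-satisfying state. The sketch below (the function name `guard_is_recurrent` and the single-variable linear loop shape are assumptions for illustration; LoAT's actual criterion and implementation differ) tests this empirically on sampled states, so it can only falsify recurrence; a real tool would discharge the implication with an SMT solver:

```python
# Toy non-termination criterion for "while a*x + b > 0: x = c*x + d".
# If the guard is closed under the update on all sampled states, the
# guard set is (empirically) recurrent. Sampling can refute recurrence
# but not prove it; an exact check needs symbolic reasoning.

def guard_is_recurrent(a, b, c, d, samples=range(-1000, 1001)):
    for x in samples:
        if a * x + b > 0 and not (a * (c * x + d) + b > 0):
            # Found a guard state whose successor leaves the guard:
            # this criterion does not apply.
            return False
    return True
```

For `while x > 0: x = x + 1` the guard is recurrent, so the loop is non-terminating from any positive start; for `while x > 0: x = x - 1` the check fails at x = 1, so this criterion (correctly) does not claim non-termination.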
FORTEST: Formal methods and testing
Formal methods have traditionally been used for the specification and development of software. However, there are potential benefits for the testing stage as well. The panel session associated with this paper explores the usefulness or otherwise of formal methods in various contexts for improving software testing. A number of different possibilities for the use of formal methods are explored and questions raised. The contributors are all members of the UK FORTEST Network on formal methods and testing. Although the authors generally believe that formal methods are useful in aiding the testing process, this paper is intended to provoke discussion. Dissenters are encouraged to put their views to the panel or individually to the authors.