Combining norms to prove termination
Automatic termination analyzers typically measure the size of terms by applying norms, which are mappings from terms to the natural numbers. This paper illustrates how to enable the use of size functions defined as tuples of these simpler norm functions. This approach simplifies the problem of automatically deriving a candidate norm with which to prove termination: instead of deriving a single, complex norm function, it is sufficient to determine a collection of simpler norms, some combination of which leads to a proof of termination. We propose that a collection of simple norms, one for each of the recursive data types in the program, is often a suitable choice. We first demonstrate the power of combining norm functions and then the adequacy of combining norms based on regular types.
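The idea of combining simple norms can be sketched concretely. The code below is our own minimal illustration, not the paper's system: terms are nested Python tuples, `list_length` and `term_size` are two simple norms, and a tuple of norm values compared lexicographically serves as the combined size function.

```python
def list_length(t):
    """Norm counting the cons cells of a nested-pair list like (1, (2, ()))."""
    n = 0
    while isinstance(t, tuple) and len(t) == 2:
        n += 1
        t = t[1]
    return n

def term_size(t):
    """Norm counting all nodes of a term built from tuples and leaves."""
    if isinstance(t, tuple):
        return 1 + sum(term_size(c) for c in t)
    return 1

def decreases_lexicographically(before, after, norms):
    """True iff the tuple of norm values strictly decreases (lexicographic order)."""
    return tuple(f(after) for f in norms) < tuple(f(before) for f in norms)
```

For example, a recursive call whose argument keeps the same list length but shrinks in total term size still decreases the combined (length, size) tuple, so neither single norm needs to decrease on every call.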
Abstract Program Slicing: an Abstract Interpretation-based approach to Program Slicing
In the present paper we formally define the notion of abstract program
slicing, a general form of program slicing where properties of data are
considered instead of their exact value. This approach is applied to a language
with numeric and reference values, and relies on the notion of abstract
dependencies between program components (statements).
The different forms of (backward) abstract slicing are added to an existing
formal framework where traditional, non-abstract forms of slicing could be
compared. The extended framework allows us to appreciate that abstract slicing
is a generalization of traditional slicing, since traditional slicing (dealing
with syntactic dependencies) is generalized by (semantic) non-abstract forms of
slicing, which are actually equivalent to an abstract form where the identity
abstraction is performed on data.
Sound algorithms for computing abstract dependencies and a systematic
characterization of program slices are provided, which rely on the notion of
agreement between program states.
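As a hypothetical illustration of abstract dependencies (the names and the brute-force check are ours, not the paper's sound algorithms), a statement depends on an input under an abstraction only if varying the input can vary the abstract value of the result:

```python
def parity(n):
    """Abstraction mapping integers to {'even', 'odd'}."""
    return 'even' if n % 2 == 0 else 'odd'

def abstract_dependency(stmt, alpha, inputs):
    """True iff alpha(stmt(x)) varies as x ranges over the sampled inputs,
    i.e. the statement induces a dependency *under the abstraction* alpha."""
    return len({alpha(stmt(x)) for x in inputs}) > 1
```

Under the parity abstraction, `y = 2*x` induces no abstract dependency on `x` (the result is always even) while `y = x + 1` does, so an abstract slice tracking only the parity of `y` could drop the former statement.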
Consistency Checking of Natural Language Temporal Requirements using Answer-Set Programming
Successful software engineering practice requires high-quality requirements. Inconsistency is one of the main requirements issues that may prevent software projects from succeeding. This is particularly onerous when the requirements concern temporal constraints. Manually checking whether temporal requirements are consistent is tedious and error-prone when the number of requirements is large. This dissertation addresses the problem of identifying inconsistencies in temporal requirements expressed as natural language text. The goal of this research is to create an efficient, partially automated approach for checking the temporal consistency of natural language requirements and to minimize analysts' workload.
The key contributions of this dissertation are as follows: (1) Development of a partially automated approach for checking temporal consistency of natural language requirements. (2) Creation of a formal language, Temporal Action Language (TeAL), which provides a means to represent natural language requirements precisely and unambiguously. (3) Development of a front end to semi-automatically translate natural language requirements into TeAL. (4) Development of a translator from TeAL to the ASP language.
Validation results to date show that the front-end tool makes the task of translating natural language requirements into TeAL more accurate and efficient, and that the translator generates ASP programs that correctly detect the inconsistencies in the requirements.
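Since neither TeAL nor the generated ASP programs are reproduced in the abstract, the following sketch only approximates one core consistency check in plain Python: a set of strict "A before B" requirements is inconsistent exactly when the "before" relation contains a cycle.

```python
def inconsistent(before_pairs):
    """Detect a cycle in the strict 'before' relation via depth-first search."""
    graph = {}
    for a, b in before_pairs:
        graph.setdefault(a, []).append(b)

    visiting, done = set(), set()

    def has_cycle(node):
        if node in done:
            return False
        if node in visiting:          # back edge: 'before' cycle found
            return True
        visiting.add(node)
        cyc = any(has_cycle(n) for n in graph.get(node, []))
        visiting.discard(node)
        done.add(node)
        return cyc

    return any(has_cycle(n) for n in list(graph))
```

For instance, the requirements "a before b", "b before c", "c before a" are unsatisfiable, while dropping the last one makes the set consistent.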
Analysing Parallel Complexity of Term Rewriting
We revisit parallel-innermost term rewriting as a model of parallel
computation on inductive data structures and provide a corresponding notion of
runtime complexity parametric in the size of the start term. We propose
automatic techniques to derive both upper and lower bounds on parallel
complexity of rewriting that enable a direct reuse of existing techniques for
sequential complexity. The applicability and the precision of the method are
demonstrated by the relatively light effort in extending the program analysis
tool AProVE and by experiments on numerous benchmarks from the literature.
Comment: Extended authors' accepted manuscript for a paper accepted for publication in the Proceedings of the 32nd International Symposium on Logic-based Program Synthesis and Transformation (LOPSTR 2022). 27 pages.
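A simplified, self-contained illustration of the parallel-innermost model (ours, not AProVE's analysis): when folding a binary tree into a value, sequential complexity counts every rewrite step, while parallel-innermost complexity counts only the longest chain, because independent innermost redexes are contracted simultaneously.

```python
def steps(tree):
    """Return (sequential_steps, parallel_steps) to fold a tree to a value.
    A tree is either an int leaf or a (left, right) node; each node takes
    one rewrite step once both children have been reduced to values."""
    if isinstance(tree, int):
        return 0, 0
    seq_l, par_l = steps(tree[0])
    seq_r, par_r = steps(tree[1])
    # Sequentially we pay for both subtrees; in parallel they overlap.
    return seq_l + seq_r + 1, max(par_l, par_r) + 1
```

A balanced tree of four leaves needs 3 sequential but only 2 parallel steps, while a degenerate list-like tree needs 3 of each, matching the intuition that parallel-innermost rewriting only pays off on branching data.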
Type-based protocol conformance and aliasing control in concurrent java programs
Dissertation for obtaining the degree of Master in Computer Engineering.
In an object-oriented setting, objects are modeled by their state and operations. The
programmer should be aware of how each operation implicitly changes the state of an
object. This is due to the fact that in certain states some operations might not be available, e.g., reading from a file when it is closed. Additional care must be taken if we consider aliasing, since many references to the same object might be held and manipulated. This hinders the ability to identify the source of a modification to an object, thus making it harder to track down its state. These difficulties increase in a concurrent setting, due to the unpredictability of the behavior of concurrent programs.
Concurrent programs are complex and very hard to reason about and debug. Some of
the errors that arise in concurrent programs are due to simultaneous accesses to shared
memory by different threads, resulting in unpredictable outcomes due to the possible
execution interleavings. Errors of this kind are generally known as race conditions.
Software verification and specification are important in software design and implementation as they provide early error detection, and can check conformity to a given specification, ensuring some intended correctness properties. To this end, our work
builds on the Spatial-Behavioral types formalism, providing object ownership
support. Our approach consists in integrating a behavioral type system, developed for a core fragment of the Java programming language, into the standard Java development process.
PTDC/EIA-CCO/104583/2008 research scholarship
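The protocol-conformance idea can be sketched dynamically (the thesis enforces it statically through behavioral types; this runtime state machine is only an illustration of what such a type system rules out at compile time):

```python
class ProtocolError(Exception):
    """Raised when a call violates the object's usage protocol."""

class FileProtocol:
    """Usage protocol: 'closed' -open()-> 'open' -read()*-> -close()-> 'closed'."""

    def __init__(self):
        self.state = 'closed'

    def open(self):
        if self.state != 'closed':
            raise ProtocolError('open() only allowed when closed')
        self.state = 'open'

    def read(self):
        if self.state != 'open':
            raise ProtocolError('read() only allowed when open')
        return 'data'

    def close(self):
        if self.state != 'open':
            raise ProtocolError('close() only allowed when open')
        self.state = 'closed'
```

Calling `read()` on a closed file raises a `ProtocolError` at run time; a behavioral type system rejects the same program before it runs, and aliasing control guarantees that no hidden reference can move the object out of the state the type assumes.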
A generic, context sensitive analysis framework for object oriented programs
Abstract interpreters rely on the existence of a fixpoint algorithm that calculates a least upper bound approximation of the semantics of the program. Usually, that algorithm is described in terms of the particular language under study and is therefore not directly applicable to programs written in a different source language. In this paper we introduce
a generic, block-based, and uniform representation of the program control flow graph and a language-independent fixpoint algorithm that can be applied to a variety of languages and, in particular, Java. Two major characteristics of our approach are accuracy (obtained through a top-down, context-sensitive approach) and reasonable efficiency (achieved by means of memoization and dependency tracking techniques). We have also implemented the proposed framework and show some initial experimental
results for standard benchmarks, which further support the
feasibility of the solution adopted.
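The generic fixpoint computation described above can be sketched as a standard worklist algorithm (a simplification of the framework, with hypothetical parameter names): a block's successors are re-processed only when its abstract state grows, which is the essence of the dependency tracking the abstract mentions.

```python
def fixpoint(blocks, succs, transfer, join, bottom, entry, init):
    """Least-fixpoint iteration over a block-based control-flow graph.
    succs maps a block id to its successor ids; transfer(block, state)
    must be monotone; join computes the least upper bound of the domain."""
    state = {b: bottom for b in blocks}
    state[entry] = init
    worklist = [entry]
    while worklist:
        b = worklist.pop()
        out = transfer(b, state[b])
        for s in succs.get(b, []):
            new = join(state[s], out)
            if new != state[s]:      # state grew: dependency fires, re-queue
                state[s] = new
                worklist.append(s)
    return state
```

For example, instantiating the domain with sets of traversed block labels (bottom = empty set, join = union) on a graph with a self-loop terminates because the states stop growing, illustrating why monotone transfer functions over a finite-height domain guarantee convergence.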