A static cost analysis for a higher-order language
We develop a static complexity analysis for a higher-order functional
language with structural list recursion. The complexity of an expression is a
pair consisting of a cost and a potential. The former is defined to be the size
of the expression's evaluation derivation in a standard big-step operational
semantics. The latter is a measure of the "future" cost of using the value of
that expression. A translation function tr maps target expressions to
complexities. Our main result is the following Soundness Theorem: If t is a
term in the target language, then the cost component of tr(t) is an upper bound
on the cost of evaluating t. The proof of the Soundness Theorem is formalized
in Coq, providing certified upper bounds on the cost of any expression in the
target language.
Comment: Final version
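To make the cost/potential idea concrete, here is a minimal OCaml sketch (not the paper's Coq development; the expression type and the translation clauses are invented for the example) in which a complexity is a record of a cost and a potential, the toy translation charges one cost unit per evaluation step, and the potential of a list is its length:

```ocaml
(* Sketch only: a complexity is a pair of a cost (size of the evaluation
   derivation) and a potential (future cost of using the value).  The tiny
   expression type and the clauses of [tr] below are hypothetical, chosen
   to illustrate the idea, not the paper's definitions. *)

type potential = int                     (* e.g. the length of a list value *)
type complexity = { cost : int; pot : potential }

type expr =
  | Nil
  | Cons of expr * expr                  (* list cons *)

(* A toy translation [tr]: each evaluation step contributes one cost unit;
   the potential of a list is the number of elements left to consume. *)
let rec tr (e : expr) : complexity =
  match e with
  | Nil -> { cost = 1; pot = 0 }
  | Cons (hd, tl) ->
      let ch = tr hd and ct = tr tl in
      { cost = 1 + ch.cost + ct.cost;    (* one step plus sub-derivations *)
        pot  = 1 + ct.pot }              (* one more element to use later  *)

let () =
  let c = tr (Cons (Nil, Cons (Nil, Nil))) in
  Printf.printf "cost = %d, potential = %d\n" c.cost c.pot
```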
Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"
According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient.
The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself.
Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
• The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
• The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
• The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion.
The behaviour of the entities may vary over time.
• The systems operate with incomplete information about the environment.
For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems.
This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. One year after the start of the projects, the goal of the workshop is to take stock of the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, as well as to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative.
We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for its valuable collaboration.
Bounded Expectations: Resource Analysis for Probabilistic Programs
This paper presents a new static analysis for deriving upper bounds on the
expected resource consumption of probabilistic programs. The analysis is fully
automatic and derives symbolic bounds that are multivariate polynomials of the
inputs. The new technique combines state-of-the-art manual reasoning techniques
for probabilistic programs with an effective method for automatic
resource-bound analysis of deterministic programs. It can be seen as both an
extension of automatic amortized resource analysis (AARA) to probabilistic
programs and an automation of manual reasoning for probabilistic programs that
is based on weakest preconditions. As a result, bound inference can be reduced
to off-the-shelf LP solving in many cases and automatically-derived bounds can
be interactively extended with standard program logics if the automation fails.
Building on existing work, the soundness of the analysis is proved with respect
to an operational semantics that is based on Markov decision processes. The
effectiveness of the technique is demonstrated with a prototype implementation
that is used to automatically analyze 39 challenging probabilistic programs and
randomized algorithms. Experimental results indicate that the derived constant
factors in the bounds are very precise and even optimal for many programs.
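To give the flavour of such bounds, the OCaml sketch below (an invented example, not the paper's tool) hand-derives the expected cost of a tiny probabilistic loop that pays one resource unit per iteration and decrements its counter with probability 3/4; the expected cost is (4/3)·n, a bound linear in the input of the shape the analysis infers, and a Monte Carlo run can be compared against it:

```ocaml
(* Sketch only: a hand-worked instance of the kind of symbolic bound the
   paper's analysis derives automatically.  The loop pays one "tick" per
   iteration and decrements n with probability 3/4, so its expected tick
   count is (4/3)*n.  This is an illustration, not the paper's tool. *)

let expected_ticks_bound n = 4.0 /. 3.0 *. float_of_int n

let simulate n =
  let ticks = ref 0 and k = ref n in
  while !k > 0 do
    incr ticks;                           (* pay one resource unit        *)
    if Random.int 4 < 3 then decr k       (* decrement with probability 3/4 *)
  done;
  !ticks

let () =
  Random.self_init ();
  let n = 20 and trials = 100_000 in
  let total = ref 0 in
  for _ = 1 to trials do total := !total + simulate n done;
  Printf.printf "mean ticks = %.3f, derived bound = %.3f\n"
    (float_of_int !total /. float_of_int trials)
    (expected_ticks_bound n)
```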
Tight polynomial worst-case bounds for loop programs
In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple programming language - representing non-deterministic imperative programs with bounded loops, and arithmetic limited to addition and multiplication - it is possible to decide precisely whether a program has certain growth-rate properties, in particular whether a computed value, or the program's running time, has a polynomial growth rate. A natural and intriguing problem was to move from answering the decision problem to giving a quantitative result, namely, a tight polynomial upper bound. This paper shows how to obtain asymptotically-tight, multivariate, disjunctive polynomial bounds for this class of programs. This is a complete solution: whenever a polynomial bound exists it will be found. A pleasant surprise is that the algorithm is quite simple; but it relies on some subtle reasoning. An important ingredient in the proof is the forest factorization theorem, a strong structural result on homomorphisms into a finite monoid.
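For a concrete, if invented, instance of the kind of program and bound involved: a single bounded loop that repeatedly adds z to y admits the tight multivariate polynomial bound y + x·z. The OCaml sketch below (not the paper's algorithm, just an illustration of the core-language style and its bound) runs the worst case and states that bound:

```ocaml
(* Sketch only: a toy program in the style of the core language (bounded
   loops, addition, multiplication) and the tight multivariate polynomial
   bound one would expect to be reported for it.  The program and names
   are invented for illustration.
   Program:  loop x { y := y + z }
   Worst-case value of y on exit:  y0 + x * z *)

let run ~x ~y ~z =
  let y = ref y in
  for _ = 1 to x do y := !y + z done;     (* the bounded loop *)
  !y

let bound ~x ~y ~z = y + x * z            (* the tight polynomial bound *)

let () =
  let x, y, z = 5, 2, 3 in
  Printf.printf "result = %d, bound = %d\n" (run ~x ~y ~z) (bound ~x ~y ~z)
```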
Indexed Labels for Loop Iteration Dependent Costs
We present an extension to the labelling approach, a technique for lifting
resource consumption information from compiled to source code. This approach,
which is at the core of the CerCo project's annotating compiler from a large
fragment of C to 8051 assembly, loses precision when the cost of the same
portion of code varies across contexts, whether due to code transformations
such as loop optimisations or advanced architecture features (e.g. caches). We
propose to address this weakness by formally indexing cost labels with the
iterations of the containing loops they occur in. These indices can be
transformed during compilation, and when lifted back to source code they
produce dependent costs.
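As an invented illustration of a dependent cost (not CerCo's actual cost model), the OCaml sketch below gives a cost label inside a loop whose per-iteration cost depends on the iteration index, say because an optimisation made the first iteration cheaper, and sums the indexed costs to obtain the loop's total cost:

```ocaml
(* Sketch only: a dependent cost is a per-iteration cost that is a function
   of the loop indices rather than a constant.  The cost values below are
   hypothetical; in the labelling approach they would be attached to a cost
   label indexed by the iterations of the containing loops. *)

(* Hypothetical dependent cost for a label inside a loop, after an
   optimisation (say, loop peeling) made iteration 0 cheaper. *)
let label_cost i = if i = 0 then 3 else 5

(* Lifting the indexed costs back to source level: the loop's total cost
   is the sum of the per-iteration costs. *)
let loop_cost n =
  let total = ref 0 in
  for i = 0 to n - 1 do total := !total + label_cost i done;
  !total

let () = Printf.printf "total cost for 10 iterations = %d\n" (loop_cost 10)
```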
The proposed changes have been implemented in CerCo's untrusted prototype
compiler from a large fragment of C to 8051 assembly.
Comment: In Proceedings QAPL 2013, arXiv:1306.241