Improvements for Free
"Theorems for Free!" (Wadler, FPCA 1989) is a slogan for a technique that
allows one to derive statements about functions just from their types. So far, the
statements considered have always had a purely extensional flavor: statements
relating the value semantics of program expressions, but not statements
relating their runtime (or other) cost. Here we study an extension of the
technique that allows precisely statements of the latter flavor, by deriving
quantitative theorems for free. After developing the theory, we walk through a
number of example derivations. Probably none of the statements derived in those
simple examples will be particularly surprising to most readers, but what is
maybe surprising, and at the very least novel, is that there is a general
technique for obtaining such results on a quantitative level in a principled
way. Moreover, there is good potential to bring that technique to bear on more
complex examples as well. We turn our attention to short-cut fusion (Gill et
al., FPCA 1993) in particular.
Comment: In Proceedings QAPL 2011, arXiv:1107.074
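The extensional kind of statement meant above can be illustrated with a standard parametricity example (a sketch for orientation, not taken from the paper itself): any function of type `[a] -> [a]` must commute with `map`, purely by virtue of its type.

```haskell
-- A free theorem sketch: any polymorphic f :: [a] -> [a]
-- satisfies  map g . f == f . map g  for every g,
-- just by parametricity of its type.
f :: [a] -> [a]
f = reverse . take 3   -- an arbitrary inhabitant of the type

lhs, rhs :: [Int]
lhs = map (*2) (f [1..10])
rhs = f (map (*2) [1..10])

main :: IO ()
main = print (lhs == rhs)   -- prints True
```

A quantitative free theorem, by contrast, would additionally relate the costs of evaluating the two sides.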
Automatic Time-Bound Analysis for High-Level Languages
Thesis (PhD) - Indiana University, Computer Sciences, 2006
Analysis of program running time is important for reactive systems, interactive environments, compiler optimizations, performance evaluation, and many other computer applications. Automatic and efficient prediction of accurate time bounds is particularly important, and being able to do so for high-level languages is particularly desirable. This dissertation presents a general approach for automatic and accurate time-bound analysis for high-level languages, combining methods and techniques studied in theory, languages, and systems. The approach consists of transformations for building time-bound functions in the presence of partially known input structures, symbolic evaluation of the time-bound function based on input parameters, optimizations to make the analysis efficient as well as accurate, and measurements of primitive parameters, all at the source-language level. We describe analysis and transformation algorithms and explain how they work. We have implemented this approach and performed a large number of experiments analyzing Scheme programs. The measured worst-case times are closely bounded by the calculated bounds. We describe our prototype system, ALPA, as well as the analysis and measurement results.
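The idea of a "time-bound function" can be sketched as a source-level transformation that pairs each function with a step count, from which a bound in terms of input size can be read off. A toy Haskell sketch (the dissertation's system, ALPA, analyzes Scheme; the names here are illustrative):

```haskell
-- Instrumented insertion sort: each function returns its result
-- paired with the number of comparison steps taken, so the
-- worst-case count can be related symbolically to the input size n.
insert :: Int -> [Int] -> ([Int], Int)
insert x [] = ([x], 1)
insert x (y:ys)
  | x <= y    = (x : y : ys, 1)
  | otherwise = let (zs, c) = insert x ys in (y : zs, c + 1)

sortCount :: [Int] -> ([Int], Int)
sortCount []     = ([], 0)
sortCount (x:xs) = let (ys, c1) = sortCount xs
                       (zs, c2) = insert x ys
                   in (zs, c1 + c2)

-- Worst case (reverse-sorted input of size n): n*(n+1)/2 steps.
main :: IO ()
main = print (sortCount [5,4,3,2,1])   -- prints ([1,2,3,4,5],15)
```

Evaluating the step component on a partially known input (known size, unknown elements) is what yields the symbolic time bound.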
Free Theorems in Languages with Real-World Programming Features
Free theorems, type-based assertions about functions, have become a prominent reasoning tool in functional programming languages. But their correct application requires a lot of care. Restrictions arise due to features present in implemented languages of this kind, but absent from the language in which free theorems were originally investigated. This thesis advances the formal theory behind free theorems with respect to the application of such theorems in non-strict functional languages such as Haskell. In particular, the impact of general recursion and forced strict evaluation is investigated. As formal ground, we employ different lambda calculi equipped with a denotational semantics.

For a language with general recursion, we develop and implement a counterexample generator that determines whether, and why, restrictions on a certain free theorem arise due to general recursion. If a restriction is necessary, the generator provides a counterexample to the unrestricted free theorem. If not, the generator terminates without returning a counterexample. Thus, we may on the one hand enhance the understanding of restrictions and on the other hand point to cases where restrictions are superfluous.

For a language with a strictness primitive, we develop a refined type system that allows one to localize the impact of forced strict evaluation. Refined typing results in stronger free theorems and therefore increases the value of the theorems. Moreover, we provide a generator for such stronger theorems.

Lastly, we broaden the view on the kind of assertions free theorems provide. For a very simple, strictly evaluated calculus, we enrich free theorems with (runtime) efficiency assertions. We apply the theory to several toy examples. Finally, we investigate the performance gain of the foldr/build program transformation.
The latter investigation exemplifies the main application of our theory: free theorems may not only ensure the semantic correctness of program transformations, they may also ensure that a program transformation speeds up a program.
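The foldr/build transformation referred to above can be sketched in Haskell with the standard definitions (following Gill et al.; the producer `uptoG` is an illustrative name, not from the thesis):

```haskell
{-# LANGUAGE RankNTypes #-}

-- build captures a list producer abstracted over cons and nil.
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- A producer for [1..n], written in build form.
uptoG :: Int -> (Int -> b -> b) -> b -> b
uptoG n cons nil = go 1
  where go i | i > n     = nil
             | otherwise = cons i (go (i + 1))

-- Unfused: foldr consumes the intermediate list that build allocates.
sumUnfused :: Int -> Int
sumUnfused n = foldr (+) 0 (build (uptoG n))

-- Fused by the rule  foldr k z (build g) = g k z :
-- the intermediate list never exists.
sumFused :: Int -> Int
sumFused n = uptoG n (+) 0

main :: IO ()
main = print (sumUnfused 100, sumFused 100)   -- prints (5050,5050)
```

The free theorem for `build`'s rank-2 type justifies the semantic correctness of the rule; the thesis's efficiency-enriched theorems additionally address when the fused form is actually faster.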
Type-based amortized stack memory prediction
Controlling resource usage is important for the reliability, efficiency and security of
software systems. Automated analyses for bounding resource usage can be invaluable
tools for ensuring these properties.
Hofmann and Jost have developed an automated static analysis for finding linear
heap space bounds in terms of the input size for programs in a simple functional programming
language. Memory requirements are amortized by representing them as a
requirement for an abstract quantity, potential, which is supplied by assigning potential
to data structures in proportion to their size. This assignment is represented by annotations
on their types. The type system then ensures that all potential requirements can
be met from the original input's potential if a set of linear constraints can be solved.
Linear programming can optimise this amount of potential subject to the constraints,
yielding an upper bound on the memory requirements.
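The amortized accounting can be modelled concretely. The following toy Haskell sketch (not the Hofmann-Jost system itself; the cost model of one heap cell per allocated cons is an assumption for illustration) charges allocation against a linear potential assigned to the input:

```haskell
-- Amortization sketch: append allocates one cons cell per element
-- of its first argument, so an annotation of 1 unit of potential
-- per element of xs pays for the whole evaluation:
--   heapCost (xs ++ ys) <= 1 * length xs.
appendCost :: [a] -> [a] -> ([a], Int)
appendCost []     ys = (ys, 0)
appendCost (x:xs) ys = let (zs, c) = appendCost xs ys
                       in (x : zs, c + 1)       -- one fresh cell

-- Potential supplied by annotating the list type with q per element.
potential :: Int -> [a] -> Int
potential q xs = q * length xs

main :: IO ()
main =
  let (r, cost) = appendCost [1,2,3] [4,5 :: Int]
  in print (r, cost, cost <= potential 1 [1,2,3])   -- prints ([1,2,3,4,5],3,True)
```

In the real analysis the annotation `q` is not fixed by hand: it is a variable in the linear program, and the solver minimizes the resulting bound.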
However, obtaining bounds on the heap space requirements does not detect a faulty
or malicious program which uses excessive stack space.
In this thesis, we investigate extending Hofmann and Jost's techniques to infer
bounds on stack space usage, first by examining two approaches: using the Hofmann-
Jost analysis unchanged by applying a CPS transformation to the program being analysed,
then showing that this predicts the stack space requirements of the original program;
and directly adapting the analysis itself, which we will show is more practical.
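The first approach can be illustrated with a toy example (a sketch under stated assumptions, not the thesis's actual transformation): CPS conversion turns each pending stack frame into a heap-allocated closure, so a heap bound on the CPS form bounds the stack usage of the original program.

```haskell
-- Direct style: one stack frame per list element is live while
-- the recursive call computes, because of the pending (1 +).
lengthDirect :: [a] -> Int
lengthDirect []     = 0
lengthDirect (_:xs) = 1 + lengthDirect xs

-- CPS: the pending work becomes a heap-allocated closure per
-- element; the call itself is a tail call and uses no stack.
-- A heap analysis of this form predicts the original stack cost.
lengthCPS :: [a] -> (Int -> r) -> r
lengthCPS []     k = k 0
lengthCPS (_:xs) k = lengthCPS xs (\n -> k (n + 1))

main :: IO ()
main = print (lengthDirect "abc", lengthCPS "abc" id)   -- prints (3,3)
```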
We then consider how to deal with the different allocation patterns stack space
usage presents. In particular, the temporary nature of stack allocation leads us to a
system where we calculate the total potential after evaluating an expression in terms
of assignments of potential to the variables appearing in the expression as well as the
result. We also show that this analysis subsumes our previous systems, and improves
upon them.
We further increase the precision of the bounds inferred by noting the importance
of expressing stack memory bounds in terms of the depth of data structures and by
taking the maximum of the usage bounds of subexpressions. We develop an analysis
which uses richer definitions of the potential calculation to allow depth and maxima to
be used, albeit with a more subtle inference process.
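The role of depth and maxima can be seen in a small sketch (illustrative only, not the thesis's analysis): in a tree traversal, the frames for the two subtrees are never live at the same time, so the stack bound combines them with a maximum rather than a sum, and is governed by the tree's depth rather than its size.

```haskell
data Tree = Leaf | Node Tree Int Tree

-- Instrumented traversal: returns the sum together with the
-- maximum number of simultaneously live frames. Note the use of
-- max (frames are reused between branches), not +.
sumFrames :: Tree -> (Int, Int)
sumFrames Leaf         = (0, 1)
sumFrames (Node l x r) =
  let (sl, dl) = sumFrames l
      (sr, dr) = sumFrames r
  in (sl + x + sr, 1 + max dl dr)

depth :: Tree -> Int
depth Leaf         = 1
depth (Node l _ r) = 1 + max (depth l) (depth r)

main :: IO ()
main =
  let t = Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 (Node Leaf 4 Leaf))
      (s, frames) = sumFrames t
  in print (s, frames, frames <= depth t)   -- prints (10,4,True)
```

A size-based bound for the same traversal would be linear in the number of nodes; the depth-based bound with maxima is much tighter for balanced trees.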