3,548 research outputs found
Continuation-Passing C: compiling threads to events through continuations
In this paper, we introduce Continuation Passing C (CPC), a programming
language for concurrent systems in which native and cooperative threads are
unified and presented to the programmer as a single abstraction. The CPC
compiler uses a compilation technique, based on the CPS transform, that yields
efficient code and an extremely lightweight representation for contexts. We
provide a proof of the correctness of our compilation scheme. We show in
particular that lambda-lifting, a common compilation technique for functional
languages, is also correct in an imperative language like C, under some
conditions enforced by the CPC compiler. The current CPC compiler is mature
enough to write substantial programs such as Hekate, a highly concurrent
BitTorrent seeder. Our benchmark results show that CPC is as efficient as the
most efficient thread libraries available, while using significantly less
space. Comment: Higher-Order and Symbolic Computation (2012). arXiv admin note:
substantial text overlap with arXiv:1202.324
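The core idea of the CPS transform can be sketched in a few lines (a Haskell miniature with invented names, not CPC's actual C-level transformation): each function gains an explicit continuation parameter and, instead of returning, passes its result on.

```haskell
-- Minimal illustration of the CPS transform (a sketch; CPC itself
-- transforms C code, and these function names are invented).

-- Direct style: compute (a + b) * 2.
double :: Int -> Int
double x = x * 2

addThenDouble :: Int -> Int -> Int
addThenDouble a b = double (a + b)

-- CPS: every function takes an extra continuation argument and,
-- instead of returning, hands its result to that continuation.
doubleK :: Int -> (Int -> r) -> r
doubleK x k = k (x * 2)

addThenDoubleK :: Int -> Int -> (Int -> r) -> r
addThenDoubleK a b k = doubleK (a + b) k
```

A cooperative thread blocked inside such code is represented by nothing more than its pending continuation closure, which is what makes the representation of contexts so lightweight.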
Lambda-Dropping: Transforming Recursive Equations into Programs with Block Structure
Lambda-lifting a functional program transforms it into a set of recursive equations. We present the symmetric transformation: lambda-dropping. Lambda-dropping a set of recursive equations restores block structure and lexical scope.

For lack of scope, recursive equations must carry around all the parameters that any of their callees might possibly need. Both lambda-lifting and lambda-dropping thus require one to compute a transitive closure over the call graph:
- for lambda-lifting: to establish the Def/Use path of each free variable (these free variables are then added as parameters to each of the functions in the call path);
- for lambda-dropping: to establish the Def/Use path of each parameter (parameters whose use occurs in the same scope as their definition do not need to be passed along in the call path).

Without free variables, a program is scope-insensitive. Its blocks are then free to float (for lambda-lifting) or to sink (for lambda-dropping) along the vertices of the scope tree.

We believe lambda-lifting and lambda-dropping are interesting per se, both in principle and in practice, but our prime application is partial evaluation: except for Malmkjær and Ørbæk's case study presented at PEPM'95, most polyvariant specializers for procedural programs operate on recursive equations. To this end, in a pre-processing phase, they lambda-lift source programs into recursive equations. As a result, residual programs are also expressed as recursive equations, often with dozens of parameters, which most compilers do not handle efficiently. Lambda-dropping in a post-processing phase restores their block structure and lexical scope, thereby significantly reducing both the compile time and the run time of residual programs.
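The parameter-passing cost that lambda-dropping removes can be seen in a small Haskell sketch (a hypothetical example, not from the paper): lambda-lifting turns a block-structured helper with one free variable into a top-level recursive equation that must thread that variable through every call.

```haskell
-- Block-structured: the inner helper 'go' uses the free variable
-- 'step' (and 'limit') from its enclosing lexical scope.
countUp :: Int -> Int -> [Int]
countUp step limit = go 0
  where
    go n | n > limit = []
         | otherwise = n : go (n + step)

-- Lambda-lifted: 'go' becomes a top-level recursive equation, and its
-- former free variables are passed explicitly along every call in the
-- call path (the Def/Use paths mentioned in the abstract).
goLifted :: Int -> Int -> Int -> [Int]
goLifted step limit n
  | n > limit = []
  | otherwise = n : goLifted step limit (n + step)

countUpLifted :: Int -> Int -> [Int]
countUpLifted step limit = goLifted step limit 0
```

Lambda-dropping reverses the second form into the first, eliminating the parameters whose use already occurs in the scope of their definition.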
Free Theorems in Languages with Real-World Programming Features
Free theorems, type-based assertions about functions, have become a prominent reasoning tool in functional programming languages. But their correct application requires a lot of care. Restrictions arise due to features present in implementations of such languages, but not in the language free theorems were originally investigated in. This thesis advances the formal theory behind free theorems with respect to the application of such theorems in non-strict functional languages such as Haskell. In particular, the impact of general recursion and forced strict evaluation is investigated. As formal ground, we employ different lambda calculi equipped with a denotational semantics. For a language with general recursion, we develop and implement a counterexample generator that tells if and why restrictions on a certain free theorem arise due to general recursion. If a restriction is necessary, the generator provides a counterexample to the unrestricted free theorem. If not, the generator terminates without returning a counterexample. Thus, we may on the one hand enhance the understanding of restrictions and on the other hand point to cases where restrictions are superfluous. For a language with a strictness primitive, we develop a refined type system that localizes the impact of forced strict evaluation. Refined typing results in stronger free theorems and therefore increases their value. Moreover, we provide a generator for such stronger theorems. Lastly, we broaden the view on the kind of assertions free theorems provide. For a very simple, strictly evaluated calculus, we enrich free theorems with (runtime) efficiency assertions. We apply the theory to several toy examples. Finally, we investigate the performance gain of the foldr/build program transformation.
The latter investigation exemplifies the main application of our theory: free theorems may not only ensure semantic correctness of program transformations, they may also ensure that a program transformation speeds up a program.
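As a concrete illustration of a free theorem (a standard textbook instance, not taken from the thesis): parametricity says that any function of type [a] -> [a] can only rearrange, drop, or duplicate list elements, so it commutes with map. Exactly the features studied above, such as general recursion and seq, can add side conditions (e.g. strictness of the mapped function) to this equation.

```haskell
-- The free theorem for f :: [a] -> [a] states:  map g . f == f . map g.
-- 'f' below is one arbitrary inhabitant of that type (chosen for
-- illustration); the theorem holds for every such f.
f :: [a] -> [a]
f = reverse . take 3

lhs, rhs :: [Int]
lhs = (map (* 10) . f) [1 .. 5]   -- map after f
rhs = (f . map (* 10)) [1 .. 5]   -- f after map
-- both sides evaluate to [30, 20, 10]
```

The counterexample generator described above probes precisely when such an equation fails once general recursion (partial functions, infinite lists) enters the language.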
Spartan Daily, January 5, 1966
Volume 53, Issue 57
https://scholarworks.sjsu.edu/spartandaily/4810/thumbnail.jp
Simulation in the Call-by-Need Lambda-Calculus with Letrec, Case, Constructors, and Seq
This paper shows equivalence of several versions of applicative similarity
and contextual approximation, and hence also of applicative bisimilarity and
contextual equivalence, in LR, the deterministic call-by-need lambda calculus
with letrec extended by data constructors, case-expressions and Haskell's
seq-operator. LR models an untyped version of the core language of Haskell. The
use of bisimilarities simplifies equivalence proofs in calculi and opens a way
for more convenient correctness proofs for program transformations. The proof
is by a fully abstract and surjective transfer into a call-by-name calculus,
which is an extension of Abramsky's lazy lambda calculus. In the latter
calculus equivalence of our similarities and contextual approximation can be
shown by Howe's method. Similarity is transferred back to LR on the basis of an
inductively defined similarity. The translation from the call-by-need letrec
calculus into the extended call-by-name lambda calculus is the composition of
two translations. The first translation replaces the call-by-need strategy by a
call-by-name strategy and its correctness is shown by exploiting infinite trees
which emerge by unfolding the letrec expressions. The second translation
encodes letrec-expressions by using multi-fixpoint combinators and its
correctness is shown syntactically by comparing reductions of both calculi. A
further result of this paper is an isomorphism between the mentioned calculi,
which is also an identity on letrec-free expressions. Comment: 50 pages, 11 figures
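The second translation's use of fixpoint combinators to eliminate letrec can be sketched in Haskell (an illustrative miniature; the paper works with families of multi-fixpoint combinators in an untyped setting): a mutually recursive letrec becomes a single fixed point over the tuple of its definitions.

```haskell
import Data.Function (fix)

-- letrec ev = \n -> ...; od = \n -> ... in (ev, od)
-- encoded as one fixed point over the pair of right-hand sides.
-- The lazy pattern (~) is essential: the pair must not be forced
-- before its components are defined.
evenOdd :: (Int -> Bool, Int -> Bool)
evenOdd = fix (\ ~(ev, od) ->
  ( \n -> n == 0 || od (n - 1)   -- ev refers to od ...
  , \n -> n /= 0 && ev (n - 1) ))  -- ... and od refers back to ev
```

Proving such an encoding correct requires comparing reductions of both calculi, which is where the syntactic part of the paper's argument comes in.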
La Salle College Bulletin Student Handbook 1967-1968
https://digitalcommons.lasalle.edu/student_handbooks/1023/thumbnail.jp
Strict Ideal Completions of the Lambda Calculus
The infinitary lambda calculi pioneered by Kennaway et al. extend the basic
lambda calculus by metric completion to infinite terms and reductions.
Depending on the chosen metric, the resulting infinitary calculi exhibit
different notions of strictness. To obtain infinitary normalisation and
infinitary confluence properties for these calculi, Kennaway et al. extend
β-reduction with infinitely many '⊥-rules', which contract
meaningless terms directly to ⊥. Three of the resulting Böhm reduction
calculi have unique infinitary normal forms corresponding to Böhm-like trees.
In this paper we develop a corresponding theory of infinitary lambda calculi
based on ideal completion instead of metric completion. We show that each of
our calculi conservatively extends the corresponding metric-based calculus.
Three of our calculi are infinitarily normalising and confluent; their unique
infinitary normal forms are exactly the Böhm-like trees of the corresponding
metric-based calculi. Our calculi dispense with the infinitely many
⊥-rules of the metric-based calculi. The fully non-strict calculus (called
111) consists of only β-reduction, while the other two calculi (called
001 and 101) require two additional rules that precisely state their
strictness properties: λx.⊥ → ⊥ (for 001) and ⊥ M → ⊥ (for 001 and 101).
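The approximation order underlying ideal completion can be sketched in Haskell (an illustrative datatype, not the paper's formal construction): partial terms are ordered by replacing subterms with ⊥, and an infinite term is recovered as the set (ideal) of all its finite prefixes.

```haskell
-- Böhm-like trees with an explicit bottom, as a lazy datatype.
data BT = Bot | Node String [BT] deriving (Eq, Show)

-- An infinite tree: a head variable applied to itself forever
-- (an invented example for illustration).
infiniteBT :: BT
infiniteBT = Node "x" [infiniteBT]

-- Finite prefixes: cutting at depth n replaces deeper subtrees with ⊥.
-- Ideal completion takes the downward-closed set of all such finite
-- approximants as the meaning of the infinite tree, instead of a
-- metric limit of Cauchy sequences.
cut :: Int -> BT -> BT
cut 0 _ = Bot
cut _ Bot = Bot
cut n (Node h ts) = Node h (map (cut (n - 1)) ts)
```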