109 research outputs found
Revisiting the Futamura Projections: A Diagrammatic Approach
The advent of language implementation tools such as PyPy and Truffle/Graal has reinvigorated and broadened interest in topics related to automatic compiler generation and optimization. Given this broader interest, we revisit the Futamura Projections using a novel diagram scheme. Through these diagrams we emphasize the recurring patterns in the Futamura Projections while addressing their complexity and abstract nature. We anticipate that this approach will improve the accessibility of the Futamura Projections and help foster analysis of these new tools through the lens of partial evaluation.
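The recurring pattern in the Futamura Projections can be sketched in a few lines. This is a minimal illustration assuming a hypothetical specializer `mix(prog, static)` that fixes the first argument of a two-argument program; the residual program is modeled as a closure rather than emitted source code.

```python
# A toy specializer: fixing the static first argument of a program.
# A real partial evaluator would emit a residual program; a closure
# stands in for that residual here.
def mix(prog, static):
    return lambda dynamic: prog(static, dynamic)

# A trivial "interpreter": source programs are Python callables
# applied to their input.
def interp(source, inp):
    return source(inp)

double = lambda x: 2 * x

# 1st projection: specializing the interpreter to a source program
# yields a compiled (target) program.
target = mix(interp, double)

# 2nd projection: specializing mix to the interpreter yields a compiler.
compiler = mix(mix, interp)
target2 = compiler(double)

# 3rd projection: specializing mix to itself yields a compiler generator.
cogen = mix(mix, mix)
compiler2 = cogen(interp)
target3 = compiler2(double)

assert target(21) == target2(21) == target3(21) == 42
```

The same `mix` appears at every level, which is exactly the recurring pattern the diagrams emphasize.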
Compiling Actions by Partial Evaluation, Revisited
We revisit Bondorf and Palsberg's compilation of actions using the offline syntax-directed partial evaluator Similix (FPCA'93, JFP'96), and we compare it in detail with using an online type-directed partial evaluator. In contrast to Similix, our type-directed partial evaluator is idempotent and requires no "binding-time improvements." It also appears to consume about 7 times less space and to be about 28 times faster than Similix, and to yield residual programs that are perceptibly more efficient than those generated by Similix.
Computation over partial information: a principled approach to accurate partial evaluation
We are used to the following picture of an executing program: an input is provided, the program runs for a while, and a result comes out. We tacitly assume complete information about the input, the result, and any intermediate results in between.
In this work, we ask what it would mean to execute a program over partial information. As a possible answer, we introduce partial interpretation, our main contribution. Instead of considering a unique input, we consider a set of possible inputs. Instead of computing a unique result, we compute a set of possible results, and sets of possible intermediate results.
We approach partial interpretation from the problem of program specialization: the optimization of a program's execution time for certain inputs. Doing this automatically is historically known as partial evaluation. Partial evaluation has been applied successfully to many specific problems. We believe it should be a mainstream programming tool, to specialize general libraries for specific use - but such a tool has not been delivered.
One common problem is that a given implementation of partial evaluation is inconsistent: it does not work uniformly well on all input programs. This inconsistency makes it unsuited for mainstream use. We view this inconsistency as an accuracy problem: if the partial evaluator was very accurate, it would find the correct specialization, no matter how we present the input program.
We therefore propose a principled approach to partial evaluation, aimed at complete accuracy, removed from any particular example program. We reformulate partial evaluation to root it in partial interpretation: computation over partial information. If we can determine what we know about every piece of data in the program, we can decide which operations can be removed to specialize the program: those operations whose result is uniquely known.
We represent sets with a kind of mathematical set comprehension. We modify an interpreter for functional programs to compute over these sets. We use an SMT (Satisfiability Modulo Theories) solver to perform set operations. To ensure termination of the modified interpreter, we apply ideas from abstract interpretation: fixed-point computation, and widening. Our initial implementation produces good results, but it is slow for larger examples. We show how to speed it up a thousandfold, by relying less on SMT.
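The core idea of computing over sets of possible values can be sketched very simply. This is an illustrative model only: values are finite enumerated sets and operations are lifted pointwise, whereas the thesis uses symbolic set comprehensions backed by an SMT solver.

```python
# Lift a binary operation to act on sets of possible values.
def lift(op):
    def lifted(xs, ys):
        return {op(x, y) for x in xs for y in ys}
    return lifted

add = lift(lambda a, b: a + b)
mul = lift(lambda a, b: a * b)

x = {3}          # fully known ("static") datum
y = {1, 2, 3}    # partially known ("dynamic") datum

# A singleton result means the operation's result is uniquely known,
# so the operation can be eliminated during specialization.
assert mul(x, {2}) == {6}
# Multiple possible results: the operation must stay in the residual program.
assert add(x, y) == {4, 5, 6}
```

An operation is specializable exactly when its result set is a singleton, which is the criterion stated above.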
Program Transformations in Magnolia
We explore program transformations in the context of the Magnolia programming language. We discuss research on and implementations of transformation techniques, scenarios in which to put them to use in Magnolia, interfacing with transformations, and the potential workflows and tooling that this approach to programming enables. Master's thesis in informatics (INF39)
An Automatic Program Generator for Multi-Level Specialization
Program specialization can divide a computation into several computation stages. This paper investigates the theoretical limitations and practical problems of standard specialization tools, presents multi-level specialization, and demonstrates that, in combination with the cogen approach, it is far more practical than previously supposed. The program generator which we designed and implemented for a higher-order functional language converts programs into very compact multi-level generating extensions that guarantee fast successive specialization. Experimental results show a remarkable reduction of generation time and generator size compared to previous attempts at multi-level specialization by self-application. Our approach to multi-level specialization seems well-suited for applications where generation time and program size are critical.
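The cogen (generating-extension) approach the abstract refers to can be sketched for one stage. This is a hedged illustration with hypothetical names: instead of running a specializer on `power(n, x)`, a hand-written generating extension `power_gen` takes the static exponent directly and emits the residual program as source text.

```python
# A one-stage generating extension for exponentiation: the static
# exponent n is consumed now, and a residual program over the dynamic
# argument x is emitted as source code.
def power_gen(n):
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return f"def power_{n}(x):\n    return {body}"

# A multi-level generating extension would itself be staged, emitting
# generators for later specialization stages; this shows one stage only.
src = power_gen(3)   # residual: def power_3(x): return x * x * x
ns = {}
exec(src, ns)
assert ns["power_3"](2) == 8
```

Because the generator is ordinary code rather than a self-applied specializer, generation is fast and the generator itself stays compact, which is the practical advantage the paper measures.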
The notion of principality in type specialisation
When considering the ways in which programs are produced, the main technique that comes to everybody's mind is writing by hand. Although derivation techniques and tools for producing programs exist, their application is usually restricted to certain kinds of problems, or certain domains (such as parser generators, or visual interfaces). In those cases where such techniques can be applied, productivity and reliability are highly boosted. For that reason, we are concerned with the automatic production of programs in a general setting. Program specialisation is a particular way to produce programs automatically. A given, general source program is used to generate several particular, specialised versions of it, each one solving a particular instance of the original problem. The best-known and most thoroughly studied technique for program specialisation is called partial evaluation; it has been successfully used in several different application areas. But partial evaluation falls short when automatic production of typed programs is considered. Type specialisation is a form of program specialisation that can automatically produce typed programs from a general source program. It comprises several powerful techniques, such as polyvariant specialisation, constructor specialisation, and closure conversion, and it is the first variant of program specialisation that can generate arbitrary types from a single source program. We believe that type specialisation can be the basis on which to develop a framework for automatic program production. In this thesis we consider type specialisation, extending it to produce polymorphic programs. We illustrate that by considering an interpreter for a Hindley-Milner typed lambda-calculus, and specialising it to any given object program, producing a residual program that is essentially the same as the original one. In achieving the generation of polymorphism, we extend type specialisation to be able to express specialisation of programs with incomplete static information, and prove that for each term we can infer a particular specialisation that can be used to reconstruct every other specialisation of that term. We call that principal type specialisation because of the analogy this property has with the notion of principal types. Our presentation clarifies some of the problems existing in type specialisation, a clarification that can be used as a guide in the search for solutions to them. The presentation is divided into four parts. In the first Part we present Type Specialisation in its original form, together with some background material. In Part II we develop the presentation of Principal Type Specialisation, explaining all the technical details, offering several examples, and presenting our prototype implementation. In Part III we describe the possibilities of the new formulation, by providing an extension of Type Specialisation to generate polymorphic programs. And finally, in the last Part we describe related and future work and conclude. This work is the result of seven years of research, performed during my PhD studies. Fil: Martínez López, Pablo E. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales; Argentina
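The sense in which partial evaluation "falls short" for typed programs can be illustrated with a toy interpreter. This is an illustrative sketch with hypothetical names, not the thesis's formalism: the interpreter works over a universal tagged value type, and those tags survive into any residual program produced by ordinary partial evaluation, which is the kind of "type noise" type specialisation removes.

```python
from dataclasses import dataclass

# A universal value type: every interpreted value carries a tag.
@dataclass
class VInt: n: int
@dataclass
class VFun: f: object   # a callable on universal values

# A tiny interpreter for an untyped lambda-calculus with literals.
def eval_(term, env):
    tag, *args = term
    if tag == "lit": return VInt(args[0])
    if tag == "var": return env[args[0]]
    if tag == "lam":
        x, body = args
        return VFun(lambda v: eval_(body, {**env, x: v}))
    if tag == "app":
        f, a = args
        return eval_(f, env).f(eval_(a, env))

# (λx. x) 5 evaluates to VInt(5), not the plain int 5: the VInt tag
# is residual overhead that a syntactic specializer cannot erase.
prog = ("app", ("lam", "x", ("var", "x")), ("lit", 5))
assert eval_(prog, {}) == VInt(5)
```

Type specialisation, by contrast, can produce a residual program whose type is the object program's own type, with no universal tags left behind.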
Accelerating interpreted programming languages on GPUs with just-in-time compilation and runtime optimisations
Nowadays, most computer systems are equipped with powerful parallel devices
such as Graphics Processing Units (GPUs). They are present in almost every computer
system including mobile devices, tablets, desktop computers and servers. These
parallel systems have unlocked the possibility for many scientists and companies to
process significant amounts of data in shorter time. But the usage of these parallel
systems is very challenging due to their programming complexity. The most common
programming languages for GPUs, such as OpenCL and CUDA, are created for expert
programmers, where developers are required to know hardware details to use GPUs.
However, many users of heterogeneous and parallel hardware, such as economists,
biologists, physicists or psychologists, are not necessarily expert GPU programmers.
They have the need to speed up their applications, which are often written in high-level
and dynamic programming languages, such as Java, R or Python. Little work has
been done to generate GPU code automatically from these high-level interpreted and
dynamic programming languages. This thesis presents a combination of a programming
interface and a set of compiler techniques which enable an automatic translation
of a subset of Java and R programs into OpenCL to execute on a GPU. The goal is
to reduce the programmability and usability gaps between interpreted programming
languages and GPUs.
The first contribution is an Application Programming Interface (API) for programming
heterogeneous and multi-core systems. This API combines ideas from functional
programming and algorithmic skeletons to compose and reuse parallel operations.
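The flavor of such a composable, skeleton-style API can be sketched as follows. All names here are hypothetical (the thesis API targets Java and OpenCL); the point is the compositional pipeline of parallel operations, which a real implementation would JIT-compile for the GPU rather than run on the host.

```python
from functools import reduce

class ArrayFunc:
    """A pipeline of data-parallel stages, composed before execution."""
    def __init__(self, stages=None):
        self.stages = stages or []

    def map(self, f):
        return ArrayFunc(self.stages + [("map", f)])

    def reduce(self, f, init):
        return ArrayFunc(self.stages + [("reduce", f, init)])

    def apply(self, data):
        # A real implementation would compile the whole pipeline to an
        # OpenCL kernel; here each stage runs sequentially on the host.
        for stage in self.stages:
            if stage[0] == "map":
                data = [stage[1](x) for x in data]
            else:
                data = reduce(stage[1], data, stage[2])
        return data

# Sum of squares, expressed as a reusable composition of skeletons.
pipeline = ArrayFunc().map(lambda x: x * x).reduce(lambda a, b: a + b, 0)
assert pipeline.apply([1, 2, 3]) == 14
```

Because the pipeline is a value, it can be built once and reused over many inputs, which is what makes such operations composable and reusable.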
The second contribution is a new OpenCL Just-In-Time (JIT) compiler that automatically
translates a subset of the Java bytecode to GPU code. This is combined with
a new runtime system that optimises the data management and avoids data transformations
between Java and OpenCL. This OpenCL framework and the runtime system
achieve speedups of up to 645x over Java, coming within a 23% slowdown of
the handwritten native OpenCL code.
The third contribution is a new OpenCL JIT compiler for dynamic and interpreted
programming languages. While the R language is used in this thesis, the developed
techniques are generic for dynamic languages. This JIT compiler uniquely combines
a set of existing compiler techniques, such as specialisation and partial evaluation, for
OpenCL compilation together with an optimising runtime that compiles and executes R
code on GPUs. This JIT compiler for the R language achieves speedups of up to 1300x
compared to GNU-R, coming within a 1.8x slowdown of native OpenCL.
On the Semantics of Intensionality and Intensional Recursion
Intensionality is a phenomenon that occurs in logic and computation. In the
most general sense, a function is intensional if it operates at a level finer
than (extensional) equality. This is a familiar setting for computer
scientists, who often study different programs or processes that are
interchangeable, i.e. extensionally equal, even though they are not implemented
in the same way, and are hence intensionally distinct. Concomitant with intensionality is
the phenomenon of intensional recursion, which refers to the ability of a
program to have access to its own code. In computability theory, intensional
recursion is enabled by Kleene's Second Recursion Theorem. This thesis is
concerned with the crafting of a logical toolkit through which these phenomena
can be studied. Our main contribution is a framework in which mathematical and
computational constructions can be considered either extensionally, i.e. as
abstract values, or intensionally, i.e. as fine-grained descriptions of their
construction. Once this is achieved, it may be used to analyse intensional
recursion.
DPhil thesis, Department of Computer Science & St John's College, University of Oxford.
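The idea of intensional recursion — a program with access to its own code — can be sketched in the spirit of Kleene's Second Recursion Theorem. This is an illustrative model, not the thesis's construction: "code" is source text, and the fixed-point operator `srt` arranges for a program body to receive a description of itself as its first argument.

```python
# Build, from a program body that expects its own description,
# a program that actually receives that description when run.
def srt(body_src):
    def program(*args):
        ns = {}
        exec(body_src, ns)            # reconstruct the body from its code
        return ns["body"](body_src, *args)   # pass the body its own source
    return program

# A body that inspects its own source: report its length plus an input.
body_src = "def body(self_src, x):\n    return len(self_src) + x"
p = srt(body_src)
assert p(0) == len(body_src)
```

The program `p` is extensionally just a function on integers, but intensionally it computes with a fine-grained description of its own construction, which is exactly the distinction the thesis studies.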