
    Automated Derivation of Random Generators for Algebraic Data Types

    Many testing techniques, such as generational fuzzing or random property-based testing, require some form of random generation process for the values used as test inputs. Implementing such generators is usually a task left to end-users, who do their best to come up with somewhat sensible implementations after several iterations of trial and error. This effort comes as no surprise: implementing good random data generators is a hard task. It requires deep knowledge of both the domain of the data being generated and the behavior of the stochastic process generating that data. In addition, when the data we want to generate has a large number of possible variations, this process is not only intricate but also very cumbersome. To mitigate these issues, this thesis explores different ideas for automatically deriving random generators based on existing static information. In this light, we design and implement different derivation algorithms in Haskell for obtaining random generators of values encoded using Algebraic Data Types (ADTs). Although other tools exist that were designed, directly or indirectly, for this very purpose, they are not without disadvantages. In particular, we aim to tackle the lack of flexibility and static guarantees in the distribution induced by derived generators. We show how automatically derived generators for ADTs can be framed using a simple yet powerful stochastic model. This model can be used to obtain analytical guarantees about the distribution of values produced by the derived generators. This, in consequence, can be used to optimize the stochastic generation parameters of the derived generators towards target distributions set by the user, providing more flexible derivation mechanisms.
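    To make the setting concrete, the following is a minimal hand-written Haskell sketch of the kind of generator the thesis derives automatically: a QuickCheck generator for a small ADT, in which the frequency weights play the role of the stochastic generation parameters that could be optimized towards a target distribution. The Tree type and the weights are illustrative assumptions, not taken from the thesis.

        import Test.QuickCheck

        -- A small algebraic data type.
        data Tree = Leaf | Node Tree Int Tree
          deriving Show

        -- A sized generator: the 'frequency' weights (1 and 3 here) are
        -- the tunable stochastic parameters of the generation process.
        genTree :: Gen Tree
        genTree = sized go
          where
            go 0 = pure Leaf
            go n = frequency
              [ (1, pure Leaf)
              , (3, Node <$> go (n `div` 2) <*> arbitrary <*> go (n `div` 2))
              ]

        main :: IO ()
        main = sample genTree  -- print a few randomly generated trees

    Changing the weights changes the expected size and shape of the generated trees, which is exactly the kind of distribution-level property the thesis aims to predict and control analytically.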

    Set-theoretic Types for Erlang

    Erlang is a functional programming language with dynamic typing. The language offers great flexibility for destructing values through pattern matching and dynamic type tests. Erlang also comes with a type language supporting parametric polymorphism, equi-recursive types, as well as union and a limited form of intersection types. However, type signatures serve only as documentation; there is no check that a function body conforms to its signature. Set-theoretic types and semantic subtyping fit Erlang's feature set very well. They allow expressing nearly all constructs of its type language and provide means for statically checking type signatures. This article brings set-theoretic types to Erlang and demonstrates how existing Erlang code can be statically typechecked with no or only minor modifications to the code. Further, the article formalizes the main ingredients of the type system in a small core calculus, reports on an implementation of the system, and compares it with other static typecheckers for Erlang.

    Pattern discovery for parallelism in functional languages

    No longer the preserve of specialist hardware, parallel devices are now ubiquitous. Pattern-based approaches to parallelism, such as algorithmic skeletons, simplify traditional low-level approaches by presenting composable high-level patterns of parallelism to the programmer. This allows optimal parallel configurations to be derived automatically, and facilitates the use of different parallel architectures. Moreover, parallel patterns can be substituted for sequential recursion schemes, thus simplifying their introduction. Unfortunately, there is no guarantee that recursion schemes are present in all functional programs. Automatic pattern discovery techniques can be used to discover recursion schemes. Current approaches are limited both in the range of analysable functions and in the range of discoverable patterns. In this thesis, we present an approach based on program slicing techniques that facilitates the analysis of a wider range of explicitly recursive functions. We then present an approach using anti-unification that expands the range of discoverable patterns. In particular, this approach is user-extensible; i.e. patterns developed by the programmer can be discovered without significant effort. We present prototype implementations of both approaches, and evaluate them on a range of examples, including five parallel benchmarks and functions from the Haskell Prelude. We achieve maximum speedups of 32.93x on our 28-core hyperthreaded experimental machine for our parallel benchmarks, demonstrating that our approaches can discover patterns that produce good parallel speedups. Together, the approaches presented in this thesis enable the discovery of more loci of potential parallelism in pure functional programs than is currently possible. This leads to more opportunities for parallelism, and so more opportunities to take advantage of the potential performance gains that heterogeneous parallel systems present.
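    The rewriting that such discovery enables can be sketched in a few lines of Haskell. The example below is our own illustration, not the thesis's implementation: once an explicitly recursive definition is recognised as a map followed by a fold, the map can be swapped for a parallel skeleton such as parMap from the parallel package without changing the function's meaning.

        import Control.Parallel.Strategies (parMap, rseq)

        -- Explicitly recursive: the structure a discovery tool must uncover.
        sumSq :: [Int] -> Int
        sumSq []     = 0
        sumSq (x:xs) = x * x + sumSq xs

        -- The same function expressed via recursion schemes (map + fold) ...
        sumSq' :: [Int] -> Int
        sumSq' = foldr (+) 0 . map (\x -> x * x)

        -- ... after which the map can be replaced by a parallel skeleton.
        sumSqPar :: [Int] -> Int
        sumSqPar = foldr (+) 0 . parMap rseq (\x -> x * x)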

    Persistence of the AntidoteDB Cache: Design and Implementation of a Cache for a CRDT Datastore

    Many services today rely on geo-replicated databases. Geo-replication improves performance by moving a copy of the data closer to its usage site. High availability is achieved by maintaining copies of this data in several locations. Performance is gained by distributing the data and allowing multiple requests to be served at once. However, replicating data can lead to an inconsistent global state of the database when updates compete with each other. In this work, we study how a cache can be designed and implemented for a database that prevents state inconsistencies by using Conflict-free Replicated Data Types (CRDTs). Further, we study how this cache can be persisted into a checkpoint store, and we measure the performance of our design with several benchmarks. The implementation of the system is based on AntidoteDB; an additional library is implemented to realise the discussed design.
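    For readers unfamiliar with CRDTs, the following minimal Haskell sketch of a grow-only counter (our own illustration, unrelated to AntidoteDB's actual implementation) shows why such data types converge: merging is a pointwise maximum, which is commutative, associative, and idempotent, so concurrent updates never leave replicas in conflict.

        import qualified Data.Map.Strict as M

        type ReplicaId = String

        -- A grow-only counter CRDT: one monotonically growing slot per replica.
        newtype GCounter = GCounter (M.Map ReplicaId Int)
          deriving Show

        -- Each replica increments only its own slot.
        increment :: ReplicaId -> GCounter -> GCounter
        increment r (GCounter m) = GCounter (M.insertWith (+) r 1 m)

        -- Merge is a pointwise maximum: commutative, associative, idempotent,
        -- so replicas converge regardless of message ordering or duplication.
        merge :: GCounter -> GCounter -> GCounter
        merge (GCounter a) (GCounter b) = GCounter (M.unionWith max a b)

        -- The observed value is the sum over all replica slots.
        value :: GCounter -> Int
        value (GCounter m) = sum (M.elems m)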

    Applications and extensions of context-sensitive rewriting

    Context-sensitive rewriting is a restriction of term rewriting which is obtained by imposing replacement restrictions on the arguments of function symbols. It has proven useful to analyze computational properties of programs written in sophisticated rewriting-based programming languages such as CafeOBJ, Haskell, Maude, OBJ*, etc. Also, a number of extensions (e.g., to conditional rewriting or constrained equational systems) and generalizations (e.g., controlled rewriting or forbidden patterns) of context-sensitive rewriting have been proposed. In this paper, we provide an overview of these applications and related issues.
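    The core idea is easy to demonstrate. Below is a minimal Haskell sketch (our own encoding, not from the paper) of context-sensitive rewriting over a small term language: a replacement map mu fixes, for each function symbol, the argument positions where rewriting may occur, so with mu(if) = {1} reduction is allowed in the condition of an if-term but frozen in both branches.

        import Data.Maybe (fromMaybe)

        data Term = Var String | Fun String [Term]
          deriving (Eq, Show)

        -- One rewrite rule, tried at the root of a term.
        type Rule = Term -> Maybe Term

        -- The replacement map mu: for "if", only position 1 (the condition)
        -- may be rewritten; all other symbols allow every argument position.
        muTable :: [(String, [Int])]
        muTable = [("if", [1])]

        allowed :: String -> Int -> [Int]
        allowed f arity = fromMaybe [1 .. arity] (lookup f muTable)

        -- One step of context-sensitive rewriting: try the rule at the root,
        -- otherwise recurse only into mu-allowed argument positions.
        step :: Rule -> Term -> Maybe Term
        step rule t@(Fun f args) =
          case rule t of
            Just t' -> Just t'
            Nothing -> go 1 args
          where
            go _ []     = Nothing
            go i (a:as)
              | i `elem` allowed f (length args)
              , Just a' <- step rule a
              = Just (Fun f (take (i - 1) args ++ a' : as))
              | otherwise = go (i + 1) as
        step rule t = rule t

        -- Rules for "if": reduce only once the condition is decided.
        ifRule :: Rule
        ifRule (Fun "if" [Fun "true"  [], x, _]) = Just x
        ifRule (Fun "if" [Fun "false" [], _, y]) = Just y
        ifRule _                                 = Nothing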

    Liberating Effects with Rows and Handlers


    Structured arrows : a type-based framework for structured parallelism

    This thesis deals with the important problem of parallelising sequential code. Despite the importance of parallelism in modern computing, writing parallel software still relies on many low-level and often error-prone approaches. These low-level approaches can lead to serious execution problems such as deadlocks and race conditions. Due to the non-deterministic behaviour of most parallel programs, testing parallel software can be both tedious and time-consuming. A way of providing guarantees of correctness for parallel programs would therefore provide significant benefit. Moreover, even if we ignore the problem of correctness, achieving good speedups is not straightforward, since this generally involves rewriting a program to consider a (possibly large) number of alternative parallelisations. This thesis argues that new languages and frameworks are needed. These languages and frameworks must not only support high-level parallel programming constructs, but must also provide predictable cost models for these parallel constructs. Moreover, they need to be built around solid, well-understood theories that ensure that: (a) changes to the source code will not change the functional behaviour of a program, and (b) the speedup obtained by making the necessary changes is predictable. Algorithmic skeletons are parametric implementations of common patterns of parallelism that provide good abstractions for creating new high-level languages, and also support frameworks for parallel computing that satisfy these correctness and predictability requirements. This thesis presents a new type-based framework, based on the connection between structured parallelism and structured patterns of recursion, that provides parallel structures as type abstractions that can be used to statically parallelise a program. Specifically, this thesis exploits hylomorphisms as a single, unifying construct to represent the functional behaviour of parallel programs, and to perform correct code rewritings between alternative parallel implementations, represented as algorithmic skeletons. This thesis also defines a mechanism for deriving cost models for parallel constructs from a queue-based operational semantics. In this way, we can provide strong static guarantees about the correctness of a parallel program, while simultaneously achieving predictable speedups.
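    The hylomorphism the thesis builds on has a compact Haskell definition; the sketch below (our own example, with assumed names) shows how a function's behaviour factors into an unfold (coalgebra) and a fold (algebra), giving the single point at which a framework can substitute an equivalent parallel skeleton.

        {-# LANGUAGE DeriveFunctor #-}

        -- Base functor of lists.
        data ListF e r = NilF | ConsF e r
          deriving Functor

        -- A hylomorphism: unfold a call structure with the coalgebra,
        -- then fold it away with the algebra.
        hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
        hylo alg coalg = alg . fmap (hylo alg coalg) . coalg

        -- factorial as a hylomorphism: unfold n, n-1, ..., 1 and multiply.
        fact :: Int -> Int
        fact = hylo alg coalg
          where
            coalg 0 = NilF
            coalg n = ConsF n (n - 1)
            alg NilF        = 1
            alg (ConsF n r) = n * r

    Because both sequential functions and algorithmic skeletons can be expressed in this shape, rewriting between alternative parallelisations amounts to replacing one implementation of the hylomorphism by another with the same algebra and coalgebra.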

    History, Evolution and Future Prospects of the Use of Simulation Techniques in Port Management: Applications in the Analysis of Port Operations, Strategy and Planning

    Simulation, to the extent that we understand it nowadays, began in the middle of the 20th century: first with the appearance of the computer and the development of the Monte Carlo method, and later with the development of the first special-purpose simulator, known as GPSS and developed by Geoffrey Gordon at IBM, and the publication of the first full text devoted to the subject, "The Art of Simulation" (K.D. Tocher, 1963). These techniques have evolved in an extraordinary way and nowadays are fully implemented in many fields of activity. Port facilities have not escaped this trend, especially those dedicated to container traffic. Indeed, the intrinsic characteristics of this economic sector make it a suitable candidate for the implementation of simulation models with very different purposes and scopes. However, to the best of our knowledge, there is no scientific work that compiles and analyzes in detail both the history and the evolution of simulation in port environments, helping to classify such models and to determine how they can support the economic analysis of these facilities and the formulation of appropriate business strategies. This is the ultimate goal of this doctoral thesis.

    Foundations of Information-Flow Control and Effects

    In programming language research, information-flow control (IFC) is a technique for enforcing a variety of security aspects, such as confidentiality of data, on programs. This licentiate thesis makes novel contributions to the theory and foundations of IFC in the following ways: Chapter A presents a new proof method for showing the usual desired property of noninterference; Chapter B shows how to securely extend the concurrent IFC language MAC with asynchronous exceptions; and Chapter C presents a new and simpler language for IFC with effects, based on an explicit separation of pure and effectful computations.
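    To give a flavour of how a language like MAC enforces IFC in types, here is a minimal Haskell sketch; the labels, the Flows class, and the API are our own simplification, not the thesis's actual calculus or the real MAC library.

        {-# LANGUAGE GeneralizedNewtypeDeriving, MultiParamTypeClasses #-}

        -- Security labels as empty types.
        data L  -- public (low)
        data H  -- secret (high)

        -- A value protected at label l.
        newtype Labeled l a = Labeled a

        -- A computation whose observable effects sit at label l.
        newtype MAC l a = MAC (IO a)
          deriving (Functor, Applicative, Monad)

        -- Permitted flows: information may only flow upwards.
        class Flows from to
        instance Flows L L
        instance Flows L H
        instance Flows H H

        -- Reading a labeled value is allowed if its label flows to
        -- the computation's label ...
        unlabel :: Flows l l' => Labeled l a -> MAC l' a
        unlabel (Labeled a) = MAC (pure a)

        -- ... and writing only creates values at or above the
        -- computation's label (no write-down).
        label :: Flows l l' => a -> MAC l (Labeled l' a)
        label a = MAC (pure (Labeled a))

    A program that tried to leak a secret, e.g. by producing a Labeled L value inside a MAC H computation, would require a Flows H L instance, which does not exist, so the leak is rejected at compile time.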