
    Acta Cybernetica: Volume 17, Number 2.


    Basic Pattern Matching Calculi: a Fresh View on Matching Failure

    We propose pattern matching calculi as a refinement of λ-calculus that integrates mechanisms appropriate for fine-grained modelling of non-strict pattern matching. Compared with the functional rewriting strategy usually employed to define the operational semantics of pattern matching in non-strict functional programming languages like Haskell or Clean, our pattern matching calculi achieve the same effects using simpler and more local rules. The main device is to embed into expressions the separate syntactic category of matchings; the resulting language naturally encompasses pattern guards and Boolean guards as special cases. By admitting a confluent reduction system and a normalising strategy, these pattern matching calculi provide a new basis for the operational semantics of non-strict programming languages and also for implementations.
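    The matchings category directly subsumes the pattern guards and Boolean guards found in Haskell. As a minimal sketch (our example, not from the paper), the fragment below shows both guard forms; a failed pattern guard falls through to the next alternative, which is exactly the matching-failure behaviour the calculi make explicit.

```haskell
-- Boolean guards and pattern guards in Haskell. A pattern guard that does
-- not match aborts only the current alternative; control falls through to
-- the next one, i.e. matching failure is a local, recoverable event.
describe :: [(String, Int)] -> String -> String
describe table name
  | Just n <- lookup name table, n > 0 = "positive entry"  -- pattern + Boolean guard
  | null name                          = "empty name"      -- Boolean guard
  | otherwise                          = "unknown"
```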

    Effectful Programming in Declarative Languages with an Emphasis on Non-Determinism: Applications and Formal Reasoning

    This thesis investigates effectful declarative programming with an emphasis on non-determinism as an effect. On the one hand, we are interested in developing applications that use non-determinism as the underlying implementation idea. We discuss two applications written in the functional logic programming language Curry. The key idea of these implementations is to exploit the interplay of non-determinism and non-strictness that Curry provides. The first application investigates sorting algorithms parametrised over a comparison function. By applying a non-deterministic predicate to these sorting functions, we obtain a permutation enumeration function. We compare the implementation in Curry with an implementation in Haskell that uses a monadic interface to model non-determinism. The other application discussed in this work is a library for probabilistic programming. Instead of modelling distributions as lists of event-probability pairs, we model distributions using Curry's built-in non-determinism. In both cases we observe that the combination of non-determinism and non-strictness has advantages over an implementation that uses lists to model non-determinism.

    On the other hand, we present an approach to formal reasoning about effectful declarative programs. In order to start with simple effects, we first focus on modelling a purely functional subset; that is, the effects of interest are totality and partiality. We then observe that the general scheme used to model these two effects can be generalised to capture a wide range of effects. The natural next step is to apply the idea to non-determinism. More precisely, we implement a model for the non-determinism of Curry: non-strict non-determinism with call-time choice. Finally, we discuss why the current representation models call-by-name rather than Curry's call-by-need semantics and give an outlook on ideas to tackle this problem.
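    The permutation-enumeration idea can be sketched in Haskell with the list monad standing in for Curry's non-determinism (our sketch with hypothetical names; the thesis's point is that Curry's built-in non-strict non-determinism has advantages over exactly this kind of list encoding). An insertion sort whose comparison non-deterministically answers both True and False returns every permutation of its input:

```haskell
-- Non-deterministic "comparison": answers both True and False.
coinCmp :: a -> a -> [Bool]
coinCmp _ _ = [True, False]

-- Insert an element at every position the comparison allows.
insertM :: (a -> a -> [Bool]) -> a -> [a] -> [[a]]
insertM _   x []     = [[x]]
insertM cmp x (y:ys) = do
  stopHere <- cmp x y
  if stopHere
    then return (x : y : ys)
    else (y :) <$> insertM cmp x ys

-- Insertion sort in the list monad; with coinCmp it enumerates permutations.
sortM :: (a -> a -> [Bool]) -> [a] -> [[a]]
sortM cmp = foldr (\x acc -> acc >>= insertM cmp x) [[]]

perms :: [a] -> [[a]]
perms = sortM coinCmp  -- perms [1,2,3] yields all six permutations
```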

    Ask-Elle: an Adaptable Programming Tutor for Haskell Giving Automated Feedback

    Ask-Elle is a tutor for learning the higher-order, strongly-typed functional programming language Haskell. It supports the stepwise development of Haskell programs by verifying the correctness of incomplete programs and by providing hints. Programming exercises are added to Ask-Elle by providing a task description for the exercise, one or more model solutions, and properties that a solution should satisfy. The properties and model solutions can be annotated with feedback messages, and the amount of flexibility allowed in student solutions can be adjusted. The main contribution of our work is the design of a tutor that combines (1) the incremental development of different solutions in various forms to a programming exercise with (2) automated feedback and (3) teacher-specified programming exercises, solutions, and properties. The main functionality is obtained by means of strategy-based model tracing and property-based testing. We have tested the feasibility of our approach in several experiments, in which we analyse, amongst others, both intermediate and final student solutions to programming exercises.
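    To give a flavour of the property-based-testing side, the following QuickCheck sketch shows what a teacher-specified property for a hypothetical "reverse" exercise could look like (our illustration; Ask-Elle's actual exercise format differs): the submission must agree with the model solution on all inputs.

```haskell
import Test.QuickCheck

-- Teacher-specified property: the submission must behave like the model
-- solution (here Prelude's reverse) on every input list.
prop_matchesModel :: ([Int] -> [Int]) -> [Int] -> Bool
prop_matchesModel submission xs = submission xs == reverse xs

-- Stand-in for a student's submitted solution.
studentReverse :: [Int] -> [Int]
studentReverse = foldl (flip (:)) []

main :: IO ()
main = quickCheck (prop_matchesModel studentReverse)
```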

    Use of proofs-as-programs to build an analogy-based functional program editor

    This thesis presents a novel application of the technique known as proofs-as-programs. Proofs-as-programs defines a correspondence between proofs in a constructive logic and functional programs. By using this correspondence, a functional program may be represented directly as the proof of a specification, and so the program may be analysed within this proof framework. CYNTHIA is a program editor for the functional language ML which uses proofs-as-programs to analyse users' programs as they are written. The underlying proof representation is completely hidden, so the user requires no knowledge of proof theory. The proof framework allows programs written in CYNTHIA to be checked to be syntactically correct, well-typed, well-defined and terminating. CYNTHIA also embodies the idea of programming by analogy: rather than starting from scratch, users always begin with an existing function definition. They then apply a sequence of high-level editing commands which transform this starting definition into the one required. These commands preserve correctness and also increase programming efficiency by automating commonly occurring steps. The design and implementation of CYNTHIA is described and its role as a novice programming environment is investigated. Use by experts is possible, but only a subset of ML is currently supported. Two major trials have shown that CYNTHIA is well suited as a teaching tool. Users of CYNTHIA make fewer programming errors, and its feedback facilities make it easier to track down the source of errors when they do occur.
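    The proofs-as-programs (Curry-Howard) correspondence underlying CYNTHIA can be shown in a few lines of Haskell (our illustration; CYNTHIA itself targets ML and a constructive logic): a total, well-typed program is a proof of its type read as a proposition.

```haskell
-- Curry-Howard in miniature: types are propositions, programs are proofs.

-- Function composition proves transitivity of implication.
impTrans :: (a -> b) -> (b -> c) -> (a -> c)
impTrans f g = g . f

-- Pairing proves conjunction introduction: from A and B, conclude A and B.
conjIntro :: a -> b -> (a, b)
conjIntro x y = (x, y)

-- Case analysis proves disjunction elimination.
disjElim :: (a -> c) -> (b -> c) -> Either a b -> c
disjElim f _ (Left  x) = f x
disjElim _ g (Right y) = g y
```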

    Termination and Productivity

    Klop, J.W. [Promotor]; Vrijer, R.C. de [Copromotor]

    Inductive representation, proofs and refinement of pointer structures

    This thesis belongs to the general domain of formal methods, which give programs a semantics in order to formally prove properties about them. It draws its original motivation from the need for certification of systems in an industrial context where Model Driven Engineering (MDE) and object-oriented (OO) languages are common. In order to obtain efficient transformations on models (graphs), we can represent them as pointer structures, allowing space and time savings through the sharing of nodes. However, verification of properties of programs manipulating pointer structures is still hard. To ease this task, we propose to start the development with a high-level implementation embodied by functional programs manipulating inductive data structures, which are easily verified in proof assistants such as Isabelle/HOL. Pointer structures are represented by a spanning tree adorned with additional references. These functional programs are then refined, if necessary, to imperative programs using the Imperative_HOL library, and finally extracted to (OO) Scala code. This thesis describes this representation and refinement methodology and provides tools for manipulating and proving OO programs in Isabelle/HOL. The approach is put into practice on several examples, notably the Schorr-Waite algorithm and the construction of Binary Decision Diagrams (BDDs).
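    The spanning-tree representation can be sketched with an inductive type in Haskell (a hypothetical rendering; the thesis works in Isabelle/HOL): the tree part supports ordinary structural recursion and induction, while additional references, recorded by node identity, restore the cycles and sharing of the original graph.

```haskell
type NodeId = Int

-- The spanning tree is an ordinary inductive type, so functional programs
-- over it can be verified by structural induction.
data Tree = Node NodeId [Tree]
  deriving Show

-- Additional references (by node id) close cycles and express sharing.
type ExtraRefs = [(NodeId, NodeId)]

data Graph = Graph Tree ExtraRefs
  deriving Show

-- A two-node cycle: 0 -> 1 in the spanning tree, 1 -> 0 as an extra reference.
cycle2 :: Graph
cycle2 = Graph (Node 0 [Node 1 []]) [(1, 0)]
```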

    Assessing and improving the quality of model transformations

    Software is pervading our society more and more and is becoming increasingly complex. At the same time, software quality demands remain at the same, high level. Model-driven engineering (MDE) is a software engineering paradigm that aims at dealing with this increasing software complexity and improving productivity and quality. Models play a pivotal role in MDE. The purpose of using models is to raise the level of abstraction at which software is developed to a level where concepts of the domain in which the software has to be applied, i.e., the target domain, can be expressed effectively. For that purpose, domain-specific languages (DSLs) are employed. A DSL is a language with a narrow focus, i.e., it is aimed at providing abstractions specific to the target domain. As a consequence, the application of models developed using DSLs is typically restricted to describing concepts existing in that target domain. Reuse of models such that they can be applied for different purposes, e.g., analysis and code generation, is one of the challenges that should be solved by applying MDE. Therefore, model transformations are typically applied to transform domain-specific models to other (equivalent) models suitable for different purposes. A model transformation is a mapping from a set of source models to a set of target models, defined as a set of transformation rules. MDE is gradually being adopted by industry. Since MDE is becoming more and more important, model transformations are becoming more prominent as well. Model transformations are in many ways similar to traditional software artifacts and therefore need to adhere to similar quality standards. The central research question addressed in this thesis is therefore as follows: how can the quality of model transformations be assessed and improved, in particular with respect to development and maintenance? Recall that model transformations facilitate reuse of models in a software development process. We have developed a model transformation that enables reuse of analysis models for code generation. The semantic domains of the source and target language of this model transformation are so far apart that straightforward transformation is impossible, i.e., a semantic gap has to be bridged. To deal with model transformations that have to bridge a semantic gap, the semantics of the source and target language as well as possible additional requirements should be well understood. When bridging a semantic gap is not straightforward, we recommend addressing a simplified version of the source metamodel first. Finally, the requirements on the transformation may, if possible, be relaxed to enable automated model transformation. Model transformations that need to transform between models in different semantic domains are expected to be more complex than those that merely transform syntax. The complexity of a model transformation has consequences for its quality. Quality, in general, is a subjective concept and can therefore be defined in different ways. We defined it in the context of model transformation. A model transformation can either be considered as a transformation definition or as the process of transforming a source model to a target model. Accordingly, model transformation quality can be defined in two different ways. The quality of the definition is referred to as its internal quality. The quality of the process of transforming a source model to a target model is referred to as its external quality.

    There are also two ways to assess the quality of a model transformation (both internal and external). It can be assessed directly, i.e., by performing measurements on the transformation definition, or indirectly, i.e., by performing measurements in the environment of the model transformation. We mainly focused on direct assessment of internal quality, but we also addressed external quality and indirect assessment. Given this definition of quality in the context of model transformations, techniques can be developed to assess it. Software metrics have been proposed for measuring various kinds of software artifacts, yet hardly any research has been performed on applying metrics to assess the quality of model transformations. For four model transformation formalisms with different characteristics, viz. ASF+SDF, ATL, Xtend, and QVTO, we defined sets of metrics for measuring model transformations developed with these formalisms. While these metric sets can be used to indicate bad smells in the code of model transformations, they cannot be used for assessing quality yet: a relation has to be established between the metric sets and attributes of model transformation quality. For two of the aforementioned metric sets, viz. the ones for ASF+SDF and for ATL, we conducted an empirical study aimed at establishing such a relation. From these empirical studies we learned which metrics serve as predictors for different quality attributes of model transformations. Metrics can be used to quickly acquire insights into the characteristics of a model transformation. These insights enable increasing the overall quality of model transformations and thereby also their maintainability. To support maintenance, and also development in a traditional software engineering process, visualization techniques are often employed. For model transformations this appears to be a feasible approach as well. Currently, however, there are few visualization techniques available that are tailored towards analyzing model transformations. One of the most time-consuming activities during software maintenance is acquiring an understanding of the software, and we expect that this holds for model transformations as well. Therefore, we presented two complementary visualization techniques for facilitating model transformation comprehension. The first technique is aimed at visualizing the dependencies between the components of a model transformation. The second technique is aimed at analyzing the coverage of the source and target metamodels by a model transformation. The development of the metric sets, and in particular the empirical studies, have led to insights concerning the development of model transformations. The proposed visualization techniques are likewise aimed at facilitating the development of model transformations. We applied the insights acquired from the development of the metric sets, as well as the visualization techniques, in the development of a chain of model transformations that bridges a number of semantic gaps. We chose to solve this transformational problem not with one model transformation, but with a number of smaller model transformations. This leads to smaller transformations, which are more understandable. The language on which the model transformations are defined was subject to evolution. In particular, the coverage visualization proved to be beneficial for the co-evolution of the model transformations.

    Summarizing, we defined quality in the context of model transformations and addressed the necessity for a methodology to assess it. To that end, we defined metric sets and performed empirical studies to validate whether they serve as predictors for model transformation quality. We also proposed a number of visualizations to increase model transformation comprehension. The insights acquired from developing the metric sets and the empirical studies, as well as the visualization tools, proved to be beneficial for developing model transformations.
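    As a toy illustration of what such metrics look like (entirely hypothetical; the thesis defines formalism-specific suites for ASF+SDF, ATL, Xtend, and QVTO), simple size and fan-out measures can be computed over a miniature representation of transformation rules:

```haskell
-- A miniature transformation-rule representation and two size metrics of
-- the kind that can serve as predictors for quality attributes.
data Rule = Rule
  { ruleName    :: String
  , sourceElems :: [String]  -- metamodel elements matched by the rule
  , targetElems :: [String]  -- metamodel elements produced by the rule
  , helperCalls :: [String]  -- helper operations the rule invokes
  }

-- Rule size: number of metamodel elements the rule touches.
ruleSize :: Rule -> Int
ruleSize r = length (sourceElems r) + length (targetElems r)

-- Fan-out: number of helper invocations, a simple coupling indicator.
fanOut :: Rule -> Int
fanOut = length . helperCalls

-- Average rule size over a whole transformation definition.
avgRuleSize :: [Rule] -> Double
avgRuleSize [] = 0
avgRuleSize rs = fromIntegral (sum (map ruleSize rs)) / fromIntegral (length rs)
```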