596 research outputs found

    Constructing medium sized efficient functional programs in Clean


    The Functional Machine Calculus

    This paper presents the Functional Machine Calculus (FMC) as a simple model of higher-order computation with "reader/writer" effects: higher-order mutable store, input/output, and probabilistic and non-deterministic computation. The FMC derives from the lambda-calculus by taking the standard operational perspective of a call-by-name stack machine as primary, and introducing two natural generalizations. One, "locations", introduces multiple stacks, each of which may represent an effect and so enable effect operators to be encoded into the abstraction and application constructs of the calculus. The second, "sequencing", is known from kappa-calculus and concatenative programming languages, and introduces the imperative notions of "skip" and "sequence". This enables the encoding of reduction strategies, including call-by-value lambda-calculus and monadic constructs. The encoding of effects into generalized abstraction and application means that standard results from the lambda-calculus may carry over to effects. The main result is confluence, which is possible because encoded effects reduce algebraically rather than operationally. Reduction generates the familiar algebraic laws for state, and unlike in the monadic setting, reader/writer effects combine seamlessly. A system of simple types confers termination of the machine.
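    To make the machine reading concrete, here is a minimal Haskell sketch of a call-by-name machine with multiple named stacks ("locations"), in the spirit of the FMC. The data type, constructor names, and the substitution-based pop rule are my own illustration under those assumptions, not the paper's formal definitions; the plain lambda-calculus corresponds to one distinguished location.

```haskell
{-# LANGUAGE LambdaCase #-}
import qualified Data.Map as M

type Loc = String

data Term
  = Var String
  | Push Loc Term Term   -- push an (unevaluated) argument onto stack a, continue
  | Pop  Loc String Term -- pop from stack a, bind to x, continue
  | Skip                 -- successful termination
  deriving Show

type Stacks = M.Map Loc [Term]

-- One step of the multi-stack machine: Push moves its argument onto the
-- named stack (call-by-name: unevaluated); Pop substitutes the stack head.
step :: (Stacks, Term) -> Maybe (Stacks, Term)
step (ss, Push a n m) = Just (M.insertWith (++) a [n] ss, m)
step (ss, Pop a x m)  = case M.findWithDefault [] a ss of
  n : ns -> Just (M.insert a ns ss, subst x n m)
  []     -> Nothing   -- blocked on an empty stack
step _ = Nothing      -- Skip or a free variable: the machine halts

-- Capture-naive substitution, sufficient for closed example terms.
subst :: String -> Term -> Term -> Term
subst x n = \case
  Var y      -> if y == x then n else Var y
  Push a p m -> Push a (subst x n p) (subst x n m)
  Pop a y m  -> if y == x then Pop a y m else Pop a y (subst x n m)
  Skip       -> Skip

run :: Term -> (Stacks, Term)
run t = go (M.empty, t) where go s = maybe s go (step s)
```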

    A Lightweight Type System with Uniqueness and Typestates for the Java Cryptography API

    Java cryptographic APIs facilitate building secure applications, but not all developers have strong enough cryptographic knowledge to use these APIs correctly. Several studies have shown that misuses of those cryptographic APIs may cause significant security vulnerabilities, compromising the integrity of applications and exposing sensitive data. Hence, it is an important problem to design methodologies and techniques that can guide developers in building secure applications with minimum effort, and that are accessible to non-experts in cryptography. In this thesis, we present a methodology that reasons about the correct usage of Java cryptographic APIs with types, specifically targeting cryptographic applications. Our type system combines aliasing control with the abstraction of object states into typestates, allowing users to express a set of user-defined disciplines on the use of cryptographic APIs and invariants on variable usage. More specifically, we employ typestate automata to represent typestates within our type system, and we control aliases by applying the principle of uniqueness to sensitive data. We mainly focus on the usage of initialization vectors. An initialization vector is a binary vector used as the input to initialize the state for the encryption of a plaintext block sequence. Randomization and uniqueness are crucial to an initialization vector. Failing to maintain a unique initialization vector for encryption can compromise confidentiality: encrypting the same plaintext with the same initialization vector always yields the same ciphertext, thereby simplifying the attacker's task of guessing the cipher pattern. To address this problem practically, we implement our approach as a pluggable type system on top of the EISOP Checker Framework. To minimize the cryptographic expertise required of application developers looking to incorporate secure computing concepts into their software, our approach allows cryptographic experts to plug protocols into the system. In this setting, developers merely need to provide minimal annotations on sensitive data, requiring little cryptographic knowledge. We also evaluated our work by performing experiments over one benchmark and 7 real-world Java projects from GitHub. We found that 6 out of 7 projects have security issues. In summary, we found 12 misuses of initialization vectors.
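    As a rough illustration of the typestate idea, in Haskell phantom types rather than the thesis's Java type system (all names here are invented), an initialization vector can carry its state in its type so that only fresh IVs are accepted for encryption. Note that plain Haskell does not prevent the same Fresh value from being used twice; closing that gap is precisely the role of the uniqueness and aliasing control described above.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- Typestates for an initialization vector: never used vs. already used.
data IVState = Fresh | Used

-- In a real module the IV constructor would be hidden, so freshIV is
-- the only way for clients to obtain a Fresh IV.
newtype IV (s :: IVState) = IV [Int]

-- Stand-in for a cryptographically secure random source.
freshIV :: IO (IV 'Fresh)
freshIV = pure (IV [0 .. 15])

-- Encryption consumes a Fresh IV and hands it back as Used; the types
-- reject passing an already-Used IV to encrypt again.
encrypt :: IV 'Fresh -> String -> (String, IV 'Used)
encrypt (IV iv) plaintext = (plaintext, IV iv)  -- actual cipher elided
```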

    An Extensible Theorem Proving Frontend

    Interactive theorem provers are software tools for computer-assisted proving: they can both verify suitably encoded proofs of logical statements and assist in constructing them. In recent years, far-reaching formalization projects in mathematics and program verification have been completed with such theorem provers. The theorem prover Lean in particular has been used successfully not only to verify long-known mathematical theorems but also to support current mathematical research. The goal of the Lean project is nothing less than to fundamentally change the way mathematicians work, by making computer-formalized proofs a practical alternative to pen-and-paper proofs. Laborious manual review of proof correctness would become obsolete, while at the same time it would be guaranteed that all necessary proof steps are captured exactly, rather than being left to the reader's interpretation and background knowledge. To reach this goal, however, further progress in the efficiency and usability of theorem provers is needed. As a step towards this goal, this dissertation describes a new, fully extensible theorem prover user interface ("frontend") in the context of Lean 4, the next version of Lean. The task of this frontend is the textual description and acceptance of proof input in a syntax that must optimize several partly conflicting goals: compactness, readability for human users, and unambiguous interpretation by the theorem prover. Since written mathematics contains an extensive and ever-growing set of notations, which can moreover differ between fields, authors, or even individual papers, such an interface must allow users to extend it at any time with new, expressive notations and to assign them meaning via flexible rules. This desire for flexibility of the input language recurs at the level of individual proof steps ("tactics") as well as at higher levels of proof and program organization. I have realized the core of this desired extensibility with an expressive macro system for Lean, which can express both simple syntax transformations ("syntactic sugar") and complex, type-driven translations into the prover's core language. The macro system is based on a novel algorithm for macro hygiene, derived from that of the Lisp dialect Racket and adapted by me to the specific requirements of theorem provers, whose task is to ensure that the lexical scoping of identifiers works as intuitively expected even for complex macros. In designing the macro system I took particular care to make it easily accessible by providing several levels of abstraction that differ in expressiveness but rest on the same fundamental principles, such as the aforementioned macro hygiene. As an application example of the macro system, I describe an extension of the "do"-notation known from Haskell with further imperative language features. The extended syntax has been incorporated into Lean 4 and has fundamentally changed the way both developers and users write monadic, but also pure, code. The macro system is the "heart" of the extensible frontend, but it is also closely linked to, or dependent on, other software components of the user interface. I present the entire frontend and the surrounding Lean system, focusing on the parts to which I contributed significantly. Finally, I describe an efficient reference counting scheme for functional programming, which made possible a reimplementation of Lean in Lean itself and, with it, the extensible frontend. Specific optimizations for reusing allocations combine, much like the extended do-notation, the advantages of imperative and purely functional programming in a new paradigm that I call "pure imperative programming".
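    For readers unfamiliar with the "do"-notation mentioned above, the sketch below shows the standard Haskell sugar and its hand-expanded form in terms of (>>=). This is the kind of syntax transformation the macro system expresses; Lean 4's extended do-notation layers further imperative features on top of the same desugaring scheme.

```haskell
import Data.IORef

-- A do-block over IO, written in the surface syntax...
sumRef :: [Int] -> IO Int
sumRef xs = do
  r <- newIORef 0
  mapM_ (\x -> modifyIORef r (+ x)) xs
  readIORef r

-- ...and the same function with the sugar expanded into (>>=).
sumRef' :: [Int] -> IO Int
sumRef' xs =
  newIORef 0 >>= \r ->
  mapM_ (\x -> modifyIORef r (+ x)) xs >>= \_ ->
  readIORef r
```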

    Functional programming and graph algorithms

    This thesis is an investigation of graph algorithms in the non-strict purely functional language Haskell. Emphasis is placed on the importance of achieving an asymptotic complexity as good as with conventional languages. This is achieved by using the monadic model for including actions on the state. Work on the monadic model was carried out at Glasgow University by Wadler, Peyton Jones, and Launchbury in the early nineties and has opened up many diverse application areas. One area is the ability to express data structures that require sharing. Although graphs themselves are not represented in this style, the data structures that the graph algorithms use are. Several examples of stateful algorithms are given, including union/find for disjoint sets and the linear-time sort binsort. The graph algorithms presented are not new, but are traditional algorithms recast in a functional setting. Examples include strongly connected components, biconnected components, Kruskal's minimum cost spanning tree, and Dijkstra's shortest paths. The presentation is lucid, giving more insight than usual. The functional setting allows for complete calculational-style correctness proofs, which is demonstrated with many examples. The benefits of using a functional language for expressing graph algorithms are quantified by looking at the issues of execution times, asymptotic complexity, correctness, and clarity, in comparison with traditional approaches. The intention is to be as objective as possible, pointing out both the weaknesses and the strengths of using a functional language.
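    For instance, the union/find structure mentioned above is naturally expressed with in-place update inside a state monad. A minimal sketch against the modern ST interface (the descendant of the Glasgow state-monad work cited above; not the thesis's original code) might look like this, with runST sealing the mutation behind a pure result:

```haskell
import Control.Monad.ST
import Data.Array.ST

-- Parent-pointer array: an element whose parent is itself is a root.
type UF s = STUArray s Int Int

newUF :: Int -> ST s (UF s)
newUF n = newListArray (0, n - 1) [0 .. n - 1]

-- find with path compression: the in-place writeArray is what recovers
-- the asymptotic complexity of the conventional imperative version.
find :: UF s -> Int -> ST s Int
find uf x = do
  p <- readArray uf x
  if p == x
    then pure x
    else do
      r <- find uf p
      writeArray uf x r
      pure r

union :: UF s -> Int -> Int -> ST s ()
union uf x y = do
  rx <- find uf x
  ry <- find uf y
  writeArray uf rx ry

-- Observably pure usage: 0 and 2 end up in the same set.
demo :: Bool
demo = runST (do
  uf <- newUF 5
  union uf 0 1
  union uf 1 2
  (==) <$> find uf 0 <*> find uf 2)
```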

    Composite objects: dynamic representation and encapsulation by static classification of object references

    The composition of several objects to one higher-level, composite object is a central technique in the construction of object-oriented software systems and for the management of their structural and dynamic complexity. Standard object-oriented programming languages, however, focus their support on the elementary objects and on class inheritance (the other central technique). They do not provide for the expression of objects' composition, and do not ensure any kind of encapsulation of composite objects. In particular, there is no guarantee that composite objects control the changes of their own state (state encapsulation). We propose to advance software quality through new program annotations that document the design with respect to object composition and, based on them, new static checks that exclude designs violating the encapsulation of composite objects' state. No significant restrictions are imposed on the composite objects' internal structure and dynamic construction. Common design patterns like Iterators and Abstract Factories are supported. We extend a subset of the Java language with mode annotations on all types of object references, and a user-specified classification of all methods into potentially state-changing mutators and read-only observers. The modes superimpose composition relationships between objects connected by paths of references at run-time. The proposed mode system limits, orthogonally to the type system, the invocation of mutator methods (depending on the mode of the reference to the receiver object), the permissibility of reference passing (as parameter or result), and the compatibility between references of different modes. These restrictions statically guarantee state encapsulation relative to the mode-expressed object composition structure.
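    The mode system itself is a static discipline for Java references, but the mutator/observer split can be loosely imitated with phantom types. The following Haskell sketch is only an analogy to the paper's design, with all names invented here: references carry a mode in their type, observers are allowed through any mode, and mutators require the composite's own representation mode.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
import Data.IORef

-- Reference modes: Rep for the composite's own representation,
-- Read for read-only references handed out to clients.
data Mode = Rep | Read

newtype Ref (m :: Mode) a = Ref (IORef a)

-- Observers are permitted through any mode of reference...
observe :: Ref m a -> IO a
observe (Ref r) = readIORef r

-- ...while mutators require the Rep mode, so a client holding a
-- Read reference cannot change the composite's state.
mutate :: Ref 'Rep a -> (a -> a) -> IO ()
mutate (Ref r) = modifyIORef r

-- The composite may weaken its own references before passing them out.
readView :: Ref 'Rep a -> Ref 'Read a
readView (Ref r) = Ref r
```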

    Compile-Time Optimisation of Store Usage in Lazy Functional Programs

    Functional languages offer a number of advantages over their imperative counterparts. However, a substantial amount of the time spent on processing functional programs is due to the large amount of storage management which must be performed. Two apparent reasons for this are that the programmer is prevented from including explicit storage management operations in programs which have a purely functional semantics, and that more readable programs are often far from optimal in their use of storage. Correspondingly, two alternative approaches to the optimisation of store usage at compile-time are presented in this thesis. The first approach is called compile-time garbage collection. This approach involves determining at compile-time which cells are no longer required for the evaluation of a program, and making these cells available for further use. This overcomes the problem of a programmer not being able to indicate explicitly that a store cell can be made available for further use. Three different methods for performing compile-time garbage collection are presented in this thesis: compile-time garbage marking, explicit deallocation, and destructive allocation. Of these three methods, it is found that destructive allocation is the only one of practical use. The second approach to the optimisation of store usage is called compile-time garbage avoidance. This approach involves transforming programs at compile-time into semantically equivalent programs which produce less garbage. This attempts to overcome the problem of more readable programs being far from optimal in their use of storage. In this thesis, it is shown how to guarantee that the process of compile-time garbage avoidance will terminate. Both of the described approaches to the optimisation of store usage make use of the information obtained by usage counting analysis, which counts the number of times each value in a program is used. In this thesis, a reference semantics is defined against which the correctness of usage counting analyses can be proved. A usage counting analysis is then defined and proved to be correct with respect to this reference semantics. The information obtained by this analysis is used to annotate programs for compile-time garbage collection, and to guide the transformation when compile-time garbage avoidance is performed. It is found that compile-time garbage avoidance produces greater increases in efficiency than compile-time garbage collection, but much of the garbage which can be collected by compile-time garbage collection cannot be avoided at compile-time. The two approaches are therefore complementary, and the expressions resulting from compile-time garbage avoidance transformations can be annotated for compile-time garbage collection to further optimise the use of storage.
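    A tiny hand-written example of what garbage avoidance achieves: the first definition below allocates a cons cell per element for map's intermediate result, all of which is immediately garbage, while the second is a semantically equivalent form that builds no intermediate list. The thesis derives such transformations automatically, guided by usage counting information; this pair is only an illustration of the effect.

```haskell
-- Readable, but map's output list is consumed once and discarded.
sumDoubles :: [Int] -> Int
sumDoubles xs = sum (map (* 2) xs)

-- Equivalent fused form: the traversal accumulates directly,
-- producing no intermediate list cells at run-time.
sumDoubles' :: [Int] -> Int
sumDoubles' = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + 2 * x) xs
```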

    Introduction to the Literature On Programming Language Design

    This is an introduction to the literature on programming language design and related topics. It is intended to cite the most important work, and to provide a place for students to start a literature search.