
    A Pure Embedding of Roles: Exploring 4-dimensional Dispatch for Roles in Structured Contexts

    Present-day software systems have to fulfill an increasing number of requirements, which makes them more and more complex. Many systems need to anticipate changing contexts or adapt to changing business rules or requirements. The challenge of 21st-century software development will be to cope with these aspects. We believe that the role concept offers a simple way to adapt an object-oriented program to its changing context. In a role-based application, an object plays multiple roles during its lifetime. If contexts are represented as first-class entities, they provide dynamic views onto the object-oriented program; when a context changes, the views can be switched easily and the software system adapts automatically. Although the concepts of roles and dynamic contexts have been discussed for a long time in many areas of computer science, their use in an existing object-oriented language has so far required a specific runtime environment. Moreover, classical object-oriented languages and their runtime systems cannot cope with essential role-specific features such as true delegation or the dynamic binding of roles. Contexts and views are also becoming important in software development more broadly: the traditional code-oriented approach to software engineering is becoming less and less satisfactory, and support for multiple views of a software system scales much better to the needs of today's systems. However, it relies on programming languages to provide roles for the construction of views. As a solution, this thesis presents an implementation pattern for role-playing objects that does not require a specific runtime system: the SCala ROles Language (SCROLL). Via this library approach, roles are embedded in a statically typed base language as dynamically evolving objects. The approach is pure in the sense that no additional compiler or tooling is needed. The implementation pattern is demonstrated with the Scala language; as technical support from Scala, it requires dynamic mixins, compiler-translated function calls, and implicit conversions. The details of how roles are implemented are hidden in a Scala library and are therefore transparent to SCROLL programmers. The SCROLL library supports roles embedded in structured contexts. Additionally, a four-dimensional, context-aware dispatch at runtime is presented; it overcomes the subtle ambiguities introduced by the rich semantics of role-playing objects. SCROLL is written in Scala, which blends a modern object-oriented language with functional programming. The library is below 1,400 lines of code, so it can be considered minimalistic in design and easy to maintain. Our approach solves several practical problems arising in the area of dynamic extensibility and adaptation.

    09491 Abstracts Collection -- Graph Search Engineering

    From 29th November to 4th December 2009, the Dagstuhl Seminar 09491 "Graph Search Engineering" was held at Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Enforcing C++ type integrity with fast dynamic casting, member function protections and an exploration of C++ beneath the surface

    The C++ type system provides a programmer with modular class features and inheritance capabilities. Upholding the integrity of all class types, known as type-safety, is paramount in preventing type vulnerabilities and exploitation. However, type confusion vulnerabilities are all too common in C++ programs. The lack of low-level type-awareness creates an environment where advanced exploits, like counterfeit object-oriented programming (COOP), can flourish. Although type confusion and COOP exist in different research fields, they both take advantage of inadequate enforcement of type-safety. Most type confusion defence research has focused on type inclusion testing, with varying degrees of coverage and performance overheads. COOP defences, on the other hand, have predominantly featured control flow integrity (CFI) measures, which, until very recently, were thought to be sound. We investigate both of these topics and challenge the prevailing wisdom, arguing that (1) optimised dynamic casting is better suited to preventing type confusion, and (2) enforcing type integrity may be the only defence against COOP. Type confusion vulnerabilities are often the result of substituting dynamic casting with an inappropriate static casting method. Dynamic casting is often avoided due to its memory consumption and run-time overheads, with some developers turning off run-time type information (RTTI) altogether. However, without RTTI, developers lose not only secure casting but virtual inheritance as well. We argue that improving the performance of dynamic casting can make it a viable option for preventing type confusion vulnerabilities. In this thesis, we present MemCast, a memoising wrapper for the dynamic cast operator that increases its speed to that of a dynamic dispatch. A new variant of the COOP exploit (COOPLUS) has identified a weakness in almost all modern, C++-semantic-aware CFI defences: they allow derived-class functions to be invoked on corrupted base-class instances, specifically where an attacker replaces an object's virtual pointer with one from an object of a derived type. A CFI defence overestimates the set of target functions at a dispatch site in order to cover all possible control-flow paths of a polymorphic object, so COOPLUS takes advantage of the lack of type integrity between related types at dispatch sites. In this thesis, we argue that CFI is an unsuitable defence against COOPLUS and that type integrity must be enforced. Hence we propose a type integrity defence called Member Function Integrity (MFI) that brings type awareness to member functions and prevents any member function from operating on an invalid object type. To understand the low-level techniques deployed in MemCast and our MFI defence policy, one has to appreciate the memory layout of the objects themselves and the conventions used by the member functions that operate on them. However, in our research we did not find adequate introductory literature specific to modern compilers, so we supply our own self-contained introduction to low-level object orientation. This thesis therefore has three contributions: a primer on C++ object layouts, an optimised dynamic casting technique that reduces the casting cost to that of a dynamic dispatch, and a new defence policy proposal (MFI) to mitigate all known COOP exploits.

    Functional programming and embedded systems

    Embedded computer systems seem to be the antithesis of functional language systems. Embedded systems are small, stand-alone, and are often forced to accept inelegant design compromises due to hardware cost. They run continuously and are reactive: their primary goal is to monitor sensors and control effectors, using observed external events to trigger state-changing control actions. Yet this thesis describes how functional abstraction can tame the inelegance of embedded systems. Architectural compromises can be made in device drivers, programmed within the functional language, while a function-level interface is presented to the application programmer. Four modifications are introduced to a test-bed purely functional language in order to facilitate embedded-systems programming: I/O register access; communicating processes; interrupts; and a real-time incremental garbage collector. Referential transparency is preserved. The conventional model of communicating processes is augmented...
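
    The thesis's own test-bed language is not Haskell, but the core idea, confining the architectural compromise to driver code while presenting a function-level interface to the application programmer, can be sketched in Haskell. In the sketch below the device base address, register offsets and scaling factor are invented for illustration.

        import Foreign.Ptr      (Ptr, nullPtr, plusPtr)
        import Foreign.Storable (peekByteOff, pokeByteOff)
        import Data.Word        (Word32)

        -- Hypothetical memory-mapped sensor block; the address and offsets are invented.
        sensorBase :: Ptr Word32
        sensorBase = nullPtr `plusPtr` 0x40000000

        -- Low-level register access, confined to the driver module.
        readRawTemp :: IO Word32
        readRawTemp = peekByteOff sensorBase 0x04

        setSampleRate :: Word32 -> IO ()
        setSampleRate hz = pokeByteOff sensorBase 0x08 hz

        -- Function-level interface seen by the application programmer; the raw
        -- peeks and pokes never escape the driver.
        readTemperature :: IO Double
        readTemperature = do
          raw <- readRawTemp
          pure (fromIntegral raw / 16.0)  -- invented scaling factor

    A reactive control loop can then be written entirely against readTemperature and setSampleRate, with no register traffic visible at the application level.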

    Profiling large-scale lazy functional programs

    The LOLITA natural language processing system is an example of one of the ever-increasing number of large-scale systems written entirely in a functional programming language. The system consists of over 50,000 lines of Haskell code and is able to perform a number of tasks such as semantic and pragmatic analysis of text, context scanning and query analysis. Such a system is more useful if its results are calculated in real time, so the efficiency of the system is paramount. For the past three years we have used the profiling tools supplied with the Haskell compilers GHC and HBC to analyse and reason about our programming solutions, and have achieved good results; however, our experience has shown that the profiling life-cycle is often too long to make a detailed analysis of a large system possible, and the profiling results are often misleading. A profiling system is developed which provides three types of functionality not previously found in a profiler for lazy functional programs. Firstly, the profiler is able to produce results based on an accurate method of cost inheritance. We have found that this reduces the possibility of the programmer obtaining misleading profiling results. Secondly, the programmer is able to explore the results after the execution of the program, by selecting and deselecting parts of the program using a post-processor. This greatly reduces the analysis time, as no further compilation, execution or profiling of the program is needed. Finally, the new profiling system allows the user to examine aspects of the run-time call structure of the program, which is useful in analysing the program's run-time behaviour. Previous attempts at extending the results produced by a profiler in this way have failed due to exceptionally high overheads. Measurements of the new profiling scheme show that typical overheads in profiling the LOLITA system are a 10% increase in compilation time, a 7% increase in executable size and a 70% run-time overhead. These overheads translate into a considerable saving of time in the detailed profiling analysis of a large, lazy functional program.
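
    For readers unfamiliar with cost-centre profiling, the sketch below shows GHC's standard SCC annotations (the general mechanism the thesis builds on, not the thesis's own profiler): runtime and allocation are charged to named centres, and cost inheritance determines how the costs of called functions roll up to the enclosing centres. The module and centre names here are illustrative only.

        -- Compile with: ghc -prof Main.hs
        -- Run with:     ./Main +RTS -p   (writes a Main.prof cost-centre report)
        module Main where

        analyse :: String -> Int
        analyse s = ({-# SCC "semantics" #-} semantic s)
                  + ({-# SCC "pragmatics" #-} pragmatic s)

        semantic :: String -> Int
        semantic = length . words   -- stand-in for semantic analysis

        pragmatic :: String -> Int
        pragmatic = length . lines  -- stand-in for pragmatic analysis

        main :: IO ()
        main = print (analyse "a small stand-in for the LOLITA input")

    In the resulting report, work done inside semantic is inherited by the "semantics" centre; attributing such inherited costs accurately under lazy evaluation is the problem the thesis addresses.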

    Logical Reasoning for Higher-Order Functions with Local State

    We introduce an extension of Hoare logic for call-by-value higher-order functions with ML-like local reference generation. Local references may be generated dynamically and exported outside their scope, may store higher-order functions, and may be used to construct complex mutable data structures. This primitive is captured logically using a predicate asserting reachability of a reference name from a possibly higher-order datum, together with quantifiers over hidden references. We explore the logic's descriptive and reasoning power with non-trivial programming examples combining higher-order procedures and dynamically generated local state. Axioms for reachability and local invariants play a central role in reasoning about the examples.
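
    As a flavour of the specifications involved (written here in generic Hoare-triple notation, not the paper's exact syntax), consider a program whose reference is created locally and never escapes:

        \[
        \{\top\}\ \ \texttt{let x = ref 0 in let inc = fun () -> (x := !x + 1; !x) in (inc (); inc ())}\ \ \{z.\ z = 2\}
        \]

    Proving such a triple relies on a local invariant for the hidden reference (here, that !x counts the calls made to inc so far) together with the fact that x is unreachable from any other datum, so no other code can interfere with it; the paper's reachability predicate and quantifiers over hidden references are what make this style of argument formal.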

    Liquidate your assets: reasoning about resource usage in liquid Haskell

    Liquid Haskell is an extension to the type system of Haskell that supports formal reasoning about program correctness by encoding logical properties as refinement types. In this article, we show how Liquid Haskell can also be used to reason about program efficiency in the same setting. We use the system's existing verification machinery to ensure that the results of our cost analysis are valid, together with custom invariants for particular program contexts to ensure that the results of our analysis are precise. To illustrate our approach, we analyse the efficiency of a wide range of popular data structures and algorithms, and in doing so explore various notions of resource usage. Our experience is that reasoning about efficiency in Liquid Haskell is often just as simple as reasoning about correctness, and that the two can naturally be combined.
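
    To make "refinement types for correctness and for cost" concrete, here is a minimal sketch in Liquid Haskell syntax. The Tick wrapper and all names are ours for illustration, not necessarily the article's own library; the refinement on insert claims that inserting an element costs at most one comparison per list element.

        {-@ measure len @-}
        len :: [a] -> Int
        len []     = 0
        len (_:xs) = 1 + len xs

        -- Correctness: hd may only be applied to provably non-empty lists.
        {-@ hd :: {xs:[a] | len xs > 0} -> a @-}
        hd :: [a] -> a
        hd (x:_) = x

        -- Resource usage: a result paired with an abstract cost (our own wrapper).
        data Tick a = Tick { cost :: Int, value :: a }
        {-@ data Tick a = Tick { cost :: Int, value :: a } @-}

        -- The refinement bounds the number of comparisons by the input length.
        {-@ insert :: Ord a => a -> xs:[a] -> {t:Tick [a] | cost t <= len xs} @-}
        insert :: Ord a => a -> [a] -> Tick [a]
        insert x []     = Tick 0 [x]
        insert x (y:ys)
          | x <= y      = Tick 1 (x : y : ys)
          | otherwise   = let Tick n zs = insert x ys in Tick (n + 1) (y : zs)

    The same verification machinery that checks the non-emptiness precondition of hd also discharges the cost bound on insert, which is the sense in which reasoning about efficiency reuses the correctness infrastructure.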

    Workshop on Database Programming Languages

    These are the revised proceedings of the Workshop on Database Programming Languages held at Roscoff, Finistère, France in September 1987. The last few years have seen enormous activity in the development of new programming languages and new programming environments for databases. The purpose of the workshop was to bring together researchers from both databases and programming languages to discuss recent developments in the two areas, in the hope of overcoming some of the obstacles that appear to prevent the construction of a uniform database programming environment. The workshop, which follows a previous workshop held in Appin, Scotland in 1985, was extremely successful. The organizers were delighted with both the quality and the volume of the submissions for this meeting, and it was regrettable that more papers could not be accepted. Both the stimulating discussions and the excellent food and scenery of the Brittany coast made the meeting thoroughly enjoyable. There were three main foci for this workshop: the type systems suitable for databases (especially object-oriented and complex-object databases), the representation and manipulation of persistent structures, and extensions to deductive databases that allow for more general and flexible programming. Many of the papers describe recent results or work in progress, and are indicative of the latest research trends in database programming languages. The organizers are extremely grateful for the financial support given by CRAI (Italy), Altaïr (France) and AT&T (USA). We would also like to acknowledge the organizational help provided by Florence Deshors, Hélène Gans and Pauline Turcaud of Altaïr, and by Karen Carter of the University of Pennsylvania.

    P-Pascal: a data-oriented persistent programming language

    Persistence is measured by the length of time an object is retained and is usable in a system. Persistent languages extend general-purpose languages by providing the full range of persistence for data of any type. Moreover, data which remains on disk after program termination is manipulated in the same way as transient data. As these languages are based on general-purpose programming languages, they tend to be program-centred rather than data-centred. This thesis investigates the inclusion of data-oriented features in a persistent programming language. P-Pascal, a Persistent Pascal, has been designed and implemented to develop techniques for data clustering, metadata maintenance, security enforcement and bulk data management. It introduces type completeness to Pascal and in particular shows how a type-complete set constructor can be provided. This type is shown to be a practical and versatile mechanism for handling bulk data collections in a persistent environment. Relational algebra operators are provided, and the automatic optimisation of set expressions is performed by the compiler and the runtime system. The P-Pascal Abstract Machine incorporates two complementary data placement strategies, automatic updating of type information, and metadata query facilities. The protection of data types, primary (named) objects and their individual components is supported. The challenges and opportunities presented by the persistent store organisation are discussed, and techniques for efficiently exploiting its properties are proposed. We also describe the effects on a data-oriented system of treating persistent and transient data alike, so that they cannot be distinguished statically. We conclude that object clustering, metadata maintenance and security enforcement can and should be incorporated in persistent programming languages. The provision of a built-in, type-complete bulk data constructor and its non-procedural operators is demonstrated. We argue that this approach is preferable to engineering such objects on top of a language, because of its greater ease of use and the considerable opportunity it gives for automatic optimisation. The existence of such a type does not preclude programmers from constructing their own bulk objects using other types; this is but one advantage of a persistent language over a database system.