157 research outputs found

    Maya: multiple-dispatch syntax extension in Java

    Technical report. We have designed and implemented Maya, a version of Java that allows programmers to extend and reinterpret its syntax. Maya generalizes macro systems by treating grammar productions as generic functions, and semantic actions on productions as multimethods on the corresponding generic functions. Programmers can write new generic functions (i.e., grammar productions) and new multimethods (i.e., semantic actions), through which they can extend the grammar of the language and change the semantics of its syntactic constructs, respectively. Maya's multimethods are compile-time metaprograms that transform abstract syntax: they execute at program compile time, because they are semantic actions executed by the parser. Maya's multimethods can be dispatched on the syntactic structure of the input, as well as on the static, source-level types of expressions in the input. In this paper we describe what Maya can do and how it works. We describe how its novel parsing techniques work and how Maya can statically detect certain kinds of errors, such as hygiene violations. Finally, to demonstrate Maya's expressiveness, we describe how Maya can be used to implement the MultiJava language, which was described by Clifton et al. at OOPSLA 2000.
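    The abstract's key idea, dispatching a semantic action on both the syntactic structure of the input and the static types of its subexpressions, can be pictured in plain Java roughly as follows. This is only a conceptual sketch with hypothetical names; Maya's actual surface syntax and dispatch machinery are not shown in the abstract.

        // Conceptual sketch only, not Maya syntax: a semantic action that picks
        // its behavior from both the AST node's structure and the static,
        // source-level type of a subexpression. All names are hypothetical.
        interface Expr { String staticType(); }

        record Literal(String staticType, Object value) implements Expr {}

        record Plus(Expr left, Expr right) implements Expr {
            public String staticType() { return left.staticType(); }
        }

        class SemanticAction {
            // The most specific case applicable to (structure, static type)
            // wins, mimicking multimethod dispatch on a grammar production.
            static Expr rewrite(Expr e) {
                if (e instanceof Plus p && p.staticType().equals("String")) {
                    // e.g. rewrite string '+' into an explicit concatenation call
                    return new Literal("String", "concat(" + p.left() + ", " + p.right() + ")");
                }
                return e; // numeric '+' and everything else: unchanged
            }

            public static void main(String[] args) {
                Expr e = new Plus(new Literal("String", "a"), new Literal("String", "b"));
                System.out.println(rewrite(e));
            }
        }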

    Towards Implicit Parallel Programming for Systems

    Multi-core processors require a program to be decomposable into independent parts that can execute in parallel in order to scale performance with the number of cores. But parallel programming is hard, especially when the program requires state, which many system programs use for optimization, for example a cache to reduce disk I/O. Most prevalent parallel programming models do not support a notion of state and require the programmer to synchronize state access manually, i.e., outside the realm of an associated optimizing compiler. This prevents the compiler from introducing parallelism automatically and requires the programmer to optimize the program manually. In this dissertation, we propose a programming language/compiler co-design that provides a new programming model for implicit parallel programming with state, together with a compiler that can optimize the program for parallel execution. We define the notion of a stateful function along with its composition and control structures. An example implementation of a highly scalable server shows that stateful functions integrate smoothly into existing programming language concepts, such as object-oriented programming and programming with structs. Our programming model is also highly practical and allows existing code bases to be adapted gradually. As a case study, we implemented a new data processing core for the Hadoop Map/Reduce system to overcome existing performance bottlenecks. Our lambda-calculus-based compiler automatically extracts parallelism without changing the program's semantics. We added further domain-specific, semantics-preserving transformations that reduce I/O calls for microservice programs. The runtime format of a program is a dataflow graph that can be executed in parallel, performs concurrent I/O, and allows for non-blocking live updates.
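    The dissertation's own language is not reproduced in the abstract, but a stateful function can be pictured as a function closed over private state that the runtime places on its own thread and connects to other functions through channels, so no manual locking is needed. A minimal Java sketch, with all names hypothetical:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // Hypothetical illustration of the "stateful function" idea: each stage
        // owns its state privately and communicates only through channels, so
        // stages can run on separate cores without explicit synchronization.
        public class StatefulPipeline {
            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<String> requests = new ArrayBlockingQueue<>(64);
                BlockingQueue<String> responses = new ArrayBlockingQueue<>(64);

                // Stateful stage: a cache reducing (simulated) disk I/O; the
                // state is confined to this thread.
                Thread cacheStage = new Thread(() -> {
                    Map<String, String> cache = new HashMap<>();
                    try {
                        while (true) {
                            String key = requests.take();
                            responses.put(cache.computeIfAbsent(key, k -> "read:" + k));
                        }
                    } catch (InterruptedException e) { /* shut down */ }
                });
                cacheStage.start();

                requests.put("/etc/motd");
                System.out.println(responses.take()); // prints "read:/etc/motd"
                cacheStage.interrupt();
            }
        }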

    Extending functional databases for use in text-intensive applications

    This thesis continues research exploring the benefits of using functional databases based around the functional data model for advanced database applications, particularly those supporting investigative systems. This is a growing generic application domain covering areas such as criminal and military intelligence, which are characterised by significant data complexity, large data sets and the need for high-performance, interactive use. An experimental functional database language was developed to provide the requisite semantic richness. However, heavy use in a practical context has shown that language extensions and implementation improvements are required, especially in the crucial areas of string matching and graph traversal. In addition, an implementation on multiprocessor, parallel architectures is essential to meet the performance needs arising from existing and projected database sizes in the chosen application area. [Continues.]

    Implementation of Web Query Languages Reconsidered

    Visions of the next-generation Web such as the "Semantic Web" or the "Web 2.0" have triggered the emergence of a multitude of data formats. These formats have different characteristics as far as the shape of data is concerned (for example, tree- vs. graph-shaped). They are accompanied by a puzzlingly large number of query languages, each limited to one data format. Thus, a key feature of the Web, namely to make it possible to access anything published by anyone, is compromised. This thesis is devoted to versatile query languages capable of accessing data in a variety of Web formats. The issue is addressed from three angles: language design; common, yet uniform semantics; and common, yet uniform evaluation. The thesis is accordingly divided into three parts. First, we consider the query language Xcerpt as an example of the advocated class of versatile Web query languages. Using this concrete exemplar allows us to clarify and discuss the vision of versatility in detail. Second, a number of query languages, XPath, XQuery, SPARQL, and Xcerpt, are translated into a common intermediary language, CIQLog. This language has a purely logical semantics, which makes it easily amenable to optimizations. As a side effect, this provides, to the best of our knowledge, the first logical semantics for XQuery and SPARQL. It is a very useful tool for understanding the commonalities and differences of the considered languages. Third, the intermediate logical language is translated into a query algebra, CIQCAG. The core feature of CIQCAG is that it scales from tree- to graph-shaped data and queries without efficiency losses when tree data and queries are considered: it is shown that, in these cases, optimal complexities are achieved. CIQCAG is also shown to evaluate each of the aforementioned query languages with a complexity at least as good as the best previously known evaluation methods. For example, navigational XPath is evaluated with space complexity O(q·d) and time complexity O(q·n), where q is the query size, n the data size, and d the depth of the (tree-shaped) data. CIQCAG is further shown to provide linear time and space evaluation of tree-shaped queries for a larger class of graph-shaped data than any previously proposed method. This larger class of graph-shaped data, called continuous-image graphs (CIGs for short), is introduced for the first time in this thesis. A (directed) graph is a CIG if its nodes can be totally ordered in such a manner that, for this order, the children of any node form a continuous interval. CIQCAG achieves these properties by employing a novel data structure, called a sequence map, that allows an efficient evaluation of tree-shaped queries, or of tree-shaped cores of graph-shaped queries, on any graph-shaped data. While being ideally suited to trees and CIGs, the data structure gracefully degrades to unrestricted graphs. It yields a remarkably efficient evaluation on graph-shaped data that only a few edges prevent from being trees or CIGs.
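    The CIG definition above is concrete enough to check mechanically. The following Java sketch (our own illustration, not part of CIQCAG) verifies the property for a given candidate total order: every node's children must occupy a contiguous interval of positions.

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class CigCheck {
            static boolean isCigUnderOrder(Map<String, List<String>> children,
                                           List<String> order) {
                Map<String, Integer> pos = new HashMap<>();
                for (int i = 0; i < order.size(); i++) pos.put(order.get(i), i);
                for (List<String> kids : children.values()) {
                    if (kids.isEmpty()) continue;
                    int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
                    for (String k : kids) {
                        min = Math.min(min, pos.get(k));
                        max = Math.max(max, pos.get(k));
                    }
                    // contiguous interval <=> span equals number of distinct children
                    if (max - min + 1 != kids.stream().distinct().count()) return false;
                }
                return true;
            }

            public static void main(String[] args) {
                // a -> {b, c}, d -> {c, e}: under the order [a, d, b, c, e] the
                // children of a sit at positions {2,3} and those of d at {3,4},
                // both contiguous, so this graph is a CIG for that order.
                Map<String, List<String>> g = Map.of(
                    "a", List.of("b", "c"),
                    "d", List.of("c", "e"));
                System.out.println(isCigUnderOrder(g, List.of("a", "d", "b", "c", "e")));
            }
        }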

    Extensible Languages for Flexible and Principled Domain Abstraction

    Most programming languages are designed as general-purpose languages. Regardless of the application being developed, they provide the same language features and constructs. Such universal language features, however, ignore the specific requirements that many software projects bring with them. As a counterforce to general-purpose languages, domain-specific programming languages, model-driven software development, and language-oriented programming promote the use of domain abstraction, which enables the use of domain-specific language features and constructs. In particular, domain abstraction allows programmers to program at the same level of abstraction at which they think, and thereby avoids the need to encode domain concepts with general-purpose language features. Unfortunately, current approaches to domain abstraction do not realize its full potential. On the one hand, approaches for internal domain-specific languages lack flexibility with respect to syntax, static analysis, and tool support, which limits the level of abstraction actually achieved. On the other hand, approaches for external domain-specific languages lack important principles, such as modular reasoning or the composition of domain abstractions, which limits the applicability of these approaches in the development of larger software systems. In this thesis, we pursue a novel approach that unites the advantages of internal and external domain-specific languages in order to support flexible and principled domain abstraction. We propose library-based extensible programming languages as a foundation for domain abstraction. In an extensible language, domain abstraction can be achieved by extending the language with domain-specific syntax, static analysis, and tool support. This gives domain abstractions the same flexibility as external domain-specific languages. To ensure adherence to common principles, we organize language extensions as libraries and use simple import statements to activate extensions. This allows modular reasoning (by inspecting the import statements), supports the composition of domain abstractions (by importing multiple extensions), and enables the uniform self-application of language extensions in the development of future extensions (by importing extensions within an extension definition). Organizing extensions as libraries gives domain abstractions the same adherence to principles as internal domain-specific languages. We have designed and implemented the library-based extensible programming language SugarJ. SugarJ libraries can declare extensions of SugarJ's syntax, static analysis, and tool support. A syntactic extension consists of an extended syntax and a transformation of the extended syntax into SugarJ's base syntax. An analysis extension checks parts of the abstract syntax tree of the current file and produces a list of errors. A tool-support extension declares services such as syntax coloring or code completion for particular language constructs.
    SugarJ extensions are fully self-applicable: an extended syntax can be transformed into an extension definition, an extended analysis can check extension definitions, and extended tool support can assist developers in defining extensions. To process a source file with extensions, the SugarJ compiler and the SugarJ IDE inspect the imported libraries to determine the active extensions. The compiler and the IDE adapt the parser, the code generator, the analysis routine, and the tool support of the source file according to the active extensions. In this thesis we describe not only the design and implementation of SugarJ, but also report on extensions of our original design. In particular, we have designed and implemented a generalization of the SugarJ compiler that supports alternative base languages besides Java. We have used this generalization to develop the library-based extensible programming languages SugarHaskell, SugarProlog, and SugarFomega. Furthermore, we have extended SugarJ to support polymorphic domain abstraction and communication integrity. Polymorphic domain abstraction enables programmers to provide multiple transformations for the same domain-specific syntax. This increases the flexibility of SugarJ and supports well-known scenarios from model-driven development. Communication integrity specifies that the components of a software system may communicate only through explicit channels. In the context of code generation, this is an interesting property that prohibits the generation of implicit module dependencies. We have added communication integrity as a further principle to SugarJ. Based on SugarJ and numerous case studies, we argue that flexible and principled domain abstraction is a scalable programming model for the development of complex software systems.
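    The central mechanism described here, determining the active extensions from a file's import statements, can be pictured roughly as follows. This is a plain-Java sketch with hypothetical names and conventions, not SugarJ's actual compiler pipeline:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        // Rough illustration of import-based activation: the compiler scans a
        // source file's import statements and treats each imported library that
        // declares a language extension as active, then adapts the parser,
        // analyses, and tool support accordingly. All names are hypothetical.
        public class ExtensionScan {
            static final Pattern IMPORT =
                Pattern.compile("^import\\s+([\\w.]+);", Pattern.MULTILINE);

            // Stand-in for looking up whether a library declares an extension.
            static boolean declaresExtension(String library) {
                return library.endsWith(".Sugar"); // hypothetical convention
            }

            static List<String> activeExtensions(String source) {
                List<String> active = new ArrayList<>();
                Matcher m = IMPORT.matcher(source);
                while (m.find()) {
                    if (declaresExtension(m.group(1))) active.add(m.group(1));
                }
                return active; // the parser would now be extended with these
            }

            public static void main(String[] args) {
                String src = "import pair.Sugar;\nimport java.util.List;\nclass C {}";
                System.out.println(activeExtensions(src)); // [pair.Sugar]
            }
        }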

    A semantic and agent-based approach to support information retrieval, interoperability and multi-lateral viewpoints for heterogeneous environmental databases

    PhD thesis. Data stored in individual autonomous databases often needs to be combined and interrelated. For example, in the Inland Water (IW) environment monitoring domain, the spatial and temporal variation of measurements of different water quality indicators stored in different databases are of interest. Data from multiple data sources is more complex to combine when there is a lack of metadata in a computational form and when the syntax and semantics of the stored data models are heterogeneous. The main types of information retrieval (IR) requirements are query transparency and data harmonisation for data interoperability, and support for multiple user views. A combined Semantic Web-based and agent-based distributed system framework has been developed to support the above IR requirements. It has been implemented using the Jena ontology and JADE agent toolkits. The semantic part supports the interoperability of autonomous data sources by merging their intensional data, using a Global-As-View (GAV) approach, into a global semantic model, represented in DAML+OIL and in OWL. This is used to mediate between different local database views. The agent part provides the semantic services to import, align and parse semantic metadata instances, to support data mediation and to reason about data mappings during alignment. The framework has been applied to support information retrieval, interoperability and multi-lateral viewpoints for four European environmental agency databases. An extended GAV approach has been developed and applied to handle queries that can be reformulated over multiple user views of the stored data. This allows users to retrieve data in a conceptualisation that is better suited to them, rather than having to understand the entire detailed global view conceptualisation. User viewpoints are derived from the global ontology or from existing viewpoints of it. This has the advantage of reducing the number of potential conceptualisations and their associated mappings to something more computationally manageable. Whereas an ad hoc framework based upon a conventional distributed programming language and a rule framework could be used to support user views and adaptation to user views, a more formal framework has the benefit that it can support reasoning about consistency, equivalence, containment and conflict resolution when traversing data models. A preliminary formulation of the formal model has been undertaken; it is based upon extending a Datalog-type algebra with hierarchical, attribute and instance value operators. These operators can be applied to support compositional mapping and consistency checking of data views. The multiple-viewpoint system was implemented as a Java-based application consisting of two sub-systems, one for viewpoint adaptation and management, the other for query processing and query result adjustment.
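    As a rough illustration of the GAV-style merge of intensional data into a global model, the following sketch uses the current Apache Jena API (the thesis itself worked with Jena in its DAML+OIL era, so details differ; the file names are placeholders):

        import org.apache.jena.ontology.OntModel;
        import org.apache.jena.rdf.model.ModelFactory;

        // Hedged sketch of a GAV-style merge: the schema-level (intensional)
        // data of each autonomous source is read into one global ontology
        // model, which then mediates between the local database views.
        public class GavMerge {
            public static void main(String[] args) {
                OntModel global = ModelFactory.createOntologyModel();
                for (String source : new String[] { "file:agencyA-schema.owl",
                                                    "file:agencyB-schema.owl" }) {
                    global.read(source); // merge into the global semantic model
                }
                // Queries against user viewpoints would now be reformulated
                // against `global` and unfolded onto the local sources.
                global.write(System.out, "TURTLE");
            }
        }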