
    Top-down design in the context of parallel programs

    A class of parallel programs, based on Free Choice Petri nets, is modeled by associating operators and predicates with vertices of the net. The model, called a formal parallel program (FPP), forms a natural extension of flow-chart notation to parallel programs. Definitions are given of the behaviour of an FPP and of the simulation of one FPP by another. A class of top-down FPPs is then defined by requiring program graphs to be obtained through successive refinement steps, using a restricted set of control structures. Using the above definitions, it is shown that there exists an FPP ℰ satisfying the property that for any top-down FPP ℰ′ simulating ℰ, the degree of parallelism attainable in ℰ′ is smaller than that in ℰ. The measure of parallelism used is the number of different ways of carrying out a computation. In the case of parallel programs, this phenomenon of loss of parallelism therefore uncovers a performance factor that may offset some of the advantages of top-down design.
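    The parallelism measure described above — the number of different ways of carrying out a computation — can be illustrated with a toy count of schedule interleavings. This is a hypothetical sketch of the general idea, not the paper's FPP formalism: it counts the distinct sequential orderings of independent branches via a multinomial coefficient.

```python
from functools import reduce
from math import factorial

def interleaving_count(branch_lengths):
    """Number of distinct sequential orderings (interleavings) of
    independent branches, where branch i contributes branch_lengths[i]
    ordered steps: the multinomial coefficient n! / (k1! * k2! * ...)."""
    n = sum(branch_lengths)
    denom = reduce(lambda a, b: a * b, (factorial(k) for k in branch_lengths), 1)
    return factorial(n) // denom

# Two independent 2-step branches admit 4!/(2!*2!) = 6 schedules;
# sequentializing them into a single 4-step branch leaves only 1.
print(interleaving_count([2, 2]))  # 6
print(interleaving_count([4]))     # 1
```

    Under such a measure, any refinement step that forces a sequential ordering of previously independent branches strictly shrinks the count, which is the flavor of the loss-of-parallelism result stated in the abstract.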

    Another way of thinking: creativity and conformity

    This paper explores possible tactics for academics working within a context of regulation and constraint. One tactic we suggest is moving outside of a creativity/conformity binary. Rather than understanding creativity and conformity as separate, where one is understood as excluding the other, we discuss the potential of examining the relationships between them. We use the theme of ‘structure and play’ to illustrate our argument. In the first part of the paper, using various examples from art and design, fields generally associated with creativity, we explore the interrelatedness of creativity and conformity. For example, how might design styles, which are generally understood as creative outcomes, constrain creativity and lead to conformity within the design field? Is fashion producing creativity or conformity? Conversely, the ways conformity provides the conditions for creativity are also examined: for example, the conformity imposed by the State on artists within the communist bloc, and how this contributed to a thriving underground arts movement which challenged conformity and State regulation. Continuing the theme of ‘structure and play’, we provide a story from an Australian university which offers insight into the ongoing renegotiation of power in the academy. This account illustrates the ways programmatic government within the university, with the aim of regulating conduct, contributed to unanticipated outcomes. We propose that a relational view of power is useful for academics operating in the current higher education context, as it brings into view sites where power might begin to be renegotiated.

    The CIAO Multi-Dialect Compiler and System: An Experimentation Workbench for Future (C)LP Systems

    CIAO is an advanced programming environment supporting Logic and Constraint programming. It offers a simple concurrent kernel on top of which declarative and non-declarative extensions are added via libraries. Libraries are available for supporting the ISO Prolog standard, several constraint domains, functional and higher order programming, concurrent and distributed programming, internet programming, and others. The source language allows declaring properties of predicates via assertions, including types and modes. Such properties are checked at compile-time or at run-time. The compiler and system architecture are designed to natively support modular global analysis, with the two objectives of proving properties in assertions and performing program optimizations, including transparently exploiting parallelism in programs. The purpose of this paper is to report on recent progress made in the context of the CIAO system, with special emphasis on the capabilities of the compiler, the techniques used for supporting such capabilities, and the results in the areas of program analysis and transformation already obtained with the system.

    Synthesis of Recursive ADT Transformations from Reusable Templates

    Recent work has proposed a promising approach to improving the scalability of program synthesis by allowing the user to supply a syntactic template that constrains the space of potential programs. Unfortunately, creating templates often requires nontrivial effort from the user, which impedes the usability of the synthesizer. We present a solution to this problem in the context of recursive transformations on algebraic data types. Our approach relies on polymorphic synthesis constructs: a small but powerful extension to the language of syntactic templates, which makes it possible to define a program space in a concise and highly reusable manner while retaining the scalability benefits of conventional templates. This approach enables end-users to reuse predefined templates from a library for a wide variety of problems with little effort. The paper also describes a novel optimization that further improves the performance and scalability of the system. We evaluated the approach on a set of benchmarks that most notably includes desugaring functions for lambda calculus, which force the synthesizer to discover Church encodings for pairs and boolean operations.
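    The Church encodings mentioned in the benchmark — representing pairs and booleans purely as functions — are standard lambda-calculus constructions and can be sketched directly in Python (the names below are conventional, not taken from the paper):

```python
# Church encodings of booleans and pairs as plain lambdas — the kind of
# representation the synthesizer is reported to rediscover.
TRUE  = lambda a: lambda b: a          # select the first argument
FALSE = lambda a: lambda b: b          # select the second argument
NOT   = lambda p: p(FALSE)(TRUE)
AND   = lambda p: lambda q: p(q)(FALSE)

PAIR = lambda x: lambda y: lambda sel: sel(x)(y)  # a pair is its own eliminator
FST  = lambda pr: pr(TRUE)
SND  = lambda pr: pr(FALSE)

pr = PAIR(1)(2)
print(FST(pr), SND(pr))            # 1 2
print(AND(TRUE)(FALSE) is FALSE)   # True
```

    Discovering such encodings automatically is hard precisely because nothing in the target type signature hints that pairs should be represented as higher-order functions, which is why this benchmark is a stress test for template-guided synthesis.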

    Rumble: Data Independence for Large Messy Data Sets

    This paper introduces Rumble, an engine that executes JSONiq queries on large, heterogeneous and nested collections of JSON objects, leveraging the parallel capabilities of Spark so as to provide a high degree of data independence. The design is based on two key insights: (i) how to map JSONiq expressions to Spark transformations on RDDs and (ii) how to map JSONiq FLWOR clauses to Spark SQL on DataFrames. We have developed a working implementation of these mappings, showing that JSONiq can efficiently run on Spark to query billions of objects, at least into the TB range. The JSONiq code is concise in comparison to Spark's host languages while seamlessly supporting the nested, heterogeneous data sets that Spark SQL does not. The ability to process this kind of input, which is commonly found in practice, is paramount for data cleaning and curation. The experimental analysis indicates that there is no excessive performance loss, and occasionally even a gain, over Spark SQL for structured data, and a performance gain over PySpark. This demonstrates that a language such as JSONiq is a simple and viable approach to large-scale querying of denormalized, heterogeneous, arborescent data sets, in the same way that SQL can be leveraged for structured data sets. The results also illustrate that Codd's concept of data independence makes as much sense for heterogeneous, nested data sets as it does for highly structured tables. (Preprint, 9 pages.)
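    The kind of mapping described in insight (i) — a declarative FLWOR clause becoming a filter/map pipeline — can be illustrated with a toy stand-in in plain Python. This is only a sketch of the idea with invented data and field names, not Rumble's actual translation to Spark:

```python
# Heterogeneous JSON-like records: note that "bob" has no "age" field,
# which Spark SQL's rigid schemas handle poorly but JSONiq handles natively.
people = [
    {"name": "ada", "age": 36},
    {"name": "bob"},
    {"name": "cleo", "age": 41, "tags": ["x"]},
]

# A JSONiq-style FLWOR query:
#     for $p in people where $p.age gt 30 return $p.name
# maps naturally onto a filter-then-map pipeline (in Spark: RDD
# transformations; here: a Python comprehension as a toy stand-in).
result = [p["name"] for p in people if p.get("age", 0) > 30]
print(result)  # ['ada', 'cleo']
```

    The point of data independence is that the query above says nothing about partitioning or execution order, so the same FLWOR clause can be executed sequentially or distributed across a Spark cluster without change.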

    Proceedings of the 3rd Workshop on Domain-Specific Language Design and Implementation (DSLDI 2015)

    The goal of the DSLDI workshop is to bring together researchers and practitioners interested in sharing ideas on how DSLs should be designed, implemented, supported by tools, and applied in realistic application contexts. We are interested both in discovering how already established domains such as graph processing or machine learning can best be supported by DSLs, and in exploring new domains that could be targeted by DSLs. More generally, we are interested in building a community that can drive forward the development of modern DSLs. These informal post-proceedings contain the submitted talk abstracts of the 3rd DSLDI workshop (DSLDI'15) and a summary of the panel discussion on Language Composition.

    Mira: A Framework for Static Performance Analysis

    The performance model of an application can provide understanding about its runtime behavior on particular hardware. Such information can be analyzed by developers for performance tuning. However, model building and analysis are frequently ignored during software development until performance problems arise, because they require significant expertise and can involve many time-consuming application runs. In this paper, we propose a fast, accurate, flexible and user-friendly tool, Mira, for generating performance models by applying static program analysis, targeting scientific applications running on supercomputers. We parse both the source code and the binary to estimate performance attributes with better accuracy than considering just source or just binary code. Because our analysis is static, the target program does not need to be executed on the target architecture, which enables users to perform analysis on available machines instead of conducting expensive experiments on potentially expensive resources. Moreover, statically generated models enable performance prediction on non-existent or unavailable architectures. In addition to this flexibility, because model generation time is significantly reduced compared to dynamic analysis approaches, our method is suitable for rapid application performance analysis and improvement. We present several scientific application validation results to demonstrate the current capabilities of our approach on small benchmarks and a mini application.
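    The core idea — predicting cost from program structure without running the program — can be sketched in miniature. The following is a hypothetical toy cost model over an invented loop-nest description, not Mira's actual representation or analysis:

```python
# Minimal illustration of static performance modeling: estimate the
# floating-point operation count of a loop nest from its structure alone,
# without executing it. The (trip_counts, body_flops) encoding is invented
# here for illustration; a real tool derives this from source and binary.
def static_flops(loop_nest):
    """loop_nest = (trip_counts, body_flops): a perfect nest whose loops
    run trip_counts[i] times each, with body_flops ops in the inner body."""
    trip_counts, body_flops = loop_nest
    total = body_flops
    for n in trip_counts:
        total *= n
    return total

# A triple nest (i<100, j<100, k<100) with 2 flops in the body (mul + add),
# as in a naive matrix multiply: 2 * 100^3 = 2,000,000 operations.
print(static_flops(([100, 100, 100], 2)))  # 2000000
```

    Because such an estimate is a function of the architecture-independent program structure, it can be evaluated for hardware the user does not have access to, which is the flexibility the abstract emphasizes.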