
    The End of History? Using a Proof Assistant to Replace Language Design with Library Design

    Functionality of software systems has exploded in part because of advances in programming-language support for packaging reusable functionality as libraries. Developers benefit from the uniformity that comes of exposing many interfaces in the same language, as opposed to stringing together hodgepodges of command-line tools. Domain-specific languages may be viewed as an evolution of the power of reusable interfaces, when those interfaces become so flexible as to deserve to be called programming languages. However, common approaches to domain-specific languages give up many of the hard-won advantages of library-building in a rich common language, and even the traditional approach poses significant challenges in learning new APIs. We suggest that instead of continuing to develop new domain-specific languages, our community should embrace library-based ecosystems within very expressive languages that mix programming and theorem proving. Our prototype framework Fiat, a library for the Coq proof assistant, turns languages into easily comprehensible libraries via the key idea of modularizing functionality and performance away from each other, the former via macros that desugar into higher-order logic and the latter via optimization scripts that derive efficient code from logical programs.

    Session-Based Programming for Parallel Algorithms: Expressiveness and Performance

    This paper investigates session programming and the typing of benchmark examples to compare productivity, safety and performance with other communications programming languages. Parallel algorithms are used to examine these aspects due to their extensive use of message passing for interaction, and their increasing prominence in algorithmic research with the rising availability of hardware resources such as multicore machines and clusters. We contribute new benchmark results for SJ, an extension of Java for type-safe, binary session programming, against MPJ Express, a Java messaging system based on the MPI standard. In conclusion, we observe that (1) despite rich libraries and functionality, MPI remains a low-level API and can suffer from commonly perceived disadvantages of explicit message passing such as deadlocks and unexpected message types, and (2) high-level session abstraction, which significantly improves program structure, readability and reliability, together with session type-safety, can greatly facilitate the task of communications programming whilst retaining competitive performance.
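
    The contrast between raw message passing and session abstraction can be pictured with a small, hypothetical sketch in plain Java (this is not the SJ or MPJ Express API; all names are illustrative): each communication step returns a handle whose type offers only the next legal operation, so a protocol violation such as skipping the reply fails to compile instead of surfacing as a deadlock or an unexpected message type at runtime.

        // Hypothetical illustration of the session-abstraction idea, not SJ's actual API:
        // the protocol "send an int request, then receive a double reply" is encoded in
        // the types, so the client cannot perform the steps out of order.
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class SessionSketch {
            /** After sending the request, the only legal next step is receiving the reply. */
            interface SendRequest { ExpectReply send(int request) throws InterruptedException; }
            interface ExpectReply { double receive() throws InterruptedException; }

            /** Client endpoint backed by two in-memory queues standing in for a network link. */
            static SendRequest client(BlockingQueue<Integer> out, BlockingQueue<Double> in) {
                return request -> {            // step 1: send the int request
                    out.put(request);
                    return in::take;           // step 2: the returned handle can only receive
                };
            }

            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<Integer> requests = new ArrayBlockingQueue<>(1);
                BlockingQueue<Double> replies = new ArrayBlockingQueue<>(1);

                // "Server" thread: plays the dual role of the protocol (receive int, send double).
                Thread server = new Thread(() -> {
                    try {
                        int n = requests.take();
                        replies.put(Math.sqrt(n));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                server.start();

                // The protocol order is fixed by the types: send, then receive.
                double answer = client(requests, replies).send(49).receive();
                System.out.println("reply = " + answer);   // prints 7.0
                server.join();
            }
        }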

    Enhancing iNetTest by Improving the Programming Question and Group Grading

    This report describes an improvement to the Utah State University iNetTest testing system. The iNetTest system allows instructors and/or students to:
    • Create/take tests with rich sets of question types (multiple choice, essay, true/false, computational programming question, etc.);
    • Monitor test takers for cheating;
    • Auto-grade many types of questions, as well as group-grade all question types; and
    • Send scores to students via either email or SMS.
    Specifically, this report discusses the design and development of an improved computational programming question for the iNetTest system. For programming questions, iNetTest allows the use of various programming languages, including some scripting languages. The improved system makes grading faster and more straightforward by assessing all students’ answers automatically. All enhancements described herein improve iNetTest’s functionality and add new security layers that protect against the misuse of features and/or functionality. This report also describes the layered architecture used to build the iNetTest system, including several new technologies, such as Ajax[4] and JavaScript frameworks[5]. MVC frameworks[1] and socket programming[10] are also discussed and compared. Finally, this report discusses how the system was tested and outlines future enhancements to the system.
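
    As a rough illustration of how automatic assessment of programming answers can work (a sketch under assumed commands and file names, not iNetTest’s actual code), a grader can run each submission in a separate process per test case, enforce a timeout, and compare the program output with the expected answer:

        // Hypothetical auto-grading sketch: run the submitted program once per test case
        // in a child process, with a timeout, and count matching outputs. The command
        // ("python3 student_answer.py") and test data are illustrative assumptions.
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.TimeUnit;

        public class AutoGraderSketch {
            /** Runs one submission against expected input/output pairs and returns a score. */
            static int grade(List<String> command, Map<String, String> testCases)
                    throws IOException, InterruptedException {
                int passed = 0;
                for (Map.Entry<String, String> test : testCases.entrySet()) {
                    Process p = new ProcessBuilder(command).redirectErrorStream(true).start();
                    p.getOutputStream().write(test.getKey().getBytes(StandardCharsets.UTF_8));
                    p.getOutputStream().close();
                    // Kill runaway submissions instead of hanging the grader.
                    if (!p.waitFor(5, TimeUnit.SECONDS)) {
                        p.destroyForcibly();
                        continue;
                    }
                    String actual = new String(p.getInputStream().readAllBytes(),
                            StandardCharsets.UTF_8).trim();
                    if (actual.equals(test.getValue().trim())) {
                        passed++;
                    }
                }
                return passed;
            }

            public static void main(String[] args) throws Exception {
                // Example: grade a Python submission (iNetTest supports scripting languages).
                int score = grade(List.of("python3", "student_answer.py"),
                        Map.of("2 3\n", "5", "10 -4\n", "6"));
                System.out.println("Passed " + score + " of 2 test cases");
            }
        }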

    Development of DAL and DAPL languages for building distributed applications

    A common characteristic among parallel/distributed programming languages is that a single language is used to specify not only the overall organisation of the distributed application, but also the functionality of the application. That is, the connectivity and functionality of processes are specified within a single program. Connectivity and functionality are independent aspects of a distributed application. This thesis shows that these two aspects can be specified separately, therefore allowing application designers to concentrate freely on either aspect in a modular fashion. Two new programming languages have been developed for specifying each aspect. These languages are for loosely coupled distributed applications based on message passing, and have been designed to simplify distributed programming by completely removing all low-level interprocess communication. A suite of languages and tools has been designed and developed. It includes the two new languages, parsers, a compilation system to generate intermediate C code that is compiled to binary object modules, a run-time system to create, manage and terminate several distributed applications, and a shell to communicate with the run-time system. DAL (Distributed Application Language) and DAPL (Distributed Application Process Language) are the new programming languages for the specification and development of process-oriented, asynchronous message passing, distributed applications. These two languages have been designed and developed as part of this doctorate in order to specify such distributed applications that execute on a cluster of computers. Both languages are used to specify orthogonal components of an application: on the one hand the organisation of processes that constitute an application, and on the other the interface and functionality of each process. Consequently, these components can be created in a modular fashion, individually and concurrently. The DAL language is used to specify not only the connectivity of all processes within an application, but also the cluster of computers on which the application executes. Furthermore, sub-clusters can be specified for individual processes of an application to constrain a process to a particular group of computers. The second language, DAPL, is used to specify the interface, functionality and data structures of application processes. In addition to these languages, a DAL parser, a DAPL parser, and a compilation system have been designed and developed in this project. This compilation system takes DAL and DAPL programs and generates object modules of machine code, one module for each application process. These object modules are used by the Distributed Application System (DAS) to instantiate and manage distributed applications. The DAS system is another new component of this project. The purpose of the DAS system is to create, manage, and terminate many distributed applications of similar and different configurations. The creation procedure incorporates the automatic allocation of processes to remote machines. Application management includes several operations such as deletion, addition, replacement, and movement of processes, and also detection of and reaction to faults such as a processor crash. A DAS operator communicates with the DAS system via a textual shell called DASH (Distributed Application SHell).
    This suite of languages and tools allowed distributed applications of varying connectivity and functionality to be specified quickly and simply at a high level of abstraction. DAL and DAPL programs of several processes may require only a few dozen lines to specify, compared to the several hundred lines of equivalent C code generated by the compilation system. Furthermore, the DAL and DAPL compilation system successfully generates binary object modules, and the DAS system succeeds in instantiating and managing several distributed applications on a cluster.
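
    The separation that DAL and DAPL enforce can be pictured with a loose Java analogue (purely illustrative; this is not DAL or DAPL syntax): the connectivity of the application is declared in one place, the functionality of each process is written independently as a message handler, and the two only meet when messages are routed at runtime.

        // Hypothetical analogue of the DAL/DAPL split: FUNCTIONALITY plays the role of
        // per-process DAPL code, PIPELINE plays the role of the DAL connectivity spec.
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.function.Function;

        public class ConnectivityVsFunctionality {
            /** "DAPL side": each process is just a function from an incoming message to a reply. */
            static final Map<String, Function<String, String>> FUNCTIONALITY = Map.of(
                    "parser", msg -> "tokens(" + msg + ")",
                    "evaluator", msg -> "result(" + msg + ")");

            /** "DAL side": which process feeds which, declared separately from the handlers. */
            static final List<String> PIPELINE = List.of("parser", "evaluator");

            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(PIPELINE.size());
                String message = "1 + 2";
                // Route the message along the declared connectivity, one asynchronous hop per process.
                for (String process : PIPELINE) {
                    String current = message;
                    message = pool.submit(() -> FUNCTIONALITY.get(process).apply(current)).get();
                }
                System.out.println(message);   // result(tokens(1 + 2))
                pool.shutdown();
            }
        }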

    Julia Programming Language Benchmark Using a Flight Simulation

    Julia’s goal of providing scripting-language ease of coding with compiled-language speed is explored. The runtime speed of the relatively new Julia programming language is assessed against other commonly used languages including Python, Java, and C++. An industry-standard missile and rocket simulation, coded in multiple languages, was used as a test bench for runtime speed. All language versions of the simulation, including Julia, were coded to a highly developed object-oriented simulation architecture tailored specifically for time-domain flight simulation. A second dimension, speed of coding, is plotted against runtime for each language to portray a space that characterizes Julia’s scripting-language efficiencies in the context of the other languages. With caveats, Julia’s runtime speed was found to be in the class of compiled or semi-compiled languages. However, some factors that affect runtime speed at the cost of ease of coding are shown. Julia’s built-in functionality for multi-core processing is briefly examined as a means for obtaining even faster runtime speed. The major contribution of this research to the extensive language-benchmarking body of work is comparing Julia to other mainstream languages using a complex flight simulation as opposed to benchmarking with single algorithms.
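
    The paper’s measurements are not reproduced here, but the general shape of such a runtime comparison can be sketched (hypothetically, in Java, with toy point-mass dynamics standing in for the proprietary flight simulation): a fixed-step integration loop is timed with a monotonic clock after a warm-up pass, so JIT-compiled languages such as Java and Julia are measured at steady state rather than while compiling.

        // Hypothetical timing harness for a cross-language runtime comparison.
        // The dynamics below are a toy stand-in, not the actual missile/rocket simulation.
        public class FlightSimBenchmark {
            /** Integrates simple point-mass dynamics with explicit Euler steps. */
            static double simulate(double dt, int steps) {
                double altitude = 0.0, velocity = 300.0;          // initial state
                for (int i = 0; i < steps; i++) {
                    double drag = 0.001 * velocity * velocity;    // toy drag model
                    velocity += (-9.81 - drag) * dt;              // acceleration update
                    altitude += velocity * dt;                    // position update
                }
                return altitude;   // returned so the loop cannot be optimized away
            }

            public static void main(String[] args) {
                final double dt = 0.001;
                final int steps = 5_000_000;

                simulate(dt, steps);                              // warm-up: let the JIT compile

                long start = System.nanoTime();
                double result = simulate(dt, steps);
                double elapsedMs = (System.nanoTime() - start) / 1e6;

                System.out.printf("final altitude %.1f m, runtime %.1f ms%n", result, elapsedMs);
            }
        }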

    MetaBETA: Model and Implementation

    Object-oriented programming languages are excellent for expressing abstractions in many application domains. The object-oriented programming methodology allows real-world concepts to be modelled in an easy and direct fashion, and it supports refinement of concepts. However, many object-oriented languages and their implementations fall short in two areas: dynamic extensibility and reflection. Dynamic extensibility is the ability to incorporate new classes into an application at runtime. Reflection makes it possible for a language to extend its own domain, e.g., to build type-orthogonal functionality. MetaBETA is an extension of the BETA language that supports dynamic extensibility and reflection. MetaBETA has a metalevel interface that provides access to the state of a running application and to the default implementation of language primitives. This report presents the model behind MetaBETA. In particular, we discuss the execution model of a MetaBETA program and how type-orthogonal abstractions can be built. This includes a presentation of dynamic slots, a mechanism that makes it possible to extend objects at runtime. The other main area covered in this report is the implementation of MetaBETA. The central component of the architecture is a runtime system, which is viewed as a virtual machine whose baselevel interface implements the functionality needed by the programming language.
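
    MetaBETA’s two themes have familiar counterparts in mainstream languages; the following sketch uses Java’s standard reflection API (it is not MetaBETA’s metalevel interface, and the class and method names are illustrative assumptions) to show dynamic extensibility as loading a class unknown at compile time, and reflection as inspecting and invoking it through metaobjects.

        // Hypothetical Java analogue of dynamic extensibility and reflection.
        import java.lang.reflect.Method;
        import java.net.URL;
        import java.net.URLClassLoader;
        import java.nio.file.Paths;

        public class DynamicLoadSketch {
            public static void main(String[] args) throws Exception {
                // Assumed layout: a "plugins" directory containing a compiled class named
                // "Plugin" with a public no-argument method "run"; both names are illustrative.
                URL pluginDir = Paths.get("plugins").toUri().toURL();
                try (URLClassLoader loader = new URLClassLoader(new URL[]{pluginDir})) {
                    Class<?> cls = loader.loadClass("Plugin");          // dynamic extensibility
                    Object instance = cls.getDeclaredConstructor().newInstance();

                    // Reflection: the running program examines its own (extended) structure.
                    for (Method m : cls.getDeclaredMethods()) {
                        System.out.println("found method: " + m.getName());
                    }
                    cls.getMethod("run").invoke(instance);              // invoke via the metalevel
                }
            }
        }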