
    Concurrent Programming

    In this paper the main approaches to constructing concurrent programs will be presented and compared. As a basis for comparison, two examples of systems incorporating concurrent operations have been chosen, and programs for these examples will be presented using the different approaches to concurrent programming. Of particular interest are the semantic issues in language design, i.e. how the computation is expressed, rather than the detailed syntax of the languages. Hence, in the interest of uniformity, the example programs will be written in PASCAL [22] modified to include the necessary constructs. As will be seen, the different approaches to concurrent programming differ greatly in their expressive power, clarity of expression, and ease and efficiency of implementation.
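
    The paper's example programs are written in an extended PASCAL and are not reproduced here. As a rough modern analogue only, the sketch below (in Go, with invented names) contrasts two styles commonly covered by such comparisons: a shared-variable buffer protected by a lock versus the same producer/consumer interaction expressed with message passing.

```go
// A minimal sketch, not the paper's Pascal examples: the same interaction
// expressed in a shared-variable style and in a message-passing style.
package main

import (
	"fmt"
	"sync"
)

// Shared-variable style: an explicit lock protects a shared buffer.
type SharedBuffer struct {
	mu    sync.Mutex
	items []int
}

func (b *SharedBuffer) Put(v int) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.items = append(b.items, v)
}

// Message-passing style: the buffer is a channel; no explicit locking.
func producer(out chan<- int, n int) {
	for i := 0; i < n; i++ {
		out <- i
	}
	close(out)
}

func main() {
	// The message-passing version in action.
	ch := make(chan int, 4)
	go producer(ch, 4)
	for v := range ch {
		fmt.Println("received", v)
	}
}
```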

    GLobal Integrated Design Environment (GLIDE): A Concurrent Engineering Application

    The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real-time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well-defined, HyperText Transfer Protocol (HTTP) based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process of a concurrent engineering study while reducing potential errors associated with manually sharing information between study participants.
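
    The abstract describes an HTTP-based API for sharing parameter data between clients but gives no endpoint details. The sketch below is a hypothetical client in Go: the base URL, paths, and JSON fields are invented for illustration and are not GLIDE's actual interface.

```go
// Hypothetical sketch of a client talking to an HTTP-based parameter-sharing
// service; endpoint path and JSON shape are assumptions, not GLIDE's real API.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type Parameter struct {
	Name  string  `json:"name"`
	Value float64 `json:"value"`
}

func main() {
	base := "https://example.invalid/api/parameters" // placeholder server URL

	// Publish an updated parameter value.
	body, _ := json.Marshal(Parameter{Name: "wing_span_m", Value: 12.5})
	resp, err := http.Post(base, "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("post failed:", err)
		return
	}
	resp.Body.Close()

	// Read the latest value back.
	resp, err = http.Get(base + "/wing_span_m")
	if err != nil {
		fmt.Println("get failed:", err)
		return
	}
	defer resp.Body.Close()
	var p Parameter
	if err := json.NewDecoder(resp.Body).Decode(&p); err == nil {
		fmt.Printf("%s = %v\n", p.Name, p.Value)
	}
}
```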

    Design and verification of distributed tasking supervisors for concurrent programming languages

    A tasking supervisor implements the concurrency constructs of a concurrent programming language. This thesis addresses two fundamental issues in constructing distributed implementations of a concurrent language: (1) Principles for designing a tasking supervisor for the language, and (2) Practical techniques for verifying that the supervisor correctly implements the semantics of the language. Previous research in concurrent languages has focused on the design of constructs for expressing concurrency, while ignoring these two important implementation issues. First, the thesis describes the design of a tasking supervisor for the Ada programming language. The Supervisor implements the full Ada tasking language, and it performs distributed program execution on multiple CPUs. The Supervisor is a portable, modular, distributed software system written in Ada. The interface between the Supervisor and application programs forms the topmost layer of the Supervisor and is formally specified in Anna (ANNotated Ada). All machine dependences are encapsulated in the bottom layer of the Supervisor; this layer is an implementation of an abstract virtual loosely coupled multiprocessor. The principles used to design the Supervisor may be used to design a distributed supervisor for any concurrent language. Second, the thesis presents new and practical techniques for automatically verifying the behavior of a distributed supervisor; these techniques are illustrated by the verification of the Distributed Ada Supervisor. An event-based formalization of the Ada tasking semantics is expressed as a collection of machine-processable specifications written in TSL (Task Sequencing Language). Correctness of the Supervisor is established by automatically checking executions of test programs for consistency with the TSL specifications. Since the specifications are derived solely from the Ada semantics, the specifications can be used to test any implementation of Ada tasking. In addition, every Ada tasking program may be used as test input. The theory and practice of concurrent programming are in their infancy. The research described in this thesis represents a major step toward the development of a theory of constructing multiprocessor implementations of concurrent programming languages.
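
    The verification approach, checking recorded executions against machine-processable sequencing specifications, can be illustrated with a toy example. The Go sketch below is not TSL; it only shows the flavour of trace checking, using an invented rule that every rendezvous accept must be preceded by a matching entry call.

```go
// A toy illustration (not TSL) of the verification idea: record tasking events
// from a test run, then check the trace against a sequencing rule.
package main

import "fmt"

type Event struct {
	Kind  string // e.g. "entry_call" or "accept"
	Task  string
	Entry string
}

// consistent reports whether each accept is preceded by a pending entry call.
func consistent(trace []Event) bool {
	pending := map[string]int{}
	for _, e := range trace {
		switch e.Kind {
		case "entry_call":
			pending[e.Entry]++
		case "accept":
			if pending[e.Entry] == 0 {
				return false // accept with no outstanding call: violates the rule
			}
			pending[e.Entry]--
		}
	}
	return true
}

func main() {
	trace := []Event{
		{Kind: "entry_call", Task: "client", Entry: "Start"},
		{Kind: "accept", Task: "server", Entry: "Start"},
	}
	fmt.Println("trace consistent:", consistent(trace))
}
```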

    Distributed Programming with Shared Data

    Until recently, at least one thing was clear about parallel programming: tightly coupled (shared memory) machines were programmed in a language based on shared variables and loosely coupled (distributed) systems were programmed using message passing. The explosive growth of research on distributed systems and their languages, however, has led to several new methodologies that blur this simple distinction. Operating system primitives (e.g., problem-oriented shared memory, Shared Virtual Memory, the Agora shared memory) and languages (e.g., Concurrent Prolog, Linda, Emerald) for programming distributed systems have been proposed that support the shared variable paradigm without the presence of physical shared memory. In this paper we will look at the reasons for this evolution, the resemblances and differences among these new proposals, and the key issues in their design and implementation. It turns out that many implementations are based on replication of data. We take this idea one step further, and discuss how automatic replication (initiated by the run-time system) can be used as a basis for a new model, called the shared data-object model, whose semantics are similar to the shared variable model. Finally, we discuss the design of a new language for distributed programming, Orca, based on the shared data-object model.
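
    The core idea, replicating a logically shared object so that reads are local and writes are propagated to every copy, can be sketched as follows. This is a minimal illustration in Go under invented names, not Orca's run-time system, and it ignores the ordering and fault-tolerance concerns a real implementation must address.

```go
// A minimal sketch of the shared data-object idea: each process holds a local
// replica, reads are local, and writes are applied to all replicas so every
// copy converges to the same value.
package main

import (
	"fmt"
	"sync"
)

type Replica struct {
	mu    sync.RWMutex
	value int
}

func (r *Replica) Read() int {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.value // local read, no communication
}

func (r *Replica) apply(v int) {
	r.mu.Lock()
	r.value = v
	r.mu.Unlock()
}

// Write updates every replica, standing in for the run-time system's broadcast.
func Write(replicas []*Replica, v int) {
	for _, r := range replicas {
		r.apply(v)
	}
}

func main() {
	replicas := []*Replica{{}, {}, {}} // one replica per "process"
	Write(replicas, 42)
	fmt.Println(replicas[0].Read(), replicas[2].Read()) // both see 42
}
```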

    The role of concurrency in an evolutionary view of programming abstractions

    In this paper we examine how concurrency has been embodied in mainstream programming languages. In particular, we rely on an evolutionary framing borrowed from biology to discuss major historical landmarks and crucial concepts that shaped the development of programming languages. We examine the general development process, occasionally delving into particular languages, trying to uncover evolutionary lineages related to specific programming traits. We mainly focus on concurrency, discussing the different abstraction levels involved in present-day concurrent programming and emphasizing the fact that they correspond to different levels of explanation. We then comment on the role of theoretical research in the quest for suitable programming abstractions, recalling the importance of periodically changing the working framework and point of view. This paper is not meant to be a survey of modern mainstream programming languages: it would be very incomplete in that sense. It aims instead at making a number of observations and connecting them under an evolutionary perspective, in order to grasp a unifying, but not simplistic, view of how programming languages have developed.

    The CIAO Multi-Dialect Compiler and System: An Experimentation Workbench for Future (C)LP Systems

    CIAO is an advanced programming environment supporting Logic and Constraint programming. It offers a simple concurrent kernel on top of which declarative and non-declarative extensions are added via libraries. Libraries are available for supporting the ISO Prolog standard, several constraint domains, functional and higher-order programming, concurrent and distributed programming, internet programming, and others. The source language allows declaring properties of predicates via assertions, including types and modes. Such properties are checked at compile-time or at run-time. The compiler and system architecture are designed to natively support modular global analysis, with the two objectives of proving properties in assertions and performing program optimizations, including transparently exploiting parallelism in programs. The purpose of this paper is to report on recent progress made in the context of the CIAO system, with special emphasis on the capabilities of the compiler, the techniques used for supporting such capabilities, and the results in the areas of program analysis and transformation already obtained with the system.
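
    The assertion mechanism, declaring properties of predicates that are then checked at compile time or at run time, has a loose run-time analogue in most languages. The Go sketch below is nothing like CIAO's assertion language; it only illustrates the general idea of attaching a declared property to a routine and checking it on every call.

```go
// A rough analogue of run-time property checking: a precondition and a
// postcondition are attached to a function and verified on every call.
// This is an illustration only, not CIAO's assertion syntax or semantics.
package main

import "fmt"

// checked wraps f with a declared property: the argument must be non-negative
// and the result must be at least as large as the argument.
func checked(f func(int) int) func(int) int {
	return func(x int) int {
		if x < 0 {
			panic("precondition violated: x >= 0")
		}
		y := f(x)
		if y < x {
			panic("postcondition violated: result >= x")
		}
		return y
	}
}

func main() {
	double := checked(func(x int) int { return 2 * x })
	fmt.Println(double(3)) // 6; both checks pass
}
```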

    Functional Programming for Embedded Systems

    Embedded systems application development has traditionally been carried out in low-level machine-oriented programming languages like C or Assembler that can result in unsafe, error-prone and difficult-to-maintain code. Functional programming with features such as higher-order functions, algebraic data types, polymorphism, strong static typing and automatic memory management appears to be an ideal candidate to address the issues with low-level languages plaguing embedded systems. However, embedded systems usually run on heavily memory-constrained devices with memory in the order of hundreds of kilobytes, and applications running on such devices embody the general characteristics of being (i) I/O-bound, (ii) concurrent and (iii) timing-aware. Popular functional language compilers and runtimes either do not fare well with such scarce memory resources or do not provide high-level abstractions that address all three listed characteristics. This work attempts to address this gap by investigating and proposing high-level abstractions specialised for I/O-bound, concurrent and timing-aware embedded-systems programs. We implement the proposed abstractions on eagerly-evaluated, statically-typed functional languages running natively on microcontrollers. Our contributions are divided into two parts. Part 1 presents a functional reactive programming language, Hailstorm, that tracks side effects like I/O in its type system using a feature called resource types. Hailstorm's programming model is illustrated on the GRiSP microcontroller board. Part 2 comprises two papers that describe the design and implementation of Synchron, a runtime API that provides a uniform message-passing framework for the handling of software messages as well as hardware interrupts. Additionally, the Synchron API supports a novel timing operator to capture the notion of time, common in embedded applications. The Synchron API is implemented as a virtual machine, SynchronVM, which runs on the NRF52 and STM32 microcontroller boards. We present programming examples that illustrate the concurrency, I/O and timing capabilities of the VM and provide various benchmarks on the response time, memory and power usage of SynchronVM.
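
    The programming model described for Synchron, a uniform message-passing interface over both software messages and hardware interrupts plus a timing operator, can be approximated as follows. The Go sketch uses channels and a timeout as stand-ins; it is not the Synchron API or SynchronVM.

```go
// A hedged sketch of the described model: software messages and hardware
// interrupts arrive over the same message-passing interface, and a timing
// operation bounds the wait. Channels and time.After are stand-ins only.
package main

import (
	"fmt"
	"time"
)

func main() {
	messages := make(chan string)   // software messages from another process
	interrupts := make(chan string) // stand-in for hardware interrupt events

	go func() { interrupts <- "button_pressed" }()

	// Wait uniformly on either source, with a deadline standing in for the
	// timing operator described in the papers.
	select {
	case m := <-messages:
		fmt.Println("message:", m)
	case irq := <-interrupts:
		fmt.Println("interrupt:", irq)
	case <-time.After(100 * time.Millisecond):
		fmt.Println("timed out")
	}
}
```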