
    Efficient and Reasonable Object-Oriented Concurrency

    Making threaded programs safe and easy to reason about is one of the chief difficulties in modern programming. This work provides an efficient execution model for SCOOP, a concurrency approach that provides not only data race freedom but also pre/postcondition reasoning guarantees between threads. The extensions we propose adjust the underlying semantics to increase the amount of concurrent execution that is possible, exclude certain classes of deadlocks, and enable greater performance. These extensions form the basis of an efficient runtime and an optimization pass that together improve performance 15x over a baseline implementation. This new implementation of SCOOP is also 2x faster than other well-known safe concurrent languages. The measurements are based on both coordination-intensive and data-manipulation-intensive benchmarks designed to offer a mixture of workloads.
    Comment: Proceedings of the 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE '15). ACM, 201
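    A rough sketch of the reasoning guarantee described above: in SCOOP, a routine's precondition on a shared object acts as a wait condition rather than a potential race. The Python fragment below imitates that behaviour with a condition variable; it illustrates the idea only and is not the SCOOP runtime or the optimizations the paper evaluates.

    import threading

    class BoundedBuffer:
        # Toy shared object: the precondition "buffer not full" is treated as a
        # wait condition, loosely mirroring SCOOP-style pre/postcondition
        # reasoning. Illustrative sketch only, not the paper's runtime.
        def __init__(self, capacity=4):
            self.items = []
            self.capacity = capacity
            self.cond = threading.Condition()

        def put(self, item):
            with self.cond:
                while len(self.items) >= self.capacity:   # wait until precondition holds
                    self.cond.wait()
                self.items.append(item)
                self.cond.notify_all()

        def take(self):
            with self.cond:
                while not self.items:                     # precondition: non-empty
                    self.cond.wait()
                item = self.items.pop(0)
                self.cond.notify_all()
                return item

    if __name__ == "__main__":
        buf = BoundedBuffer()
        producer = threading.Thread(target=lambda: [buf.put(i) for i in range(8)])
        consumer = threading.Thread(target=lambda: [print(buf.take()) for _ in range(8)])
        producer.start(); consumer.start()
        producer.join(); consumer.join()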

    From Events to Reactions: A Progress Report

    Syndicate is a new coordinated, concurrent programming language. It occupies a novel point on the spectrum between the shared-everything paradigm of threads and the shared-nothing approach of actors. Syndicate actors exchange messages and share common knowledge via a carefully controlled database that clearly scopes conversations, which simplifies the coordination of concurrent activities. Experience in programming with Syndicate, however, suggests a need to raise the level of linguistic abstraction. In addition to writing event handlers and managing event subscriptions directly, the language will have to support a reactive style of programming. This paper presents event-oriented Syndicate programming and then describes a preliminary design for augmenting it with new reactive programming constructs.
    Comment: In Proceedings PLACES 2016, arXiv:1606.0540
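    To make the shared-knowledge idea concrete, the Python sketch below models actors publishing assertions into a common store and receiving callbacks for the topics they subscribe to. The class and method names are invented for illustration and are not Syndicate's actual constructs.

    from collections import defaultdict

    class Dataspace:
        # Toy shared-knowledge store: actors assert facts and subscribe to
        # topics; subscribers see existing and future matching facts.
        # Names are illustrative only, not the Syndicate API.
        def __init__(self):
            self.assertions = set()
            self.subscribers = defaultdict(list)    # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)
            for fact in list(self.assertions):      # deliver already-known facts
                if fact[0] == topic:
                    callback(fact)

        def assert_fact(self, topic, *payload):
            fact = (topic, *payload)
            if fact not in self.assertions:
                self.assertions.add(fact)
                for callback in self.subscribers[topic]:
                    callback(fact)

    if __name__ == "__main__":
        ds = Dataspace()
        ds.subscribe("temperature", lambda fact: print("observer saw", fact))
        ds.assert_fact("temperature", "room-1", 21.5)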

    Deductive Verification of Parallel Programs Using Why3

    The Message Passing Interface (MPI) specification defines a portable message-passing API used to program parallel computers. MPI programs raise a number of correctness challenges: sent and expected values in communications may not match, resulting in incorrect computations and possibly crashes, and programs may deadlock, wasting resources. Existing tools are not completely satisfactory: model checking does not scale with the number of processes, and testing techniques waste resources and are highly dependent on the quality of the test set. As an alternative, we present a prototype for a type-based approach to programming and verifying MPI-like programs against protocols. Protocols are written in a dependent type language designed to capture the most common primitives in MPI, incorporating, in addition, a form of primitive recursion and collective choice. Protocols are then translated into Why3, a deductive software verification tool. Source code, in turn, is written in WhyML, the language of the Why3 platform, and checked against the protocol. Programs that pass verification are guaranteed to be communication safe and free from deadlocks. We verified several parallel programs from textbooks using our approach, and report on the outcome.
    Comment: In Proceedings ICE 2015, arXiv:1508.0459
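    For readers unfamiliar with the style of program being verified, the mpi4py sketch below shows a minimal exchange of the kind a protocol would describe: the root sends one integer to every other rank and collects one reply from each. The use of mpi4py and this particular exchange are assumptions made for illustration; the paper's programs are written and checked in WhyML.

    # Run with, e.g.: mpiexec -n 4 python exchange.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    if rank == 0:
        # Informal protocol: send an int to each worker, then receive one back.
        for dest in range(1, size):
            comm.send(dest * 10, dest=dest, tag=0)
        replies = [comm.recv(source=src, tag=1) for src in range(1, size)]
        print("root received", replies)
    else:
        value = comm.recv(source=0, tag=0)
        comm.send(value + rank, dest=0, tag=1)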

    Parameterized Concurrent Multi-Party Session Types

    Session types have been proposed as a means of statically verifying implementations of communication protocols. Although prior work has been successful in verifying some classes of protocols, it does not cope well with parameterized, multi-actor scenarios with inherent asynchrony. For example, the sliding window protocol is inexpressible in previously proposed session type systems. This paper describes System-A, a new typing language which overcomes many of the expressiveness limitations of prior work. System-A explicitly supports asynchrony and parallelism, as well as multiple forms of parameterization. We define System-A and show how it can be used for the static verification of a large class of asynchronous communication protocols.
    Comment: In Proceedings FOCLASA 2012, arXiv:1208.432
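    As a concrete reference for the sliding window example mentioned above, the Python sketch below simulates a sender that keeps at most a fixed window of unacknowledged messages in flight. It only illustrates the protocol's shape and asynchrony; it is not System-A code or a session-typed encoding.

    from collections import deque

    def sliding_window_send(messages, window_size=3):
        # Keep at most `window_size` unacknowledged messages in flight.
        in_flight = deque()
        next_seq = 0
        delivered = []
        while next_seq < len(messages) or in_flight:
            # Fill the window while there is room and data left to send.
            while next_seq < len(messages) and len(in_flight) < window_size:
                in_flight.append((next_seq, messages[next_seq]))
                next_seq += 1
            # Receiver acknowledges the oldest outstanding message.
            seq, payload = in_flight.popleft()
            delivered.append((seq, payload))
        return delivered

    if __name__ == "__main__":
        print(sliding_window_send(["a", "b", "c", "d", "e"]))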

    Architecture independent environment for developing engineering software on MIMD computers

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream, Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than those of serial computers. Developing large-scale software for a variety of MIMD computers is difficult and expensive, so there is a need for tools that facilitate programming these machines. First, the issues that must be considered in developing those tools are examined; the two main areas of concern are architecture independence and data management. Architecture independent software facilitates portability and improves the longevity and utility of the software product, providing some insurance for the investment of time and effort that goes into developing it. The management of data is a crucial aspect of solving large engineering problems and must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; and engineering database and memory management.

    Internet of Things Cloud: Architecture and Implementation

    The Internet of Things (IoT), which enables common objects to be intelligent and interactive, is considered the next evolution of the Internet. Its pervasiveness and its ability to collect and analyze data that can be converted into information have motivated a plethora of IoT applications. For the successful deployment and management of these applications, cloud computing techniques are indispensable, since they provide high computational capability as well as large storage capacity. This paper aims at providing insights into the architecture, implementation, and performance of the IoT cloud. Several potential application scenarios of the IoT cloud are studied, and an architecture is discussed with respect to the functionality of each component. Moreover, the implementation details of the IoT cloud are presented along with the services that it offers. The main contributions of this paper lie in the combination of Hypertext Transfer Protocol (HTTP) and Message Queuing Telemetry Transport (MQTT) servers to offer IoT services within the architecture of the IoT cloud, together with various techniques to guarantee high performance. Finally, experimental results are given to demonstrate the service capabilities of the IoT cloud under certain conditions.
    Comment: 19 pages, 4 figures, IEEE Communications Magazin
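    As an example of how a device and an application might talk to such a cloud, the sketch below publishes a sensor reading over MQTT and reads it back over an HTTP API. The broker host, topic layout, and URL are placeholders rather than the deployment described in the paper; the snippet assumes the paho-mqtt and requests packages.

    import json
    import paho.mqtt.client as mqtt     # pip install paho-mqtt
    import requests                     # pip install requests

    BROKER_HOST = "iot-cloud.example.com"            # placeholder broker address
    API_BASE = "https://iot-cloud.example.com/api"   # placeholder HTTP endpoint

    def publish_reading(sensor_id, value):
        # Push one telemetry sample to the cloud's MQTT front end.
        client = mqtt.Client()          # paho-mqtt 1.x constructor; 2.x also takes a CallbackAPIVersion
        client.connect(BROKER_HOST, 1883, keepalive=60)
        client.loop_start()             # background loop so the QoS 1 publish completes
        payload = json.dumps({"sensor": sensor_id, "value": value})
        info = client.publish(f"sensors/{sensor_id}/telemetry", payload, qos=1)
        info.wait_for_publish()
        client.loop_stop()
        client.disconnect()

    def fetch_latest(sensor_id):
        # Read the latest stored sample back through the HTTP API.
        resp = requests.get(f"{API_BASE}/sensors/{sensor_id}/latest", timeout=5)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        publish_reading("room-1", 21.5)
        print(fetch_latest("room-1"))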

    Distributed computing system with dual independent communications paths between computers and employing split tokens

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, giving each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, giving each computer the ability to establish a communications link with another of the computers while bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communication between respective computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, the location of the second portion being part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of those functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
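    The split-token idea can be pictured with a small data sketch: a moving portion circulates between computers, carrying the name of a function to execute and the location of a resident portion held in one computer's memory. The Python classes below are illustrative only and do not reproduce the patented system.

    from dataclasses import dataclass

    @dataclass
    class ResidentPortion:
        # Second half of a split token: stays in one computer's memory.
        data: dict

    @dataclass
    class MovingPortion:
        # First half of a split token: sent from computer to computer. It names
        # the function to run and where the resident half lives.
        function: str
        home_node: str       # which computer holds the resident portion
        resident_key: str    # key into that computer's local store

    class Node:
        def __init__(self, name):
            self.name = name
            self.store = {}  # resident portions held by this computer

        def host(self, key, data):
            self.store[key] = ResidentPortion(data)

        def execute(self, token, nodes):
            # Follow the pointer in the moving portion back to the resident data.
            resident = nodes[token.home_node].store[token.resident_key]
            print(f"{self.name}: run {token.function} on {resident.data}")

    if __name__ == "__main__":
        nodes = {"A": Node("A"), "B": Node("B")}
        nodes["A"].host("job-42", {"samples": [1, 2, 3]})
        token = MovingPortion("average", home_node="A", resident_key="job-42")
        nodes["B"].execute(token, nodes)    # token moved to B; data stayed on A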