
    Type Oriented Parallel Programming

    Context: Parallel computing is an important field within the sciences. With the emergence of multi-core, and soon many-core, CPUs this is moving more and more into the domain of general computing. HPC programmers want performance, but at the moment this comes at a cost; parallel languages are either efficient or conceptually simple, but not both. Aim: To develop and evaluate a novel programming paradigm which will address the problem of parallel programming and allow for languages which are both conceptually simple and efficient. Method: A type-based approach, which allows the programmer to control all aspects of parallelism by the use and combination of types, has been developed. As a vehicle to present and analyze this new paradigm a parallel language, Mesham, and associated compilation tools have also been created. By using types to express parallelism the programmer can exercise efficient, flexible control in a high-level abstract model, yet with a sufficiently rich amount of information in the source code upon which the compiler can perform static analysis and optimization. Results: A number of case studies have been implemented in Mesham. Official benchmarks have been performed which demonstrate that the paradigm allows one to write code which is comparable, in terms of performance, with existing high performance solutions. Sections of the parallel simulation package, Gadget-2, have been ported into Mesham, where substantial code simplifications have been made. Conclusions: The results obtained indicate that the type-based approach does satisfy the aim of the research described in this thesis. By using this new paradigm the programmer has been able to write parallel code which is both simple and efficient.
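The abstract's central idea, that types attached to data rather than the algorithm control how parallelism happens, can be illustrated with a hedged sketch in plain Python. This is not Mesham syntax (the abstract does not show any), and the names `Replicated` and `Partitioned` are hypothetical; threads stand in for distributed workers:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of type-oriented parallelism: the type wrapped around the
# data decides how a reduction is executed. Names are illustrative,
# not Mesham's; this only mirrors the concept described above.

class Replicated:
    """One whole copy of the data; the reduction runs serially."""
    def __init__(self, data):
        self.data = list(data)

    def total(self):
        return sum(self.data)

class Partitioned:
    """Data split into chunks; chunks reduced concurrently
    (threads stand in for distributed workers here)."""
    def __init__(self, data, workers=4):
        self.workers = workers
        step = max(1, len(data) // workers)
        self.chunks = [data[i:i + step] for i in range(0, len(data), step)]

    def total(self):
        with ThreadPoolExecutor(self.workers) as pool:
            return sum(pool.map(sum, self.chunks))

def total(xs):
    # Identical call site for both types: changing the type on the
    # data changes the parallelisation strategy, not the algorithm.
    return xs.total()

data = list(range(1000))
assert total(Replicated(data)) == total(Partitioned(data)) == 499500
```

Because the distribution strategy is visible in the type rather than buried in imperative communication calls, a compiler has static information to analyse and optimise, which is the property the abstract claims for Mesham.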

    LoKit (revisited): A Toolkit for Building Distributed Collaborative Applications

    LoKit is a toolkit based on the coordination language LO. It allows one to build distributed collaborative applications by providing a set of generic tools. This paper briefly introduces the concept of the toolkit, presents a subset of the LoKit tools, and finally demonstrates its power by discussing a sample application built with the toolkit. Comment: 20 pages, 3 figures, 1 table. This paper is a reprint of an unpublished report on the occasion of the (fictitious) 30th anniversary of the Xerox Research Centre Europe, now Naver Labs, Grenoble, France.

    Programming Languages for Distributed Computing Systems

    When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less satisfactory. Researchers all over the world began designing new programming languages specifically for implementing distributed applications. These languages and their history, their underlying principles, their design, and their use are the subject of this paper. We begin by giving our view of what a distributed system is, illustrating with examples to avoid confusion on this important and controversial point. We then describe the three main characteristics that distinguish distributed programming languages from traditional sequential languages, namely, how they deal with parallelism, communication, and partial failures. Finally, we discuss 15 representative distributed languages to give the flavor of each. These examples include languages based on message passing, rendezvous, remote procedure call, objects, and atomic transactions, as well as functional languages, logic languages, and distributed data structure languages. The paper concludes with a comprehensive bibliography listing over 200 papers on nearly 100 distributed programming languages
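Of the styles the survey covers, message passing is the most basic; a minimal sketch of that style, with Python standing in for a dedicated distributed language, might look as follows. Threads and queues stand in for remote processes and network channels, and the `squaring_server` protocol is invented for illustration:

```python
from queue import Queue
from threading import Thread

# Sketch of the message-passing style: two activities that share no
# state and interact only via messages. A real distributed language
# would also have to address the third characteristic the paper
# names -- partial failure -- which this sketch ignores.

def squaring_server(inbox, outbox):
    """Serve requests until a None 'shutdown' message arrives."""
    while True:
        msg = inbox.get()
        if msg is None:
            break
        outbox.put(msg * msg)   # reply with the square

inbox, outbox = Queue(), Queue()
server = Thread(target=squaring_server, args=(inbox, outbox))
server.start()

replies = []
for n in (2, 3, 4):
    inbox.put(n)                  # send a request message
    replies.append(outbox.get())  # block until the reply arrives

inbox.put(None)                   # shut the server down
server.join()
print(replies)                    # [4, 9, 16]
```

The higher-level mechanisms the paper surveys (rendezvous, RPC, atomic transactions) can be seen as structured disciplines layered over this raw send/receive model.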

    Exploring the Key Determinants of Bicycle Share Program Use in a Leisure Context

    Over the past two decades, bicycle share programs (BSPs) have developed rapidly around the world, with studies finding that people use such services not only for commuting but also for leisure. However, compared to utilitarian BSP users, limited research has focused on the factors influencing BSP use for leisure experiences. To address this limitation in the current cycling literature, this dissertation explores the key determinants of leisure BSP use. The extended unified theory of acceptance and use of technology proposed by Venkatesh, Thong, and Xu (2012) and the dual-attitudes model conceptualized by Wilson, Lindsey, and Schooler (2000) provided the theoretical framework guiding this research. First, this dissertation developed the Unified Measurement of Bicycle Share Program Use (UMBSPU), an encompassing scale for further investigation of factors influencing an individual's leisure BSP use. The results of the measurement invariance testing and method effect examination indicated that this scale, which includes eight constructs and thirty-three measurement items, is a reliable, valid measurement. Second, this dissertation applied the UMBSPU to examine the influences of performance expectancy, effort expectancy, facilitating conditions, social influence, price value, hedonic motivation, and habit on Taipei citizens' intentions to use BSP and their actual use in leisure time. Among all factors examined, habit demonstrated the strongest predictive validity of use intention. Furthermore, behavioral intention outperformed habit and facilitating conditions in explaining the variance of actual use. Finally, this dissertation used two Single Target Implicit Association Tests (ST-IATs) to explore BSP users' implicit attitudes toward leisure cycling and leisure cyclists. Explicit attitudes toward leisure cycling and social identity with leisure cyclists were also measured and compared with implicit attitudes, with the results indicating that implicit attitudes did not significantly predict leisure BSP use. However, social identity strongly predicted an individual's public bicycle riding frequency. Future research is needed to cross-validate the UMBSPU in different contexts and to compare the results from the leisure cycling and cyclists ST-IAT across different types of cyclist groups.

    Profiling large-scale lazy functional programs

    The LOLITA natural language processing system is an example of one of the ever increasing number of large-scale systems written entirely in a functional programming language. The system consists of over 50,000 lines of Haskell code and is able to perform a number of tasks such as semantic and pragmatic analysis of text, context scanning and query analysis. Such a system is more useful if the results are calculated in real time; the efficiency of such a system is therefore paramount. For the past three years we have used profiling tools supplied with the Haskell compilers GHC and HBC to analyse and reason about our programming solutions and have achieved good results; however, our experience has shown that the profiling life-cycle is often too long to make a detailed analysis of a large system possible, and the profiling results are often misleading. A profiling system is developed which allows three types of functionality not previously found in a profiler for lazy functional programs. Firstly, the profiler is able to produce results based on an accurate method of cost inheritance. We have found that this reduces the possibility of the programmer obtaining misleading profiling results. Secondly, the programmer is able to explore the results after the execution of the program. This is done by selecting and deselecting parts of the program using a post-processor. This greatly reduces the analysis time as no further compilation, execution or profiling of the program is needed. Finally, the new profiling system allows the user to examine aspects of the run-time call structure of the program. This is useful in the analysis of the run-time behaviour of the program. Previous attempts at extending the results produced by a profiler in such a way have failed due to the exceptionally high overheads. Exploration of the overheads produced by the new profiling scheme shows that typical overheads in profiling the LOLITA system are: a 10% increase in compilation time; a 7% increase in executable size; and a 70% run-time overhead. These overheads are acceptable given the considerable time saved in the detailed profiling analysis of a large, lazy functional program.
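The cost inheritance the abstract describes, attributing a callee's cost up to the caller responsible for it, has a loose analogue in profilers for eager languages. As an illustration only (not the profiler built in this thesis), Python's `cProfile` separates `tottime`, the cost of a function body alone, from `cumtime`, the cost inherited from its callees; profiling a lazy language is harder precisely because evaluation happens far from the call site:

```python
import cProfile
import io
import pstats

# Illustration of cost inheritance in an eager-language profiler:
# caller() does almost no work itself (small tottime) but inherits
# the cost of expensive() in its cumtime column.

def expensive():
    return sum(i * i for i in range(200_000))

def caller():
    return expensive()

profiler = cProfile.Profile()
profiler.enable()
result = caller()
profiler.disable()

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(5)
print(report.getvalue())
```

Sorting by cumulative time surfaces the callers that are ultimately responsible for expensive computation, which is the kind of question an accurate cost-inheritance scheme lets the programmer answer without being misled by where the work merely happens to execute.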