
    Parallel Java: A Unified API for Shared Memory and Cluster Parallel Programming in 100% Java

    Parallel Java is a parallel programming API whose goals are (1) to support both shared memory (thread-based) parallel programming and cluster (message-based) parallel programming in a single unified API, allowing one to write parallel programs combining both paradigms; (2) to provide the same capabilities as OpenMP and MPI in an object-oriented, 100% Java API; and (3) to be easily deployed and run in a heterogeneous computing environment of single-core CPUs, multi-core CPUs, and clusters thereof. This paper describes Parallel Java’s features and architecture; compares and contrasts Parallel Java to other Java-based parallel middleware libraries; and reports performance measurements of Parallel Java programs.
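
    The abstract does not include code, so the sketch below only illustrates the thread-based (shared memory) half of the programming model it describes, using nothing but the standard JDK. These are plain java.util.concurrent classes, not the Parallel Java library's own API.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        // Illustrative sketch only: an OpenMP-style parallel loop expressed with
        // standard JDK threads, partitioning an array sum across worker threads.
        public class ParallelSumSketch {
            public static void main(String[] args) throws Exception {
                double[] data = new double[1_000_000];
                java.util.Arrays.fill(data, 1.0);

                int threads = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(threads);
                int chunk = (data.length + threads - 1) / threads;

                List<Future<Double>> partials = new ArrayList<>();
                for (int t = 0; t < threads; t++) {
                    final int lo = t * chunk;
                    final int hi = Math.min(lo + chunk, data.length);
                    partials.add(pool.submit(() -> {
                        double local = 0.0;
                        for (int i = lo; i < hi; i++) local += data[i]; // each thread sums its own slice
                        return local;
                    }));
                }

                double total = 0.0;
                for (Future<Double> f : partials) total += f.get();     // combine partial results
                pool.shutdown();
                System.out.println("sum = " + total);
            }
        }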

    Parallel Computing in Java

    The Java programming language and environment is inspiring new research activities in many areas of computing, of which parallel computing is one of the major interests. Parallel techniques are themselves finding new uses in cluster computing systems. Although there are excellent software tools for scheduling, monitoring and message-based programming on parallel clusters, these systems are not yet well integrated and do not provide very high-level parallel programming support. This research presents a number of issues considered key to the suitability of Java for HPC (High Performance Computing) applications and then explores the support for concurrency in the current Java 1.8 specification. We further present several relatively recent parallel Java models that support HPC for both shared and distributed memory programming paradigms. Finally, we attempt to evaluate the performance of the discussed Java HPC models by comparing them with equivalent traditional native C implementations, where appropriate. The analysis of the results suggests that Java can achieve performance close to that of natively compiled languages, for both sequential and parallel applications, thus making it a viable alternative for HPC programming.
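
    As a minimal, hedged illustration of the Java 1.8 shared-memory support discussed above (not code from the paper), the snippet below runs the same reduction sequentially and with a parallel stream; parallel() delegates the work to the common Fork/Join pool, splitting the reduction across all available cores.

        import java.util.stream.IntStream;

        // Minimal sketch: Java 8 parallel streams as one route to shared-memory parallelism.
        public class StreamReductionSketch {
            public static void main(String[] args) {
                long sequential = IntStream.rangeClosed(1, 1_000_000)
                                           .mapToLong(i -> (long) i * i)
                                           .sum();

                long parallel = IntStream.rangeClosed(1, 1_000_000)
                                         .parallel()              // split across the Fork/Join common pool
                                         .mapToLong(i -> (long) i * i)
                                         .sum();

                System.out.println(sequential == parallel);       // same result, computed in parallel
            }
        }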

    MAGDA: A Mobile Agent based Grid Architecture

    Mobile agents are both a technology and a programming paradigm. They allow for a flexible approach which can alleviate a number of issues present in distributed and Grid-based systems, by means of features such as migration, cloning, messaging and other provided mechanisms. In this paper we describe an architecture (MAGDA – Mobile Agent based Grid Architecture) that we have designed and are currently developing to support the programming and execution of mobile agent based applications on Grid systems.
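
    The abstract names the agent mechanisms MAGDA builds on (migration, cloning, messaging) but not its interfaces, so the sketch below is purely hypothetical: every name is invented for illustration and none of it is MAGDA's API. It merely shows the shape such an agent contract could take.

        // Hypothetical sketch only; all names are invented and do not come from MAGDA.
        // It enumerates the agent operations the abstract mentions:
        // execution, migration, cloning and messaging.
        public interface MobileAgent {
            void run();                                // the agent's task at its current host
            void migrateTo(String hostAddress);        // move code and state to another Grid node
            MobileAgent cloneAgent();                  // spawn a copy, e.g. to fan work out across nodes
            void send(String agentId, Object message); // asynchronous agent-to-agent message
            void onMessage(Object message);            // callback invoked when a message arrives
        }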

    Safe and Verifiable Design of Concurrent Java Programs

    The design of concurrent programs has a reputation for being difficult, and thus potentially dangerous in safety-critical real-time and embedded systems. Java, whilst cleaning up many insecure aspects of OO programming endemic in C++, suffers from a deceptively simple threads model that is an insecure variant of ideas that are over 25 years old [1]. Consequently, we cannot directly exploit a range of new CASE tools -- based upon modern developments in parallel computing theory -- that can verify and check the design of concurrent systems for a variety of dangers such as deadlock and livelock that otherwise plague us during testing and maintenance and, more seriously, cause catastrophic failure in service. Our approach uses recently developed Java class libraries based on Hoare's Communicating Sequential Processes (CSP); the use of CSP greatly simplifies the design of concurrent systems and, in many cases, a parallel approach often significantly simplifies systems originally approached sequentially. New CSP CASE tools permit designs to be verified against formal specifications and checked for deadlock and livelock. Below we introduce CSP and its implementation in Java and develop a small concurrent application. The formal CSP description of the application is provided, as well as that of an equivalent sequential version. FDR is used to verify the correctness of both implementations, their equivalence, and their freedom from deadlock and livelock.
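
    The paper's channel-based class libraries are not reproduced in the abstract, so the sketch below only approximates the core CSP idea it describes -- a synchronising channel between two processes -- using the standard SynchronousQueue, where a writer blocks until a reader takes the value. It is not the CSP-for-Java library itself.

        import java.util.concurrent.SynchronousQueue;

        // Illustrative sketch of a CSP-style rendezvous channel between two
        // processes, approximated with the standard SynchronousQueue.
        public class CspStyleChannelSketch {
            public static void main(String[] args) throws InterruptedException {
                SynchronousQueue<Integer> channel = new SynchronousQueue<>();

                Thread producer = new Thread(() -> {
                    try {
                        for (int i = 0; i < 5; i++) channel.put(i); // blocks until the consumer reads
                        channel.put(-1);                            // sentinel signals completion
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });

                Thread consumer = new Thread(() -> {
                    try {
                        int v;
                        while ((v = channel.take()) != -1) System.out.println("received " + v);
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });

                producer.start();
                consumer.start();
                producer.join();
                consumer.join();
            }
        }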

    Probabilistic Graphical Models on Multi-Core CPUs using Java 8

    In this paper, we discuss software design issues related to the development of parallel computational intelligence algorithms on multi-core CPUs, using the new Java 8 functional programming features. In particular, we focus on probabilistic graphical models (PGMs) and present the parallelisation of a collection of algorithms that deal with inference and learning of PGMs from data, namely maximum likelihood estimation, importance sampling, and greedy search for solving combinatorial optimisation problems. Through these concrete examples, we tackle the problem of defining efficient data structures for PGMs and parallel processing of same-size batches of data sets using Java 8 features. We also provide straightforward techniques to code parallel algorithms that seamlessly exploit multi-core processors. The experimental analysis, carried out using our open source AMIDST (Analysis of MassIve Data STreams) Java toolbox, shows the merits of the proposed solutions. Comment: Pre-print version of the paper presented in the special issue on Computational Intelligence Software at IEEE Computational Intelligence Magazine journal.
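
    As a hedged sketch of the batch-parallel pattern described above (not code from the AMIDST toolbox), the example below computes the maximum likelihood estimate of a Gaussian mean by letting a Java 8 parallel stream produce per-batch sufficient statistics that are then combined.

        import java.util.Random;
        import java.util.stream.IntStream;

        // Sketch only: parallel maximum likelihood estimation of a Gaussian mean,
        // with same-size batches of data processed concurrently and then combined.
        public class ParallelMleSketch {
            public static void main(String[] args) {
                double[] data = new Random(42).doubles(1_000_000, -1.0, 3.0).toArray();
                int batchSize = 10_000;
                int batches = data.length / batchSize;

                // Each batch contributes a partial sum; the MLE of the mean is the
                // combined sum divided by the total number of samples.
                double[] partialSums = IntStream.range(0, batches)
                        .parallel()
                        .mapToDouble(b -> {
                            double s = 0.0;
                            for (int i = b * batchSize; i < (b + 1) * batchSize; i++) s += data[i];
                            return s;
                        })
                        .toArray();

                double total = 0.0;
                for (double s : partialSums) total += s;
                System.out.println("MLE of the mean = " + total / (batches * batchSize));
            }
        }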

    The role of concurrency in an evolutionary view of programming abstractions

    In this paper we examine how concurrency has been embodied in mainstream programming languages. In particular, we rely on the evolutionary narrative borrowed from biology to discuss major historical landmarks and crucial concepts that shaped the development of programming languages. We examine the general development process, occasionally going deeper into particular languages, trying to uncover evolutionary lineages related to specific programming traits. We mainly focus on concurrency, discussing the different abstraction levels involved in present-day concurrent programming and emphasizing the fact that they correspond to different levels of explanation. We then comment on the role of theoretical research in the quest for suitable programming abstractions, recalling the importance of changing the working framework and the way of looking at things every so often. This paper is not meant to be a survey of modern mainstream programming languages: it would be very incomplete in that sense. It aims instead at offering a number of remarks and connecting them under an evolutionary perspective, in order to grasp a unifying, but not simplistic, view of the programming language development process.