    A Low-Level Communication Library for Java HPC

    Java for parallel computing and as a general language for scientific and engineering simulation and modeling

    We discuss the role of Java and Web technologies in general simulation. We classify the forms of concurrency typical of such problems, and separately analyze the role of Java in user interfaces, coarse-grain software integration, and detailed computational kernels. We conclude that Java could become a major language for computational science, as it potentially offers good performance, excellent user interfaces, and the advantages of object-oriented structure.

    Parallel and Distributed Computing Support in Java: API and Communication Between Cooperating Nodes

    This thesis is part of the broader theme "Parallel and Distributed Computing Support for Java", which is also the subject of Daniel Barciela's thesis in the Informatics Engineering master's programme at the Instituto Superior de Engenharia do Porto. Its main goals are the definition and creation of the framework's programmer-facing interface, and the design of how nodes communicate and cooperate with each other to perform tasks towards a single global goal. Within the scope of this thesis, a preliminary study was conducted of the theoretical models of parallel computation, as well as of the languages and frameworks that support this kind of computing. The study focused on how these models and languages allow programmers to express parallelism when developing applications. As a result of this thesis, a new framework was implemented, named Distributed Parallel Framework for Java (DPF4j), whose main goal is to support programmers in the development of parallel and distributed applications. The framework was developed in the Java programming language. This thesis covers the Application Programming Interface (API) and all communication between the cooperating nodes of the DPF4j framework. Finally, the tests performed showed that DPF4j, although still a prototype, already outperforms other frameworks and languages with the same objectives.
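
    The abstract does not reproduce the DPF4j programming interface, so the following Java sketch only suggests the general shape such an API might take: a serializable work unit submitted to a pool that hides where execution happens. The DistributedTask type, the local thread pool standing in for cooperating nodes, and every other name here are assumptions for illustration, not the actual DPF4j API.

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical work unit: Serializable so that, in a real framework,
// it could travel over the network to a cooperating node.
interface DistributedTask<T> extends Callable<T>, Serializable {}

public class Dpf4jStyleSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the framework's pool of cooperating nodes:
        // a local thread pool plays the role of remote machines.
        ExecutorService nodes = Executors.newFixedThreadPool(4);

        // Split one job (summing 0..3,999,999) into four tasks.
        List<Future<Long>> partials = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final long lo = i * 1_000_000L;
            final long hi = lo + 1_000_000L;
            DistributedTask<Long> task = () -> {
                long sum = 0;
                for (long n = lo; n < hi; n++) sum += n;
                return sum;
            };
            partials.add(nodes.submit(task)); // submit() hides where it runs
        }

        long total = 0;
        for (Future<Long> f : partials) total += f.get(); // gather results
        System.out.println("sum = " + total);
        nodes.shutdown();
    }
}
```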

    Data parallelism in Java

    Parallel and Distributed Computing Support in Java: Work Distribution and Execution

    In recent years, multiprocessor and multi-core computers have become commonplace. To exploit this new hardware efficiently, tools have appeared that ease the development of parallel software, in the form of languages and frameworks adapted to different languages. With the wide adoption of high-speed networks such as Gigabit Ethernet and the latest generation of Wi-Fi, there is now the opportunity, in addition to parallelizing processing across processors and cores, to simultaneously parallelize it across different machines, that is, to distribute the workload. The model that parallelizes processing locally while simultaneously distributing it to machines that can also parallelize it is here called the "distributed parallel model". This thesis analyzes the techniques and tools used for parallel programming, as well as the existing work in the area of parallel and distributed programming, and proposes a framework that applies the simplicity of parallel programming to the distributed parallel model. The proposal consists of a Java framework with a simple programming interface, easy to learn and to read, which transparently parallelizes and distributes the processing. Although simple, an effort was made to keep it configurable so that it adapts to as many situations as possible. This thesis focuses in particular on the execution and distribution of work, and on how code is automatically sent over the network to the other cooperating nodes, avoiding the manual installation of the applications on every node. To validate this concept and the ideas defended in this thesis, the framework, named DPF4j (Distributed Parallel Framework for Java), was implemented, and tests were run and metrics gathered to check for performance gains over existing solutions.
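
    Since the abstract emphasizes that DPF4j ships code over the network automatically instead of requiring manual installation on each node, a minimal single-JVM sketch of the serialization half of that idea may help: a job is marshalled to bytes as if written to a socket, then rebuilt and executed on the "receiving" side. The RemoteJob type and both helpers are hypothetical; a real framework would also have to transfer the class bytes themselves (for example via a network classloader), which this sketch deliberately omits.

```java
import java.io.*;
import java.util.concurrent.Callable;

// Hypothetical work unit: serializable so its state can cross the network.
interface RemoteJob<T> extends Callable<T>, Serializable {}

public class CodeShippingSketch {
    // Stand-in for "write to the socket": serialize the job to bytes.
    static byte[] marshal(Object job) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(job);
        }
        return bos.toByteArray();
    }

    // Stand-in for the receiving node: rebuild the job and run it.
    @SuppressWarnings("unchecked")
    static <T> T unmarshalAndRun(byte[] wire) throws Exception {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(wire))) {
            return ((RemoteJob<T>) in.readObject()).call();
        }
    }

    public static void main(String[] args) throws Exception {
        RemoteJob<Integer> job = () -> 6 * 7;      // the work to distribute
        byte[] wire = marshal(job);                // "send" it over the wire
        System.out.println(unmarshalAndRun(wire)); // "remote" execution: 42
    }
}
```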

    Java on Networks of Workstations (JavaNOW): A Parallel Computing Framework Inspired by Linda and the Message Passing Interface (MPI)

    Networks of workstations are a dominant force in the distributed computing arena, due primarily to the excellent price/performance ratio of such systems when compared to traditional massively parallel architectures. It is therefore critical to develop programming languages and environments that can help harness the raw computational power available on these systems. In this article, we present JavaNOW (Java on Networks of Workstations), a Java-based framework for parallel programming on networks of workstations. It creates a virtual parallel machine similar to the MPI (Message Passing Interface) model, and provides distributed associative shared memory similar to the Linda memory model but with a richer set of primitive operations. JavaNOW provides a simple yet powerful framework for performing computation on networks of workstations. In addition to the Linda memory model, it provides shared objects, implicit multithreading, implicit synchronization, object dataflow, and collective communications similar to those defined in MPI. JavaNOW is also a component of the Computational Neighborhood, a Java-enabled suite of services for desktop computational sharing. The intent of JavaNOW is to present an environment for parallel computing that is both expressive and reliable and ultimately can deliver good to excellent performance. As JavaNOW is a work in progress, this article emphasizes the expressive potential of the JavaNOW environment and presents preliminary performance results only.
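
    JavaNOW's associative shared memory follows the Linda model, whose classic primitives are out (deposit a tuple), in (blocking removal of a matching tuple), and rd (blocking read that leaves the tuple in place). The sketch below is a minimal, local, single-JVM illustration of those three operations, keyed by a plain String tag for brevity; JavaNOW's actual space is distributed across workstations and offers a richer set of operations, so none of these signatures should be read as its real API.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

// Minimal Linda-style tuple space, keyed by a String tag for brevity.
class TupleSpace {
    private final Map<String, Queue<Object>> space = new HashMap<>();

    // out(): deposit a value under a tag.
    public synchronized void out(String tag, Object value) {
        space.computeIfAbsent(tag, k -> new LinkedList<>()).add(value);
        notifyAll(); // wake any reader blocked in in()/rd()
    }

    // in(): block until a matching tuple exists, then remove and return it.
    public synchronized Object in(String tag) throws InterruptedException {
        while (!space.containsKey(tag) || space.get(tag).isEmpty()) wait();
        return space.get(tag).remove();
    }

    // rd(): like in(), but leaves the tuple in the space.
    public synchronized Object rd(String tag) throws InterruptedException {
        while (!space.containsKey(tag) || space.get(tag).isEmpty()) wait();
        return space.get(tag).peek();
    }
}

public class TupleSpaceDemo {
    public static void main(String[] args) throws Exception {
        TupleSpace ts = new TupleSpace();
        // A worker thread produces a result; the main thread consumes it.
        new Thread(() -> ts.out("result", 42)).start();
        System.out.println(ts.in("result")); // blocks until 42 arrives
    }
}
```

    Using wait()/notifyAll() on the space object keeps the example dependency-free; a distributed implementation such as JavaNOW's would replace this with network messaging between workstations.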

    Collective Asynchronous Remote Invocation (CARI): A High-Level and Efficient Communication API for Irregular Applications

    The Message Passing Interface (MPI) standard continues to dominate the landscape of parallel computing as the de facto API for writing large-scale scientific applications. But critics argue that it is a low-level API, harder to use than shared-memory approaches. This paper addresses the issue of programming productivity by proposing a high-level, easy-to-use, and efficient programming API that hides and segregates complex low-level message passing code from the application-specific code. Our proposed API is inspired by communication patterns found in Gadget-2, an MPI-based parallel production code for cosmological N-body and hydrodynamic simulations. In this paper we analyze Gadget-2 with a view to understanding what high-level Single Program Multiple Data (SPMD) communication abstractions might be developed to replace the intricate use of MPI in such an irregular application, and do so without compromising efficiency. Our analysis revealed that the use of low-level MPI primitives, bundled with the computation code, makes Gadget-2 difficult to understand and probably hard to maintain. In addition, we found that the original Gadget-2 code contains a small number of complex and recurring message passing patterns. We also noted that these complex patterns can be reorganized into a higher-level communication library with some modifications to the Gadget-2 code. We present the implementation and evaluation of one such message passing pattern (or schedule) that we term Collective Asynchronous Remote Invocation (CARI). As the name suggests, CARI is a collective variant of Remote Method Invocation (RMI), which is an attractive, high-level, and established paradigm in distributed systems programming. The CARI API might be implemented in several ways; we develop and evaluate two versions of this API on a compute cluster. The performance evaluation reveals that the CARI versions of the Gadget-2 code perform as well as the original Gadget-2 code while raising the level of abstraction considerably.
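
    The abstract does not give CARI's concrete signatures, so the following Java sketch is only a guess at the pattern's shape: one logical call fans out asynchronously to a service object on every node, and the caller collects one future per node to harvest later, much as RMI generalized into a collective. The DensityService class (loosely echoing Gadget-2's neighbor-density exchange), the collectiveInvoke helper, and the thread pool standing in for MPI ranks are all assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class CariSketch {
    // Hypothetical "remote" service living on each node (here, each rank
    // is just a local object; in the real system it would be an MPI rank).
    static class DensityService {
        private final int rank;
        DensityService(int rank) { this.rank = rank; }
        double localDensity(double x) { return x * (rank + 1); } // toy work
    }

    // Collective asynchronous invocation: apply one method to every node
    // at once and return one future per node, without blocking the caller.
    static <T, R> List<Future<R>> collectiveInvoke(
            List<T> nodes, Function<T, R> method, ExecutorService pool) {
        List<Future<R>> futures = new ArrayList<>();
        for (T node : nodes)
            futures.add(pool.submit(() -> method.apply(node)));
        return futures;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<DensityService> nodes = new ArrayList<>();
        for (int r = 0; r < 4; r++) nodes.add(new DensityService(r));

        // One logical call, fanned out to all nodes; results gathered later.
        List<Future<Double>> replies =
            collectiveInvoke(nodes, n -> n.localDensity(2.0), pool);
        for (Future<Double> f : replies) System.out.println(f.get());
        pool.shutdown();
    }
}
```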