
    High Performance Java Remote Method Invocation for Parallel Computing on Clusters

    This is a post-peer-review, pre-copyedit version. The final authenticated version is available online at: http://dx.doi.org/10.1109/ISCC.2007.4381536

    [Abstract] This paper presents a more efficient Java Remote Method Invocation (RMI) implementation for high-speed clusters. The use of Java for parallel programming on clusters is limited by the lack of efficient communication middleware and high-speed cluster interconnect support. This implementation overcomes these limitations through a more efficient Java RMI protocol based on several basic assumptions about clusters. Moreover, the use of a high performance sockets library provides direct high-speed interconnect support. The performance evaluation of this middleware on a Gigabit Ethernet (GbE) and a Scalable Coherent Interface (SCI) cluster shows experimental evidence of throughput increase. Moreover, qualitative aspects of the solution, such as transparency to the user, interoperability with other systems, and no need for source code modification, allow it to augment the performance of existing parallel Java codes and boost the development of new high performance Java RMI applications.

    Ministerio de Educación y Ciencia; TIN2004-07797-C02
    Xunta de Galicia; PGIDIT06PXIB105228PR
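    As context for what the paper optimizes, the following is a minimal sketch of a standard Java RMI service using only the JDK API; the Compute interface, its method, and the service name are hypothetical examples, not the paper's actual code. The paper's contribution is to replace the default protocol and sockets beneath this same programming model.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface: every remote method declares RemoteException.
interface Compute extends Remote {
    double dotProduct(double[] a, double[] b) throws RemoteException;
}

// Server-side implementation exported through standard (unoptimized) RMI.
class ComputeServer implements Compute {
    public double dotProduct(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        // Export the object on an anonymous port and register it by name.
        Compute stub = (Compute) UnicastRemoteObject.exportObject(new ComputeServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099); // default RMI registry port
        registry.rebind("Compute", stub);
        System.out.println("Compute service bound");
    }
}
```

    A client obtains the stub through the registry and invokes dotProduct as if it were a local call; transparent protocol and transport replacements of the kind the paper describes leave this code unchanged.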

    Java Grande Forum Report: Making Java Work for High-End Computing

    This document describes the Java Grande Forum and includes its initial deliverables. These are reports that convey a succinct set of recommendations from this forum to Sun Microsystems and other purveyors of Java™ technology that will enable Grande Applications to be developed with the Java programming language.

    A Comparative Evaluation of .net Remoting and JAVA RMI

    Distributed application technologies such as Microsoft .NET Remoting and Java Remote Method Invocation (RMI) have evolved over many years to keep up with the constantly increasing requirements of the enterprise. In the broadest sense, a distributed application is one in which the application processing is divided among two or more machines. Distributed middleware technologies have made significant progress over the last decade. Although Remoting and RMI are two of the most popular contemporary middleware technologies, little literature exists that compares them. In this paper, we study the issues involved in designing a distributed system using Java RMI and Microsoft .NET Remoting. In order to perform the comparisons, we designed a distributed distance learning application in both technologies, and we show both similarities and differences between these two competing technologies. Remoting and RMI both have a similar serialization process and allow object serialization to be customized according to application needs. They both provide support for connecting to interface definition language (IDL) based systems such as the Common Object Request Broker Architecture (CORBA). They both contain distributed garbage collection support. Our research shows that programs coded using Remoting execute faster than programs coded using RMI. They both have strong support for security, although implemented in different ways; in addition, RMI has further security mechanisms provided via security policy files. RMI requires a naming service to locate the server address and connection port. This is a big advantage since clients do not need to know the server location or port number; the RMI registry locates the service automatically. On the other hand, Remoting does not use a naming service; it requires that the connection port be pre-specified and that all services be well known. RMI applications can be run on any operating system, whereas Remoting targets Windows as its primary platform. We found it was easier to design the distance learning application in Remoting than in RMI. Remoting also provides greater configuration flexibility through its support for external configuration files. In conclusion, we recommend that before deciding which technology to choose, careful consideration be given to the type of application, the target platform, and the resources available to program the application.
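    As a sketch of the naming-service difference described above, an RMI client resolves a service by name through the registry and never handles the server's listening port directly; the host name and service name below are hypothetical, and the Compute interface is the same illustrative one used in the earlier sketch.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Remote interface, as in the earlier server sketch (hypothetical example).
interface Compute extends Remote {
    double dotProduct(double[] a, double[] b) throws RemoteException;
}

public class RmiClient {
    public static void main(String[] args) throws Exception {
        // Contact the registry on a known host (default registry port 1099).
        Registry registry = LocateRegistry.getRegistry("server.example.org");

        // Resolve the service by name; the returned stub hides the actual
        // port on which the remote object listens.
        Compute compute = (Compute) registry.lookup("Compute");

        double result = compute.dotProduct(new double[] {1, 2, 3},
                                           new double[] {4, 5, 6});
        System.out.println("dot product = " + result);
    }
}
```

    A Remoting client, by contrast, is configured with a well-known URL that embeds the host and port of the service, which is the trade-off the abstract points out.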

    F-MPJ: scalable Java message-passing communications on parallel systems

    This is a post-peer-review, pre-copyedit version of an article published in The Journal of Supercomputing. The final authenticated version is available online at: https://doi.org/10.1007/s11227-009-0270-0

    [Abstract] This paper presents F-MPJ (Fast MPJ), a scalable and efficient Message-Passing in Java (MPJ) communication middleware for parallel computing. The increasing interest in Java as the programming language of the multi-core era demands scalable performance on hybrid architectures (with both shared and distributed memory spaces). However, current Java communication middleware lacks efficient communication support. F-MPJ improves this situation by: (1) providing efficient non-blocking communication, which allows communication overlapping and thus scalable performance; (2) taking advantage of shared memory systems and high-performance networks through the use of our high-performance Java sockets implementation (named JFS, Java Fast Sockets); (3) avoiding the use of communication buffers; and (4) optimizing MPJ collective primitives. Thus, F-MPJ significantly improves the scalability of current MPJ implementations. A performance evaluation on an InfiniBand multi-core cluster has shown that F-MPJ communication primitives outperform representative MPJ libraries by up to 60 times. Furthermore, the use of F-MPJ in communication-intensive MPJ codes has increased their performance by up to seven times.

    Ministerio de Educación y Ciencia; TIN2004-07797-C02
    Ministerio de Educación y Ciencia; TIN2007-67537-C03-2
    Xunta de Galicia; PGIDIT06PXIB105228P
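    For readers unfamiliar with MPJ, the sketch below shows a non-blocking exchange in the mpiJava 1.2 style API (package mpi) that MPJ libraries commonly expose; it illustrates the communication overlapping mentioned in point (1) and is a generic example, not F-MPJ's internal code.

```java
import mpi.MPI;
import mpi.Request;

public class OverlapExample {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();
        int peer = (rank + 1) % size;   // simple ring neighbour

        double[] outbuf = new double[1024];
        double[] inbuf  = new double[1024];
        double[] work   = new double[1024];

        // Post non-blocking receive and send; both calls return immediately.
        Request recv = MPI.COMM_WORLD.Irecv(inbuf, 0, inbuf.length, MPI.DOUBLE, peer, 0);
        Request send = MPI.COMM_WORLD.Isend(outbuf, 0, outbuf.length, MPI.DOUBLE, peer, 0);

        // Independent local computation overlaps with the transfers above.
        for (int i = 0; i < work.length; i++) {
            work[i] = Math.sqrt(i) * rank;
        }

        // Complete both operations before reading inbuf or reusing outbuf.
        recv.Wait();
        send.Wait();

        MPI.Finalize();
    }
}
```

    With blocking Send/Recv the same exchange would serialize communication and computation, which is exactly the scalability limitation that efficient non-blocking primitives address.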

    RAFDA: A Policy-Aware Middleware Supporting the Flexible Separation of Application Logic from Distribution

    Middleware technologies often limit the way in which object classes may be used in distributed applications due to the fixed distribution policies that they impose. These policies permeate applications developed using existing middleware systems and force an unnatural encoding of application level semantics. For example, the application programmer has no direct control over inter-address-space parameter passing semantics. Semantics are fixed by the distribution topology of the application, which is dictated early in the design cycle. This creates applications that are brittle with respect to changes in distribution. This paper explores technology that provides control over the extent to which inter-address-space communication is exposed to programmers, in order to aid the creation, maintenance and evolution of distributed applications. The described system permits arbitrary objects in an application to be dynamically exposed for remote access, allowing applications to be written without concern for distribution. Programmers can conceal or expose the distributed nature of applications as required, permitting object placement and distribution boundaries to be decided late in the design cycle and even dynamically. Inter-address-space parameter passing semantics may also be decided independently of object implementation and at varying times in the design cycle, again possibly as late as run-time. Furthermore, transmission policy may be defined on a per-class, per-method or per-parameter basis, maximizing plasticity. This flexibility is of utility in the development of new distributed applications, and the creation of management and monitoring infrastructures for existing applications.

    Comment: Submitted to EuroSys 200
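    To make the idea of exposing an ordinary object and deciding transmission policy late more concrete, here is a purely illustrative Java sketch; the Rafda and TransmissionPolicy names are invented for this example and do not reflect the system's real API, so those calls appear only as comments on an otherwise plain class.

```java
// Hypothetical illustration only: the commented-out calls use invented names
// and are not RAFDA's actual interfaces.
public class PolicyExample {

    // An ordinary application class, written with no distribution concerns.
    static class ShoppingCart {
        private final java.util.List<String> items = new java.util.ArrayList<>();
        public void add(String item) { items.add(item); }
        public java.util.List<String> contents() { return items; }
    }

    public static void main(String[] args) {
        ShoppingCart cart = new ShoppingCart();
        cart.add("textbook");

        // Late, dynamic decision: expose this instance for remote access
        // without changing its source code.
        // Rafda.expose(cart, "carts/42");                              // hypothetical

        // Transmission policy chosen per method/parameter rather than being
        // fixed by the class: return contents() by copy, pass add()'s
        // argument by copy as well.
        // Rafda.policyFor(ShoppingCart.class)                          // hypothetical
        //      .method("contents").returnsBy(TransmissionPolicy.COPY)
        //      .parameterOf("add", 0).passBy(TransmissionPolicy.COPY);

        System.out.println("cart holds " + cart.contents().size() + " item(s)");
    }
}
```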

    JMPI: implementing the message passing standard in Java

    The Message Passing Interface (MPI) standard provides a uniform Application Programmers Interface (API) that abstracts the underlying hardware from the parallel applications. Recent research efforts have extended the MPI standard to Java either through wrapper implementations or as subsets of larger parallel infrastructures. In this paper, we describe JMPI, a reference implementation of MPI developed at the Architecture and Real-Time Laboratory at the University of Massachusetts Amherst. In this implementation, we explore using Java's Remote Method Invocation (RMI), Object Serialization and Introspection technologies for message passing. We discuss the architecture of our implementation, adherence to emerging standards for MPI bindings for Java, our performance results, and future research directions.
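    The abstract describes message passing built from RMI and object serialization; the following hedged sketch shows one generic way such a layer can be structured (a remote mailbox per process, with messages as serializable objects). The class and interface names are hypothetical and are not taken from JMPI.

```java
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A message is an ordinary serializable object; RMI serializes it in transit.
class Message implements Serializable {
    final int sourceRank;
    final int tag;
    final Object payload;

    Message(int sourceRank, int tag, Object payload) {
        this.sourceRank = sourceRank;
        this.tag = tag;
        this.payload = payload;
    }
}

// Each process exports a remote mailbox that peers invoke to deliver messages.
interface Mailbox extends Remote {
    void deliver(Message m) throws RemoteException;
}

class MailboxImpl implements Mailbox {
    private final BlockingQueue<Message> queue = new LinkedBlockingQueue<>();

    // Invoked remotely by the sending process (the "send" side).
    public void deliver(Message m) {
        queue.add(m);
    }

    // Invoked locally by the receiving process (a blocking "receive").
    Message receive() throws InterruptedException {
        return queue.take();
    }
}
```

    A sending process would look up the destination rank's Mailbox stub (for example, through an RMI registry keyed by rank) and call deliver; a blocking receive is simply a take from the local queue. JMPI additionally uses introspection to handle arbitrary message types, which this sketch omits.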