13 research outputs found

    RAFDA: A Policy-Aware Middleware Supporting the Flexible Separation of Application Logic from Distribution

    Middleware technologies often limit the way in which object classes may be used in distributed applications due to the fixed distribution policies that they impose. These policies permeate applications developed using existing middleware systems and force an unnatural encoding of application level semantics. For example, the application programmer has no direct control over inter-address-space parameter passing semantics. Semantics are fixed by the distribution topology of the application, which is dictated early in the design cycle. This creates applications that are brittle with respect to changes in distribution. This paper explores technology that provides control over the extent to which inter-address-space communication is exposed to programmers, in order to aid the creation, maintenance and evolution of distributed applications. The described system permits arbitrary objects in an application to be dynamically exposed for remote access, allowing applications to be written without concern for distribution. Programmers can conceal or expose the distributed nature of applications as required, permitting object placement and distribution boundaries to be decided late in the design cycle and even dynamically. Inter-address-space parameter passing semantics may also be decided independently of object implementation and at varying times in the design cycle, again possibly as late as run-time. Furthermore, transmission policy may be defined on a per-class, per-method or per-parameter basis, maximizing plasticity. This flexibility is of utility in the development of new distributed applications, and the creation of management and monitoring infrastructures for existing applications.
    Comment: Submitted to EuroSys 200
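
    As a rough illustration of the policy idea described above (not RAFDA's actual API; every name below, from TransmissionPolicy to Registry.expose, is an assumption for the sketch), a per-class, per-method or per-parameter transmission policy might be declared independently of the object's implementation and applied as late as run time:

        // Illustrative only: the abstract does not give RAFDA's interface, so all
        // names here (TransmissionPolicy, Registry.expose, the rule keys) are assumptions.
        import java.util.HashMap;
        import java.util.Map;

        enum Transmission { BY_VALUE, BY_REFERENCE }

        // Pass-by semantics keyed per class, per method or per parameter,
        // so the choice can be deferred to late in the design cycle or to run time.
        class TransmissionPolicy {
            private final Map<String, Transmission> rules = new HashMap<>();

            void setRule(String target, Transmission t) { rules.put(target, t); }

            Transmission lookup(String target) {
                return rules.getOrDefault(target, Transmission.BY_REFERENCE);
            }
        }

        class Registry {
            // Dynamically expose an ordinary local object for remote access under a policy.
            static void expose(Object localObject, String name, TransmissionPolicy policy) {
                System.out.println("exposed '" + name + "', default semantics: " + policy.lookup(name));
            }
        }

        class PolicyExample {
            public static void main(String[] args) {
                TransmissionPolicy policy = new TransmissionPolicy();
                policy.setRule("AccountService.transfer", Transmission.BY_REFERENCE); // per method
                policy.setRule("AccountService.report#arg0", Transmission.BY_VALUE);  // per parameter
                Registry.expose(new Object(), "accounts", policy);
            }
        }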

    A Peer-to-Peer Middleware Framework for Resilient Persistent Programming

    The persistent programming systems of the 1980s offered a programming model that integrated computation and long-term storage. In these systems, reliable applications could be engineered without requiring the programmer to write translation code to manage the transfer of data to and from non-volatile storage. More importantly, it simplified the programmer's conceptual model of an application, and avoided the many coherency problems that result from multiple cached copies of the same information. Although technically innovative, persistent languages were not widely adopted, perhaps due in part to their closed-world model. Each persistent store was located on a single host, and there were no flexible mechanisms for communication or transfer of data between separate stores. Here we re-open the work on persistence and combine it with modern peer-to-peer techniques in order to provide support for orthogonal persistence in resilient and potentially long-running distributed applications. Our vision is of an infrastructure within which an application can be developed and distributed with minimal modification, whereupon the application becomes resilient to certain failure modes. If a node, or the connection to it, fails during execution of the application, the objects are re-instantiated from distributed replicas, without their reference holders being aware of the failure. Furthermore, we believe that this can be achieved within a spectrum of application programmer intervention, ranging from minimal to totally prescriptive, as desired. The same mechanisms encompass an orthogonally persistent programming model. We outline our approach to implementing this vision, and describe current progress.
    Comment: Submitted to EuroSys 200
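
    As a rough sketch of the resilience behaviour described above (the paper's actual mechanism is not detailed in the abstract, so the Store and ResilientRef names are hypothetical), a reference holder can stay unaware of a failed node if its proxy silently re-resolves the object from another replica:

        import java.util.List;

        // Hypothetical stand-in for an object held on a remote store node.
        interface Store {
            String read(String key) throws Exception;    // fails if the node or link is down
        }

        // Proxy that hides node failure from the reference holder by falling
        // back to the next available distributed replica.
        class ResilientRef {
            private final List<Store> replicas;

            ResilientRef(List<Store> replicas) { this.replicas = replicas; }

            String read(String key) {
                for (Store replica : replicas) {
                    try {
                        return replica.read(key);        // first reachable replica wins
                    } catch (Exception nodeDown) {
                        // try the next replica; the caller never sees the failure
                    }
                }
                throw new IllegalStateException("all replicas unavailable");
            }
        }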

    A semi-automatic parallelization tool for Java based on fork-join synchronization patterns

    Because of the increasing availability of multi-core machines, clusters, Grids, and combinations of these environments, there is now plenty of computational power available for executing compute intensive applications. However, because of the overwhelming and rapid advances in distributed and parallel hardware and environments, today's programmers are not fully prepared to exploit distribution and parallelism. In this sense, the Java language has helped in handling the heterogeneity of such environments, but there is a lack of facilities and tools for easily distributing and parallelizing applications. One solution to mitigate this problem and make some progress towards producing general tools seems to be the synthesis of semi-automatic parallelism and Parallelism as a Concern (PaaC), which allows applications to be parallelized with as few modifications to the sequential code as possible. In this paper, we discuss a new approach that aims at overcoming the drawbacks of current Java-based parallel and distributed development tools, which precisely exploit these new concepts.
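
    The underlying pattern is the standard Java fork-join idiom shown below (plain java.util.concurrent, not the tool's own generated code): a task splits its input, forks one half, computes the other, and joins the results.

        import java.util.Arrays;
        import java.util.concurrent.ForkJoinPool;
        import java.util.concurrent.RecursiveTask;

        // Plain fork-join sum over an array: fork one half, compute the other, join.
        class SumTask extends RecursiveTask<Long> {
            private static final int THRESHOLD = 10_000;
            private final long[] data;
            private final int lo, hi;

            SumTask(long[] data, int lo, int hi) { this.data = data; this.lo = lo; this.hi = hi; }

            @Override
            protected Long compute() {
                if (hi - lo <= THRESHOLD) {              // small enough: run sequentially
                    long sum = 0;
                    for (int i = lo; i < hi; i++) sum += data[i];
                    return sum;
                }
                int mid = (lo + hi) >>> 1;
                SumTask left = new SumTask(data, lo, mid);
                SumTask right = new SumTask(data, mid, hi);
                left.fork();                             // run the left half asynchronously
                long rightResult = right.compute();      // right half in the current thread
                return left.join() + rightResult;        // wait for the forked half
            }

            public static void main(String[] args) {
                long[] data = new long[1_000_000];
                Arrays.fill(data, 1L);
                long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
                System.out.println(total);               // prints 1000000
            }
        }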

    Parallel implementation of the Barnes-Hut algorithm for N-body simulation using an arbitrary number of GPUs

    The N-body problem concerns predicting the behaviour of individual bodies within a dynamic system in which all bodies interact with one another. It appears in different areas of study, from the simulation of interactions between bodies at gigantic scales, such as celestial bodies (planets, stars, galaxies), down to microscopic scales such as small particles. It is a problem that normally requires great computational effort to solve and, because of this, several algorithms have been developed over the years to reduce the processing time required. Among them is the Barnes-Hut algorithm, which approximates the bodies being computed by grouping them so as to minimise the calculations performed. In recent years, with graphics processing units (GPUs) becoming available for the parallel execution of general-purpose algorithms, several parallel solutions to the N-body problem have been proposed for this new platform. In this work, we extend a parallel GPU implementation of the Barnes-Hut algorithm to allow the use of an arbitrary number of GPUs. We evaluate the behaviour of the program as each GPU is added, testing with up to 4 GPUs simultaneously. The positive results obtained are in line with our expectations: the speedup initially approaches the theoretical maximum, but as the number of GPUs increases, the management overhead grows, reducing the speedup gained.
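
    For reference, the grouping decision at the heart of Barnes-Hut is the opening-angle test sketched below (a generic illustration, not the GPU code from this work): a distant group of bodies is treated as a single point mass when its size is small relative to its distance.

        // Generic Barnes-Hut acceptance test (illustrative only): a tree node that
        // represents a group of bodies may be approximated by one point mass when
        // (node size / distance) falls below the opening angle theta.
        class BarnesHutCriterion {
            static final double THETA = 0.5;             // typical opening angle

            static boolean canApproximate(double nodeSize, double dx, double dy, double dz) {
                double distance = Math.sqrt(dx * dx + dy * dy + dz * dz);
                return nodeSize / distance < THETA;      // far-away group -> one pseudo-body
            }
        }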

    EasyFJP: Providing Hybrid Parallelism as a Concern for Divide and Conquer Java Applications

    Because of the increasing availability of multi-core machines, clusters, Grids, and combinations of these, there is now plenty of computational power, but today's programmers are not fully prepared to exploit parallelism. In particular, Java has helped in handling the heterogeneity of such environments. However, there is a lot of ground to cover regarding facilities for easily and elegantly parallelizing applications. One path to this end seems to be the synthesis of semi-automatic parallelism and Parallelism as a Concern (PaaC). The former allows users to be mostly unaware of parallel exploitation problems and at the same time manually optimize parallelized applications whenever necessary, while the latter allows applications to be separated from parallel-related code. In this paper, we present EasyFJP, an approach that implicitly exploits parallelism in Java applications based on the concept of the fork-join synchronization pattern, a simple but effective abstraction for creating and coordinating parallel tasks. In addition, EasyFJP lets users explicitly optimize applications through policies, or user-provided rules to dynamically regulate task granularity. Finally, EasyFJP relies on PaaC by means of source code generation techniques to wire applications and parallel-specific code together. Experiments with real-world applications on an emulated Grid and a cluster show that EasyFJP delivers competitive performance compared to state-of-the-art Java parallel programming tools.
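
    The policy mechanism can be pictured as a user-supplied rule consulted at each potential fork point to decide whether spawning a parallel task is worthwhile; the interface below is an assumption for illustration, not EasyFJP's real API.

        // Illustrative granularity policy, not EasyFJP's actual interface: the user
        // provides a rule that decides, per call, whether forking pays off.
        interface GranularityPolicy {
            boolean shouldFork(int problemSize, int activeTasks);
        }

        // Example rule: fork only while the problem is large and the pool is not saturated.
        class ThresholdPolicy implements GranularityPolicy {
            private final int minSize;
            private final int maxTasks;

            ThresholdPolicy(int minSize, int maxTasks) {
                this.minSize = minSize;
                this.maxTasks = maxTasks;
            }

            @Override
            public boolean shouldFork(int problemSize, int activeTasks) {
                return problemSize > minSize && activeTasks < maxTasks;
            }
        }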

    Distributed code generation using object level analysis

    Master's thesis (Master of Engineering)

    Platform-dependent environment for distributed parallel computing on computer clusters


    Transparent and adaptive application partitioning using mobile objects

    The dynamic nature and heterogeneity of modern execution environments such as mobile, ubiquitous, and grid computing present major challenges for the development and efficient execution of the applications targeted at these environments. In particular, applications tailored to run in a specific environment will show different and most likely sub-optimal behaviour when executed in a different and/or dynamic environment. Consequently, there has been growing interest in the area of application adaptation, which aims to enable applications to cope with varying execution environments. Adaptive application partitioning, a specific form of non-functional adaptation involving the distribution of mobile objects across multiple host machines, is of particular interest to this thesis due to the diversity of its uses. In this approach, certain runtime information (known as context) is used to allow an object-oriented application to adaptively (re)adjust the placement of its objects during execution, for purposes such as improving application performance and reliability as well as balancing resource utilisation across machines. Promoting the adoption of such adaptation requires a process that involves minimal human effort in both the execution and the development of the relevant application. These challenges establish the main goals and contributions of this work, which include:
    1) Proposing an effective application partitioning solution via the adoption of a decentralised adaptation strategy known as local adaptation.
    2) Enabling adaptive application partitioning which does not require human intervention, through automatic collection of the required information/context.
    3) Proposing a solution for transparently injecting the required adaptation functionality into regular object-oriented applications, allowing a significant reduction of the associated development cost/effort.
    The proposed solutions have been implemented in a Java-based adaptation framework called MobJeX. This implementation, which was used as a test bed for the empirical experiments undertaken in this study, can be used to facilitate future research relevant to this particular study.
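
    A highly simplified sketch of the local adaptation step described above (all names are hypothetical, not MobJeX's API): context collected automatically on a host, such as the CPU load of candidate machines, drives the placement of a mobile object without programmer intervention.

        import java.util.Map;

        // Hypothetical local-adaptation step: collected context (CPU load per host)
        // decides whether a mobile object should be relocated.
        class AdaptationEngine {

            // Pick the least-loaded host from the automatically collected context.
            static String choosePlacement(Map<String, Double> cpuLoadByHost) {
                return cpuLoadByHost.entrySet().stream()
                        .min(Map.Entry.comparingByValue())
                        .map(Map.Entry::getKey)
                        .orElseThrow();
            }

            static void adapt(Object mobileObject, String currentHost, Map<String, Double> context) {
                String target = choosePlacement(context);
                if (!target.equals(currentHost)) {
                    // A real framework would serialise the object, move it, and update
                    // the proxies held by its reference holders transparently.
                    System.out.println("relocating object to " + target);
                }
            }

            public static void main(String[] args) {
                adapt(new Object(), "hostA", Map.of("hostA", 0.9, "hostB", 0.3, "hostC", 0.6));
            }
        }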