10 research outputs found

    Building Fault Tolerance within Clouds at the Network Level

    Cloud computing technologies and infrastructure are emerging rapidly, making it cost-effective for users to run their IT-based business solutions economically. However, many intricate issues have cropped up that must be addressed before clouds can be used for the purpose for which they are designed and implemented. Among these, fault tolerance and securing the data stored on the cloud are the most important. Continuous availability of services depends on many factors: faults are bound to happen within the network, the software, the platform, or the infrastructure used to establish the cloud. The network that connects the various servers, devices, and peripherals must be fault tolerant to begin with, so that the intended, uninterrupted services can be made available to users. This paper presents a novel network design method that achieves high availability of the network and thereby of the cloud itself.

    High performance checksum computation for fault-tolerant MPI over InfiniBand

    With the increase in the number of nodes in clusters, the probability of failures and unusual events increases. In this paper, we present checksum mechanisms to detect data corruption. We study the impact of checksums on network communication performance and propose a mechanism to amortize their cost on InfiniBand. We have implemented our mechanisms in the NewMadeleine communication library. Our evaluation shows that our message-integrity mechanisms do not noticeably impact application performance, which is an improvement over state-of-the-art MPI implementations.
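The kind of end-to-end integrity check the abstract describes can be sketched in plain Python. This is an illustrative model only, not NewMadeleine's actual code: the checksum algorithm (a Fletcher-32-style sum) and the helper names are assumptions for the example.

```python
def fletcher32(data: bytes) -> int:
    """Compute a Fletcher-32-style checksum over a byte buffer."""
    s1, s2 = 0, 0
    if len(data) % 2:           # pad odd-length input with a zero byte
        data = data + b"\x00"
    for i in range(0, len(data), 2):
        word = data[i] | (data[i + 1] << 8)   # little-endian 16-bit word
        s1 = (s1 + word) % 65535
        s2 = (s2 + s1) % 65535
    return (s2 << 16) | s1

def send_with_checksum(payload: bytes):
    """Sender side: attach a checksum to the outgoing message."""
    return payload, fletcher32(payload)

def receive_and_verify(payload: bytes, checksum: int) -> bool:
    """Receiver side: recompute and compare to detect corruption."""
    return fletcher32(payload) == checksum
```

The point of the paper is amortizing this computation against InfiniBand transfer time; the sketch only shows the detection logic itself.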

    Evaluating and Improving the Performance of an MPI Implementation for the Grid

    Parallel applications generally use the MPI standard for their communications. Most MPI implementations target homogeneous clusters. With the emergence of computing grids, these implementations must evolve to cope efficiently with the constraints of these new platforms: handling heterogeneity and accounting for the long-distance network links that interconnect grid sites. No current implementation handles both of these parameters efficiently. After a survey of existing implementations, this article analyses the behaviour on the grid of one of them, MPICH-Madeleine, which offers efficient handling of heterogeneous high-speed cluster networks. Based on our first experiments, we propose optimisations that improve execution performance on the grid. They allowed us to increase the bandwidth of an MPI ping-pong very substantially, from 95 Mb/s to 600 Mb/s over the long-distance link. The experiments were carried out on the French Grid'5000 grid.
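The bandwidth figures above come from a standard MPI ping-pong benchmark. How such a benchmark converts a measured round-trip time into a bandwidth can be shown in a few lines (an illustrative derivation; the function name and figures are not from the paper):

```python
def pingpong_bandwidth_mbps(msg_bytes: int, round_trip_s: float) -> float:
    """Convert one ping-pong round trip into per-direction bandwidth (Mb/s).

    A round trip carries the message out and back, so the one-way
    transfer time is half the measured round-trip time.
    """
    one_way_s = round_trip_s / 2
    bits = msg_bytes * 8
    return bits / one_way_s / 1e6
```

For example, a 1 MB message with a 2 s round trip yields 8 Mb/s per direction.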

    Optimized Implementation of Selected Collective Operations Exploiting the Hardware Parallelism of the InfiniBand Network

    The goal of this work is an optimized implementation, for the InfiniBand network, of the reduction operations MPI_Reduce(), MPI_Allreduce(), MPI_Scan(), and MPI_Reduce_scatter() defined in the MPI-1 standard, with particular emphasis on special InfiniBand operations and hardware parallelism. InfiniBand makes it possible to separate communication operations cleanly from computation, allowing the two to overlap during a reduction. The potential of this method is evaluated both theoretically and in a prototype implementation within the Open MPI framework, and the end result is compared with existing implementations (e.g. MVAPICH). The performance of collective communication operations is one of the deciding factors in the overall performance of an MPI application. Current MPI implementations use point-to-point components to access the InfiniBand network; we therefore try to improve the performance of a collective component by accessing the InfiniBand network directly, which avoids overhead and makes it possible to tune the algorithms to this specific network. Various algorithms for the MPI_Reduce, MPI_Allreduce, MPI_Scan, and MPI_Reduce_scatter operations are presented. The theoretical performance of the algorithms is analyzed with the LogfP and LogGP models. Selected algorithms are implemented as part of an Open MPI collective component. Finally, the performance of different algorithms and different MPI implementations is compared.
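The classic algorithm behind MPI_Reduce is a binomial-tree combine, which can be simulated in plain Python. This is a generic textbook sketch, not the thesis's InfiniBand-offloaded variant; the function name and loop structure are illustrative:

```python
def tree_reduce(values, op):
    """Simulate a binomial-tree reduction; "rank" 0 holds the result.

    In round k, each rank whose index is a multiple of 2^(k+1) combines
    its partial result with the one held by the rank 2^k positions away,
    mirroring the log2(P) communication rounds of MPI_Reduce.
    """
    buf = list(values)
    n = len(buf)
    step = 1
    while step < n:
        for rank in range(0, n, 2 * step):
            partner = rank + step
            if partner < n:
                # 'rank' receives the partner's partial result and combines.
                buf[rank] = op(buf[rank], buf[partner])
        step *= 2
    return buf[0]
```

With an associative operator this takes ceil(log2(P)) rounds, which is the quantity the LogfP/LogGP analysis in the thesis models per round.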

    RADIC: A Fault-Tolerance Middleware that Preserves Performance

    Fault tolerance is a research line that has gained significant importance with the increasing computing power of today's supercomputers. Greater processing power comes with a larger number of components, which in turn brings more failures. Today's fault-tolerance strategies are mostly centralized and do not scale to large numbers of processes, since synchronization among all of them is required to perform the fault-tolerance tasks. Moreover, maintaining performance in parallel applications is crucial, both in the presence and in the absence of faults. Accordingly, this work focuses on a decentralized fault-tolerant architecture (RADIC, Redundant Array of Distributed and Independent Controllers) that seeks to maintain the initial performance and to guarantee the lowest possible overhead for reconfiguring the system in case of failure. The architecture has been implemented in the Open MPI message-passing library, one of the most widely used in the scientific community for running parallel programs. Initial tests show that the system introduces minimal overhead to perform the fault-tolerance tasks and that performance is restored to its pre-failure level. MPI is by default a fail-stop standard, and the implementations that add some level of fault tolerance mostly use coordinated strategies. In RADIC, when a failure occurs, the failed process is recovered on another node by rolling back to a previously saved state, obtained through uncoordinated checkpoints, and by replaying messages from the event log. During recovery, communications with the affected process must be delayed and redirected to its new location. Restoring processes on nodes that already host processes overloads the execution and degrades performance, so this work proposes using spare nodes to recover failed processes, thereby avoiding extra load on nodes that already have work. We present a design for managing recovery on spare nodes automatically and in a decentralized way in an Open MPI environment, together with an analysis of this design's impact on performance. Initial results show significant degradation when several failures occur during an execution and no spares are used, whereas using spares restores the initial configuration and maintains performance.
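The rollback-recovery strategy described above, a local checkpoint plus a message log replayed after a failure, can be shown as a toy single-process model. All names here are illustrative; real RADIC manages distributed protectors, redirection, and spare-node selection, none of which appears in this sketch:

```python
class Process:
    """Toy model of uncoordinated checkpointing with message logging."""

    def __init__(self):
        self.state = 0              # application state (a running sum here)
        self.checkpoint_state = 0   # last saved state
        self.message_log = []       # messages received since the checkpoint

    def deliver(self, msg: int):
        self.message_log.append(msg)   # log the message before applying it
        self.state += msg

    def checkpoint(self):
        self.checkpoint_state = self.state
        self.message_log.clear()       # the log only spans post-checkpoint messages

    def recover(self):
        # Roll back to the last checkpoint, then replay the logged
        # messages in order to reach the pre-failure state.
        self.state = self.checkpoint_state
        for msg in self.message_log:
            self.state += msg
```

Because each process logs and checkpoints independently, no global synchronization is needed, which is the scalability argument the abstract makes against coordinated schemes.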

    One-Sided Communication for High Performance Computing Applications

    Thesis (Ph.D.) - Indiana University, Computer Sciences, 2009. Parallel programming presents a number of critical challenges to application developers. Traditionally, message passing, in which one process explicitly sends data and another explicitly receives it, has been used to program parallel applications. With the recent growth in multi-core processors, the level of parallelism necessary for next-generation machines is cause for concern in the message-passing community. The one-sided programming paradigm, in which only one of the two processes involved in communication actively participates in message transfer, has seen increased interest as a potential replacement for message passing. One-sided communication does not carry the heavy per-message overhead associated with modern message-passing libraries. The paradigm offers lower synchronization costs and advanced data-manipulation techniques such as remote atomic arithmetic and synchronization operations. These combine to present an appealing interface for applications with random communication patterns, which traditionally present message-passing implementations with difficulties. This thesis presents a taxonomy both of the one-sided paradigm and of applications which are ideal for the one-sided interface. Three case studies, based on real-world applications, are used to motivate both taxonomies and to verify the applicability of the MPI one-sided communication and Cray SHMEM one-sided interfaces to real-world problems. While our results show a number of shortcomings in existing implementations, they also suggest that a number of applications could benefit from the one-sided paradigm. Finally, an implementation of the MPI one-sided interface within Open MPI is presented, which provides a number of unique performance features necessary for efficient use of the one-sided programming paradigm.
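The one-sided model can be illustrated with a toy "window" object in which only the origin side acts. This is a deliberately simplified simulation; the method names loosely mirror MPI_Put / MPI_Get / MPI_Accumulate semantics but are not the real MPI API, and all synchronization is omitted:

```python
class Window:
    """Toy model of a one-sided communication window.

    The target exposes a region of memory; the origin reads, writes,
    and accumulates into it without the target calling anything.
    """

    def __init__(self, size: int):
        self.mem = [0] * size   # target-side exposed memory

    def put(self, offset: int, values: list):
        # Origin writes directly into the target's window.
        self.mem[offset:offset + len(values)] = values

    def get(self, offset: int, count: int) -> list:
        # Origin reads directly from the target's window.
        return self.mem[offset:offset + count]

    def accumulate(self, offset: int, value: int) -> int:
        # Remote atomic-style arithmetic (single-threaded toy here).
        self.mem[offset] += value
        return self.mem[offset]
```

In two-sided message passing every transfer needs a matching receive on the target; here the target contributes nothing after exposing the window, which is the synchronization saving the abstract refers to.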

    FT-PAS: A Framework for Pattern-Specific Fault Tolerance in Parallel Programming

    Fault tolerance is an important requirement for long-running parallel applications. Many approaches to providing fault tolerance for parallel systems are discussed in the literature, but most exhibit one or more of the following shortcomings: the solution is non-specific (i.e., a general fault-tolerance solution), there is no separation of concerns (i.e., the application developer's involvement in implementing fault tolerance is significant), or the approach is limited to a built-in fault-tolerance solution. In this thesis, we propose a different approach that delivers fault tolerance to parallel programs using a priori knowledge about their patterns. Our approach is based on the observation that different patterns require different fault-tolerance techniques (specificity). Consequently, we contribute a classification of patterns into sub-patterns based on fault-tolerance strategies. Moreover, the core functionalities of these fault-tolerance strategies can be abstracted and pre-implemented generically, independent of any specific application; this pre-packaged solution hides the implementation details from the application developer (separation of concerns). One such fault-tolerance model is designed and implemented here to demonstrate the idea: the Fault-Tolerant Parallel Architectural Skeleton (FT-PAS) model implements various fault-tolerance protocols targeted at a collection of frequently used patterns in parallel programming. Fault-tolerance protocol extension is another important contribution of this research: the FT-PAS model provides a set of basic building blocks with which new fault-tolerance protocols can be built as needed for the available patterns. Finally, the use of the model from the perspective of two user categories (an application developer and a protocol designer) is illustrated through examples.

    Fault Tolerance for High-Performance Applications Using Structured Parallelism Models

    In recent years, parallel computing has increasingly exploited the high-level models of structured parallel programming, an example of which are algorithmic skeletons. This trend has been motivated by the properties of structured parallelism models, which can be used to derive several static and dynamic optimizations at various implementation levels. In this thesis we study the properties of structured parallel models that are useful for providing fault-tolerance support oriented towards high-performance applications. This issue has traditionally been faced in two ways: (i) in the context of unstructured parallelism models (e.g. MPI), whose computation model is essentially a distributed set of processes communicating through message passing, with approaches based on checkpointing and rollback recovery or on software replication; (ii) in the context of high-level models, based on a specific parallelism model (e.g. data-flow) and/or an implementation model (e.g. master-slave), by introducing specific techniques based on the properties of the programming and computation models themselves. In this thesis we take a step towards a more abstract viewpoint and highlight the properties of structured parallel models that are interesting for fault-tolerance purposes. We consider two classes of parallel programs (namely task parallel and data parallel) and introduce a fault-tolerance support based on checkpointing and rollback recovery. The support is derived from the high-level properties of the parallel models: we call this derivation specialization of fault-tolerance techniques, highlighting the difference from classical solutions that support structure-unaware computations. As a consequence of this specialization, the introduced fault-tolerance techniques can be configured and optimized to meet specific needs at different implementation levels. That is, the supports we present do not target a single computing platform or a specific class of platforms; rather, the specializations are the mechanism for targeting specific issues of the execution environment and of the implemented applications, through proper choices of the protocols and their configurations.