    SEDAR: Detecting and Recovering Transient Faults in HPC Applications

    Get PDF
    Fault management is a growing concern in HPC; in the future, greater varieties and rates of errors, longer detection intervals, and silent faults are expected. In exascale systems, errors are projected to occur several times a day and to propagate, causing anything from process crashes to corrupted results due to undetected faults. This work describes the use of SEDAR, a tool that detects transient faults in MPI applications and automatically recovers their executions, enabling them to finish with reliable results. Detection is based on replicating the computation and monitoring both message sends and local computation, while recovery relies on multiple system-level checkpoints. Studying SEDAR's behavior in the presence of faults injected at different points during execution makes it possible to evaluate its performance and characterize the overhead of using it. Since its mode of use can be configured to match a given system's coverage requirements and maximum allowed overhead, SEDAR is a feasible and viable methodology for transient fault tolerance in HPC systems.
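
    The detection-and-recovery scheme described above lends itself to a compact sketch. The Python fragment below is only an illustration of the idea (duplicated computation validated before each send, rollback to a checkpoint on divergence); the function and file names are hypothetical, and SEDAR itself operates on MPI applications with system-level checkpoints rather than plain Python.

        import pickle

        class TransientFaultDetected(Exception):
            """Raised when replicated computations disagree."""

        def validated_send(compute, send):
            # Run the local computation twice (replication) and compare the
            # results; only forward the message if both replicas agree.
            first, second = compute(), compute()
            if first != second:
                raise TransientFaultDetected("replicas diverged before send")
            send(first)
            return first

        def run_with_recovery(step, state, ckpt_path="ckpt.pkl"):
            # Checkpoint the current state, then execute one step; on a
            # detected transient fault, roll back and re-execute the step.
            with open(ckpt_path, "wb") as f:
                pickle.dump(state, f)
            while True:
                try:
                    return step(state)
                except TransientFaultDetected:
                    with open(ckpt_path, "rb") as f:
                        state = pickle.load(f)  # restore last good state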

    Recovery for sporadic operations on cloud applications

    Full text link
    Cloud-based systems change more frequently than traditional systems. These frequent changes involve sporadic operations such as installation and upgrade. Sporadic operations on cloud manipulate cloud resources, and they are prone to unpredictable and inevitable failures, largely due to cloud uncertainty. Recovering from failures in sporadic operations on cloud therefore requires cloud operational recovery strategies. Existing operational recovery methods on cloud have several drawbacks, such as the poor generalizability of exception handling mechanisms and the coarse-grained recovery of rollback mechanisms. Hence, this thesis proposes a novel recovery approach, called POD-Recovery, for sporadic operations on cloud. POD-Recovery is based on eight cloud operational recovery requirements formulated by us (e.g. recovery time objective satisfaction and recovery generalizability), and it is non-intrusive: it does not modify the code that implements the sporadic operation. POD-Recovery works as follows. It first treats a sporadic operation as a process that provides the workflow of the operation and the contextual information for each operational step. It then identifies the recovery points (where failure detection and recovery should be performed) inside the sporadic operation, determines the unified resource space (the resource types required and manipulated by the operation), and generates the expected resource state templates (abstractions of resource states) for all operational steps. For a given recovery point, POD-Recovery filters the applicable recovery patterns from the eight recovery patterns it supports and automatically generates the recovery actions for the applicable patterns. Next, it evaluates the generated recovery actions on the metrics of Recovery Time, Recovery Cost and Recovery Impact; this quantitative evaluation leads to the selection of an acceptable recovery action to execute at that recovery point. We implement POD-Recovery and evaluate it by recovering from faults injected into five representative types of sporadic operations on cloud. The experimental results show that POD-Recovery performs operational recovery while satisfying all the recovery requirements, and that it improves on the existing recovery methods for cloud operations.
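
    As a rough illustration of the final selection step, the sketch below scores candidate recovery actions on the three metrics named in the abstract and applies a recovery-time-objective filter. The data layout, weights and acceptance rule are assumptions made for illustration, not the thesis's actual evaluation model.

        from dataclasses import dataclass

        @dataclass
        class RecoveryAction:
            pattern: str      # e.g. "retry" or "rollback" (illustrative names)
            time_s: float     # estimated Recovery Time in seconds
            cost: float       # estimated Recovery Cost (e.g. billed resources)
            impact: float     # estimated Recovery Impact (0 = none, 1 = severe)

        def select_action(actions, rto_s, w=(0.5, 0.3, 0.2)):
            # Keep only actions satisfying the recovery time objective (RTO).
            feasible = [a for a in actions if a.time_s <= rto_s]
            if not feasible:
                return None   # no acceptable recovery action at this point
            # Normalise time and cost, then take a weighted sum (lower wins).
            t_max = max(a.time_s for a in feasible) or 1.0
            c_max = max(a.cost for a in feasible) or 1.0
            def score(a):
                return w[0]*a.time_s/t_max + w[1]*a.cost/c_max + w[2]*a.impact
            return min(feasible, key=score)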

    Software for Exascale Computing - SPPEXA 2016-2019

    Get PDF
    This open access book summarizes the research done and results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. It thus both continues Vol. 113 of Springer's series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA's first funding phase, and provides an overview of SPPEXA's contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    The Fifth Workshop on HPC Best Practices: File Systems and Archives

    Full text link
    The workshop on High Performance Computing (HPC) Best Practices on File Systems and Archives was the fifth in a series sponsored jointly by the Department of Energy (DOE) Office of Science and the DOE National Nuclear Security Administration. The workshop gathered technical and management experts in the operation of HPC file systems and archives from around the world. Attendees identified and discussed best practices in use at their facilities and documented their findings for the DOE and HPC community in this report.

    Combining Checkpointing and Replication for Reliable Execution of Linear Workflows with Fail-Stop and Silent Errors

    Get PDF
    Large-scale platforms currently experience errors from two different sources, namely fail-stop errors (which interrupt the execution) and silent errors (which strike unnoticed and corrupt data). This work combines checkpointing and replication for the reliable execution of linear workflows on platforms subject to these two error types. While checkpointing and replication have been studied separately, their combination has not yet been investigated despite its promising potential to minimize the execution time of linear workflows in error-prone environments. Moreover, combined checkpointing and replication has not yet been studied in the presence of both fail-stop and silent errors. The combination raises new problems: for each task, we have to decide whether to checkpoint and/or replicate it to ensure its reliable execution. We provide an optimal dynamic programming algorithm of quadratic complexity to solve both problems. This dynamic programming algorithm has been validated through extensive simulations that reveal the conditions in which checkpointing only, replication only, or the combination of both techniques leads to improved performance.
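
    The flavour of such a dynamic program over a linear chain of tasks can be sketched as follows. This is a simplified stand-in, not the paper's algorithm: segment_cost(i, j) is assumed to return the expected time to execute tasks i..j between two checkpoints under the chosen replication scheme and both error types, a quantity the paper derives but which is abstracted away here.

        def optimal_checkpoints(n, segment_cost, ckpt_cost):
            # best[j] = minimal expected time to complete tasks 1..j with a
            # checkpoint taken right after task j; best[0] is an empty prefix.
            best = [0.0] * (n + 1)
            choice = [0] * (n + 1)
            for j in range(1, n + 1):
                # Try every position i of the previous checkpoint: O(n) per
                # task, hence quadratic complexity overall.
                best[j], choice[j] = min(
                    (best[i] + segment_cost(i + 1, j) + ckpt_cost, i)
                    for i in range(j)
                )
            # Walk the recorded choices backwards to recover the checkpoint
            # positions of an optimal solution.
            cuts, j = [], n
            while j > 0:
                cuts.append(j)
                j = choice[j]
            return best[n], sorted(cuts)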

    Managing Smartphone Testbeds with SmartLab

    Get PDF
    The explosive number of smartphones with ever-growing sensing and computing capabilities has brought a paradigm shift to many traditional domains of the computing field. Re-programming smartphones and instrumenting them for application testing and data gathering at scale is currently a tedious, time-consuming process that poses significant logistical challenges. In this paper, we make three major contributions. First, we propose a comprehensive architecture, coined SmartLab, for managing a cluster of both real and virtual smartphones that are either wired to a private cloud or connected over a wireless link. Second, we propose and describe a number of Android management optimizations (e.g., command pipelining, screen-capturing, file management), which can be useful to the community for building similar functionality into their systems. Third, we conduct extensive experiments and microbenchmarks to support our design choices, providing qualitative evidence on the expected performance of each module comprising our architecture. The paper also reviews experiences from using SmartLab in a research-oriented setting, as well as ongoing and future development efforts.
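
    To give a flavour of the device-fleet plumbing such a testbed automates, the sketch below fans a single shell command out to every connected Android device in parallel. It uses only standard adb subcommands (adb devices, adb -s SERIAL shell) and is merely a toy in the spirit of SmartLab's command pipelining, not part of SmartLab itself.

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        def adb_devices():
            out = subprocess.run(["adb", "devices"], capture_output=True,
                                 text=True, check=True).stdout
            # Output: a header line, then "<serial>\tdevice" per device.
            return [line.split("\t")[0] for line in out.splitlines()[1:]
                    if line.strip().endswith("device")]

        def adb_shell(serial, command):
            # Run one shell command on one device and capture its output.
            return subprocess.run(["adb", "-s", serial, "shell", command],
                                  capture_output=True, text=True).stdout

        def run_on_fleet(command):
            # Dispatch the same command to all devices concurrently.
            serials = adb_devices()
            with ThreadPoolExecutor(max_workers=len(serials) or 1) as pool:
                return dict(zip(serials, pool.map(
                    lambda s: adb_shell(s, command), serials)))

        # Example: run_on_fleet("getprop ro.build.version.release")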

    High-Performance Modelling and Simulation for Big Data Applications

    Get PDF
    This open access book was prepared as a final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)”. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
