
    A proactive fault tolerance framework for high performance computing (HPC) systems in the cloud

    High Performance Computing (HPC) systems have been widely used by scientists and researchers in both industry and university laboratories to solve advanced computation problems. Most advanced computation problems are either data-intensive or computation-intensive, and they may take hours, days or even weeks to complete; some traditional HPC computations, for example, run on 100,000 processors for weeks. Consequently, traditional HPC systems often require huge capital investments, and scientists and researchers sometimes have to wait in long queues to access shared, expensive HPC systems. Cloud computing, on the other hand, offers new computing paradigms, capacity, and flexible solutions for both business and HPC applications. Some of the computation-intensive applications that are usually executed on traditional HPC systems can now be executed in the cloud, and the cloud pricing model eliminates huge upfront capital investments. However, even for cloud-based HPC systems, fault tolerance is still an issue of growing concern. The large number of virtual machines and electronic components, as well as software complexity and overall system reliability, availability and serviceability (RAS), are factors with which HPC systems in the cloud must contend. The reactive fault tolerance approach of checkpoint/restart, which is commonly used in HPC systems, does not scale well in the cloud due to resource sharing and distributed system networks. Hence, the need for reliable, fault-tolerant HPC systems is even greater in a cloud environment. In this thesis we present a proactive fault tolerance approach for HPC systems in the cloud that reduces the wall-clock execution time, as well as the dollar cost, in the presence of hardware failure. We have developed a generic fault tolerance algorithm for HPC systems in the cloud, together with a cost model for executing computation-intensive applications on HPC systems in the cloud. Our experimental results, obtained from a real cloud execution environment, show that the wall-clock execution time and cost of running computation-intensive applications in the cloud can be considerably reduced compared to the checkpoint and redundancy techniques used in traditional HPC systems.
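    The abstract does not reproduce the thesis's algorithm or cost model, so the sketch below only illustrates the kind of comparison it describes: expected wall-clock time and dollar cost under reactive checkpoint/restart versus proactive, prediction-driven migration. All formulas, parameters and prices are illustrative assumptions, not figures from the work.

```python
# Hedged sketch: expected wall-clock time and dollar cost of a cloud HPC run
# under (a) reactive checkpoint/restart and (b) proactive migration driven by
# failure prediction. Formulas and numbers are illustrative assumptions.

def checkpoint_restart(work_h, mtbf_h, interval_h, ckpt_overhead_h,
                       restart_h, node_price_h, nodes):
    """Periodic checkpoints: each failure loses on average half an interval
    and pays a restart penalty."""
    base_h = work_h + (work_h / interval_h) * ckpt_overhead_h
    failures = base_h / mtbf_h
    total_h = base_h + failures * (interval_h / 2 + restart_h)
    return total_h, total_h * node_price_h * nodes

def proactive_migration(work_h, mtbf_h, recall, migrate_h, interval_h,
                        ckpt_overhead_h, restart_h, node_price_h, nodes):
    """Predicted failures are avoided by migrating work off the suspect node;
    infrequent safety checkpoints still cover the failures the predictor misses."""
    base_h = work_h + (work_h / interval_h) * ckpt_overhead_h
    failures = base_h / mtbf_h
    predicted, missed = failures * recall, failures * (1 - recall)
    total_h = base_h + predicted * migrate_h + missed * (interval_h / 2 + restart_h)
    return total_h, total_h * node_price_h * nodes

common = dict(work_h=48, mtbf_h=24, restart_h=0.25, node_price_h=0.50, nodes=64)
print("checkpoint/restart :", checkpoint_restart(interval_h=2, ckpt_overhead_h=0.1, **common))
print("proactive migration:", proactive_migration(recall=0.8, migrate_h=0.05,
                                                  interval_h=8, ckpt_overhead_h=0.1, **common))
```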

    Reliable and energy efficient resource provisioning in cloud computing systems

    Cloud computing has revolutionized the Information Technology sector by turning computing into a service. Cloud services can be accessed through easy-to-use portals by users who need not know anything about the underlying system. To provide such an abstract view, cloud computing systems have to perform many complex operations besides managing a large underlying infrastructure, which confronts service providers with challenges such as security, sustainability, reliability, energy consumption and resource management. Among these challenges, reliability and energy consumption are the two on which this thesis focuses, because of their conflicting nature. Current solutions address either reliability techniques or energy efficiency methods, yet mechanisms that provide reliability in cloud computing systems can worsen energy consumption: adding backup resources and running replicated systems provide strong fault tolerance but also increase energy consumption, while reducing energy consumption by running resources at low power scaling levels or by reducing the number of active but idle resources, such as backups, reduces system reliability. This creates a critical trade-off between the two metrics, which is investigated in this thesis. To address this problem, the thesis presents novel resource management policies that provision the best resources in terms of reliability and energy efficiency and allocate them to suitable virtual machines. A mathematical framework showing the interplay between reliability and energy consumption is also proposed, together with a formal method to calculate the finishing time of tasks running in a cloud computing environment affected by independent and correlated failures. The proposed policies adopt various fault tolerance mechanisms while satisfying constraints such as task deadlines and utility values. The thesis also provides a novel failure-aware VM consolidation method, which takes the failure characteristics of resources into consideration before performing VM consolidation. All the proposed resource management methods are evaluated using real failure traces collected from various distributed computing sites. To perform the evaluation, a cloud computing framework, 'ReliableCloudSim', capable of simulating failure-prone cloud computing systems was developed. The key research findings and contributions of this thesis are: 1. If emphasis is given only to energy optimization, without considering reliability, in a failure-prone cloud computing environment, the results can be contrary to intuitive expectations: rather than reducing energy consumption, the system ends up consuming more energy due to the losses incurred by failure overheads. 2. While performing VM consolidation in a failure-prone cloud computing environment, a significant improvement in terms of energy efficiency and reliability can be achieved by considering the failure characteristics of physical resources. 3. By considering the correlated occurrence of failures during resource provisioning and VM allocation, service downtime or interruption is reduced significantly, by 34% in comparison to environments that assume the independent occurrence of failures; moreover, measured by our mathematical model, the ratio of reliability to energy consumption is improved by 14%.
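    As a toy illustration of the trade-off described above (the formulas and numbers below are my own simplifications, not the thesis's mathematical framework or its 'ReliableCloudSim' simulator), consider the expected finishing time and energy of a task on a failure-prone host run at two power levels:

```python
# Toy illustration of the reliability/energy interplay: a slower, lower-power
# run is exposed to more failures, and the failure overheads can eat up the
# energy saving. All parameters are invented for illustration.

def expected_finish_time_h(base_runtime_h, failure_rate_per_h, recovery_h):
    # Each expected failure adds a recovery period plus a rerun of the lost
    # work (approximated as half the base runtime, assuming no checkpoints).
    expected_failures = failure_rate_per_h * base_runtime_h
    return base_runtime_h + expected_failures * (recovery_h + base_runtime_h / 2)

def energy_kwh(finish_time_h, power_kw):
    return finish_time_h * power_kw

fast = expected_finish_time_h(base_runtime_h=10, failure_rate_per_h=0.02, recovery_h=0.5)
slow = expected_finish_time_h(base_runtime_h=14, failure_rate_per_h=0.02, recovery_h=0.5)
print("high power:", round(fast, 2), "h,", round(energy_kwh(fast, power_kw=0.30), 2), "kWh")
print("low  power:", round(slow, 2), "h,", round(energy_kwh(slow, power_kw=0.22), 2), "kWh")
```

    With these invented numbers the low-power run actually consumes more energy overall, mirroring the first key finding above.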

    Data Replication and Its Alignment with Fault Management in the Cloud Environment

    Nowadays, exponential data growth has become one of the major challenges all over the world. It may cause a series of negative impacts such as network overloading, high system complexity, and inadequate data security. Cloud computing has been developed as a novel paradigm to alleviate massive data processing challenges with its on-demand services and distributed architecture. Data replication has been proposed to strategically distribute the data access load across multiple cloud data centres by creating multiple data copies at multiple cloud data centres. A replica-applied cloud environment not only achieves a decrease in response time, an increase in data availability, and a more balanced resource load, but also protects the cloud environment against upcoming faults. A reactive fault tolerance strategy is still required to handle faults once they have occurred, so data replication strategies should be aligned with reactive fault tolerance strategies to achieve a complete management chain in the cloud environment. In this thesis, a data replication and fault management framework is proposed to establish decentralised, overarching management of the cloud environment. Three data replication strategies are first proposed based on this framework. A replica creation strategy is proposed to reduce the total cost by jointly considering data dependency and access frequency in the replica creation decision-making process. In addition, a cloud-map-oriented and cost-efficiency-driven replica creation strategy is proposed to achieve the optimal cost reduction per replica in the cloud environment. The local and remote data relationships are further analysed by defining two novel data dependency types, Within-DataCentre Data Dependency and Between-DataCentre Data Dependency, according to the data location. Furthermore, a network-performance-based replica selection strategy is proposed to avoid potential network overloading problems and, at the same time, increase the number of concurrently running instances.
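    The abstract does not give the cost formulas, so the sketch below only illustrates the shape of such a replica-creation decision: scoring candidate data centres by the cost reduction a new replica would bring, jointly from access frequency and the two dependency types. Weights, costs and data-centre names are invented.

```python
# Hedged sketch (weights, costs and data-centre names are invented): score
# candidate data centres by the expected cost reduction a new replica brings,
# combining access frequency with Within-DataCentre and Between-DataCentre
# data dependencies, then create the replica where the saving is largest.

def replica_saving(access_freq, within_dc_deps, between_dc_deps,
                   transfer_cost, storage_cost, colocation_weight=0.1):
    # Benefit: remote accesses and cross-data-centre dependency traffic the
    # replica would now serve locally; small bonus for co-located dependencies.
    benefit = (access_freq + between_dc_deps) * transfer_cost \
              + colocation_weight * within_dc_deps * transfer_cost
    return benefit - storage_cost      # penalty: storing one more copy

def best_replica_site(candidates):
    """candidates: dict of data-centre name -> keyword args for replica_saving."""
    return max(candidates, key=lambda dc: replica_saving(**candidates[dc]))

sites = {
    "dc-eu": dict(access_freq=120, within_dc_deps=30, between_dc_deps=40,
                  transfer_cost=0.002, storage_cost=0.05),
    "dc-us": dict(access_freq=300, within_dc_deps=10, between_dc_deps=90,
                  transfer_cost=0.002, storage_cost=0.05),
}
print(best_replica_site(sites))        # -> "dc-us" with these numbers
```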

    Proactive cloud service assurance framework for fault remediation in cloud environment

    Cloud resiliency is an important issue in the successful implementation of cloud computing systems. Handling cloud faults proactively, with a suitable remediation technique of minimum cost, is an important requirement for a fault management system. The selection of the best applicable remediation technique is a decision-making problem that considers parameters such as i) the impact of the remediation technique, ii) the overhead of the remediation technique, iii) the severity of the fault, and iv) the priority of the application. This manuscript proposes an analytical model to measure the effectiveness of a remediation technique for various categories of faults, and further demonstrates the implementation of an efficient fault remediation system using a rule-based expert system. The expert system is designed to compute a utility value for each remediation technique in a novel way and select the best remediation technique from its knowledge base. A prototype was developed for experimentation, and the results show improved availability with less overhead compared to a reactive fault management system.
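    The paper's utility computation and rule base are not given in the abstract; the sketch below only illustrates the selection step over the four parameters listed, using an invented weighting and invented technique names.

```python
# Hedged sketch of the selection step only: score each remediation technique
# for a given fault and pick the one with the highest utility. The weighting
# below is an assumption for illustration, not the paper's model.

def utility(impact, overhead, fault_severity, app_priority):
    """All inputs normalised to [0, 1]; higher impact and lower overhead are
    better, and the stakes scale with fault severity and application priority."""
    return (fault_severity * app_priority) * impact - overhead

def select_remediation(techniques, fault_severity, app_priority):
    """techniques: dict of technique name -> (impact, overhead)."""
    return max(
        techniques,
        key=lambda t: utility(*techniques[t], fault_severity, app_priority),
    )

techniques = {
    "restart_service":  (0.4, 0.1),
    "migrate_vm":       (0.7, 0.3),
    "failover_replica": (0.9, 0.6),
}
print(select_remediation(techniques, fault_severity=0.8, app_priority=0.9))
```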

    Online disturbance prediction for enhanced availability in smart grids

    A gradual move in the electric power industry towards Smart Grids brings new challenges to the system's efficiency and dependability. With growing complexity and the massive introduction of renewable generation, particularly at the distribution level, the number of faults and, consequently, of disturbances (errors and failures) is expected to increase significantly. This threatens to compromise the grid's availability, as traditional, reactive management approaches may soon become insufficient. On the other hand, with the grid's digitalization, real-time status data are becoming available. These data may be used to develop advanced management and control methods for a sustainable, more efficient and more dependable grid. A proactive management approach, based on the use of real-time data for predicting near-future disturbances and acting in anticipation of them, has already been identified by the Smart Grid community as one of the main pillars of dependability of the future grid. The work presented in this dissertation focuses on predicting disturbances in Active Distribution Networks (ADNs), the part of the Smart Grid that is evolving the most: distribution networks with a high share of (renewable) distributed generation and with systems in place for real-time monitoring and control. Our main goal is to develop a methodology for proactive network management, in the sense of proactive mitigation of disturbances, and to design and implement a method for their prediction. We focus on predicting voltage sags, as they are identified as one of the most frequent and severe disturbances in distribution networks. We address Smart Grid dependability in a holistic manner by considering its cyber and physical aspects. As a result, we identify Smart Grid dependability properties and develop a taxonomy of faults that contributes to a better understanding of the overall dependability of the future grid. As the process of the grid's digitalization is still ongoing, there is a general lack of data on the grid's status, and especially of disturbance-related data, which are necessary to design an accurate disturbance predictor. To overcome this obstacle we introduce the concept of fault injection into the simulation of power systems and develop a framework to simulate the behavior of distribution networks in the presence of faults and of fluctuating generation and load that, alone or combined, may cause disturbances. With this framework we generate a large data set that we use to develop and evaluate a voltage-sag disturbance predictor. To quantify how prediction and proactive mitigation of disturbances enhance availability, we create an availability model of proactive management. The model is generic and may be applied to evaluate the effect of proactive management on availability in other types of systems, and adapted to quantify other properties as well. We also design a metric and a method for optimizing failure prediction to maximize availability under the proactive approach. We conclude that the level of availability improvement with the proactive approach is comparable to that obtained with high-reliability, costly components. Following the results of a case study conducted for a 14-bus ADN, the grid's availability may be improved by up to an order of magnitude if disturbances are managed proactively instead of reactively.
    The main results and contributions may be summarized as follows: (i) a taxonomy of faults in the Smart Grid has been developed; (ii) a methodology and methods for proactive management of disturbances have been proposed; (iii) a model to quantify availability with proactive management has been developed; (iv) a simulation and fault-injection framework has been designed and implemented to generate disturbance-related data; (v) in the scope of a case study, a voltage-sag predictor based on machine-learning classification algorithms has been designed, and the effect of proactive disturbance management on downtime and availability has been quantified.
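    As a rough, hedged illustration of how proactive mitigation enters an availability figure (this is my own back-of-the-envelope simplification, not the dissertation's availability model; all numbers are invented):

```python
# Steady-state availability when a fraction (recall) of disturbances is
# predicted and mitigated proactively: predicted ones cost only a short
# proactive action, missed ones the full reactive repair time.

def availability(mtbf_h, reactive_repair_h, proactive_action_h, recall):
    mean_downtime = recall * proactive_action_h + (1 - recall) * reactive_repair_h
    return mtbf_h / (mtbf_h + mean_downtime)

reactive  = availability(mtbf_h=500, reactive_repair_h=2.0, proactive_action_h=0.1, recall=0.0)
proactive = availability(mtbf_h=500, reactive_repair_h=2.0, proactive_action_h=0.1, recall=0.9)
print(f"unavailability drops from {1 - reactive:.2e} to {1 - proactive:.2e}")
```

    With these invented numbers the unavailability drops by roughly a factor of seven, in the same ballpark as the order-of-magnitude improvement reported for the 14-bus case study.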

    Opportunities of IoT in Fog Computing for High Fault Tolerance and Sustainable Energy Optimization

    Today, the importance of enhanced quality of service and energy optimization has promoted research into sensor applications such as pervasive health monitoring and distributed computing. In general, the resulting sensor data are stored on a cloud server for future processing. For this purpose, the use of fog computing has recently emerged from a real-world perspective, utilizing end-user nodes and neighboring edge devices to perform computation and communication. This paper aims to develop a quality-of-service-based energy optimization (QoS-EO) scheme for wireless sensor environments deployed in fog computing. Fog nodes deployed in specific geographical areas cover the sensor activity performed in those areas and report the logical status of the entire system. The implemented techniques enable services in a fog-collaborated WSN environment; thus, the proposed scheme performs quality-of-service placement and optimizes network energy. The results show a maximum turnaround time of 8 ms, a minimum turnaround time of 1 ms, and an average turnaround time of 3 ms. The calculated costs indicate that as the number of iterations increases, the path cost value decreases, demonstrating the efficacy of the proposed technique. The CPU execution delay was reduced to a minimum of 0.06 s. In comparison, the proposed QoS-EO scheme has a lower network usage of 611,643.3 and a lower execution cost of 83,142.2. Thus, the results show the best cost estimation, reliability, and performance of data transfer in a short time, with a high level of network availability, throughput, and performance guarantee.
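    The paper's placement algorithm is not detailed in the abstract; the sketch below shows one plausible shape of a QoS-aware placement decision, with invented node names, weights and a cloud fallback, and is not the QoS-EO scheme itself.

```python
# Illustrative sketch only: place a sensor's workload on the fog node that
# minimises a combined turnaround-time and energy cost, falling back to the
# cloud if no fog node can meet the QoS deadline.

def placement_cost(latency_ms, energy_mj, w_latency=1.0, w_energy=0.5):
    return w_latency * latency_ms + w_energy * energy_mj

def place_task(nodes, deadline_ms):
    """nodes: dict of node name -> (latency_ms, energy_mj). Returns the chosen node."""
    feasible = {n: v for n, v in nodes.items() if v[0] <= deadline_ms}
    if not feasible:
        return "cloud"            # no fog node meets the deadline
    return min(feasible, key=lambda n: placement_cost(*feasible[n]))

fog_nodes = {"fog-1": (3, 12.0), "fog-2": (8, 6.0), "fog-3": (1, 20.0)}
print(place_task(fog_nodes, deadline_ms=5))   # -> "fog-1" with these numbers
```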

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled 'Resiliency in Numerical Algorithm Design for Extreme Scale Simulations' held March 1–6, 2020, at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds at the cost of an enormous amount of resources and expense. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve system- or application-level checkpointing and rollback strategies for the case in which an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically. Peer reviewed. Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jézéquel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortí, Francesco Rizzi, Ulrich Rüde, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thönnes, Andreas Wagner and Barbara Wohlmuth. Postprint (author's final draft).
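    To make the checkpointing argument concrete, the sketch below uses Young's classic approximation for the optimal checkpoint interval (a standard estimate, not something taken from the seminar report); the checkpoint-write time and MTBF figures are illustrative assumptions.

```python
# Rough sketch: optimal checkpoint interval via Young's approximation and the
# resulting runtime overhead for exascale-like parameters (all figures invented).
import math

def young_interval_h(ckpt_write_h, mtbf_h):
    # Optimal interval between checkpoints ~ sqrt(2 * C * MTBF)
    return math.sqrt(2 * ckpt_write_h * mtbf_h)

def overhead_fraction(ckpt_write_h, mtbf_h, restart_h):
    t = young_interval_h(ckpt_write_h, mtbf_h)
    ckpt_cost = ckpt_write_h / t                 # time spent writing checkpoints
    rework = (t / 2 + restart_h) / mtbf_h        # expected loss per failure
    return ckpt_cost + rework

# If writing tens of PB of checkpoint data takes ~0.5 h and the system MTBF
# falls towards ~1 h, the overhead approaches or exceeds 100% and the
# application stops making progress, as the abstract warns.
for mtbf in (24.0, 4.0, 1.0):
    print(f"MTBF {mtbf:4.1f} h -> overhead {overhead_fraction(0.5, mtbf, 0.5):.0%}")
```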