
    Fault aware task scheduling in cloud using min-min and DBSCAN

    Cloud computing manages computing resources globally, using them more efficiently than individual resource services can. Resources must be delivered in a heterogeneous and highly dynamic environment, so there is always a risk of resource allocation failure that can increase the delay in task execution. Such failures also raise quality of service (QoS) concerns. Resource management for cloud applications and services poses significant challenges; many researchers have proposed solutions, but there is room for improvement. Clustering the resources and mapping them to tasks is one way to deal with task failures and mismanaged resource allocation. Density-based spatial clustering of applications with noise (DBSCAN) is a density-based clustering algorithm capable of grouping the resources in a cloud environment. The proposed algorithm favours powerful, high-throughput data centers with the lowest fault probability during resource allocation, which reduces the probability of faults and increases fault tolerance. The simulation is done using the CloudSim 5.0 toolkit. The results show a 25% average improvement in execution time, a 6.5% improvement in the number of tasks completed, and a 3.48% improvement in the number of tasks failed, compared with ACO, PSO, BB-BC (Big Bang-Big Crunch), and WHO (Whale Optimization Algorithm).
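    The abstract does not define the algorithms in detail, so the following is only a rough Python sketch of the two ingredients it names, with hypothetical host and task representations: DBSCAN groups hosts by capacity and fault probability so that allocation can prefer the lowest-fault cluster, and min-min then assigns each task to the host giving its smallest completion time.

```python
# A rough sketch (hypothetical host/task representations, not the
# paper's code): DBSCAN clusters hosts by capacity and fault
# probability; min-min then schedules tasks onto the chosen hosts.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def low_fault_hosts(hosts, eps=0.6, min_samples=2):
    """hosts: list of (capacity_mips, fault_probability) pairs.
    Returns indices of the DBSCAN cluster with the lowest mean
    fault probability; noise points (label -1) are excluded."""
    X = np.asarray(hosts, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        StandardScaler().fit_transform(X))
    clusters = {c for c in labels if c != -1}
    if not clusters:                     # no dense cluster found
        return list(range(len(hosts)))   # fall back to all hosts
    best = min(clusters, key=lambda c: X[labels == c, 1].mean())
    return list(np.flatnonzero(labels == best))

def min_min(task_lengths, host_mips):
    """Classic min-min: repeatedly pick the (task, host) pair with
    the smallest completion time and assign it."""
    ready = [0.0] * len(host_mips)       # time at which each host is free
    schedule, pending = {}, set(range(len(task_lengths)))
    while pending:
        t, h = min(((t, h) for t in pending
                    for h in range(len(host_mips))),
                   key=lambda p: ready[p[1]] + task_lengths[p[0]] / host_mips[p[1]])
        ready[h] += task_lengths[t] / host_mips[h]
        schedule[t] = h                  # task t runs on host h
        pending.remove(t)
    return schedule

hosts = [(4000, 0.02), (3900, 0.03), (1500, 0.20), (1400, 0.22)]
chosen = low_fault_hosts(hosts)          # -> the low-fault pair
mips = [hosts[i][0] for i in chosen]
print(min_min([8000, 2000, 12000], mips))
```

    Restricting min-min to the low-fault cluster is what gives the fault awareness: tasks never land on hosts whose fault probability dragged them into a worse cluster or into DBSCAN noise.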

    When energy trading meets blockchain in electrical power system: The state of the art

    With the rapid growth of renewable energy resources, energy trading has been shifting from a centralized to a distributed manner. Blockchain, as a distributed public ledger technology, has been widely adopted in the design of new energy trading schemes. However, blockchain-based energy trading faces many challenging issues, e.g., low efficiency, high transaction cost, and security and privacy concerns. To tackle these challenges, many solutions have been proposed. In this survey, blockchain-based energy trading in the electrical power system is thoroughly investigated. Firstly, the challenges in blockchain-based energy trading are identified and summarized. Then, the existing energy trading schemes are studied and classified into three categories based on their main focus: energy transaction, consensus mechanism, and system optimization. Blockchain-based energy trading has become a popular research topic; new blockchain architectures, models, and products continually emerge to overcome the limitations of existing solutions, forming a virtuous circle. Combining different blockchain types with one another, and combining blockchain with other technologies, improves blockchain-based energy trading systems so that they better satisfy the practical requirements of modern power systems. However, some problems remain to be solved, for example, the lack of a regulatory system and environmental challenges. In the future, we will strive for a better optimized structure and establish a comprehensive security assessment model for blockchain-based energy trading systems. This research was funded by the Beijing Natural Science Foundation (grant number 4182060).
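    As a toy illustration of the ledger structure these schemes build on (not any specific scheme from the survey), the sketch below records peer-to-peer energy trades in hash-chained blocks, which is what makes the trading history tamper-evident.

```python
# Toy hash-chained ledger of energy trades; all field names are
# illustrative, not taken from any surveyed scheme.
import hashlib, json, time

def block_hash(block):
    # Hash the block's canonical (key-sorted) JSON encoding.
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(prev, trades):
    """trades: list of dicts like {'seller': ..., 'buyer': ...,
    'kwh': ..., 'price': ...}. prev is the previous block or None."""
    return {
        'index': 0 if prev is None else prev['index'] + 1,
        'timestamp': time.time(),
        'trades': trades,
        'prev_hash': '0' * 64 if prev is None else block_hash(prev),
    }

genesis = new_block(None, [])
b1 = new_block(genesis, [{'seller': 'prosumer_A', 'buyer': 'consumer_B',
                          'kwh': 5.0, 'price': 0.12}])
# Altering any recorded trade would change genesis's hash and break
# this link, which is how the ledger exposes tampering.
assert b1['prev_hash'] == block_hash(genesis)
```

    The surveyed design space then varies around this core: who appends blocks (the consensus mechanism), what a transaction carries (the energy transaction model), and how throughput and cost are tuned (system optimization).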

    Resource boxing: Converting realistic cloud task utilization patterns for theoretical scheduling

    Scheduling is a core component within distributed systems, determining the optimal allocation of tasks to servers. This is challenging within modern Cloud computing systems, which comprise millions of tasks executing across thousands of heterogeneous servers. Theoretical scheduling can provide complete and sophisticated algorithms for a single objective function. However, Cloud computing systems pursue multiple and often conflicting objectives to provision high levels of performance, availability, reliability, and energy efficiency. As a result, theoretical scheduling for Cloud computing relies on simplifying assumptions for applicability. This is especially true for task utilization patterns, which fluctuate in practice yet are modelled as piecewise constant in theoretical scheduling models. While there is prior work on modelling dynamic Cloud task patterns for evaluating applied scheduling, such models are incompatible with the inputs needed for theoretical scheduling, which requires such patterns to be represented as boxes. Presently there exist no methods capable of accurately converting real task patterns derived from empirical data into boxes. This leaves a significant gap that prevents theoreticians from understanding and proposing algorithms derived from realistic assumptions for enhanced Cloud scheduling. This work proposes resource boxing: an approach for the automated conversion of realistic task patterns in Cloud computing directly into box inputs for theoretical scheduling. We propose four resource conversion algorithms capable of accurately representing real task utilization patterns in the form of scheduling boxes. The algorithms were evaluated using production Cloud trace data, demonstrating a difference between real utilization and scheduling boxes of less than 5%. We also show how resource boxing can be exploited to directly translate research from the applied community into the theoretical community.
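    The four conversion algorithms are not defined in the abstract, so the sketch below only shows the general idea under simple assumed rules: collapse a fluctuating utilization trace into one constant-height box of the same duration, and measure how far the box deviates from the real trace.

```python
# Minimal sketch of "boxing" a utilization trace; the two conversion
# rules here (peak and mean) are assumptions for illustration, not
# the paper's four algorithms.
import numpy as np

def box_max(trace):
    """Peak box: never under-provisions, but may waste capacity."""
    return float(np.max(trace))

def box_mean(trace):
    """Mean box: preserves the total resource-time area exactly."""
    return float(np.mean(trace))

def boxing_error(trace, height):
    """Relative area difference between the trace and its box."""
    trace = np.asarray(trace, dtype=float)
    return abs(trace.sum() - height * len(trace)) / trace.sum()

# Example: a bursty CPU-utilization trace sampled at fixed intervals.
trace = [0.20, 0.35, 0.90, 0.40, 0.30, 0.25]
print(boxing_error(trace, box_mean(trace)))  # 0.0 by construction
print(boxing_error(trace, box_max(trace)))   # over-provisioning cost
```

    The reported sub-5% gap suggests the paper's conversions land between these two extremes, trading a little over-provisioning for fidelity to the real trace.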

    Dependable mapreduce in a cloud-of-clouds

    Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2017. MapReduce is a simple and elegant programming model suitable for loosely coupled parallelization problems: problems that can be decomposed into subproblems. Hadoop MapReduce has become the most popular framework for performing large-scale computation on off-the-shelf clusters, and it is widely used to process these problems in a parallel and distributed fashion. This framework is highly scalable, deals efficiently with large volumes of unstructured data, and serves as a platform for many other applications. However, the framework has limitations concerning dependability. Namely, it is prepared only to tolerate crash faults, by re-executing tasks in case of failure, and to detect file corruption using file checksums. Unfortunately, there is evidence that arbitrary faults do occur and can affect the correctness of MapReduce executions. Although such Byzantine faults are considered rare, some MapReduce applications are critical and cannot tolerate faults of this type. Furthermore, typical MapReduce implementations are constrained to a single cloud environment. This is a problem, as there is increasing evidence of outages in major cloud offerings, raising concerns about depending on a single cloud. In this thesis, I propose techniques to improve the dependability of MapReduce systems. The proposed solutions allow MapReduce to scale out computations to a multi-cloud environment, or cloud-of-clouds, to tolerate arbitrary and malicious faults as well as cloud outages. The proposals have three important properties: they increase the dependability of MapReduce by tolerating the faults mentioned above; they require minimal or no modifications to users' applications; and they achieve this increased level of fault tolerance at reasonable cost. To achieve these goals, I introduce three key ideas: minimizing the required replication; applying context-based job scheduling based on cloud and network conditions; and performing fine-grained replication. I evaluated all proposed solutions in real testbed environments running typical MapReduce applications. The results demonstrate interesting trade-offs between resilience and performance when compared to traditional methods. The fundamental conclusion is that the cost introduced by our solutions is small, and thus acceptable for many critical applications.
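    The thesis mechanisms are only outlined in the abstract; the sketch below illustrates one of them in hedged form, minimizing the required replication against arbitrary faults: run f+1 replicas of a task (ideally on distinct clouds), accept the output once f+1 digests agree, and launch extra replicas, up to 2f+1, only on disagreement. All names are illustrative.

```python
# Hedged sketch of voting over replicated task outputs to tolerate
# up to f arbitrary (Byzantine) faults with minimal replication.
import hashlib
from collections import Counter

def digest(output: bytes) -> str:
    return hashlib.sha256(output).hexdigest()

def run_with_voting(run_replica, f=1):
    """run_replica(i) executes replica i of a task, ideally on a
    distinct cloud, and returns its raw output bytes."""
    outputs = [run_replica(i) for i in range(f + 1)]
    votes = Counter(digest(o) for o in outputs)
    i = f + 1
    # Launch extra replicas only until some digest has f+1 votes.
    while votes.most_common(1)[0][1] < f + 1 and i < 2 * f + 1:
        o = run_replica(i)
        outputs.append(o)
        votes[digest(o)] += 1
        i += 1
    winner, count = votes.most_common(1)[0]
    if count < f + 1:
        raise RuntimeError("no f+1 agreement; task must be re-executed")
    return next(o for o in outputs if digest(o) == winner)
```

    In the fault-free common case this costs only f+1 executions rather than the 2f+1 of eager replication, which is consistent with the thesis's conclusion that the added cost is small.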