31 research outputs found

    Novel Load Balancing Optimization Algorithm to Improve Quality-of-Service in Cloud Environment

    Scheduling cloud resources involves allocating cloud assets to cloud tasks. Scheduling outcomes can be improved by treating Quality of Service (QoS) factors as essential constraints; however, efficient scheduling also requires optimizing the QoS parameters themselves, and few resource scheduling algorithms in the available literature do so. The primary objective of this paper is to provide an effective method for deploying workloads to cloud infrastructure. To ensure that workloads are executed efficiently on the available resources, a resource scheduling method based on particle swarm optimization was developed, and its performance was measured in a cloud environment. The experimental results demonstrate that the proposed approach improves the considered QoS parameters, as assessed across several algorithm-performance metrics.
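    The abstract does not specify how the particle swarm encodes schedules, so the following is only a minimal sketch of one common formulation: each particle position assigns every task to a VM, and the fitness is the resulting makespan. The task lengths, VM speeds, and PSO parameters below are illustrative assumptions, not values from the paper.

        import random

        # Toy workload: task lengths (million instructions) and VM speeds (MIPS).
        # These values, and the makespan fitness below, are illustrative assumptions.
        TASKS = [400, 250, 900, 120, 600, 300]
        VMS = [100, 150, 200]

        def makespan(assign):
            # Fitness: finishing time of the busiest VM under a task -> VM mapping.
            load = [0.0] * len(VMS)
            for task, vm in enumerate(assign):
                load[vm] += TASKS[task] / VMS[vm]
            return max(load)

        def decode(position):
            # Clamp each continuous coordinate to a valid VM index.
            return [min(len(VMS) - 1, max(0, int(round(x)))) for x in position]

        def pso_schedule(particles=20, iters=100, w=0.6, c1=1.5, c2=1.5):
            dim = len(TASKS)
            pos = [[random.uniform(0, len(VMS) - 1) for _ in range(dim)]
                   for _ in range(particles)]
            vel = [[0.0] * dim for _ in range(particles)]
            pbest = [p[:] for p in pos]
            pbest_fit = [makespan(decode(p)) for p in pos]
            gbest_fit = min(pbest_fit)
            gbest = pbest[pbest_fit.index(gbest_fit)][:]
            for _ in range(iters):
                for i in range(particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    fit = makespan(decode(pos[i]))
                    if fit < pbest_fit[i]:
                        pbest_fit[i], pbest[i] = fit, pos[i][:]
                        if fit < gbest_fit:
                            gbest_fit, gbest = fit, pos[i][:]
            return decode(gbest), gbest_fit

        mapping, best = pso_schedule()
        print("task -> VM mapping:", mapping, "makespan:", round(best, 2))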

    Toward Bio-Inspired Auto-Scaling Algorithms: An Elasticity Approach for Container Orchestration Platforms

    (c) 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    The wide adoption of microservices architectures has introduced an unprecedented granularisation of computing that requires the coordinated execution of multiple containers with diverse lifetimes and potentially different auto-scaling requirements. These applications are managed by means of container orchestration platforms, and existing centralised approaches to auto-scaling face challenges when used for the timely adaptation of the elasticity required by the different application components. This paper studies the impact of integrating bio-inspired approaches for dynamic, distributed auto-scaling into container orchestration platforms. With a focus on running self-managed containers, we compare alternative configuration options for the container life cycle. The performance of the proposed models is validated through simulations subjected to both synthetic and real-world workloads. Multiple scaling options are also assessed with the purpose of identifying exceptional cases and areas for improvement. Furthermore, a nontraditional metric for scaling measurement is introduced to substitute classic analytical approaches. We found connections between two related worlds (biological systems and software container elasticity procedures) and open a new research area in software containers featuring potentially self-guided container elasticity activities.

    This work was supported by the Ministerio de Economía, Industria y Competitividad, Spanish Government, for the project BigCLOE under Grant TIN2016-79951-R. Herrera, J.; Moltó, G. (2020). Toward Bio-Inspired Auto-Scaling Algorithms: An Elasticity Approach for Container Orchestration Platforms. IEEE Access, 8:52139-52150. https://doi.org/10.1109/ACCESS.2020.2980852
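    The paper's concrete bio-inspired rules are not given in the abstract, so the sketch below only illustrates the general idea of a self-managed container that decides locally whether to replicate or terminate, loosely analogous to cell division and apoptosis. The class name, thresholds, and synthetic load model are assumptions made for illustration.

        import random

        class SelfManagedContainer:
            """A container replica that makes its own scaling decision from local load."""

            def __init__(self, scale_up=0.8, scale_down=0.2):
                self.scale_up = scale_up      # local utilisation above which we replicate
                self.scale_down = scale_down  # local utilisation below which we terminate

            def decide(self, utilisation):
                if utilisation > self.scale_up:
                    return "replicate"
                if utilisation < self.scale_down:
                    return "terminate"
                return "keep"

        def simulate(steps=10):
            containers = [SelfManagedContainer() for _ in range(3)]
            for step in range(steps):
                total_load = random.uniform(0.5, 3.0)           # synthetic workload (assumed)
                share = min(total_load / len(containers), 1.0)  # load seen by each replica
                survivors = []
                for c in containers:
                    action = c.decide(share)
                    if action == "replicate":
                        survivors += [c, SelfManagedContainer()]  # "cell division"
                    elif action == "keep":
                        survivors.append(c)
                    # on "terminate" the replica simply removes itself ("apoptosis")
                containers = survivors or [SelfManagedContainer()]  # never scale to zero
                print(f"step {step}: load={total_load:.2f} replicas={len(containers)}")

        simulate()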

    Improving Data-sharing and Policy Compliance in a Hybrid Cloud: The Case of a Healthcare Provider


    Designing a scalable dynamic load-balancing algorithm for pipelined single program multiple data applications on a non-dedicated heterogeneous network of workstations

    Dynamic load balancing strategies have been shown to be the most critical part of an efficient implementation of various applications on large distributed computing systems. The need for dynamic load balancing strategies increases when the underlying hardware is a non-dedicated heterogeneous network of workstations (HNOW). This research focuses on the single program multiple data (SPMD) programming model, as it has been used extensively in parallel programming for its simplicity and its scalability in terms of computational power and memory size.

    This dissertation formally defines and addresses the problem of designing a scalable dynamic load-balancing algorithm for pipelined SPMD applications on a non-dedicated HNOW. In the process, the HNOW parameters, SPMD application characteristics, and load-balancing performance parameters are identified.

    The dissertation presents a taxonomy that categorizes general load-balancing algorithms and a methodology that facilitates creating new algorithms that can harness the HNOW computing power while preserving the scalability of the SPMD application.

    The dissertation devises a new algorithm, DLAH (Dynamic Load-balancing Algorithm for HNOW). DLAH is based on a modified diffusion technique that incorporates the HNOW parameters, and an analytical performance bound for the worst-case scenario of the diffusion technique has been derived.

    The dissertation also develops and uses an HNOW simulation model to conduct extensive simulations, which were used to validate DLAH and compare its performance with related dynamic algorithms. The simulation results show that DLAH is scalable and performs well on both homogeneous and heterogeneous networks. A detailed sensitivity analysis was conducted to study the effects of key parameters on performance.
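    DLAH's modified diffusion rules are not given in the abstract; the following is only a minimal sketch of a generic speed-weighted diffusion step, in which each workstation trades work with its neighbours in proportion to their normalised load difference. The topology, speeds, and diffusion coefficient are assumed values, not DLAH's actual parameters.

        def diffusion_step(load, speed, edges, alpha=0.1):
            """One synchronous diffusion iteration over undirected workstation links.

            load[i]  -- work units currently on node i
            speed[i] -- relative processing speed of node i (heterogeneity)
            edges    -- undirected links (i, j) of the workstation network
            alpha    -- diffusion coefficient: fraction of the imbalance moved per step
            """
            new_load = load[:]
            for i, j in edges:
                # Normalise load by speed so faster workstations end up holding more work.
                imbalance = load[i] / speed[i] - load[j] / speed[j]
                transfer = alpha * imbalance          # positive: work flows from i to j
                new_load[i] -= transfer
                new_load[j] += transfer
            return new_load

        # Toy 4-node ring with heterogeneous speeds (assumed values).
        load = [100.0, 20.0, 60.0, 10.0]
        speed = [1.0, 2.0, 1.0, 0.5]
        edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

        for _ in range(100):
            load = diffusion_step(load, speed, edges)

        # Loads converge so that load[i] / speed[i] is (roughly) equal across nodes.
        print([round(x, 1) for x in load])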

    Improving software middleboxes and datacenter task schedulers

    Over the last decades, shared systems have contributed to the popularity of many technologies. From operating systems to the Internet, they have brought significant cost savings by allowing the underlying infrastructure to be shared. A common challenge in these systems is to ensure that resources are fairly divided without compromising utilization efficiency. In this thesis, we look at problems in two shared systems (software middleboxes and datacenter task schedulers) and propose ways of improving both efficiency and fairness. We begin by presenting Sprayer, a system that uses packet spraying to load balance packets across cores in software middleboxes. Sprayer eliminates the imbalance problems of per-flow solutions and addresses the new challenges of handling shared flow state that come with packet spraying. We show that Sprayer significantly improves fairness and seamlessly uses the entire capacity, even when there is a single flow in the system. After that, we present Stateful Dominant Resource Fairness (SDRF), a task scheduling policy for datacenters that looks at past allocations and enforces fairness in the long run. We prove that SDRF keeps the fundamental properties of DRF (the allocation policy it is built on) while benefiting users with lower usage. To implement SDRF efficiently, we also introduce the live tree, a general-purpose data structure that keeps elements with predictable time-varying priorities sorted. Our trace-driven simulations indicate that SDRF reduces users' waiting time on average. This improves fairness by increasing the number of completed tasks for users with lower demands, with small impact on high-demand users.
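    SDRF itself is only described here as building on DRF, so the sketch below illustrates plain Dominant Resource Fairness via progressive filling; SDRF's discounting of past usage and the live tree data structure are omitted, and the capacities and per-task demands are assumed toy values (mirroring the classic 9-CPU / 18-GB DRF example).

        CAPACITY = {"cpu": 9.0, "mem": 18.0}
        DEMANDS = {
            "user_a": {"cpu": 1.0, "mem": 4.0},
            "user_b": {"cpu": 3.0, "mem": 1.0},
        }

        def dominant_share(allocated):
            # A user's dominant share: max over resources of allocated / capacity.
            return max(allocated[r] / CAPACITY[r] for r in CAPACITY)

        def drf_allocate():
            allocated = {u: {r: 0.0 for r in CAPACITY} for u in DEMANDS}
            used = {r: 0.0 for r in CAPACITY}
            tasks = {u: 0 for u in DEMANDS}
            while True:
                # Progressive filling: give the next task to the user whose
                # dominant share is currently the smallest.
                user = min(DEMANDS, key=lambda u: dominant_share(allocated[u]))
                demand = DEMANDS[user]
                if any(used[r] + demand[r] > CAPACITY[r] for r in CAPACITY):
                    break  # that user's next task no longer fits: stop filling
                for r in CAPACITY:
                    used[r] += demand[r]
                    allocated[user][r] += demand[r]
                tasks[user] += 1
            return tasks

        print(drf_allocate())  # -> {'user_a': 3, 'user_b': 2}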

    Proceedings of the Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015), Krakow, Poland

    Proceedings of: Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015). Krakow (Poland), September 10-11, 2015

    Acta Cybernetica: Volume 12, Number 1.


    Helmholtz Portfolio Theme Large-Scale Data Management and Analysis (LSDMA)

    The Helmholtz Association funded the "Large-Scale Data Management and Analysis" portfolio theme from 2012 to 2016. Four Helmholtz centres, six universities, and one further research institution in Germany joined forces to enable data-intensive science by optimising data life cycles in selected scientific communities. In our Data Life Cycle Labs, data experts performed joint R&D together with the scientific communities. The Data Services Integration Team focused on generic solutions applied by several communities.