
    Optimized Contract-based Model for Resource Allocation in Federated Geo-distributed Clouds

    In the era of Big Data, with data growing massively in scale and velocity, cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual Cloud Service Providers (CSPs) raises new challenges in terms of effective global resource sharing and management of autonomously controlled individual datacenter resources towards a globally efficient resource allocation model. Earlier solutions for geo-distributed clouds have focused primarily on achieving global efficiency in resource sharing; although they try to maximize global resource allocation, they result in significant inefficiencies in local resource allocation for individual datacenters and individual cloud providers, leading to unfairness in the revenue and profit earned. In this paper, we propose a new contracts-based resource sharing model for federated geo-distributed clouds that allows CSPs to establish resource sharing contracts with individual datacenters a priori for defined time intervals during a 24-hour period. Based on the established contracts, individual CSPs employ a contract cost- and duration-aware job scheduling and provisioning algorithm that enables jobs to complete and meet their response time requirements while achieving both global resource allocation efficiency and local fairness in the profit earned. The proposed techniques are evaluated through extensive experiments using realistic workloads generated from the SHARCNET cluster trace. The experiments demonstrate the effectiveness, scalability and resource sharing fairness of the proposed model.
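    The abstract describes the scheduling idea only at a high level; below is a minimal sketch of what contract cost- and duration-aware placement could look like, assuming hypothetical Contract and Job structures and a simple per-core-hour cost model rather than the paper's actual algorithm.

```python
# Minimal sketch of contract cost- and duration-aware placement (illustrative only;
# Contract, Job and the cost model are assumptions, not the paper's implementation).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Contract:
    datacenter: str
    start_hour: int          # contract window within a 24-hour day
    end_hour: int
    price_per_core_hour: float
    free_cores: int

@dataclass
class Job:
    job_id: str
    cores: int
    arrival_hour: int
    runtime_hours: int
    deadline_hour: int       # response-time requirement expressed as an hour of day

def covers(c: Contract, job: Job) -> bool:
    """A contract is usable if its window covers the job's execution interval,
    the job finishes by its deadline, and enough cores remain under the contract."""
    finish = job.arrival_hour + job.runtime_hours
    return (c.start_hour <= job.arrival_hour and finish <= c.end_hour
            and finish <= job.deadline_hour and c.free_cores >= job.cores)

def place_job(job: Job, contracts: List[Contract]) -> Optional[Contract]:
    """Pick the cheapest a-priori contract that can meet the job's deadline;
    return None to signal a fallback to on-demand (non-contract) resources."""
    usable = [c for c in contracts if covers(c, job)]
    if not usable:
        return None
    best = min(usable, key=lambda c: c.price_per_core_hour * job.cores * job.runtime_hours)
    best.free_cores -= job.cores          # reserve capacity under the contract
    return best

if __name__ == "__main__":
    contracts = [Contract("dc-east", 0, 12, 0.05, 64), Contract("dc-west", 6, 24, 0.03, 32)]
    job = Job("j1", cores=16, arrival_hour=7, runtime_hours=3, deadline_hour=11)
    chosen = place_job(job, contracts)
    print(chosen.datacenter if chosen else "on-demand fallback")
```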

    Zenith: Utility-Aware Resource Allocation for Edge Computing

    In the Internet of Things (IoT) era, the demand for low-latency computing for time-sensitive applications (e.g., location-based augmented reality games, real-time smart grid management, real-time navigation using wearables) has been growing rapidly. Edge computing provides an additional layer of infrastructure to fill latency gaps between IoT devices and the back-end computing infrastructure. In the edge computing model, small-scale micro-datacenters that represent an ad-hoc and distributed collection of computing infrastructure pose new challenges in terms of management and effective resource sharing to achieve a globally efficient resource allocation. In this paper, we propose Zenith, a novel model for allocating computing resources in an edge computing platform that allows service providers to establish resource sharing contracts with edge infrastructure providers a priori. Based on the established contracts, service providers employ a latency-aware scheduling and resource provisioning algorithm that enables tasks to complete within their latency requirements. The proposed techniques are evaluated through extensive experiments that demonstrate the effectiveness, scalability and performance efficiency of the proposed model.
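    Zenith's scheduler is not specified in the abstract; the following is a rough illustration of latency-aware task placement over contracted edge capacity. The EdgeNode and Task fields and the additive latency estimate are assumptions made for the example, not Zenith's actual algorithm.

```python
# Illustrative latency-aware placement sketch (not Zenith's actual algorithm).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EdgeNode:
    name: str
    network_latency_ms: float     # round-trip latency from the requesting device
    contracted_slots: int         # capacity bought from the edge infrastructure provider
    service_time_ms: float        # estimated processing time per task on this node

@dataclass
class Task:
    task_id: str
    latency_budget_ms: float      # end-to-end latency requirement

def schedule(task: Task, nodes: List[EdgeNode]) -> Optional[EdgeNode]:
    """Choose the feasible node with the lowest estimated end-to-end latency;
    return None if no contracted node can meet the task's budget."""
    feasible = [n for n in nodes
                if n.contracted_slots > 0
                and n.network_latency_ms + n.service_time_ms <= task.latency_budget_ms]
    if not feasible:
        return None
    best = min(feasible, key=lambda n: n.network_latency_ms + n.service_time_ms)
    best.contracted_slots -= 1    # consume one contracted slot on the chosen node
    return best

if __name__ == "__main__":
    nodes = [EdgeNode("micro-dc-a", 5.0, 4, 12.0), EdgeNode("micro-dc-b", 2.0, 0, 10.0)]
    task = Task("t1", latency_budget_ms=20.0)
    target = schedule(task, nodes)
    print(target.name if target else "reject / offload to back-end cloud")
```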

    Holistic Resource Management for Sustainable and Reliable Cloud Computing: An Innovative Solution to Global Challenge

    Minimizing the energy consumption of servers within cloud computing systems is of utmost importance to cloud providers for reducing operational costs and enhancing service sustainability by consolidating services onto fewer active servers. At the same time, providers must provision high levels of availability and reliability, so cloud services are frequently replicated across servers, which in turn increases server energy consumption and resource overhead. These two objectives present a potential conflict in cloud resource management decision making, which must balance service consolidation against replication to minimize energy consumption while maximizing server availability and reliability. In this paper, we propose a cuckoo optimization-based energy-reliability aware resource scheduling technique (CRUZE) for holistic management of cloud computing resources including servers, networks, storage, and cooling systems. CRUZE clusters and executes heterogeneous workloads on provisioned cloud resources, enhancing energy efficiency and reducing the carbon footprint of datacenters without adversely affecting cloud service reliability. We evaluate the effectiveness of CRUZE against existing state-of-the-art solutions using the CloudSim toolkit. Results indicate that our proposed technique reduces energy consumption by 20.1% while improving reliability and CPU utilization by 17.1% and 15.7% respectively, without affecting other Quality of Service parameters.
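    To make the consolidation-versus-replication tension concrete, here is an illustrative fitness function that a metaheuristic such as cuckoo search could use to score candidate placements: consolidation onto fewer active hosts lowers the energy term, while co-locating a service's replicas on one host is penalized because they would fail together. The energy and reliability terms and their weighting are assumptions for the example, not CRUZE's actual objective.

```python
# Illustrative energy- and reliability-aware fitness for a candidate VM placement
# (a weighted sketch, not CRUZE's objective or its cuckoo-search implementation).
from typing import Dict, List

def energy(placement: Dict[str, str], host_idle_w: Dict[str, float],
           vm_load_w: Dict[str, float]) -> float:
    """Idle power of every active host plus the load-dependent power of all VMs."""
    active_hosts = set(placement.values())
    return sum(host_idle_w[h] for h in active_hosts) + sum(vm_load_w[v] for v in placement)

def reliability(placement: Dict[str, str], replicas: Dict[str, List[str]]) -> float:
    """Fraction of services whose replicas are spread over more than one host
    (replicas sharing a host fail together, so they add no availability)."""
    ok = sum(1 for vms in replicas.values() if len({placement[v] for v in vms}) > 1)
    return ok / len(replicas) if replicas else 1.0

def fitness(placement, host_idle_w, vm_load_w, replicas, alpha=0.5):
    """Lower is better: alpha trades off energy against the reliability penalty."""
    e = energy(placement, host_idle_w, vm_load_w)
    r = reliability(placement, replicas)
    return alpha * e + (1.0 - alpha) * (1.0 - r) * 1000.0   # penalty scale is arbitrary

if __name__ == "__main__":
    placement = {"svc1-a": "h1", "svc1-b": "h2", "svc2-a": "h1", "svc2-b": "h1"}
    host_idle_w = {"h1": 120.0, "h2": 110.0}
    vm_load_w = {"svc1-a": 30.0, "svc1-b": 30.0, "svc2-a": 25.0, "svc2-b": 25.0}
    replicas = {"svc1": ["svc1-a", "svc1-b"], "svc2": ["svc2-a", "svc2-b"]}
    print(round(fitness(placement, host_idle_w, vm_load_w, replicas), 1))
```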

    Applying reinforcement learning to network management

    This project investigates whether, in a realistic scenario, Reinforcement Learning can help vehicular networks achieve better performance, applied specifically to resource allocation. The goal is to allocate a varying number of requests across a network with multiple datacenters, modelling a real road and city track. To do so, four algorithms were implemented: a heuristic and three RL approaches, namely a simple DQN and two variants that run the same DQN but also include a parameter sharing method. The results show that a more sophisticated model is needed to demonstrate that Reinforcement Learning is worthwhile, and also that parameter sharing is a very useful tool for these types of networks, as it can operate very efficiently.
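    As a rough illustration of the parameter-sharing idea mentioned above (one Q-network queried and trained by all agents, so their pooled experience improves a single policy), a minimal DQN sketch follows. The state/action dimensions, network size, and training loop are assumptions and do not reproduce the project's model.

```python
# Minimal DQN with parameter sharing across agents (illustrative assumptions only).
import random
from collections import deque
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 4        # e.g. load features and candidate datacenters

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))
    def forward(self, x):
        return self.net(x)

# Parameter sharing: every agent (e.g. every request dispatcher) uses and
# trains the SAME network, instead of learning a separate policy per agent.
shared_q = QNet()
optimizer = torch.optim.Adam(shared_q.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)      # experience pooled from all agents

def act(state, epsilon=0.1):
    """Epsilon-greedy action selection with the shared network."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(shared_q(torch.tensor(state).float()).argmax())

def train_step(batch_size=32, gamma=0.99):
    """One gradient step on a random minibatch from the shared replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = (torch.tensor(x).float() for x in zip(*batch))
    q = shared_q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    target = r + gamma * shared_q(s2).max(dim=1).values.detach()
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

if __name__ == "__main__":
    # Several "agents" push transitions into the same buffer, then the shared net trains.
    for _ in range(64):
        s = [random.random() for _ in range(STATE_DIM)]
        replay.append((s, act(s), random.random(),
                       [random.random() for _ in range(STATE_DIM)]))
    train_step()
    print("shared policy action:", act([0.5] * STATE_DIM, epsilon=0.0))
```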

    Resource provisioning in Science Clouds: Requirements and challenges

    Cloud computing has permeated the information technology industry in the last few years, and it is now emerging in scientific environments. Science user communities demand a broad range of computing power to satisfy the needs of high-performance applications, traditionally served by local clusters, high-performance computing systems, and computing grids. Different computational models give rise to different workloads, and the cloud is already considered a promising paradigm for serving them. The scheduling and allocation of resources is always a challenging matter in any form of computation, and clouds are not an exception. Science applications have unique features that differentiate their workloads; hence, their requirements have to be taken into consideration when building a Science Cloud. This paper discusses the main scheduling and resource allocation challenges for any Infrastructure as a Service provider supporting scientific applications.