7 research outputs found

    Challenges for orchestration and instance selection of composite services in distributed edge clouds

    Today's centralized cloud-computing infrastructures have not been designed with geo-localized, personalized, bandwidth/processing-intensive, real-time applications in mind. High network delay and low throughput can have a significant impact on the user experience. Instead, such services could be deployed in distributed service nodes at the edge of the network, closer to the user. In this paper we focus on composite services whose components run in different service nodes. We present a two-layer framework that provides service orchestration and instance selection. The orchestration mechanisms enable the flexible re-use of components across different composite services. For the resolution layer of our framework, we present two modes of operation that combine network and service availability information for efficient per-request instance selection among a multitude of service replicas.
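The per-request selection idea described above can be sketched as a simple weighted score over replica metrics. This is an illustrative stand-in, not the paper's actual resolution mechanism; the field names, weights, and the linear scoring rule are all assumptions.

```python
# Hypothetical sketch: per-request instance selection combining network delay
# with a service-availability proxy (current load). The weights and replica
# fields are invented for illustration, not taken from the paper.

def select_instance(replicas, delay_weight=0.7, load_weight=0.3):
    """Pick the replica with the lowest weighted score.

    Each replica is a dict with 'name', 'rtt_ms' (network delay) and
    'load' (0.0 = idle .. 1.0 = saturated).
    """
    def score(r):
        # Scale load to roughly the same magnitude as RTT in milliseconds.
        return delay_weight * r["rtt_ms"] + load_weight * (r["load"] * 100)
    return min(replicas, key=score)

replicas = [
    {"name": "edge-a", "rtt_ms": 5, "load": 0.9},   # close but saturated
    {"name": "edge-b", "rtt_ms": 20, "load": 0.1},  # farther but idle
]
# edge-a: 0.7*5 + 0.3*90 = 30.5; edge-b: 0.7*20 + 0.3*10 = 17.0
print(select_instance(replicas)["name"])  # edge-b
```

Combining both signals lets a nearby but overloaded replica lose to a slightly more distant, lightly loaded one, which matches the abstract's motivation for mixing network and service availability information.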

    Towards incentive-compatible pricing for bandwidth reservation in community network clouds

    Community network clouds provide applications of local interest, deployed within community networks through collaborative efforts to provision cloud infrastructures. They complement the traditional large-scale public cloud providers, similar to the model of decentralised edge clouds, by bringing both content and computation closer to the users at the edges of the network. Services and applications within community network clouds require connectivity to the Internet and to resources external to the community network, and here the current best-effort model of volunteers contributing gateway access in the community networks falls short. We model the problem of reserving bandwidth at such gateways to guarantee quality of service for cloud applications, and evaluate different pricing mechanisms for their suitability in ensuring maximal social welfare and eliciting truthful requests from the users. We find second-price auction based mechanisms, including Vickrey and generalised second-price auctions, suitable for the bandwidth allocation problem at the gateways in community networks.
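The Vickrey mechanism the abstract refers to is easy to state concretely: the highest bidder wins but pays the second-highest bid, which is what makes truthful bidding a dominant strategy. A minimal single-item sketch follows; the bidder names and bid values are invented.

```python
# Illustrative single-item Vickrey (second-price) auction, the class of
# mechanism the abstract finds suitable for gateway bandwidth reservation.

def vickrey_auction(bids):
    """Return (winner, price): highest bidder wins, pays second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0  # sole bidder pays nothing
    return winner, price

bids = {"alice": 10, "bob": 7, "carol": 4}  # hypothetical bids, credits per Mbit/s
print(vickrey_auction(bids))  # ('alice', 7)
```

Because the winner's payment depends only on the others' bids, no bidder can lower their price by shading their own bid, which is the incentive-compatibility property the paper exploits.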

    Carbon-Aware Load Balancing for Geo-distributed Cloud Services


    From geographically dispersed data centers towards hierarchical edge computing

    Internet-scale data centers are generally dispersed across different geographical regions. While the main goal of deploying geographically dispersed data centers is to provide redundancy, scalability and high availability, the geographic dispersity provides another opportunity for the efficient employment of global resources, e.g., exploiting price diversity in electricity markets or locational diversity in renewable power generation. In other words, an efficient approach to geographical load balancing (GLB) across geo-dispersed data centers can not only maximize the utilization of green energy but also minimize the cost of electricity. However, due to the different costs and disparate environmental impacts of renewable energy and brown energy, such a GLB approach should exploit the separation of the green energy utilization maximization and brown energy cost minimization problems. To this end, the notions of green workload and green service rate, versus brown workload and brown service rate, are proposed to facilitate this separation. In particular, a new optimization framework is developed to maximize the profit of running geographically dispersed data centers, based on the G/D/1 queueing model and taking into consideration multiple classes of service with an individual service-level-agreement deadline for each type of service. A new information-flow-graph-based model for geo-dispersed data centers is also developed, and based on this model, the achievable tradeoff between total and brown power consumption is characterized. Recently, the paradigm of edge computing has been introduced to push computing resources away from the data centers to the edge of the network, thereby reducing the communication bandwidth requirement between the sources of data and the data centers.
However, it is still desirable to investigate how and where at the edge of the network the computation resources should be provisioned. To this end, a hierarchical Mobile Edge Computing (MEC) architecture in accordance with the principles of the LTE Advanced backhaul network is proposed, and an auction-based profit maximization approach that effectively facilitates resource allocation to the subscribers of the MEC network is designed. A hierarchical capacity provisioning framework for MEC that optimally budgets computing capacities at different hierarchical edge computing levels is also designed. The proposed scheme can efficiently handle peak loads at the access point locations while coping with resource poverty at the edge. Moreover, the code partitioning problem is extended to a scheduling problem over time and the hierarchical mobile edge network, and accordingly, a new technique that leads to the optimal code partitioning in reasonable time even for large call trees is proposed. Finally, a novel NOMA-augmented edge computing model that captures the gains of uplink NOMA in MEC users' energy consumption is proposed.
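The green/brown separation described in the abstract can be caricatured as a two-phase placement: first serve as much load as possible from each site's green capacity, then place the remainder at the sites with the cheapest brown electricity. This is a heavily simplified sketch; the site names, capacities, and prices are invented, and the paper's actual framework is a G/D/1-based profit optimization, not this greedy rule.

```python
# Hedged sketch of green-first, cheapest-brown-second geographical load
# balancing. All numbers are hypothetical.

def split_workload(total_load, sites):
    """sites: list of dicts with 'name', 'green_cap', 'brown_cap', 'brown_price'."""
    placement = {s["name"]: 0.0 for s in sites}
    remaining = total_load
    # Phase 1: maximize green-energy utilization (green power is assumed free).
    for s in sites:
        take = min(remaining, s["green_cap"])
        placement[s["name"]] += take
        remaining -= take
    # Phase 2: minimize brown-energy cost for whatever load is left.
    for s in sorted(sites, key=lambda s: s["brown_price"]):
        take = min(remaining, s["brown_cap"])
        placement[s["name"]] += take
        remaining -= take
    return placement

sites = [
    {"name": "dc1", "green_cap": 30, "brown_cap": 50, "brown_price": 0.12},
    {"name": "dc2", "green_cap": 20, "brown_cap": 50, "brown_price": 0.08},
]
# Green phase fills dc1 with 30 and dc2 with 20; the remaining 30 units of
# brown load all go to dc2, the cheaper site.
print(split_workload(80, sites))  # {'dc1': 30.0, 'dc2': 50.0}
```

Decoupling the two phases mirrors the abstract's point that green utilization maximization and brown cost minimization are easier to reason about as separate problems.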

    A Cooperative Game Based Allocation for Sharing Data Center Networks

    In current IaaS datacenters, tenants suffer unfairness because the network bandwidth is shared in a best-effort manner. To achieve predictable network performance for rented virtual machines (VMs), cloud providers should guarantee minimum bandwidth for VMs or allocate the network bandwidth fairly at the VM level. At the same time, the network should be efficiently utilized in order to maximize cloud providers' revenue. In this paper, we model the bandwidth sharing problem as a Nash bargaining game, and propose allocation principles by defining a tunable base bandwidth for each VM. Specifically, we guarantee bandwidth for those VMs with network rates lower than their base bandwidth, while maintaining fairness among the VMs with network rates higher than their base bandwidth. Based on rigorous cooperative game-theoretic approaches, we design a distributed algorithm to achieve efficient and fair bandwidth allocation corresponding to the Nash bargaining solution (NBS). With simulations under typical scenarios, we show that our strategy can meet the two desirable requirements: predictable performance for tenants as well as high utilization for providers. By tuning the base bandwidth, our solution enables cloud providers to flexibly balance the tradeoff between minimum guarantees and fair sharing of datacenter networks.