
    From geographically dispersed data centers towards hierarchical edge computing

    Internet-scale data centers are generally dispersed across different geographical regions. While the main goal of deploying geographically dispersed data centers is to provide redundancy, scalability, and high availability, geographic dispersion also offers an opportunity for the efficient employment of global resources, e.g., exploiting price diversity in electricity markets or locational diversity in renewable power generation. In other words, an efficient approach to geographical load balancing (GLB) across geo-dispersed data centers can both maximize the utilization of green energy and minimize the cost of electricity. However, owing to the different costs and disparate environmental impacts of renewable and brown energy, such a GLB approach should exploit the separation of the green energy utilization maximization problem from the brown energy cost minimization problem. To this end, the notions of green workload and green service rate, versus brown workload and brown service rate, are proposed to facilitate this separation. In particular, a new optimization framework is developed that maximizes the profit of running geographically dispersed data centers based on the accuracy of the G/D/1 queueing model, taking into consideration multiple classes of service with an individual service-level-agreement deadline for each type of service. A new information-flow-graph-based model for geo-dispersed data centers is also developed, and based on this model, the achievable tradeoff between total and brown power consumption is characterized. Recently, the paradigm of edge computing has been introduced to push computing resources away from the data centers toward the edge of the network, thereby reducing the communication bandwidth required between the sources of data and the data centers.
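    The separation of green utilization maximization from brown cost minimization described above can be illustrated with a minimal two-step sketch (this is an illustrative toy, not the dissertation's G/D/1-based formulation; all site names, capacities, and prices are hypothetical): first fill each site's renewable capacity, then route the brown remainder to the cheapest electricity markets.

```python
# Illustrative sketch of green/brown workload separation across
# geo-dispersed data centers. Hypothetical numbers and model.

def split_workload(total_load, green_cap, brown_price, site_cap):
    """Return per-site (green_load, brown_load, unserved) assignments.

    green_cap[i]   -- workload servable by site i's renewable supply
    brown_price[i] -- cost per unit of brown-powered workload at site i
    site_cap[i]    -- total capacity of site i
    """
    n = len(green_cap)
    green = [0.0] * n
    brown = [0.0] * n
    remaining = total_load
    # Step 1: maximize green energy utilization (locational diversity).
    for i in range(n):
        g = min(green_cap[i], site_cap[i], remaining)
        green[i] = g
        remaining -= g
    # Step 2: minimize brown electricity cost (price diversity).
    for i in sorted(range(n), key=lambda j: brown_price[j]):
        b = min(site_cap[i] - green[i], remaining)
        brown[i] = b
        remaining -= b
    return green, brown, remaining

green, brown, left = split_workload(
    total_load=100,
    green_cap=[30, 20, 10],
    brown_price=[0.12, 0.08, 0.10],
    site_cap=[50, 60, 40],
)
```

    Because the two steps are decoupled, each can be replaced independently, e.g., the second step by a full brown-cost minimization program, which mirrors the separation argued for in the abstract.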
    However, it is still desirable to investigate how, and where at the edge of the network, computation resources should be provisioned. To this end, a hierarchical Mobile Edge Computing (MEC) architecture in accordance with the principles of the LTE-Advanced backhaul network is proposed, and an auction-based profit maximization approach that effectively facilitates resource allocation to the subscribers of the MEC network is designed. A hierarchical capacity provisioning framework for MEC that optimally budgets computing capacities at the different hierarchical edge computing levels is also designed. The proposed scheme can efficiently handle peak loads at the access point locations while coping with resource poverty at the edge. Moreover, the code partitioning problem is extended to a scheduling problem over time and over the hierarchical mobile edge network, and accordingly, a new technique that yields the optimal code partitioning in reasonable time, even for large call trees, is proposed. Finally, a novel NOMA-augmented edge computing model that captures the gains of uplink NOMA in MEC users' energy consumption is proposed.
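    The intuition behind hierarchical capacity provisioning can be sketched in a few lines (a toy illustration under assumed numbers, not the dissertation's optimal budgeting framework): each access point serves its load up to its local edge capacity, and only the overflow is forwarded to a higher-tier edge server, so statistical multiplexing lets the upper tier be budgeted far below the sum of per-AP peaks.

```python
# Toy illustration of hierarchical edge capacity provisioning.
# Loads and capacities are hypothetical.

def upper_tier_overflow(ap_loads_over_time, ap_capacity):
    """For each time slot, return the total load spilled past the
    access-point tier and forwarded to the upper edge tier."""
    overflow = []
    for slot in ap_loads_over_time:
        overflow.append(sum(max(0, load - ap_capacity) for load in slot))
    return overflow

# Peaks at different APs rarely coincide, so the aggregation tier
# needs much less than the sum of the individual peak overflows.
slots = [[9, 2, 3], [2, 8, 1], [3, 2, 10]]
spill = upper_tier_overflow(slots, ap_capacity=5)
upper_tier_budget = max(spill)  # capacity to budget at the upper tier
```

    Here the per-slot overflows sum to 12 units across the horizon, yet a budget of only 5 units at the aggregation tier covers every slot, which is the peak-load-smoothing effect the proposed framework exploits.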

    Dual-battery empowered green cellular networks

    With growing awareness of potentially harmful effects on the environment and climate change, the on-grid brown energy consumption of information and communications technology (ICT) has drawn much attention. Cellular base stations (BSs) are among the major energy guzzlers in ICT, and their contribution to global carbon emissions continues to increase. It is essential to leverage green energy to power BSs and thus reduce their on-grid brown energy consumption. However, in pursuit of the greatest savings in on-grid brown energy and electricity expenses, most existing green-energy-related works maximize only green energy utilization while compromising the services received by mobile users. In reality, dissatisfaction with services may eventually cost network providers market share and profits. In this research, a dual-battery enabled, profit-driven user association scheme is introduced that jointly considers traffic delivery latency and green energy utilization to maximize the profits of network providers in heterogeneous cellular networks. Since this profit-driven user association optimization problem is NP-hard, heuristics are presented to solve it with low computational complexity. Finally, the performance of the proposed algorithm is validated through extensive simulations. In addition, the Internet of Things (IoT) heralds a vision of a future Internet in which all physical things/devices are connected via a network to promote a heightened level of awareness about our world and dramatically improve our daily lives. Nonetheless, most wireless technologies utilizing unlicensed bands cannot provision ubiquitous, high-quality IoT services. In contrast, cellular networks support large-scale, quality-of-service guaranteed, and secure communications. However, massive proximal communications via local BSs would lead to severe traffic congestion and huge energy consumption in conventional cellular networks.
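    A low-complexity heuristic for the profit-driven user association problem described above might look like the following greedy sketch (a hypothetical model for illustration, not the dissertation's scheme): each user joins the BS that maximizes revenue minus a latency penalty minus the marginal energy cost, where a BS with residual green energy serves traffic at zero marginal energy cost until its green budget is spent.

```python
# Greedy sketch of profit-driven user association jointly weighing
# latency and green energy. Model and numbers are hypothetical.

def associate(users, bss):
    """users: dicts with 'revenue' and per-BS 'latency' list.
    bss: dicts with 'green_budget' (traffic units servable on green
    energy) and 'brown_cost' (per unit). Returns chosen BS per user."""
    assignment = []
    for u in users:
        def profit(i):
            bs = bss[i]
            # Green energy is free at the margin; otherwise pay brown cost.
            energy_cost = 0.0 if bs["green_budget"] >= 1 else bs["brown_cost"]
            return u["revenue"] - u["latency"][i] - energy_cost
        best = max(range(len(bss)), key=profit)
        if bss[best]["green_budget"] >= 1:
            bss[best]["green_budget"] -= 1
        assignment.append(best)
    return assignment

users = [
    {"revenue": 5.0, "latency": [0.5, 1.0]},
    {"revenue": 5.0, "latency": [0.6, 1.1]},
]
bss = [{"green_budget": 1, "brown_cost": 2.0},
       {"green_budget": 1, "brown_cost": 2.0}]
plan = associate(users, bss)
```

    Note how the second user is steered to the higher-latency BS once the first BS's green budget is exhausted, the kind of latency-versus-green-energy tradeoff the scheme optimizes.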
    Device-to-device (D2D) communications can potentially offload traffic from, and reduce the energy consumption of, BSs. In order to realize the vision of a truly global IoT, a novel architecture, i.e., overlay-based green relay assisted D2D communications with dual batteries in heterogeneous cellular networks, is introduced. By optimally allocating network resources, the introduced resource allocation method provisions IoT services and minimizes the overall energy consumption of the pico relay BSs. By balancing the residual green energy among the pico relay BSs, green energy utilization is maximized, which yields the greatest savings in on-grid energy. Finally, the performance of the proposed architecture is validated through extensive simulations. Furthermore, mobile devices play important roles in cellular networks and the IoT. With the ongoing worldwide deployment of the IoT, an unprecedented number of edge devices will consume a substantial amount of energy; IoT mobile edge devices as a whole have been predicted to become the leading energy guzzler in ICT by 2020. Therefore, a three-step green IoT architecture is proposed in this research, i.e., ambient energy harvesting, green energy wireless transfer, and green energy balancing, where each later step reinforces the former to ensure the availability of green energy. The basic design principles of these three steps are laid out and discussed. In summary, based on the dual-battery architecture, this dissertation research proposes solutions for three aspects, i.e., green cellular BSs, green D2D communications, and green devices, toward eventually actualizing green cellular access networks as part of the ongoing effort to green our society and environment.
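    The green-energy balancing step among pico relay BSs can be pictured with a simple greedy sketch (an assumed unit-load model for illustration, not the dissertation's allocation method): relayed D2D traffic is repeatedly steered toward the relay with the most residual green energy, equalizing residuals so that no relay falls back to on-grid power before the others.

```python
# Greedy residual-green-energy balancing across pico relay BSs.
# Unit-load model and numbers are hypothetical.

def balance_load(residual_green, total_load, per_unit_energy=1.0):
    """Assign each load unit to the relay with the largest residual
    green energy; return (load per relay, residual energy per relay)."""
    residual = list(residual_green)
    load = [0] * len(residual)
    for _ in range(total_load):
        i = max(range(len(residual)), key=lambda j: residual[j])
        load[i] += 1
        residual[i] -= per_unit_energy
    return load, residual

load, residual = balance_load([5, 2, 3], total_load=6)
```

    With unit granularity the residuals end up within one energy unit of each other, so all relays drain their green batteries at roughly the same time, maximizing green utilization before any on-grid draw.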

    Energy Saving in QoS Fog-supported Data Centers

    One of the most important challenges that cloud providers face amid the explosive growth of data is reducing the energy consumption of their modern data centers. The majority of current research focuses on energy-efficient resource management in the infrastructure-as-a-service (IaaS) model through resource virtualization, i.e., the consolidation of virtual machines onto physical machines. However, today's virtualized data centers do not support communication- and computing-intensive real-time applications or big data stream computing (e.g., info-mobility applications and real-time video co-decoding). Indeed, imposing hard limits on the overall per-job computing-plus-communication delay forces the networked computing infrastructure to quickly adapt its resource utilization to the (possibly unpredictable and abrupt) time fluctuations of the offered workload. Recently, Fog Computing centers have emerged as promising commodities in the Internet virtual computing platform, but they raise energy consumption and thus pose a critical issue for such platforms. It is therefore desirable to provide green solutions (i.e., energy-aware provisioning) that cover fog-supported, delay-sensitive web applications. Moreover, traffic engineering-based methods can dynamically adjust the number of active servers to match the current workload. Hence, it is desirable to develop a flexible, reliable technological paradigm and resource allocation algorithms that explicitly account for the consumed energy. Furthermore, such algorithms should automatically adapt to time-varying workloads through joint reconfiguration and orchestration of the virtualized computing-plus-communication resources available at the computing nodes, while enabling IoT devices to operate under real-time constraints on the allowed computing-plus-communication delay and service latency.
    The purpose of this thesis is: i) to propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, detailing the main building blocks and services of the corresponding technological platform and protocol stack; ii) to propose a dynamic and adaptive energy-aware algorithm that models and manages virtualized networked data center Fog Nodes (FNs), so as to minimize the resulting networking-plus-computing average energy consumption; and iii) to propose a novel Software-as-a-Service (SaaS) Fog Computing platform to integrate user applications over the FoE. The emerging use of SaaS Fog Computing centers as an Internet virtual computing commodity aims to support delay-sensitive applications. The virtualized Fog node operates at the Middleware layer of the underlying protocol stack, and its main blocks comprise: i) admission control of the offered input traffic; ii) balanced control and dispatching of the admitted workload; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled Virtual Machines (VMs) instantiated on the parallel computing platform; and iv) rate control of the traffic injected into the TCP/IP connection. The salient features of this algorithm are that: i) it is adaptive and admits a distributed, scalable implementation; ii) it can provide hard QoS guarantees, in terms of the minimum/maximum instantaneous rate of the traffic delivered to the client, instantaneous goodput, and total processing delay; and iii) it explicitly accounts for the dynamic interaction between computing and networking resources in order to maximize the resulting energy efficiency.
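    The role of DVFS in block iii) above can be illustrated with a minimal sketch (assuming a cubic dynamic-power model P(f) = k·f³, a common textbook approximation, not the thesis's exact cost function): since power grows roughly with f³ while execution time shrinks only as 1/f, a VM should run at the slowest frequency step that still meets the per-job hard delay limit.

```python
# Illustrative DVFS frequency selection under a hard delay limit.
# The cubic power model and the frequency steps are assumptions.

def min_energy_frequency(job_cycles, deadline, freqs, k=1.0):
    """Return (frequency, energy) minimizing energy under the deadline.

    freqs -- available DVFS frequency steps (cycles/second)
    k     -- hypothetical power coefficient in P(f) = k * f**3
    """
    feasible = [f for f in freqs if job_cycles / f <= deadline]
    if not feasible:
        raise ValueError("deadline cannot be met at any frequency step")
    f = min(feasible)                       # slowest feasible step
    energy = k * f**3 * (job_cycles / f)    # P(f) * execution time
    return f, energy

f, e = min_energy_frequency(job_cycles=2.0, deadline=1.0,
                            freqs=[1.0, 2.0, 4.0])
```

    In this toy instance, running at the fastest step would cost four times the energy of the slowest feasible one, which is why delay-aware frequency scaling is central to the energy minimization in the proposed algorithm.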
    The actual performance of the proposed scheduler in the presence of: i) client mobility; ii) wireless fading; iii) reconfiguration and two-threshold consolidation costs of the underlying networked computing platform; and iv) abrupt changes in the transport quality of the available TCP/IP mobile connection, is numerically tested and compared against that of state-of-the-art static schedulers, under both synthetically generated and measured real-world workload traces.