
    Impact of the Net Neutrality Repeal on Communication Networks

    Network neutrality is the principle of treating all Internet traffic equally, regardless of its source, destination, content, application, or other distinguishing metrics. Under net neutrality, Internet service providers (ISPs) are compelled to charge all content providers (CPs) the same per-Gbps rate despite the growing profits achieved by CPs. In this paper, we study the impact of the repeal of net neutrality on communication networks by developing a techno-economic Mixed Integer Linear Programming (MILP) model to maximize the potential profit ISPs can achieve by offering their services to CPs. We consider an ISP that offers CPs different classes of service representing typical video content qualities. The MILP model maximizes the ISP's profit by optimizing the prices of the different classes according to the users' demand sensitivity to price changes, referred to as Price Elasticity of Demand (PED). We analyze how PED impacts the profit in different CP delivery scenarios in cloud-fog architectures. The results show that the repeal of net neutrality can potentially increase ISPs' profit by a factor of 8 with a pricing scheme that discriminates against data-intensive content. The repeal of net neutrality also positively impacts network energy efficiency, reducing the core network power consumption by 55% compared to the net neutrality scenario, as a result of suppressing data-intensive content.
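The pricing idea in this abstract can be sketched numerically. The following is a minimal stand-in for the MILP, not the paper's model: it assumes a constant-elasticity demand curve, and all prices, costs, demands, and elasticities are illustrative numbers chosen here. A simple grid search then picks the profit-maximizing price per service class, with more data-intensive classes given higher (more elastic) sensitivity.

```python
# Hedged sketch: elasticity-driven price optimization per service class.
# The demand curve and every numeric parameter below are illustrative
# assumptions, not values from the paper.

def demand(base_demand, base_price, price, elasticity):
    """Constant-elasticity demand: Q = Q0 * (P / P0) ** elasticity (elasticity < 0)."""
    return base_demand * (price / base_price) ** elasticity

def best_price(base_demand, base_price, elasticity, unit_cost, candidates):
    """Grid search for the profit-maximizing price of one service class."""
    def profit(p):
        q = demand(base_demand, base_price, p, elasticity)
        return (p - unit_cost) * q
    return max(candidates, key=profit)

# Three hypothetical video classes; UHD is most data-intensive (highest cost,
# most price-sensitive users).
classes = {
    "SD":  dict(base_demand=100.0, base_price=1.0, elasticity=-0.5, unit_cost=0.2),
    "HD":  dict(base_demand=80.0,  base_price=2.0, elasticity=-1.2, unit_cost=0.6),
    "UHD": dict(base_demand=50.0,  base_price=4.0, elasticity=-2.0, unit_cost=1.5),
}
prices = [p / 10 for p in range(1, 101)]  # candidate prices 0.1 .. 10.0
for name, c in classes.items():
    p = best_price(c["base_demand"], c["base_price"], c["elasticity"], c["unit_cost"], prices)
    print(name, round(p, 2))
```

With these toy numbers the inelastic SD class is driven to the top of the price grid while the elastic UHD class settles at an interior optimum, which mirrors the abstract's point that the profit-maximizing scheme discriminates by content type.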

    Energy Efficient Resource Allocation in Federated Fog Computing Networks

    There is continuous growth in demand for time-sensitive applications, which has shifted the cloud paradigm from a centralized computing architecture towards distributed heterogeneous computing platforms where resources located at the edge of the network are used to provide cloud-like services. This paradigm is widely known as fog computing. Virtual machines (VMs) have been widely utilized in both paradigms to enhance network scalability and improve resource utilization and energy efficiency. Moreover, Passive Optical Networks (PONs) are a technology suited to handling the enormous volumes of data generated in the access network due to their energy efficiency and large bandwidth. In this paper, we utilize a PON to provide connectivity between multiple distributed fog units, achieving federated (i.e., cooperative) computing units in the access network to serve intensive demands. We propose a mixed integer linear program (MILP) to optimize VM placement in the federated fog computing units with the objective of minimizing the total power consumption while considering inter-VM traffic. The results show a significant power saving of up to 52% in the VM allocation achieved by the proposed optimization model, compared to a baseline approach that allocates VM requests while neglecting power consumption and inter-VM traffic.
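The power-aware placement objective can be illustrated with a brute-force stand-in for the MILP. This sketch assumes a simple linear power model (idle power per active fog unit plus a per-CPU increment) and ignores inter-VM traffic; the capacities, demands, and wattages are illustrative, not from the paper.

```python
from itertools import product

# Hedged sketch: exhaustive search over VM -> fog-unit assignments, minimizing
# total power. The linear power model and all numbers are assumptions.

IDLE_W = 50.0     # idle power drawn by an active fog unit (assumed)
PER_CPU_W = 10.0  # incremental power per allocated CPU unit (assumed)

def placement_power(vms, capacities, assignment):
    """Total power of a VM->fog assignment; None if any fog unit is over capacity."""
    load = [0] * len(capacities)
    for vm_cpu, fog in zip(vms, assignment):
        load[fog] += vm_cpu
    if any(l > c for l, c in zip(load, capacities)):
        return None
    # Only active (non-empty) units pay idle power; load adds proportional power.
    return sum(IDLE_W + PER_CPU_W * l for l in load if l > 0)

def optimal_placement(vms, capacities):
    """Enumerate all assignments; fine for toy sizes, the paper uses a MILP instead."""
    best = None
    for assignment in product(range(len(capacities)), repeat=len(vms)):
        p = placement_power(vms, capacities, assignment)
        if p is not None and (best is None or p < best[1]):
            best = (assignment, p)
    return best

vms = [2, 3, 4]          # CPU demand of each VM request
capacities = [10, 6, 6]  # federated fog units reachable over the PON
assignment, power = optimal_placement(vms, capacities)
print(assignment, power)  # -> (0, 0, 0) 140.0
```

Consolidating all three VMs onto one unit (140 W) beats any split, which needs a second unit's idle power (at least 190 W). That consolidation effect is what a power-neglecting baseline allocation misses.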

    Energy Minimized Federated Fog Computing over Passive Optical Networks

    The rapid growth of time-sensitive applications and services has driven enhancements to computing infrastructures. The main challenge these applications pose is the optimal placement of end-users' demands so as to reduce the total power consumption and delay. One of the widely adopted paradigms to address this challenge is fog computing. Placing fog units close to end-users at the edge of the network can help mitigate some of the latency and energy efficiency issues. Compared to traditional hyperscale cloud data centres, fog computing units are constrained in computational power; hence, the capacity of fog units plays a critical role in meeting the stringent demands of end-users under intensive processing workloads. In this paper, we first propose a federated fog computing architecture where multiple distributed fog cells collaborate in serving users. These fog cells are connected through dedicated Passive Optical Network (PON) connections. We then optimize the placement of virtual machine (VM) demands originating from the end-users by formulating a Mixed Integer Linear Programming (MILP) model to minimize the total power consumption. The results show an increase in processing capacity and a reduction in power consumption of up to 26% compared to a non-federated fog computing architecture.
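The capacity argument behind federation can be shown with a tiny feasibility check. This is not the paper's MILP: it uses a greedy first-fit-decreasing heuristic, and the demand and capacity numbers are illustrative assumptions.

```python
# Hedged sketch: a demand set that no single fog cell can host becomes
# feasible once PON-connected cells federate. All numbers are illustrative.

def fits_single(demands, capacity):
    """Non-federated: one isolated fog cell must absorb the whole demand set."""
    return sum(demands) <= capacity

def fits_federated(demands, capacities):
    """Federated: spread demands across cooperating cells (first-fit-decreasing)."""
    free = sorted(capacities, reverse=True)
    for d in sorted(demands, reverse=True):
        for i, f in enumerate(free):
            if d <= f:
                free[i] -= d
                break
        else:
            return False
    return True

demands = [4, 4, 3, 3]  # VM CPU demands from end-users (assumed)
print(fits_single(demands, 8))        # -> False: one cell cannot host them all
print(fits_federated(demands, [8, 8]))  # -> True: two federated cells can
```

The heuristic may miss feasible packings that an exact MILP would find, but it is enough to show how federation raises the effective processing capacity of capacity-constrained fog cells.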

    Energy-Efficient Distributed Machine Learning in Cloud Fog Networks

    Massive amounts of data are expected to be generated by the billions of objects that form the Internet of Things (IoT). A variety of automated services, such as monitoring, will largely depend on the use of different Machine Learning (ML) algorithms. Traditionally, ML models are processed by centralized cloud data centers, where IoT readings are offloaded to the cloud via multiple networking hops in the access, metro, and core layers. This approach inevitably leads to excessive networking power consumption as well as Quality-of-Service (QoS) degradation such as increased latency. Instead, in this paper, we propose a distributed ML approach where the processing can take place in intermediary devices such as IoT nodes and fog servers in addition to the cloud. We abstract the ML models into Virtual Service Requests (VSRs) to represent the multiple interconnected layers of a Deep Neural Network (DNN). Using Mixed Integer Linear Programming (MILP), we design an optimization model that allocates the layers of a DNN in a Cloud/Fog Network (CFN) in an energy-efficient way. We evaluate the impact of DNN input distribution on the performance of the CFN and compare the energy efficiency of this approach to a baseline where all layers of the DNNs are processed in the centralized Cloud Data Center (CDC).
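The layer-allocation idea can be sketched with dynamic programming over a DNN chain, as a simplified stand-in for the paper's MILP. Assumptions made here and not taken from the paper: a three-tier IoT/fog/cloud hierarchy, a per-GB transport energy per hop, per-GFLOP compute energies per tier, and that the result need not be shipped back to the user.

```python
# Hedged sketch: place each layer of a DNN chain on the IoT, fog, or cloud tier
# to minimize compute energy plus the energy of moving intermediate data between
# tiers. All energy figures are illustrative assumptions.

TIERS = ["iot", "fog", "cloud"]
COMPUTE_J = {"iot": 5.0, "fog": 2.0, "cloud": 1.0}  # J per GFLOP (assumed)
HOP_J = 3.0                                         # J per GB per hop (assumed)
HOPS = {t: i for i, t in enumerate(TIERS)}          # iot=0, fog=1, cloud=2

def min_energy(layer_gflops, layer_in_gb):
    """layer_in_gb[i] is the data (GB) entering layer i: raw input for layer 0,
    the previous layer's output otherwise. Raw data originates on the IoT tier."""
    best = {"iot": 0.0}  # minimal energy with the current data sitting on each tier
    for gflops, gb in zip(layer_gflops, layer_in_gb):
        best = {
            tier: min(
                e + abs(HOPS[tier] - HOPS[prev]) * HOP_J * gb + COMPUTE_J[tier] * gflops
                for prev, e in best.items()
            )
            for tier in TIERS
        }
    return min(best.values())

# Three-layer DNN: early layers shrink the data, so later (heavier) layers
# can run on the energy-efficient cloud tier cheaply.
print(min_energy(layer_gflops=[1.0, 2.0, 4.0], layer_in_gb=[1.0, 0.2, 0.1]))
```

With these toy numbers the split placement costs about 11.6 J, versus 13 J for the all-in-cloud baseline (6 J to ship the 1 GB input over two hops plus 7 J of compute), echoing the abstract's comparison against processing every layer in the CDC.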