
    Caching Video-on-Demand in Metro and Access Fog Data Centres

    This paper examines the use of metro fog data centres and access fog data centres with integrated solar cells and Energy Storage Devices (ESDs) to assist cloud data centres in caching Video-on-Demand (VoD) content and hence reduce networking power consumption. A Mixed Integer Linear Programming (MILP) model is used to optimize the delivery of the content from cloud, metro fog, or access fog data centres. The results for a range of data centre parameters show that savings of up to 38% in transport network power consumption can be achieved when VoD is optimally served from fully renewable-powered cloud or metro fog data centres, or from access fog data centres with 250 m² solar cells. An additional 8% saving can be achieved when ESDs of 100 kWh capacity are used in the access fog data centres.
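
    As a rough illustration of the kind of optimisation such a model performs, the sketch below chooses, for each VoD demand, whether to serve it from the cloud, a metro fog, or an access fog data centre so that transport power plus non-renewable data centre power is minimised. It uses the open-source PuLP library, and all tier names, power figures, and capacities are invented for this example; they are not the paper's parameters or its actual MILP formulation.

        # Illustrative placement MILP (hypothetical numbers throughout).
        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

        demands = {"d1": 40, "d2": 25, "d3": 60}          # VoD traffic per demand (Gb/s)
        tiers = ["cloud", "metro_fog", "access_fog"]

        transport_w_per_gbps = {"cloud": 20.0, "metro_fog": 8.0, "access_fog": 2.0}
        brown_w_per_gbps = {"cloud": 0.0, "metro_fog": 0.0, "access_fog": 1.5}  # assume solar covers the rest
        serving_cap_gbps = {"cloud": 1e6, "metro_fog": 80.0, "access_fog": 50.0}

        prob = LpProblem("vod_placement", LpMinimize)
        x = LpVariable.dicts("serve", (demands, tiers), cat=LpBinary)  # x[d][t] = 1 if demand d served from tier t

        # Objective: transport power plus non-renewable (brown) data centre power.
        prob += lpSum(demands[d] * (transport_w_per_gbps[t] + brown_w_per_gbps[t]) * x[d][t]
                      for d in demands for t in tiers)

        for d in demands:                       # each demand is served from exactly one tier
            prob += lpSum(x[d][t] for t in tiers) == 1

        for t in tiers:                         # limited serving capacity per tier
            prob += lpSum(demands[d] * x[d][t] for d in demands) <= serving_cap_gbps[t]

        prob.solve()
        for d in demands:
            print(d, "->", [t for t in tiers if x[d][t].value() > 0.5][0])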

    Fog-Assisted Caching Employing Solar Renewable Energy and Energy Storage Devices for Video on Demand Services

    This paper examines the reduction in the non-renewable power consumption of transport networks, including the core, metro, and access layers, when Video-on-Demand (VoD) content is cached in solar-powered fog data centres with Energy Storage Devices (ESDs). The effects of optical bypass routing and Mixed Line Rate (MLR) operation in the core network, the availability of solar renewable energy in the access network, and the optimised use of ESDs are addressed. A Mixed Integer Linear Programming (MILP) model that accounts for these factors was developed to optimise the delivery of VoD content from cloud data centres in the core network or fog data centres in the access network.
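
    The optimised use of ESDs in this kind of model typically reduces to per-time-slot energy-balance constraints of the following generic form (an illustrative sketch in LaTeX notation; the symbols are assumptions for this example and do not reproduce the paper's formulation):

        B_t = B_{t-1} + \eta_c \, C_t - D_t / \eta_d
        0 \le B_t \le B^{\max}
        P^{\mathrm{load}}_t + C_t \le S_t + G_t + D_t

    where B_t is the ESD energy level in time slot t, C_t and D_t are the energy charged into and discharged from the ESD, \eta_c and \eta_d are the charge and discharge efficiencies, S_t is the harvested solar energy, G_t is the non-renewable (grid) energy drawn, and P^{\mathrm{load}}_t is the fog data centre load. The objective then minimises the total of G_t over all time slots and sites, together with the transport network power.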

    Fog-assisted Caching Employing Solar Renewable Energy for Delivering Video on Demand Service

    This paper examines the reduction in the brown power consumption of transport networks and data centres achieved by caching Video-on-Demand (VoD) content in solar-powered fog data centres with Energy Storage Devices (ESDs). A Mixed Integer Linear Programming (MILP) model was used to optimize the delivery of content from cloud or fog data centres. The results reveal that, for brown-powered cloud and fog data centres with the same Power Usage Effectiveness (PUE), a saving of up to 77% in transport network power consumption can be achieved by serving VoD demands from fog data centres. With fully renewable-powered cloud data centres and partially solar-powered fog data centres, savings of up to 26% can be achieved with 250 m² solar cells. An additional saving of up to 14% can be achieved with ESDs of 50 kWh capacity.

    Energy-Efficient Softwarized Networks: A Survey

    With the dynamic demands and stringent requirements of various applications, networks need to be high-performance, scalable, and adaptive to change. Researchers and industry view network softwarization as the best enabler for the evolution of networking to tackle current and prospective challenges. Network softwarization must provide programmability and flexibility to network infrastructures and allow agile management, along with greater control for operators. While satisfying the demands and requirements of network services, energy cannot be overlooked, considering its effects on the sustainability of the environment and of businesses. This paper discusses energy efficiency in modern and future networks with three network softwarization technologies, Software-Defined Networking (SDN), Network Functions Virtualization (NFV), and Network Slicing (NS), introduced in an energy-oriented context. With that framework in mind, we review the literature based on network scenarios, control/MANO layers, and energy-efficiency strategies. We then compare the references with regard to approach, evaluation method, criterion, and metric attributes to present the state of the art. Finally, we analyze the classified literature, summarize lessons learned, and present ten essential concerns to open a discussion of future research opportunities on energy-efficient softwarized networks. Comment: accepted draft for publication in TNSM with minor updates and editing.

    Energy Efficient Distributed Processing for IoT

    The number of connected objects in the Internet of Things (IoT) is growing exponentially. IoT devices were expected to number between 26 billion and 50 billion by 2020, and this figure can grow even further thanks to the production of miniaturised, lightweight, energy- and cost-efficient portable devices, the widespread use of the Internet, and the added value that organisations and individuals can gain from IoT devices if their data is processed. These connected objects are expected to be used in a multitude of applications, some of which are highly resource intensive, such as visual processing services for surveillance-based object recognition. The sensed data requires processing in the cloud in order to extract knowledge and make decisions accordingly. Given the pervasiveness of future IoT-based visual processing applications, massive amounts of data will be collected due to the nature of multimedia files. Transporting all of that collected data to the cloud at the core of the network is prohibitively costly in terms of energy consumption. Hence, to tackle these challenges, academia and industry have proposed distributed processing, which makes use of the large number of devices located at the edge of the network to process some or all of the data before it reaches the cloud. Due to the heterogeneity of devices at the edge of the network, it is crucial to develop energy-efficient models that provision resources optimally. The focus of today's network design and development has shifted towards energy efficiency due to the rising cost of electricity, resource scarcity, and increasing emissions of carbon dioxide (CO2).

    This thesis addresses some of the challenges associated with service placement in a distributed architecture such as the fog. First, a Passive Optical Network (PON) is used to connect the IoT devices and to support the fog infrastructure. A metro network connects to the fog and aggregates traffic from the PON towards the core network, while an IP/WDM backbone network models the core layer and interconnects the cloud data centres. The entire network was modelled and optimised through Mixed Integer Linear Programming (MILP), jointly minimising the total end-to-end power consumption of processing and networking. Two aspects of service placement were examined: 1) non-splittable services, and 2) splittable services. The results obtained showed that, in the capacitated problem, service splitting introduced power consumption savings of up to 86%, compared to 46% with non-splittable services. Moreover, an energy-efficient special-purpose data centre (SP-DC) was deployed in addition to its general-purpose counterpart (GP-DC). The results showed that, for very high demands, power savings of up to 50% could be achieved, compared to 30% without the SP-DC. The performance of the proposed architecture was further examined by considering additional dimensions of the service placement problem, such as resiliency in the form of 1+1 server protection in the long-term (uncapacitated) network design problem, and the impact of inter-service synchronisation overhead on the total number of service splits per task.
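
    The difference between non-splittable and splittable placement can be illustrated with a small model in which the same assignment variables are declared either as binaries (each service hosted entirely on one node) or as continuous fractions (a service may be divided across fog nodes and the cloud). The sketch below uses the PuLP library; the node names, capacities, and power figures are invented for illustration and are not the thesis's MILP.

        # Illustrative comparison of non-splittable vs. splittable service placement
        # (hypothetical capacities and power figures throughout).
        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpContinuous

        services = {"s1": 300, "s2": 500}                      # required processing units
        nodes = {"fog1": 400, "fog2": 400, "cloud": 10000}     # processing capacity per node
        w_per_unit = {"fog1": 1.0, "fog2": 1.0, "cloud": 2.5}  # processing + transport power per unit

        def place(splittable):
            prob = LpProblem("service_placement", LpMinimize)
            cat = LpContinuous if splittable else LpBinary
            # f[s][n] = fraction of service s hosted on node n (binary if non-splittable)
            f = LpVariable.dicts("frac", (services, nodes), lowBound=0, upBound=1, cat=cat)
            prob += lpSum(services[s] * w_per_unit[n] * f[s][n] for s in services for n in nodes)
            for s in services:                                 # each service fully placed
                prob += lpSum(f[s][n] for n in nodes) == 1
            for n in nodes:                                    # node processing capacity
                prob += lpSum(services[s] * f[s][n] for s in services) <= nodes[n]
            prob.solve()
            return prob.objective.value()

        print("non-splittable power proxy:", place(splittable=False))
        print("splittable power proxy:   ", place(splittable=True))

    With these made-up numbers the splittable case packs both services onto the fog nodes, while the non-splittable case is forced to push the larger service to the cloud, which is the intuition behind the savings reported above.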

    Energy-Efficient Distributed Machine Learning in Cloud Fog Networks

    Massive amounts of data are expected to be generated by the billions of objects that form the Internet of Things (IoT). A variety of automated services, such as monitoring, will largely depend on the use of different Machine Learning (ML) algorithms. Traditionally, ML models are processed by centralized cloud data centers, where IoT readings are offloaded to the cloud via multiple networking hops in the access, metro, and core layers. This approach inevitably leads to excessive networking power consumption as well as Quality-of-Service (QoS) degradation such as increased latency. Instead, in this paper, we propose a distributed ML approach where the processing can take place in intermediary devices such as IoT nodes and fog servers in addition to the cloud. We abstract the ML models into Virtual Service Requests (VSRs) to represent the multiple interconnected layers of a Deep Neural Network (DNN). Using Mixed Integer Linear Programming (MILP), we design an optimization model that allocates the layers of a DNN in a Cloud/Fog Network (CFN) in an energy-efficient way. We evaluate the impact of DNN input distribution on the performance of the CFN and compare the energy efficiency of this approach to the baseline where all layers of the DNNs are processed in the centralized Cloud Data Center (CDC).
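
    A minimal sketch of the layer-allocation idea, assuming a single linear chain of DNN layers and three hosting tiers (IoT, fog, cloud): each layer is assigned to one tier, assignments must move monotonically away from the data source, and the objective trades processing power against the power of moving activations across hops. It uses PuLP, and all figures and names (out_gbits, proc_w, eff, and so on) are invented for illustration; this is not the paper's VSR model.

        # Illustrative DNN layer-allocation MILP (hypothetical numbers throughout).
        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

        layers = [0, 1, 2, 3]                           # layers processed in order
        in_gbits = 8.0                                  # raw IoT data entering layer 0
        out_gbits = {0: 4.0, 1: 1.0, 2: 0.2, 3: 0.01}   # activation volume leaving each layer
        proc_w = {0: 5, 1: 8, 2: 12, 3: 12}             # baseline processing power per layer

        nodes = ["iot", "fog", "cloud"]
        hop = {"iot": 0, "fog": 1, "cloud": 2}          # network distance from the data source
        eff = {"iot": 2.0, "fog": 1.2, "cloud": 1.0}    # relative processing inefficiency per tier
        net_w_per_gbit = 3.0                            # power per Gbit per hop traversed

        prob = LpProblem("dnn_layer_allocation", LpMinimize)
        x = LpVariable.dicts("host", (layers, nodes), cat=LpBinary)  # x[l][n] = 1 if layer l runs on node n

        for l in layers:                                # each layer hosted on exactly one tier
            prob += lpSum(x[l][n] for n in nodes) == 1

        # pos[l] = hop distance of the tier hosting layer l; force a forward-only
        # pipeline so a later layer never runs closer to the source than an earlier one.
        pos = {l: lpSum(hop[n] * x[l][n] for n in nodes) for l in layers}
        for l in layers[:-1]:
            prob += pos[l + 1] >= pos[l]

        # Objective: processing power plus the power of moving the raw input to
        # layer 0 and the activations between consecutive layers across hops.
        prob += lpSum(proc_w[l] * eff[n] * x[l][n] for l in layers for n in nodes) \
              + in_gbits * net_w_per_gbit * pos[0] \
              + lpSum(out_gbits[l] * net_w_per_gbit * (pos[l + 1] - pos[l]) for l in layers[:-1])

        prob.solve()
        for l in layers:
            print("layer", l, "->", [n for n in nodes if x[l][n].value() > 0.5][0])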

    Managing Distributed Cloud Applications and Infrastructure

    The emergence of the Internet of Things (IoT), combined with greater heterogeneity not only in cloud computing architectures but across the cloud-to-edge continuum, is introducing new challenges for managing applications and infrastructure across this continuum. The scale and complexity are simply too great for IT teams to manually foresee potential issues and manage the dynamism and dependencies across an increasingly inter-dependent chain of service provision. This Open Access Pivot explores these challenges and offers a solution for the intelligent and reliable management of physical infrastructure and the optimal placement of applications for the provision of services on distributed clouds. The book provides a conceptual reference model for reliable capacity provisioning for distributed clouds and discusses how data analytics and machine learning, application and infrastructure optimization, and simulation can deliver quality-of-service requirements cost-efficiently in this complex feature space. These are illustrated through a series of case studies in cloud computing, telecommunications, big data analytics, and smart cities.
