Energy Efficient Resource Allocation for Demand Intensive Applications in a VLC Based Fog Architecture
In this paper, we propose an energy efficient passive optical network (PON)
architecture for backhaul connectivity in indoor visible light communication
(VLC) systems. The proposed network is used to support a fog computing
architecture designed to allow users with processing demands to access
dedicated fog nodes and idle processing resources in other user devices (UDs)
within the same building. The fog resources within a building complement fog
nodes at the access and metro networks and the central cloud data center. A
mixed integer linear programming (MILP) model is developed to minimize the
total power consumption associated with serving demands over the proposed
architecture. A scenario that considers applications with intensive demands is
examined to evaluate the energy efficiency of the proposed architecture. A
comparison is conducted between allocating the demands in the fog nodes and
serving the demands in the conventional cloud data center. Additionally, the
proposed architecture is compared with an architecture based on state-of-art
Spine-and-Leaf (SL) connectivity. Relative to the SL architecture and to serving
all the demands in the cloud, the adoption of the PON-based architecture
achieves 84% and 86% reductions in total power consumption, respectively.
Comment: arXiv admin note: substantial text overlap with arXiv:2203.1138
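The MILP described above minimizes total power over candidate serving locations. A minimal sketch of that idea, with a brute-force search standing in for the solver and entirely hypothetical per-site power coefficients (none of these numbers come from the paper):

```python
from itertools import product

# Hypothetical power model (illustrative numbers, not from the paper):
# each demand can be served at a building fog node, an access/metro fog
# node, or the central cloud; total power = processing + network transport.
PROC_W_PER_GFLOPS = {"building_fog": 0.10, "metro_fog": 0.08, "cloud": 0.05}
NET_W_PER_MBPS    = {"building_fog": 0.02, "metro_fog": 0.20, "cloud": 0.60}

def total_power(assignment, demands):
    """Sum processing and transport power over all (demand, site) pairs."""
    return sum(
        d["gflops"] * PROC_W_PER_GFLOPS[site] + d["mbps"] * NET_W_PER_MBPS[site]
        for d, site in zip(demands, assignment)
    )

def best_assignment(demands):
    """Exhaustively search placements (a MILP solver would scale this)."""
    sites = list(PROC_W_PER_GFLOPS)
    return min(product(sites, repeat=len(demands)),
               key=lambda a: total_power(a, demands))

demands = [{"gflops": 5, "mbps": 100}, {"gflops": 50, "mbps": 2}]
plan = best_assignment(demands)
print(plan, round(total_power(plan, demands), 2))
```

With these made-up coefficients, the traffic-heavy demand lands on the building fog node (cheap local transport) while the compute-heavy one goes to the cloud (cheap processing), which is the kind of trade-off the MILP formalizes.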
D2D Communications for Large-Scale Fog Platforms: Enabling Direct M2M Interactions
To many, fog computing is considered the next step beyond the current centralized cloud that will support the forthcoming Internet of Things (IoT) revolution. While IoT devices will still communicate with applications running in the cloud, localized fog clusters will appear with IoT devices communicating with application logic running on a proximate fog node. This will add proximity-based machine-to-machine (M2M) communications to standard cloud-computing traffic, and it calls for efficient mobility management for entire fog clusters and energy-efficient communication within them. In this context, long-term evolution-advanced (LTE-A) technology is expected to play a major role as a communication infrastructure that guarantees low deployment costs, native mobility support, and plug-and-play seamless configuration.
We investigate the role of LTE-A in future large-scale IoT systems. In particular, we analyze how the recently
standardized device-to-device (D2D) communication mode can be exploited to effectively enable direct M2M
interactions within fog clusters, and we assess the expected benefits in terms of network resources and
energy consumption. Moreover, we show how the fog-cluster architecture, and its localized-communication
paradigm, can be leveraged to devise enhanced mobility management, building on what LTE-A already has to offer.
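The energy argument for D2D within a fog cluster can be made concrete with a back-of-the-envelope comparison: relaying M2M traffic through the eNodeB costs one uplink plus one downlink transmission, while D2D needs a single low-power direct link. All radio energy figures below are hypothetical placeholders:

```python
# Hypothetical per-bit radio energy costs (nJ/bit), chosen for illustration.
UL_NJ_PER_BIT  = 50.0   # device -> eNodeB (uplink)
DL_NJ_PER_BIT  = 40.0   # eNodeB -> device (downlink)
D2D_NJ_PER_BIT = 12.0   # short-range direct device-to-device link

def energy_mj(bits, mode):
    """Energy (mJ) to move `bits` via the infrastructure path or D2D."""
    per_bit = {"infra": UL_NJ_PER_BIT + DL_NJ_PER_BIT,
               "d2d": D2D_NJ_PER_BIT}[mode]
    return bits * per_bit * 1e-9 * 1e3  # nJ -> mJ

bits = 8 * 1024 * 1024  # one 1-MiB sensor batch
saving = 1 - energy_mj(bits, "d2d") / energy_mj(bits, "infra")
print(f"D2D saves {saving:.0%} per exchange")
```

The exact saving depends entirely on the assumed link budgets; the point is only that cutting the eNodeB out of proximate exchanges removes two infrastructure transmissions per message.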
Energy Efficient Software Matching in Distributed Vehicular Fog Based Architecture with Cloud and Fixed Fog Nodes
The rapid development of vehicle on-board units and the proliferation of autonomous vehicles in modern cities create the potential for a new fog computing paradigm, referred to as vehicular fog computing (VFC). In this paper, we propose an architecture that integrates a vehicular fog (VF), composed of vehicles clustered in a parking lot, with a fixed fog node at the access network and the central cloud. We investigate the problem of energy-efficient software matching in the VF, considering different approaches to deploying software packages in vehicles.
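The software-matching problem above can be sketched as follows: serving a task on a vehicle that already caches the required package avoids the energy of downloading that package over the access network. The vehicle IDs, package sizes, and energy constants below are hypothetical, and a greedy rule stands in for the paper's optimization:

```python
# Toy software-matching sketch (all values hypothetical): each task needs one
# software package; a vehicle that already caches it serves at zero extra cost.
DOWNLOAD_J_PER_GB = 20.0
PKG_SIZE_GB = {"navigation": 1.5, "vision": 4.0}

vehicles = [
    {"id": "v1", "cached": {"navigation"}},
    {"id": "v2", "cached": {"vision"}},
    {"id": "v3", "cached": set()},
]

def match_energy(pkg, vehicle):
    """Energy (J) to make `pkg` available on `vehicle` (0 if cached)."""
    if pkg in vehicle["cached"]:
        return 0.0
    return PKG_SIZE_GB[pkg] * DOWNLOAD_J_PER_GB

def greedy_match(task_pkgs):
    """Assign each task to the vehicle with the cheapest software match."""
    plan = {}
    for pkg in task_pkgs:
        v = min(vehicles, key=lambda veh: match_energy(pkg, veh))
        plan[pkg] = (v["id"], match_energy(pkg, v))
    return plan

plan = greedy_match(["vision", "navigation"])
print(plan)
```

A real formulation would also weigh processing power, vehicle availability, and capacity limits; this sketch isolates only the cache-versus-download trade-off named in the abstract.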
Energy Efficient Virtual Machines Placement Over Cloud-Fog Network Architecture
Fog computing is an emerging paradigm that aims to improve the efficiency and QoS of cloud computing by extending the cloud to the edge of the network. This paper develops a comprehensive energy efficiency analysis framework based on mathematical modeling and heuristics to study the offloading of virtual machine (VM) services from the cloud to the fog. The analysis addresses the impact of different factors including the traffic between the VM and its users, the VM workload, the workload versus number of users profile, and the proximity of fog nodes to users. Overall, the power consumption can be reduced if the VM users’ traffic is high and/or the VMs have a linear power profile. In such a linear-profile case, the creation of multiple VM replicas does not increase the computing power consumption significantly if the number of users remains constant (there may be a slight increase due to idle/baseline power consumption); however, the VM replicas can be brought closer to the end users, thus reducing the transport network power consumption. In our scenario, the optimum placement of VMs over a cloud-fog architecture decreased the total power consumption by 56% and 64% under high user data rates compared to optimized distributed cloud placement and placement in the existing AT&T network cloud locations, respectively.
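The linear-profile argument above can be made explicit with arithmetic: under P = idle + k·load, splitting N users across R replicas leaves the load-proportional term unchanged, adds only R − 1 extra idle terms, and lets user traffic stay at the edge instead of crossing the metro/core network. A minimal sketch with illustrative constants (none taken from the paper):

```python
# Hypothetical power constants, chosen only to illustrate the trade-off.
IDLE_W, K_W_PER_USER = 10.0, 2.0           # linear VM power profile
CORE_W_PER_USER, EDGE_W_PER_USER = 5.0, 0.5  # transport power per user

def total_power(users, replicas, at_edge):
    """Computing power (linear profile) plus transport power for all users."""
    compute = replicas * IDLE_W + K_W_PER_USER * users
    transport = users * (EDGE_W_PER_USER if at_edge else CORE_W_PER_USER)
    return compute + transport

cloud = total_power(100, 1, at_edge=False)  # one VM in the central cloud
fog   = total_power(100, 4, at_edge=True)   # four replicas near the users
print(cloud, fog)
```

With these numbers the replicas add 30 W of idle power but remove 450 W of core transport, mirroring the abstract's claim that replication pays off when the power profile is linear and user traffic is high.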