
    Fog Computing: A Taxonomy, Survey and Future Directions

    In recent years, the number of Internet of Things (IoT) devices/sensors has increased enormously. To support the computational demand of real-time, latency-sensitive applications of largely geo-distributed IoT devices/sensors, a new computing paradigm named "Fog computing" has been introduced. Generally, Fog computing resides closer to the IoT devices/sensors and extends Cloud-based computing, storage and networking facilities. In this chapter, we comprehensively analyse the challenges of Fog, which acts as an intermediate layer between IoT devices/sensors and Cloud datacentres, and review the current developments in this field. We present a taxonomy of Fog computing according to the identified challenges and its key features. We also map the existing works to the taxonomy in order to identify current research gaps in the area of Fog computing. Moreover, based on these observations, we propose future directions for research.

    VirtFogSim: A parallel toolbox for dynamic energy-delay performance testing and optimization of 5G Mobile-Fog-Cloud virtualized platforms

    It is expected that the pervasive deployment of multi-tier 5G-supported Mobile-Fog-Cloud technological computing platforms will constitute an effective means to support the real-time execution of future Internet applications by resource- and energy-limited mobile devices. The growing interest in this emerging networking-computing technology demands the optimization and performance evaluation of several parts of the underlying infrastructures. However, field trials are challenging due to their operational costs, and in any case the obtained results could be difficult to repeat and customize. Indeed, these emerging Mobile-Fog-Cloud ecosystems still lack customizable software tools for the performance simulation of their computing-networking building blocks. Motivated by these considerations, in this contribution we present VirtFogSim, a MATLAB-supported software toolbox that allows the dynamic joint optimization and tracking of the energy and delay performance of Mobile-Fog-Cloud systems for the execution of applications described by general Directed Application Graphs (DAGs). In a nutshell, the main distinguishing features of the proposed VirtFogSim toolbox are that: (i) it allows the joint dynamic energy-aware optimization of the placement of the application tasks and the allocation of the needed computing-networking resources under hard constraints on acceptable overall execution times; (ii) it allows the repeatable and customizable simulation of the resulting energy-delay performance of the overall system; (iii) it allows the dynamic tracking of the performed resource allocation under time-varying operational environments, such as those typically featuring mobile applications; (iv) it is equipped with a user-friendly Graphical User Interface (GUI) that supports a number of graphic formats for data rendering; and (v) its MATLAB code is optimized for running atop multi-core parallel execution platforms. To check both the actual optimization and scalability capabilities of the VirtFogSim toolbox, a number of experimental setups featuring different use cases and operational environments are simulated, and their performances are compared.
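
    The kind of optimization VirtFogSim automates can be illustrated with a toy sketch. The Python snippet below is an assumption-laden illustration, not part of the MATLAB toolbox: it enumerates all placements of a four-task application DAG over the mobile, fog and cloud tiers, evaluates a crude device-energy and makespan model for each, and keeps the cheapest placement that meets a hard deadline. Every speed, energy and link figure is invented for illustration.

# Toy sketch only -- NOT the VirtFogSim toolbox (which is MATLAB): exhaustive
# minimum-energy placement of a small application DAG over the mobile, fog and
# cloud tiers under a hard deadline on the overall execution time.
from itertools import product

TIERS = ["mobile", "fog", "cloud"]
SPEED = {"mobile": 1e9, "fog": 5e9, "cloud": 2e10}          # ops/s (assumed)
ENERGY_PER_OP = {"mobile": 2e-9, "fog": 0.0, "cloud": 0.0}  # device J/op (assumed)
TX_ENERGY_PER_BIT = {"mobile": 0.0, "fog": 1e-7, "cloud": 1e-7}  # device J/bit
LINK_RATE = {"fog": 20e6, "cloud": 5e6}                     # bit/s off the device

# Application DAG in topological order: task -> (ops, data_bits, predecessors)
DAG = {
    "sense":  (1e6, 2e6, []),
    "filter": (5e7, 1e6, ["sense"]),
    "infer":  (4e8, 1e5, ["filter"]),
    "report": (1e6, 1e4, ["infer"]),
}
DEADLINE_S = 0.5  # hard constraint on the end-to-end execution time

def evaluate(placement):
    """Device energy (J) and makespan (s) for one task -> tier assignment."""
    finish, energy = {}, 0.0
    for task, (ops, data_bits, preds) in DAG.items():
        tier = placement[task]
        start = max((finish[p] for p in preds), default=0.0)
        # Offloaded tasks pay a transfer of their data over the device's radio link.
        tx_time = 0.0 if tier == "mobile" else data_bits / LINK_RATE[tier]
        finish[task] = start + tx_time + ops / SPEED[tier]
        energy += ops * ENERGY_PER_OP[tier] + data_bits * TX_ENERGY_PER_BIT[tier]
    return energy, max(finish.values())

best = None
for combo in product(TIERS, repeat=len(DAG)):   # 3^4 = 81 candidate placements
    energy, makespan = evaluate(dict(zip(DAG, combo)))
    if makespan <= DEADLINE_S and (best is None or energy < best[0]):
        best = (energy, makespan, dict(zip(DAG, combo)))

print("best feasible placement:", best)

    Exhaustive enumeration is only viable for tiny DAGs; a real tool would replace it with the kind of heuristic or solver-based search the abstract describes, together with the dynamic tracking and parallel execution features listed above.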

    Fog-supported delay-constrained energy-saving live migration of VMs over MultiPath TCP/IP 5G connections

    The incoming era of fifth-generation, fog-computing-supported radio access networks (5G FOGRANs, for short) aims at exploiting computing/networking resource virtualization in order to augment the limited resources of wireless devices through the seamless live migration of virtual machines (VMs) toward nearby fog data centers. For this purpose, the bandwidths of the multiple wireless network interface cards of a wireless device may be aggregated under the control of the emerging MultiPath TCP (MPTCP) protocol. However, due to fading and mobility-induced phenomena, the energy consumption of current state-of-the-art VM migration techniques may still offset their expected benefits. Motivated by these considerations, in this paper, we analytically characterize, implement in software, and numerically test the optimal minimum-energy settable-complexity bandwidth manager (SCBM) for the live migration of VMs over 5G FOGRAN MPTCP connections. The key features of the proposed SCBM are that: 1) its implementation complexity is settable online on the basis of the target energy consumption versus implementation complexity tradeoff; 2) it minimizes the network energy consumed by the wireless device to sustain the migration process under hard constraints on the tolerated migration times and downtimes; and 3) by leveraging a suitably designed adaptive mechanism, it is capable of quickly reacting to (possibly unpredicted) fading and/or mobility-induced abrupt changes of the wireless environment without requiring forecasting. The actual effectiveness of the proposed SCBM is supported by extensive energy versus delay performance comparisons that cover: 1) a number of heterogeneous 3G/4G/WiFi FOGRAN scenarios; 2) synthetic and real-world workloads; and 3) MPTCP and wireless connections.
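
    As a rough illustration of the bandwidth-management problem the SCBM addresses, the sketch below (an assumed linear energy-per-bit model, not the paper's algorithm) splits the state of a migrating VM across the subflows of an MPTCP connection so that device-side energy is minimized while the transfer still finishes within a hard migration deadline; all capacities, energy figures and sizes are made up.

# Toy sketch only -- NOT the paper's SCBM: split the bits of a VM migration across
# the subflows of an MPTCP connection so that device-side energy is minimized while
# the transfer finishes within a hard deadline. Linear energy-per-bit model assumed.

VM_BITS = 1e9 * 8      # state to migrate: 1 GB (illustrative)
DEADLINE_S = 90.0      # hard constraint on the migration time

# (subflow name, capacity in bit/s, device energy in J/bit) -- assumed figures
PATHS = [
    ("wifi", 80e6, 2e-8),
    ("lte",  40e6, 8e-8),
    ("3g",   10e6, 3e-7),
]

def allocate(vm_bits, deadline, paths):
    """Greedy minimum-energy split: saturate the cheapest (J/bit) subflow first."""
    if vm_bits > sum(cap for _, cap, _ in paths) * deadline:
        raise ValueError("deadline infeasible even with every subflow saturated")
    plan, remaining = [], vm_bits
    for name, cap, j_per_bit in sorted(paths, key=lambda p: p[2]):
        bits = min(remaining, cap * deadline)
        if bits > 0:
            # constant rate that delivers `bits` exactly at the deadline
            plan.append((name, bits / deadline, bits * j_per_bit))
            remaining -= bits
    return plan

plan = allocate(VM_BITS, DEADLINE_S, PATHS)
for name, rate, energy in plan:
    print(f"{name}: {rate / 1e6:6.1f} Mbit/s, {energy:6.1f} J")
print(f"total device energy: {sum(e for *_, e in plan):.1f} J")

    With a linear per-bit energy model, filling the cheapest path first is optimal; the paper's SCBM additionally handles time-varying channels, downtime constraints and a tunable complexity budget, none of which this sketch attempts.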

    Workload allocation in mobile edge computing empowered internet of things

    In the past few years, a tremendous number of smart devices and objects, such as smartphones, wearables, and industrial and utility components, have been equipped with sensors to capture real-time physical information from the environment. This has given rise to the Internet of Things (IoT), in which smart devices are connected to each other via the Internet and empowered with data analytics. Owing to the high volume and velocity of the data streams generated by IoT devices, the cloud, which can provision flexible and efficient computing resources, is employed as a smart brain to process and store this big data. However, since the remote cloud is far from the IoT users that send application requests and await the results, the response time of the requests may be too long, and in particular unbearable for delay-sensitive IoT applications. Therefore, edge computing resources (e.g., cloudlets and fog nodes) that are close to IoT devices and users can be employed to alleviate the traffic load in the core network and minimize the response time for IoT users.

    In edge computing, the communications latency critically affects the response time of IoT user requests. Owing to the dynamic distribution of IoT users (i.e., UEs), drone base stations (DBSs), which can be flexibly deployed over hotspot areas, can improve the wireless latency of IoT users by offloading the heavy traffic of macro BSs. Drone-based communications poses two major challenges: 1) a DBS should be deployed in suitable areas with heavy traffic demand so as to serve more UEs, and 2) the traffic load should be allocated among macro BSs and DBSs so as to avoid congestion. Therefore, a TrAffic Load baLancing (TALL) scheme for such a drone-assisted fog network is proposed to minimize the wireless latency of IoT users. The problem is decomposed into two sub-problems, and two algorithms are designed to optimize the DBS placement and the user association, respectively. Extensive simulations have been set up to validate the performance of the proposed scheme.

    Meanwhile, various IoT applications can be run in cloudlets to reduce the response time between IoT users (e.g., user equipment in mobile networks) and cloudlets. Because each application's workload varies among cloudlets in space and time, the workload allocation among cloudlets for each IoT application affects the response time of that application's requests. To solve this problem, an Application awaRE workload Allocation (AREA) scheme for edge-computing-based IoT is designed to minimize the response time of IoT application requests by determining the destination cloudlet for each type of request from each IoT user and the amount of computing resources allocated to each application in each cloudlet. Both the network delay and the computing delay are taken into account, so that requests are more likely to be assigned to closer and lightly loaded cloudlets. The performance of the proposed scheme has been validated by extensive simulations.

    In addition, the latency of IoT data flows consists of both communications latency and computing latency, and when some BSs and fog nodes are lightly loaded, other, overloaded BSs and fog nodes may suffer congestion. Thus, a workload balancing scheme in a fog network is proposed to minimize the latency of IoT data in the communications and processing procedures by associating IoT devices with suitable BSs. The convergence and optimality of the proposed workload balancing scheme are proved, and its performance is validated through extensive simulations.
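
    The delay-aware assignment idea behind AREA and the workload balancing scheme can be sketched in a few lines. The snippet below is a hypothetical greedy baseline, not the dissertation's algorithms: it routes each batch of IoT requests to the cloudlet with the smallest estimated response time, computed as an assumed network delay plus an M/M/1-style computing delay that grows as a cloudlet approaches saturation; all delays, capacities and request rates are illustrative.

# Toy sketch only -- NOT the dissertation's AREA/TALL algorithms: greedily send each
# batch of IoT requests to the cloudlet with the smallest estimated response time,
# where response time = network delay + a load-dependent computing delay.

# Assumed one-way network delay (ms) from each user region to each cloudlet.
NET_DELAY_MS = {
    "north": {"c1": 5,  "c2": 20, "c3": 35},
    "south": {"c1": 30, "c2": 8,  "c3": 25},
    "east":  {"c1": 25, "c2": 22, "c3": 6},
}
CAPACITY_RPS = {"c1": 100, "c2": 80, "c3": 60}  # requests/s each cloudlet can serve

def compute_delay_ms(load_rps, capacity_rps):
    """M/M/1-style processing delay; diverges as the cloudlet approaches saturation."""
    if load_rps >= capacity_rps:
        return float("inf")
    return 1000.0 / (capacity_rps - load_rps)

def assign(requests):
    """requests: list of (user_region, rate_rps); returns the chosen cloudlet per batch."""
    load = {c: 0.0 for c in CAPACITY_RPS}
    assignment = []
    for region, rate in requests:
        best = min(
            CAPACITY_RPS,
            key=lambda c: NET_DELAY_MS[region][c]
                          + compute_delay_ms(load[c] + rate, CAPACITY_RPS[c]),
        )
        load[best] += rate
        assignment.append((region, rate, best))
    return assignment, load

demo = [("north", 30), ("south", 40), ("east", 25), ("north", 35), ("south", 20)]
plan, load = assign(demo)
for region, rate, cloudlet in plan:
    print(f"{rate:>3} req/s from {region} -> {cloudlet}")
print("resulting load:", load)

    Because the computing-delay term diverges near saturation, the greedy rule naturally steers load away from busy cloudlets, which is the intuition the abstract expresses as preferring closer and lightly loaded cloudlets.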