1,892 research outputs found

    Joint Resource Allocation and Coordinated Computation Offloading for Fog Radio Access Networks

    Get PDF
    The cloud radio access network (C-RAN) and fog computing have recently been proposed to tackle dramatically increasing traffic demands and to provide better quality of service (QoS) to user equipment (UE). Considering the greater computation capability of the cloud RAN (roughly ten times that of the fog RAN) and the lower transmission delay of fog computing, we propose a joint resource allocation and coordinated computation offloading algorithm for the fog RAN (F-RAN), which takes advantage of both the C-RAN and fog computing. Specifically, the F-RAN splits a computation task into a fog computing part and a cloud computing part. Under constraints on the maximum transmission delay tolerance and on the fronthaul and backhaul capacities, we minimize the energy cost and obtain the optimal computational resource allocation for multiple UEs, the transmission power allocation of each UE, and the task splitting factor. Numerical results are presented and compared with existing methods.
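    To make the task-splitting idea concrete, the following is a minimal Python sketch of the kind of optimization the abstract describes: a task is divided between fog and cloud by a splitting factor, and the factor with the lowest energy cost that still meets the delay tolerance is selected. All parameter names and values (task size, CPU speeds, link rates, transmit power) and the simple linear cost model are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch, not the paper's exact model: a task of c_total CPU cycles is
# split between fog and cloud by a factor alpha, and we grid-search the alpha
# that minimizes an energy cost while meeting a maximum-delay tolerance.
import numpy as np

def delay_and_energy(alpha, c_total=1e9, d_bits=2e6,
                     f_fog=2e9, f_cloud=20e9,         # cloud ~10x fog capacity
                     r_access=20e6, r_fronthaul=100e6,
                     p_tx=0.5, kappa=1e-27):
    """Return (total delay, energy cost) for splitting factor alpha in [0, 1]."""
    c_fog, c_cloud = alpha * c_total, (1 - alpha) * c_total
    t_up = d_bits / r_access                       # UE -> fog access transmission
    t_fh = (1 - alpha) * d_bits / r_fronthaul      # fog -> cloud fronthaul share
    t_fog = c_fog / f_fog                          # execution of the fog part
    t_cloud = t_fh + c_cloud / f_cloud             # fronthaul plus cloud execution
    delay = t_up + max(t_fog, t_cloud)             # the two parts run in parallel
    energy = p_tx * (t_up + t_fh) + kappa * f_fog ** 2 * c_fog  # tx + fog CPU energy
    return delay, energy

def best_split(delay_budget=0.6):
    """Pick the lowest-energy splitting factor that respects the delay budget."""
    best = None
    for alpha in np.linspace(0.0, 1.0, 101):
        d, e = delay_and_energy(alpha)
        if d <= delay_budget and (best is None or e < best[0]):
            best = (e, alpha, d)
    return best

print(best_split())   # -> (energy, alpha, delay) of the best feasible split
```

    A one-dimensional grid search suffices here because only the splitting factor is varied; the paper additionally optimizes per-UE computational resources and transmit powers under fronthaul and backhaul capacity limits.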

    Energy Efficient and Low-Latency Communications for Future Wireless Networks

    Full text link
    University of Technology Sydney, Faculty of Engineering and Information Technology. The ever-growing number of smart and mobile devices, as well as their emerging applications, calls for novel solutions to address new challenges in energy efficiency and latency requirements. This thesis aims to develop novel protocols, resource allocation algorithms, and network architectures to enable low-latency services for mobile devices and applications (e.g., mission-critical applications in intelligent transportation systems, healthcare, gaming, and virtual/augmented reality). Specifically, we first introduce proactive resource allocation approaches to reduce the communications delay in machine-type communications. Exploiting the correlation between smart devices (e.g., sensors), we propose an algorithm to proactively allocate uplink resources for these devices, thereby reducing the expected uplink delay. Second, to address the energy efficiency problem for hardware-constrained devices, we propose a multi-tier task-offloading network architecture. In this novel architecture, computation tasks from these devices can be offloaded to a network of computation-aiding servers or fog/edge nodes to minimize energy consumption subject to the delay constraints of services. Because computing resources on fog nodes are usually limited while task offloading demands from user devices are high, we develop a new model that allows fog nodes and a powerful cloud server to collaborate to meet all tasks' requirements. Our experimental results demonstrate that the proposed solution attains optimal energy efficiency while meeting strict latency requirements for all devices and computing tasks. Finally, to address fairness in allocating the communication and computation resources of heterogeneous fog nodes to mobile devices with diverse requirements (i.e., delay, security, and application compatibility), we adopt the proportional fairness criterion to develop a joint task offloading and resource allocation solution. The experimental results (i.e., fairness indexes, energy benefit, and energy consumption) show that the proposed scheme attains the maximum proportional fairness in terms of the energy benefit from offloading to fog nodes.
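    The thesis's final contribution relies on the proportional fairness criterion, which maximizes the sum of logarithms of per-user benefits rather than their plain sum. The short Python sketch below contrasts the two objectives under a deliberately simple linear energy-benefit model; the coefficients and the Jain's-index comparison are illustrative assumptions, not results from the thesis.

```python
# Hedged sketch: contrast a total-benefit allocation with the proportional
# fairness criterion, under an assumed linear benefit model b_i = a_i * f_i.
import numpy as np

a = np.array([5.0, 2.0, 1.0])   # per-user energy benefit per unit of fog CPU share
F = 1.0                         # total fog CPU capacity (normalized)

def jain_index(x):
    """Jain's fairness index: 1/n <= J <= 1, where 1 means perfectly fair."""
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

# (a) maximize total benefit: all capacity goes to the strongest user
f_max = np.zeros_like(a)
f_max[np.argmax(a)] = F

# (b) proportional fairness: maximize sum(log(a_i * f_i)) s.t. sum f_i = F,
#     which for this linear model yields an equal split of the capacity
f_pf = np.full_like(a, F / len(a))

for name, f in [("max-benefit", f_max), ("proportional-fair", f_pf)]:
    b = a * f
    print(name, "total benefit:", b.sum().round(2),
          "Jain index:", jain_index(b).round(2))
```

    Under this toy model the proportional-fair split sacrifices some total benefit to guarantee every user a non-zero share, which is exactly the trade-off the log-utility criterion encodes.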

    Joint Data Compression and Computation Offloading in Hierarchical Fog-Cloud Systems

    Get PDF
    Data compression has the potential to significantly improve computation offloading performance in hierarchical fog-cloud systems. However, it remains unknown how to optimally determine the compression ratio jointly with the computation offloading decisions and the resource allocation. This joint optimization problem is studied in this paper, where we aim to minimize the maximum weighted energy and delay cost (WEDC) over all users. First, we consider a scenario where data compression is performed only at the mobile users. We prove that the optimal offloading decisions have a threshold structure. Moreover, a novel three-step approach employing convexification techniques is developed to optimize the compression ratios and the resource allocation. Then, we address the more general design where data compression is performed at both the mobile users and the fog server. We propose three efficient algorithms to overcome the strong coupling between the offloading decisions and the resource allocation. We show that the proposed optimal algorithm for data compression at only the mobile users can reduce the WEDC by a few hundred percent compared to computation offloading strategies that do not leverage data compression or that use sub-optimal optimization approaches. Moreover, the proposed algorithms for additional data compression at the fog server further reduce the WEDC.
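    The threshold structure mentioned above can be illustrated with a toy weighted energy-delay cost comparison: as the uplink rate grows, the cost of compress-then-offload drops below the cost of local execution exactly once, so the offloading decision becomes a threshold rule. The cost model and every parameter below are assumptions made for illustration, not the paper's formulation.

```python
# Hedged sketch of a threshold-style offloading decision: compare the weighted
# energy-delay cost (WEDC) of local execution against compress-then-offload as
# the uplink rate varies. All values are illustrative assumptions.
import numpy as np

def wedc_local(cycles=8e8, f_local=1e9, kappa=1e-27, w_e=0.5, w_t=0.5):
    """WEDC of executing the whole task on the mobile device."""
    t = cycles / f_local
    e = kappa * f_local ** 2 * cycles
    return w_e * e + w_t * t

def wedc_offload(rate, d_bits=4e6, rho=0.5, cycles_compress=1e8,
                 f_local=1e9, f_fog=10e9, cycles=8e8,
                 p_tx=0.4, kappa=1e-27, w_e=0.5, w_t=0.5):
    """Compress locally to a fraction rho of the input bits, then offload to the fog."""
    t_comp = cycles_compress / f_local
    e_comp = kappa * f_local ** 2 * cycles_compress
    t_tx = rho * d_bits / rate
    e_tx = p_tx * t_tx
    t_fog = cycles / f_fog
    return w_e * (e_comp + e_tx) + w_t * (t_comp + t_tx + t_fog)

rates = np.linspace(1e6, 20e6, 20)
decisions = ["offload" if wedc_offload(r) < wedc_local() else "local"
             for r in rates]
# Beyond some uplink rate the decision flips once and stays "offload".
print(list(zip((rates / 1e6).round(1), decisions)))
```

    Because the offloading cost is monotone decreasing in the uplink rate while the local cost is constant, the decision flips at most once, which is the essence of a threshold policy.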

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Full text link
    Edge computing is promoted to meet the increasing performance needs of data-driven services, using computational and storage resources close to the end devices at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine efficient resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current work within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects along four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we find that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue on Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
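    As a toy illustration of the survey's four classification perspectives, the snippet below encodes works as records along those dimensions and tallies coverage by resource type, mirroring the kind of gap analysis described above. The sample entries and field values are hypothetical, not taken from the reviewed papers.

```python
# Hypothetical encoding of the taxonomy's four perspectives; the entries are
# invented purely to show how coverage by resource type could be tallied.
from collections import Counter
from dataclasses import dataclass

@dataclass
class EdgeWork:
    title: str
    resource_type: str        # e.g. computation, communication, data, storage, energy
    objective: str            # e.g. allocation, estimation, discovery, sharing
    resource_location: str    # end device, edge device, or cloud
    resource_use: str         # what the managed resource is used for

corpus = [
    EdgeWork("hypothetical offloading scheme", "computation", "allocation",
             "edge device", "task execution"),
    EdgeWork("hypothetical caching study", "storage", "sharing",
             "edge device", "content delivery"),
]

print(Counter(w.resource_type for w in corpus))   # coverage per resource type
```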

    VirtFogSim: A parallel toolbox for dynamic energy-delay performance testing and optimization of 5G Mobile-Fog-Cloud virtualized platforms

    Get PDF
    It is expected that the pervasive deployment of multi-tier 5G-supported Mobile-Fog-Cloud technological computing platforms will constitute an effective means to support the real-time execution of future Internet applications by resource- and energy-limited mobile devices. Increasing interest in this emerging networking-computing technology demands the optimization and performance evaluation of several parts of the underlying infrastructures. However, field trials are challenging due to their operational costs, and in any case, the obtained results can be difficult to reproduce and customize. These emerging Mobile-Fog-Cloud ecosystems still lack customizable software tools for the performance simulation of their computing-networking building blocks. Motivated by these considerations, in this contribution we present VirtFogSim, a MATLAB-supported software toolbox that allows the dynamic joint optimization and tracking of the energy and delay performance of Mobile-Fog-Cloud systems for the execution of applications described by general Directed Application Graphs (DAGs). In a nutshell, the main features of the proposed VirtFogSim toolbox are that: (i) it allows the joint dynamic energy-aware optimization of the placement of the application tasks and the allocation of the needed computing-networking resources under hard constraints on acceptable overall execution times; (ii) it allows the repeatable and customizable simulation of the resulting energy-delay performance of the overall system; (iii) it allows the dynamic tracking of the performed resource allocation under time-varying operational environments, such as those typically featured by mobile applications; (iv) it is equipped with a user-friendly Graphical User Interface (GUI) that supports a number of graphic formats for data rendering; and (v) its MATLAB code is optimized to run atop multi-core parallel execution platforms. To check both the optimization and scalability capabilities of the VirtFogSim toolbox, a number of experimental setups featuring different use cases and operational environments are simulated, and their performance is compared.
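    For readers without access to the MATLAB toolbox, the following Python sketch captures, in a much-simplified form, the core decision VirtFogSim optimizes: each task of a small DAG is assigned to the mobile, fog, or cloud tier, and the assignment with the lowest device-side energy that satisfies a hard execution-time budget is found by exhaustive search. Task sizes, CPU speeds, powers, and link rates are illustrative assumptions, and contention between parallel branches on the same tier is ignored.

```python
# Minimal sketch in the spirit of VirtFogSim (not its actual MATLAB API): every
# task of a small DAG is placed on the mobile, fog, or cloud tier, and the
# placement with the lowest device-side energy that meets a hard execution-time
# budget is selected by exhaustive search over all candidate placements.
from itertools import product

TIERS = {"mobile": (1e9, 2.0), "fog": (5e9, 0.0), "cloud": (50e9, 0.0)}  # (cycles/s, device watts)
RATE, P_TX = 10e6, 0.5        # flat inter-tier link rate (bit/s), device transmit power (W)
INPUT = 2e6                   # input bits that originate on the mobile device

tasks = {"A": 2e8, "B": 6e8, "C": 6e8, "D": 2e8}                             # CPU cycles per task
data = {("A", "B"): 1e6, ("A", "C"): 1e6, ("B", "D"): 5e5, ("C", "D"): 5e5}  # bits per DAG edge

def evaluate(placement):
    """Return (makespan, device energy) of one task-to-tier placement."""
    finish, energy = {}, 0.0
    for t in ["A", "B", "C", "D"]:                        # topological order of the DAG
        speed, p_cpu = TIERS[placement[t]]
        ready = 0.0
        if t == "A" and placement[t] != "mobile":         # ship the input off the device
            ready = INPUT / RATE
            energy += P_TX * ready
        for (u, v), bits in data.items():
            if v != t:
                continue
            t_link = 0.0 if placement[u] == placement[t] else bits / RATE
            if placement[u] == "mobile" and placement[t] != "mobile":
                energy += P_TX * t_link                   # device pays for its own uplink
            ready = max(ready, finish[u] + t_link)
        finish[t] = ready + tasks[t] / speed
        energy += p_cpu * tasks[t] / speed                # device pays only for local CPU
    return finish["D"], energy

feasible = []
for combo in product(TIERS, repeat=len(tasks)):           # 3^4 = 81 candidate placements
    placement = dict(zip(tasks, combo))
    delay, energy = evaluate(placement)
    if delay <= 0.5:                                      # hard overall execution-time budget (s)
        feasible.append((energy, delay, placement))

energy, delay, placement = min(feasible, key=lambda x: x[:2])
print("energy %.2f J, delay %.2f s, placement %s" % (energy, delay, placement))
```

    Exhaustive search is only viable for this toy DAG; the toolbox itself performs dynamic joint optimization and tracking of the resource allocation under time-varying operational environments, as described in the abstract.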