
    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine efficient resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to be different in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
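
    As a hedged illustration of the four classification perspectives named in this abstract, the sketch below encodes them as a small data structure; the enum members and the example work are assumptions drawn from terms in the abstract, not the paper's full taxonomy.

        # Illustrative encoding of the four perspectives named in the abstract:
        # resource type, management objective, resource location, resource use.
        from dataclasses import dataclass
        from enum import Enum

        class ResourceType(Enum):
            COMPUTATION = "computation"
            COMMUNICATION = "communication"
            DATA = "data"
            STORAGE = "storage"
            ENERGY = "energy"

        class Objective(Enum):
            ALLOCATION = "allocation"
            ESTIMATION = "estimation"
            DISCOVERY = "discovery"
            SHARING = "sharing"

        class Location(Enum):
            END_DEVICE = "end device"
            EDGE_DEVICE = "edge device"
            CLOUD = "cloud"

        @dataclass
        class SurveyedWork:
            title: str
            resource_type: ResourceType
            objective: Objective
            location: Location
            resource_use: str  # fourth perspective, kept free-text here

        # Classifying a hypothetical offloading paper along the four axes.
        example = SurveyedWork(
            title="Energy-aware task offloading",
            resource_type=ResourceType.COMPUTATION,
            objective=Objective.ALLOCATION,
            location=Location.EDGE_DEVICE,
            resource_use="task offloading",
        )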

    Edge and Central Cloud Computing: A Perfect Pairing for High Energy Efficiency and Low-latency

    In this paper, we study the coexistence and synergy between edge and central cloud computing in a heterogeneous cellular network (HetNet), which contains a multi-antenna macro base station (MBS), multiple multi-antenna small base stations (SBSs) and multiple single-antenna user equipment (UEs). The SBSs are empowered by edge clouds offering limited computing services for UEs, whereas the MBS provides high-performance central cloud computing services to UEs via a restricted multiple-input multiple-output (MIMO) backhaul to their associated SBSs. With processing latency constraints at the central and edge networks, we aim to minimize the system energy consumption used for task offloading and computation. The problem is formulated by jointly optimizing the cloud selection, the UEs' transmit powers, the SBSs' receive beamformers, and the SBSs' transmit covariance matrices, which is a mixed-integer and non-convex optimization problem. Based on methods such as the decomposition approach and the successive pseudoconvex approach, a tractable solution is proposed via an iterative algorithm. The simulation results show that our proposed solution can achieve a significant performance gain over conventional schemes using the edge or central cloud alone. Also, with large-scale antennas at the MBS, the massive MIMO backhaul can significantly reduce the complexity of the proposed algorithm and obtain even better performance. Comment: Accepted in IEEE Transactions on Wireless Communications.
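
    As a hedged, generic sketch only (the paper's exact notation and expressions are not given in the abstract), the joint design described above can be written as an energy minimization over cloud-selection variables a_k, UE transmit powers p_k, SBS receive beamformers w_k and SBS transmit covariance matrices Q_m, with a per-UE latency bound tau_k:

        \begin{aligned}
        \min_{\{a_k\},\{p_k\},\{\mathbf{w}_k\},\{\mathbf{Q}_m\}}\quad
          & \sum_{k}\big(E_k^{\mathrm{tx}}(p_k)+E_k^{\mathrm{comp}}(a_k)\big) \\
        \text{s.t.}\quad
          & a_k\in\{0,1\} \quad\text{(edge vs.\ central cloud selection)}, \\
          & (1-a_k)\,T_k^{\mathrm{edge}}+a_k\,T_k^{\mathrm{central}}\le\tau_k, \\
          & 0\le p_k\le P_k^{\max},\qquad
            \mathbf{Q}_m\succeq\mathbf{0},\quad \operatorname{tr}(\mathbf{Q}_m)\le P_m^{\max}.
        \end{aligned}

    The binary selection variables and the non-convex rate terms hidden inside the latency expressions T_k are what make such a problem mixed-integer and non-convex, which is where the decomposition and successive pseudoconvex steps mentioned in the abstract come in.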

    Mobile, collaborative augmented reality using cloudlets

    The evolution of mobile applications to support advanced interactivity and demanding multimedia features is still ongoing. Novel application concepts (e.g. mobile Augmented Reality (AR)) are however hindered by the inherently limited resources available on mobile platforms (notwithstanding the dramatic performance increases of mobile hardware). Offloading resource-intensive application components to the cloud, also known as "cyber foraging", has proven to be a valuable solution in a variety of scenarios. This offloading concept is also highly promising for collaborative scenarios, in which data and its processing are shared between multiple users. In this paper, we investigate the challenges posed by offloading collaborative mobile applications. We present a middleware platform capable of autonomously deploying software components to minimize average CPU load while guaranteeing smooth collaboration. As a use case, we present and evaluate a collaborative AR application, offering interaction between users, the physical environment, and the virtual objects superimposed on this physical environment.
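
    A minimal sketch, not the paper's middleware, of the kind of component-placement decision described above: shared collaborative components are pinned to the cloudlet, and the rest are placed greedily to keep relative CPU load balanced. Component names, loads and capacities are illustrative assumptions.

        def place_components(components, device_capacity, cloudlet_capacity):
            """components: list of (name, cpu_load, shared_between_users)."""
            placement = {}
            device_load = 0.0
            cloudlet_load = 0.0
            # Components shared by multiple users (e.g. a common AR world model)
            # go to the cloudlet so all users work on one copy.
            for name, load, shared in sorted(components, key=lambda c: -c[1]):
                if shared:
                    placement[name] = "cloudlet"
                    cloudlet_load += load
                    continue
                # Otherwise place the component where the relative load stays lower.
                if device_load / device_capacity <= cloudlet_load / cloudlet_capacity:
                    placement[name] = "device"
                    device_load += load
                else:
                    placement[name] = "cloudlet"
                    cloudlet_load += load
            return placement, device_load, cloudlet_load

        # Toy usage: three per-user components and one shared AR scene-graph component.
        components = [("tracker", 0.4, False), ("renderer", 0.3, False),
                      ("ui", 0.1, False), ("scene_graph", 0.5, True)]
        print(place_components(components, device_capacity=1.0, cloudlet_capacity=4.0))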

    Delay-aware power optimization model for mobile edge computing systems

    Reducing the total power consumption and network delay is among the most pressing issues facing large-scale Mobile Cloud Computing (MCC) systems and their ability to satisfy the Service Level Agreement (SLA). Such systems utilize cloud computing infrastructure to support offloading some of the users' computationally heavy tasks to the cloud's datacenters. However, the delay incurred by this offloading process has led to the use of servers (called cloudlets) placed in physical proximity to the users, creating what is known as Mobile Edge Computing (MEC). The cloudlet-based infrastructure has its own challenges, such as the limited capabilities of the cloudlet system in terms of its ability to serve different request types from users across vast geographical regions. To cover the users' demand for different types of services across vast geographical regions, cloudlets cooperate with each other by passing user requests from one cloudlet to another. This cooperation affects both power consumption and delay. In this work, we present a mixed integer linear programming (MILP) model.
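
    A hedged toy MILP in the spirit of this abstract, not the paper's exact formulation: each user request is assigned to one cloudlet so as to minimise a weighted sum of power and delay under cloudlet capacity limits. The users, cloudlets, costs, weight alpha, and the PuLP-based formulation are all illustrative assumptions.

        import pulp

        users = ["u1", "u2", "u3"]
        cloudlets = ["c1", "c2"]
        power = {("u1", "c1"): 2.0, ("u1", "c2"): 3.5, ("u2", "c1"): 2.5,
                 ("u2", "c2"): 2.0, ("u3", "c1"): 4.0, ("u3", "c2"): 1.5}   # W per request
        delay = {("u1", "c1"): 5, ("u1", "c2"): 20, ("u2", "c1"): 8,
                 ("u2", "c2"): 6, ("u3", "c1"): 25, ("u3", "c2"): 4}        # ms per request
        capacity = {"c1": 2, "c2": 2}                                       # requests per cloudlet
        alpha = 0.1  # weight trading delay against power

        prob = pulp.LpProblem("delay_aware_power", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("x", [(u, c) for u in users for c in cloudlets], cat="Binary")

        # Objective: total power plus weighted total delay.
        prob += pulp.lpSum(x[u, c] * (power[u, c] + alpha * delay[u, c])
                           for u in users for c in cloudlets)

        # Each request is served by exactly one cloudlet.
        for u in users:
            prob += pulp.lpSum(x[u, c] for c in cloudlets) == 1
        # Cloudlet capacity limits.
        for c in cloudlets:
            prob += pulp.lpSum(x[u, c] for u in users) <= capacity[c]

        prob.solve()
        print({(u, c): x[u, c].value() for u in users for c in cloudlets})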

    A Review on Computational Intelligence Techniques in Cloud and Edge Computing

    Cloud computing (CC) is a centralized computing paradigm that accumulates resources centrally and provides these resources to users through the Internet. Although CC holds a large number of resources, it may not be suitable for real-time mobile applications, as it is usually far away from users geographically. On the other hand, edge computing (EC), which distributes resources to the network edge, enjoys increasing popularity in applications with low-latency and high-reliability requirements. EC provides resources in a decentralized manner and can respond to users' requirements faster than conventional CC, but with limited computing capacities. As both CC and EC are resource-sensitive, several major issues arise, such as how to conduct job scheduling, resource allocation, and task offloading, which significantly influence the performance of the whole system. To tackle these issues, many optimization problems have been formulated. These optimization problems usually have complex properties, such as non-convexity and NP-hardness, which may not be addressed by traditional convex-optimization-based solutions. Computational intelligence (CI), consisting of a set of nature-inspired computational approaches, has recently exhibited great potential in addressing these optimization problems in CC and EC. This article provides an overview of research problems in CC and EC and of recent progress in addressing them with the help of CI techniques. Informative discussions and future research trends are also presented, with the aim of offering insights to the readers and motivating new research directions.
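
    As a hedged example of the kind of nature-inspired technique the survey covers, the sketch below runs a tiny genetic algorithm on a toy binary offloading problem: bit i = 1 means "offload task i to the edge". The cost values, population size and operators are illustrative assumptions, not anything taken from the article.

        import random

        local_cost = [5.0, 3.0, 8.0, 2.0, 6.0]   # e.g. local execution energy per task
        edge_cost  = [2.0, 4.0, 3.0, 3.0, 2.5]   # e.g. transmission + edge delay cost
        N = len(local_cost)

        def fitness(bits):
            # Lower total cost is better.
            return sum(edge_cost[i] if b else local_cost[i] for i, b in enumerate(bits))

        def evolve(pop_size=20, generations=50, p_mut=0.1):
            pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness)
                elite = pop[: pop_size // 2]                      # selection
                children = []
                while len(elite) + len(children) < pop_size:
                    a, b = random.sample(elite, 2)
                    cut = random.randrange(1, N)
                    child = a[:cut] + b[cut:]                     # one-point crossover
                    child = [1 - g if random.random() < p_mut else g
                             for g in child]                      # bit-flip mutation
                    children.append(child)
                pop = elite + children
            return min(pop, key=fitness)

        best = evolve()
        print(best, fitness(best))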

    Mobile cloud computing and network function virtualization for 5g systems

    The recent growth in the number of smart mobile devices and the emergence of complex multimedia mobile applications have brought new challenges to the design of wireless mobile networks. The envisioned Fifth-Generation (5G) systems are equipped with different technical solutions that can accommodate the increasing demands for high data rate, latency-limited, energy-efficient and reliable mobile communication networks. Mobile Cloud Computing (MCC) is a key technology in 5G systems that enables the offloading of computationally heavy applications, such as augmented or virtual reality, object recognition, or gaming, from mobile devices to cloudlet or cloud servers, which are connected to wireless access points, either directly or through finite-capacity backhaul links. Given the battery-limited nature of mobile devices, mobile cloud computing is deemed to be an important enabler for the provision of such advanced applications. However, due to the variability of the communication network through which the cloud or cloudlet is accessed, computational task offloading may incur unpredictable energy expenditure or intolerable delay for the communication between mobile devices and the cloud or cloudlet servers. Therefore, the design of a mobile cloud computing system is investigated by jointly optimizing the allocation of radio, computational and backhaul resources in both uplink and downlink directions. Moreover, the users selected for cloud offloading need to have an energy consumption that is smaller than the amount required for local computing, which is achieved by means of user scheduling.

    Motivated by the application-centric drift of 5G systems and the advances in smart device manufacturing technologies, a new breed of mobile applications is being developed that is immersive, ubiquitous and highly collaborative in nature. For example, Augmented Reality (AR) mobile applications have inherent collaborative properties in terms of data collection in the uplink, computing at the cloud, and data delivery in the downlink. Therefore, the optimization of the shared computing and communication resources in MCC not only benefits from the joint allocation of both resources, but can also be enhanced more efficiently by sharing the offloaded data and computations among multiple users. As a result, a resource allocation approach whereby transmitted, received and processed data are partially shared among the users leads to more efficient utilization of the communication and computational resources.

    As a suggested architecture in 5G systems, MCC decouples the computing functionality from the platform location through the use of software virtualization to allow flexible provisioning of the provided services. Another virtualization-based technology in 5G systems is Network Function Virtualization (NFV), which prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. For that reason, the development of fault-tolerant virtualization strategies for MCC and NFV is necessary to ensure the reliability of the provided services.
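
    A minimal sketch of the scheduling rule mentioned in the first paragraph above: a user is selected for offloading only if the offloading energy is below the local-computing energy. The energy models are standard textbook approximations, not the thesis's exact expressions, and all numbers are illustrative.

        def local_energy(cycles, kappa=1e-27, freq=1e9):
            # Dynamic CMOS energy: roughly kappa * f^2 per CPU cycle.
            return kappa * cycles * freq ** 2

        def offload_energy(bits, tx_power, rate):
            # Transmit energy = power * (bits / uplink rate); server-side energy
            # is ignored from the device's point of view.
            return tx_power * bits / rate

        def schedule_user(cycles, bits, tx_power, rate):
            offload = offload_energy(bits, tx_power, rate) < local_energy(cycles)
            return "offload" if offload else "local"

        # Toy example: 1e9 CPU cycles, 2 Mbit of input data, 0.2 W transmit power,
        # 5 Mbit/s uplink rate -> offloading wins (0.08 J vs. about 1 J locally).
        print(schedule_user(cycles=1e9, bits=2e6, tx_power=0.2, rate=5e6))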

    Optimizing task allocation for edge compute micro-clusters

    There are over 30 billion devices at the network edge, largely driven by the unprecedented growth of Internet-of-Things (IoT) and 5G technologies. These devices are used in various applications and technologies, including but not limited to smart city systems, innovative agriculture management systems, and intelligent home systems. Deployment issues like networking and privacy problems dictate that computing should occur close to the data source, at or near the network edge. Edge and fog computing are recent decentralised computing paradigms proposed to augment cloud services by extending computing and storage capabilities to the network's edge, enabling computational workloads to be executed locally. The benefits can help to solve issues such as reducing the strain on the networking backhaul, improving network latency and enhancing application responsiveness. Many edge and fog computing deployment solutions and infrastructures are being employed to deliver cloud resources and services at the edge of the network, for example cloudless and mobile edge computing.

    This thesis focuses on edge micro-cluster platforms for edge computing. Edge computing micro-cluster platforms are small, compact, and decentralised groups of interconnected computing resources located close to the edge of a network. These micro-clusters typically comprise a variety of heterogeneous but resource-constrained computing resources, such as small compute nodes like Single Board Computers (SBCs), storage devices, and networking equipment, deployed in local area networks such as smart home management. The goal of edge computing micro-clusters is to bring computation and data storage closer to IoT devices and sensors in order to improve the performance and reliability of distributed systems. Resource management and workload allocation represent a substantial challenge for such resource-limited and heterogeneous micro-clusters because of the diversity in system architecture; task allocation and workload management are therefore complex problems in such micro-clusters.

    This thesis investigates the feasibility of edge micro-cluster platforms for edge computation. Specifically, the thesis examines the performance of micro-clusters in executing IoT applications, and evaluates various optimisation techniques for task allocation and workload management in edge compute micro-cluster platforms, including simple heuristic-based, mathematical and metaheuristic optimisation techniques applied to task allocation problems in reconfigurable edge computing micro-clusters. The implementation and performance evaluations take place in a realistic edge environment, using a constructed micro-cluster system comprising a group of heterogeneous computing nodes and a set of edge-relevant application benchmarks. The research overall characterises and demonstrates a feasible use case for micro-cluster platforms in edge computing environments and provides insight into the performance of various task allocation optimisation techniques for such micro-cluster systems.
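
    A hedged sketch of one simple heuristic of the kind the thesis compares against mathematical and metaheuristic methods: longest-task-first greedy allocation to the heterogeneous node that would finish the task earliest. Node names, relative speeds and task sizes are illustrative assumptions only.

        def greedy_allocate(tasks, node_speeds):
            """tasks: list of work amounts; node_speeds: relative speed per node."""
            finish_time = {node: 0.0 for node in node_speeds}
            allocation = {}
            for tid, work in sorted(enumerate(tasks), key=lambda t: -t[1]):
                # Pick the node where this task would complete soonest.
                best = min(node_speeds,
                           key=lambda n: finish_time[n] + work / node_speeds[n])
                finish_time[best] += work / node_speeds[best]
                allocation[tid] = best
            return allocation, max(finish_time.values())   # placement and makespan

        # Toy micro-cluster: two Raspberry-Pi-class nodes and one faster node.
        tasks = [12, 7, 9, 4, 15, 6]
        node_speeds = {"pi-a": 1.0, "pi-b": 1.0, "nuc": 2.5}
        print(greedy_allocate(tasks, node_speeds))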