
    Workload allocation in mobile edge computing empowered internet of things

    In the past few years, a tremendous number of smart devices and objects, such as smartphones, wearable devices, and industrial and utility components, have been equipped with sensors to sense real-time physical information from the environment. Hence, the Internet of Things (IoT) is introduced, where various smart devices are connected with each other via the internet and empowered with data analytics. Owing to the high volume and fast velocity of the data streams generated by IoT devices, the cloud, which can provision flexible and efficient computing resources, is employed as a smart brain to process and store the big data generated by IoT devices. However, since the remote cloud is far from the IoT users that send application requests and await the results of the processing, the response time of the requests may be too long, which is especially unbearable for delay-sensitive IoT applications. Therefore, edge computing resources (e.g., cloudlets and fog nodes), which are close to IoT devices and IoT users, can be employed to alleviate the traffic load in the core network and minimize the response time for IoT users.

    In edge computing, the communications latency critically affects the response time of IoT user requests. Owing to the dynamic distribution of IoT users (i.e., UEs), a drone base station (DBS), which can be flexibly deployed over hotspot areas, can potentially improve the wireless latency of IoT users by mitigating the heavy traffic loads of macro BSs. Drone-based communication poses two major challenges: 1) the DBS should be deployed in suitable areas with heavy traffic demands to serve more UEs; 2) the traffic loads in the network should be balanced among macro BSs and DBSs to avoid traffic congestion. Therefore, a TrAffic Load baLancing (TALL) scheme for such a drone-assisted fog network is proposed to minimize the wireless latency of IoT users. In this scheme, the problem is decomposed into two sub-problems, and two algorithms are designed to optimize the DBS placement and the user association, respectively. Extensive simulations have been set up to validate the performance of the proposed scheme.

    Meanwhile, various IoT applications can be run in cloudlets to reduce the response time between IoT users (e.g., user equipments in mobile networks) and cloudlets. Considering the spatial and temporal dynamics of each application's workloads among cloudlets, the workload allocation among cloudlets for each IoT application affects the response time of the application's requests. To solve this problem, an Application awaRE workload Allocation (AREA) scheme for edge-computing-based IoT is designed to minimize the response time of IoT application requests by determining the destination cloudlets for each IoT user's different types of requests and the amount of computing resources allocated to each application in each cloudlet. In this scheme, both the network delay and the computing delay are taken into account, i.e., IoT users' requests are more likely to be assigned to closer and lightly loaded cloudlets. The performance of the proposed scheme has been validated by extensive simulations.

    In addition, the latency of data flows from IoT devices consists of both the communications latency and the computing latency. When some BSs and fog nodes are lightly loaded, other, overloaded BSs and fog nodes may suffer congestion. Thus, a workload balancing scheme in a fog network is proposed to minimize the latency of IoT data in the communications and processing procedures by associating IoT devices with suitable BSs. Furthermore, the convergence and the optimality of the proposed workload balancing scheme have been proven. Through extensive simulations, the performance of the proposed load balancing scheme is validated.
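    To make the delay-aware allocation idea concrete, the following minimal Python sketch assigns a request to the cloudlet with the smallest estimated response time, combining the network delay with a load-dependent computing delay. The function name, the delay model, and all numbers are illustrative assumptions, not the formulation used in the work above.

```python
# Illustrative sketch of delay-aware request assignment in the spirit of the
# AREA scheme: a request goes to the cloudlet minimizing network delay plus a
# load-dependent computing delay. The delay model and all values are made up.

def assign_request(request_work, net_delay_ms, queued_work, service_rate):
    """Return (cloudlet index, estimated response time in ms).

    net_delay_ms[i] -- one-way network delay between the user and cloudlet i
    queued_work[i]  -- work already queued at cloudlet i (CPU-seconds)
    service_rate[i] -- processing speed of cloudlet i (CPU-seconds per second)
    """
    best_i, best_t = None, float("inf")
    for i, (d, q, mu) in enumerate(zip(net_delay_ms, queued_work, service_rate)):
        computing_delay_ms = (q + request_work) / mu * 1000.0
        total = 2.0 * d + computing_delay_ms  # round-trip network + processing
        if total < best_t:
            best_i, best_t = i, total
    return best_i, best_t


if __name__ == "__main__":
    # The nearest cloudlet is heavily loaded, so the request is pushed to a
    # farther but lightly loaded one.
    idx, t = assign_request(
        request_work=0.5,
        net_delay_ms=[5.0, 20.0, 40.0],
        queued_work=[8.0, 1.0, 0.5],
        service_rate=[2.0, 2.0, 2.0],
    )
    print(f"assign to cloudlet {idx}, estimated response time {t:.1f} ms")
```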

    Secure mobile edge server placement using multi-agent reinforcement learning

    Funding Information: This work is supported by King Khaled University under Grant Agreement No. 6204.

    QoS-aware service continuity in the virtualized edge

    5G systems are envisioned to support numerous delay-sensitive applications such as the tactile Internet, mobile gaming, and augmented reality. Such applications impose new demands on service providers in terms of the quality of service (QoS) provided to end-users. Meeting these demands in mobile 5G-enabled networks represents a technical and administrative challenge. One of the proposed solutions is to provide cloud computing capabilities at the edge of the network. In this vision, services are cloudified and encapsulated within virtual machines or containers placed in cloud hosts at the network access layer. To enable ultra-short processing times and immediate service response, fast instantiation and migration of service instances between edge nodes are mandatory to cope with the consequences of user mobility. This paper surveys the techniques proposed for service migration at the edge of the network. We focus on QoS-aware service instantiation and migration approaches, comparing the mechanisms they follow and emphasizing their advantages and disadvantages. Then, we highlight the open research challenges that remain unaddressed.

    Cost-Efficient NFV-Enabled Mobile Edge-Cloud for Low Latency Mobile Applications

    Mobile edge-cloud (MEC) aims to support low-latency mobile services by bringing remote cloud services nearer to mobile users. However, in order to deal with dynamic workloads, MEC is deployed as a large number of fixed-location micro-clouds, leading to resource wastage during stable/low-workload periods. Limiting the number of micro-clouds improves resource utilization and saves operational costs, but may degrade service performance when nearby micro-clouds lack sufficient physical capacity during peak times. To efficiently support services with low latency requirements under varying workload conditions, we adopt the emerging Network Function Virtualization (NFV)-enabled MEC, which offers new flexibility in hosting MEC services on any virtualized network node, e.g., access points, routers, etc. This flexibility overcomes the limitations imposed by fixed-location solutions, providing new freedom in the choice of MEC service-hosting locations. In this paper, we address the questions of where and when to allocate resources, as well as how many resources to allocate, among NFV-enabled MECs, such that both the low latency requirements of mobile services and MEC cost efficiency are achieved. We propose a dynamic resource allocation framework that consists of a fast heuristic-based incremental allocation mechanism that dynamically performs resource allocation and a reoptimization algorithm that periodically adjusts the allocation to maintain a near-optimal MEC operational cost over time. We show through extensive simulations that our flexible framework always manages to allocate sufficient resources in time to guarantee continuous satisfaction of applications' low latency requirements. At the same time, our proposal saves up to 33% of the cost in comparison to existing fixed-location MEC solutions.
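    As a rough illustration of the two-timescale idea described above, the sketch below pairs a fast incremental step (grab capacity at the lowest-delay NFV-enabled node that still meets a request's latency bound) with a slower periodic re-pack that prefers cheaper nodes. The node names, delays, capacities, costs, and the greedy logic are hypothetical placeholders, not the paper's algorithms.

```python
# Hypothetical sketch of a fast incremental allocation step plus a periodic
# reoptimization pass for NFV-enabled MEC. Node names, delays, capacities,
# and costs are placeholders, not values or logic from the paper.

def incremental_allocate(demand, max_delay_ms, node_delay_ms, allocated, node_cap):
    """Fast path: serve `demand` at the lowest-delay node that satisfies the
    latency bound and still has spare capacity; return the node or None."""
    for node in sorted(node_delay_ms, key=node_delay_ms.get):
        if node_delay_ms[node] > max_delay_ms:
            return None  # every remaining node is even farther away
        if allocated[node] + demand <= node_cap[node]:
            allocated[node] += demand
            return node
    return None


def reoptimize(demands, node_delay_ms, node_cap, cost_per_unit):
    """Slow path: rebuild the whole allocation, tightest latency bounds first,
    preferring cheaper nodes, to trim operational cost between peaks."""
    allocated = {n: 0.0 for n in node_cap}
    for demand, max_delay_ms in sorted(demands, key=lambda d: d[1]):
        for node in sorted(node_cap, key=lambda n: cost_per_unit[n]):
            if (node_delay_ms[node] <= max_delay_ms
                    and allocated[node] + demand <= node_cap[node]):
                allocated[node] += demand
                break
    return allocated


if __name__ == "__main__":
    delays = {"ap1": 2.0, "router1": 6.0, "core-dc": 25.0}
    caps = {"ap1": 4.0, "router1": 8.0, "core-dc": 100.0}
    costs = {"ap1": 3.0, "router1": 2.0, "core-dc": 1.0}
    alloc = {n: 0.0 for n in caps}
    print(incremental_allocate(3.0, 10.0, delays, alloc, caps))   # -> 'ap1'
    print(reoptimize([(3.0, 10.0), (5.0, 30.0)], delays, caps, costs))
```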

    Situation-aware Edge Computing

    Future wireless networks must cope with an increasing amount of data that needs to be transmitted to or from mobile devices. Furthermore, novel applications, e.g., augmented reality games or autonomous driving, require low latency and high bandwidth at the same time. To address these challenges, the paradigm of edge computing has been proposed. It brings computing closer to the users and takes advantage of the capabilities of telecommunication infrastructures, e.g., cellular base stations or wireless access points, but also of end-user devices such as smartphones, wearables, and embedded systems. However, edge computing introduces its own challenges, e.g., economic and business-related questions or device mobility. Being aware of the current situation, i.e., the domain-specific interpretation of environmental information, makes it possible to develop approaches targeting these challenges. In this thesis, the novel concept of situation-aware edge computing is presented. It is divided into three areas: situation-aware infrastructure edge computing, situation-aware device edge computing, and situation-aware embedded edge computing. To this end, the concepts of situation and situation-awareness are introduced. Furthermore, challenges are identified for each area, and corresponding solutions are presented. In the area of situation-aware infrastructure edge computing, economic and business-related challenges are addressed, since companies offering services and infrastructure edge computing facilities have to find agreements regarding the prices for allowing others to use them. In the area of situation-aware device edge computing, the main challenge is to find suitable nodes that can execute a service and to predict a node's connection in the near future. Finally, to enable situation-aware embedded edge computing, two novel programming and data analysis approaches are presented that allow programmers to develop situation-aware applications. To show the feasibility, applicability, and importance of situation-aware edge computing, two case studies are presented. The first case study shows how situation-aware edge computing can provide services for emergency response applications, while the second case study presents an approach where network transitions can be implemented in a situation-aware manner.

    SOSW: Scalable and optimal nearsighted location selection for fog node deployment and routing in SDN-based wireless networks for IoT systems

    In a fog computing (FC) architecture, cloud services migrate towards the network edge and operate via edge devices such as access points (APs), routers, and switches. These devices become part of a virtualization infrastructure and are referred to as "fog nodes". Recently, software-defined networking (SDN) has been used in FC to improve its control and manageability. The current SDN-based FC literature has overlooked two issues: (a) the deployment of fog nodes at optimal locations and (b) SDN best-path computation for data flows based on constraints (i.e., end-to-end delay and link utilization). To solve these optimization problems, this paper suggests a novel approach, called scalable and optimal near-sighted location selection for fog node deployment and routing in SDN-based wireless networks for IoT systems (SOSW). First, the SOSW model applies two linear algebra methods, singular value decomposition (SVD) and QR factorization with column pivoting, to the traffic matrix of the network to compute the optimal locations for fog nodes; second, it introduces a new heuristic-based traffic engineering algorithm, called the constraint-based shortest path algorithm (CSPA), which uses ant colony optimization (ACO) to optimize the path computation process for task offloading. The results show that our proposed approach significantly reduces average latency and energy consumption in comparison with existing approaches.
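    The location-selection step can be read as a column-subset-selection problem; the short sketch below shows one generic way to combine SVD with pivoted QR on a traffic matrix, assuming rows are time slots and columns are candidate edge devices. The random matrix, the choice of k, and the matrix construction are placeholder assumptions rather than the SOSW model itself.

```python
# Generic column-subset-selection sketch in the spirit of "SVD + QR with
# column pivoting on the traffic matrix": pick the k candidate locations whose
# traffic columns best span the observed traffic pattern. The data, k, and
# matrix construction are placeholders, not the SOSW model.

import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
T = rng.random((96, 12))      # 96 time slots x 12 candidate edge devices
k = 3                         # number of fog nodes to deploy

# SVD step: the top-k right singular vectors capture the dominant traffic modes.
_, _, Vt = np.linalg.svd(T, full_matrices=False)

# Pivoted QR on those vectors: the first k column pivots are the candidate
# locations that best represent the dominant modes.
_, _, pivots = qr(Vt[:k, :], pivoting=True)
selected = sorted(int(p) for p in pivots[:k])
print("deploy fog nodes at candidate locations:", selected)
```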