
    Towards delay-aware container-based Service Function Chaining in Fog Computing

    The fifth-generation mobile network (5G) has recently been receiving significant attention. Empowered by Network Function Virtualization (NFV), 5G networks aim to support diverse services coming from different business verticals (e.g. Smart Cities, Automotive). To fully leverage NFV, services must be connected in a specific order, forming a Service Function Chain (SFC). SFCs allow mobile operators to benefit from the high flexibility and low operational costs introduced by network softwarization. Additionally, Cloud computing is evolving towards a distributed paradigm called Fog Computing, which aims to provide a distributed cloud infrastructure by placing computational resources close to end-users. However, most SFC research focuses only on Multi-access Edge Computing (MEC) use cases where mobile operators aim to deploy services close to end-users. Bi-directional communication between the Edge and the Cloud is not considered in MEC, whereas it is highly important in a Fog environment, for example in distributed anomaly detection services. Therefore, in this paper, we propose an SFC controller to optimize the placement of service chains in Fog environments, specifically tailored for Smart City use cases. Our approach has been validated on the Kubernetes platform, an open-source orchestrator for the automatic deployment of micro-services. Our SFC controller has been implemented as an extension to the scheduling features available in Kubernetes, enabling the efficient provisioning of container-based SFCs while optimizing resource allocation and reducing the end-to-end (E2E) latency. Results show that the proposed approach can lower the network latency by up to 18% for the studied use case while conserving bandwidth, compared to the default scheduling mechanism.
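    To illustrate the kind of decision such a delay-aware scheduler extension makes, the minimal sketch below (not the authors' implementation) scores candidate nodes for the next service of an SFC, preferring nodes with low latency towards the previously placed service, subject to a resource fit. All node names, latency values, and resource figures are hypothetical.

    # Minimal sketch, assuming hypothetical nodes and latency measurements,
    # of a delay-aware node-scoring step for placing the next SFC service.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        free_cpu: float            # available CPU cores
        latency_ms_to_prev: float  # latency to the node hosting the previous SFC service

    def score(node: Node, cpu_request: float, max_latency_ms: float = 50.0) -> float:
        """Return a score in [0, 100]; higher is better. Nodes that cannot fit the request score 0."""
        if node.free_cpu < cpu_request:
            return 0.0
        # Favor nodes with low latency towards the previous service in the chain.
        latency_penalty = min(node.latency_ms_to_prev, max_latency_ms) / max_latency_ms
        return 100.0 * (1.0 - latency_penalty)

    if __name__ == "__main__":
        candidates = [
            Node("edge-node-1", free_cpu=2.0, latency_ms_to_prev=5.0),
            Node("edge-node-2", free_cpu=0.5, latency_ms_to_prev=2.0),
            Node("cloud-node-1", free_cpu=8.0, latency_ms_to_prev=40.0),
        ]
        best = max(candidates, key=lambda n: score(n, cpu_request=1.0))
        print("place next SFC service on:", best.name)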

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine efficient resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.

    A Survey on Scheduling the Task in Fog Computing Environment

    With the rapid increase in the Internet of Things (IoT), the amount of data produced and processed has also increased. Cloud Computing facilitates the storage, processing, and analysis of data as needed. However, cloud computing resources are located far away from the IoT devices. Fog computing has emerged as a small-scale cloud computing paradigm that sits near the edge devices and handles tasks efficiently. Fog nodes have smaller storage capacity than cloud nodes, but they are designed and deployed near the edge devices so that requests can be served efficiently and executed on time. In this survey paper, we investigate and analyse the main challenges and issues raised in scheduling tasks in the fog computing environment. To the best of our knowledge, there is no comprehensive survey paper on the challenges of task scheduling in the fog computing paradigm. The survey covers research published from 2018 to 2021, with most of the selected papers from 2020-2021. Moreover, this survey organizes the task scheduling approaches and systematically maps the identified challenges and issues. Based on the identified issues, we highlight future research directions in the field of task scheduling in the fog computing environment.

    Microservices-based IoT Applications Scheduling in Edge and Fog Computing: A Taxonomy and Future Directions

    Edge and Fog computing paradigms utilise distributed, heterogeneous and resource-constrained devices at the edge of the network for the efficient deployment of latency-critical and bandwidth-hungry IoT application services. Moreover, MicroService Architecture (MSA) is increasingly adopted to keep up with the rapid development and deployment needs of fast-evolving IoT applications. Due to the fine-grained modularity of microservices, along with their independently deployable and scalable nature, MSA exhibits great potential in harnessing both Fog and Cloud resources to meet the diverse QoS requirements of IoT application services, thus giving rise to novel paradigms like Osmotic computing. However, efficient and scalable scheduling algorithms are required to exploit these characteristics of MSA while overcoming the novel challenges introduced by the architecture. To this end, we present a comprehensive taxonomy of recent literature on microservices-based IoT application scheduling in Edge and Fog computing environments. Furthermore, we organise multiple taxonomies to capture the main aspects of the scheduling problem, analyse and classify related works, identify research gaps within each category, and discuss future research directions. Comment: 35 pages, 10 figures, submitted to ACM Computing Surveys.

    Probabilistic QoS-aware Placement of VNF chains at the Edge

    Deploying IoT-enabled Virtual Network Function (VNF) chains to Cloud-Edge infrastructures requires determining a placement for each VNF that satisfies all set deployment requirements, as well as a software-defined routing of traffic flows between consecutive functions that meets all set communication requirements. In this article, we present a declarative solution, EdgeUsher, to the problem of how to best place VNF chains on Cloud-Edge infrastructures. EdgeUsher can determine all eligible placements of a set of VNF chains onto a Cloud-Edge infrastructure so as to satisfy all of their hardware, IoT, security, bandwidth, and latency requirements. It exploits probability distributions to model the dynamic variations in the available Cloud-Edge infrastructure, and to assess output eligible placements against those variations.
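    As a rough illustration of assessing a placement against probabilistic infrastructure variations (this is not the authors' declarative implementation), the sketch below estimates, via Monte Carlo sampling, the probability that the end-to-end latency of a placed chain stays below a bound when each traversed link's latency is modelled as a distribution. The distribution parameters and the latency bound are hypothetical.

    # Minimal sketch, assuming hypothetical per-link latency distributions,
    # of checking a VNF-chain placement against a probabilistic latency requirement.
    import random

    # Latency (ms) of each link traversed by the chain, modelled as
    # normal distributions (mean, std dev) to capture variability.
    link_latency_models = [(5.0, 1.0), (12.0, 3.0), (8.0, 2.0)]

    def e2e_latency_sample() -> float:
        """Draw one end-to-end latency sample for the placed chain."""
        return sum(max(0.0, random.gauss(mu, sigma)) for mu, sigma in link_latency_models)

    def prob_within_bound(bound_ms: float, trials: int = 100_000) -> float:
        """Monte Carlo estimate of P(end-to-end latency <= bound)."""
        hits = sum(1 for _ in range(trials) if e2e_latency_sample() <= bound_ms)
        return hits / trials

    if __name__ == "__main__":
        print("P(E2E latency <= 30 ms) ~", round(prob_within_bound(30.0), 3))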

    Latency-aware cost optimization of the service infrastructure placement in 5G networks

    Under 5G use case scenarios, latency is a main challenge that must be addressed, since mission-critical environments are mostly delay-sensitive. To achieve this goal, the placement of the service infrastructure must be optimized in order to minimize delays in the service access layer. To solve this problem, this paper mathematically models the placement problem in a Fog Computing/NFV environment as a Mixed-Integer Linear Programming (MILP) problem and proposes a heuristic-based solution considering 5G mobile network requirements. As a practical result, an application was developed to achieve usability and flexibility while ensuring operational applicability of the proposed methods.
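    To make the optimization concrete, the sketch below shows a toy latency-aware placement MILP solved with the PuLP library; it is not the paper's exact formulation, and the services, nodes, costs, capacities, and latency bounds are hypothetical. Each service is placed on exactly one node, nodes that violate the service's latency requirement are excluded, node capacities are respected, and total hosting cost is minimized.

    # Minimal sketch, assuming hypothetical services, nodes and requirements,
    # of a latency-constrained placement MILP using PuLP.
    import pulp

    services = {"s1": 2, "s2": 1}                    # CPU demand per service
    nodes = {"edge1": (4, 10, 5.0),                  # (CPU capacity, cost per CPU, access latency ms)
             "cloud1": (16, 2, 40.0)}
    latency_bound_ms = {"s1": 20.0, "s2": 100.0}     # per-service latency requirement

    prob = pulp.LpProblem("placement", pulp.LpMinimize)
    x = {(s, n): pulp.LpVariable(f"x_{s}_{n}", cat=pulp.LpBinary)
         for s in services for n in nodes}

    # Objective: total hosting cost.
    prob += pulp.lpSum(services[s] * nodes[n][1] * x[s, n] for s in services for n in nodes)

    for s in services:
        # Each service is placed on exactly one node.
        prob += pulp.lpSum(x[s, n] for n in nodes) == 1
        # Exclude nodes that cannot meet the service's latency requirement.
        for n in nodes:
            if nodes[n][2] > latency_bound_ms[s]:
                prob += x[s, n] == 0

    for n in nodes:
        # Node CPU capacity.
        prob += pulp.lpSum(services[s] * x[s, n] for s in services) <= nodes[n][0]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for (s, n), var in x.items():
        if var.value() == 1:
            print(f"{s} -> {n}")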