
    Adaptive fog service placement for real-time topology changes in Kubernetes clusters

    Recent trends have caused a shift from services deployed solely in monolithic data centers in the cloud to services deployed in the fog (e.g. roadside units for smart highways, support services for IoT devices). Simultaneously, the variety and number of IoT devices have grown rapidly, along with their reliance on cloud services. Additionally, many of these devices are now themselves capable of running containers, allowing them to execute some services previously deployed in the fog. The combination of IoT devices and fog computing has many advantages in terms of efficiency and user experience, but the scale, volatile topology and heterogeneous network conditions of the fog and the edge also present problems for service deployment scheduling. Cloud service scheduling often takes a wide array of parameters into account to calculate optimal solutions. However, the algorithms used are not generally capable of handling the scale and volatility of the fog. This paper presents a scheduling algorithm, named "Swirly", for large-scale fog and edge networks, which is capable of adapting to changes in network conditions and connected devices. The algorithm is presented in detail and implemented as a service using the Kubernetes API. This implementation is validated and benchmarked, showing that a single-threaded Swirly service is easily capable of managing service meshes for at least 300,000 devices in soft real-time.
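
The abstract states that the algorithm is implemented as a service using the Kubernetes API, but does not show how. The following is only a minimal sketch of how such a service could observe topology changes with the official Python kubernetes client; the reschedule() callback and all names are hypothetical and not taken from the paper.

# Illustrative sketch, not the published Swirly implementation.
# Assumes the official "kubernetes" Python client and a hypothetical
# reschedule() callback invoked whenever a node joins or leaves the cluster.
from kubernetes import client, config, watch

def watch_topology(reschedule):
    config.load_kube_config()      # use config.load_incluster_config() when running in a pod
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(v1.list_node):
        node_name = event["object"].metadata.name
        if event["type"] in ("ADDED", "DELETED"):
            # A device joined or left the cluster: recompute the affected placements.
            reschedule(event["type"], node_name)

if __name__ == "__main__":
    watch_topology(lambda kind, node: print(f"{kind}: {node}"))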

    Near real-time optimization of fog service placement for responsive edge computing

    In recent years, computing workloads have shifted from the cloud to the fog, and IoT devices are becoming powerful enough to run containerized services. While the combination of IoT devices and fog computing has many advantages, such as increased efficiency, reduced network traffic and better end user experience, the scale and volatility of the fog and edge also present new problems for service deployment scheduling. Fog and edge networks contain orders of magnitude more devices than cloud data centers, and they are often less stable and slower. Additionally, frequent changes in network topology and the number of connected devices are the norm in edge networks, rather than the exception as in cloud data centers. This article presents a service scheduling algorithm, labeled "Swirly", for fog and edge networks containing hundreds of thousands of devices, which is capable of incorporating changes in network conditions and connected devices. The theoretical performance is explored, and a model of the behaviour and limits of fog nodes is constructed. An evaluation of Swirly is performed, showing that it is capable of managing service meshes for at least 300,000 devices in near real-time.
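
The abstract attributes Swirly's scalability to incorporating topology changes incrementally rather than recomputing every placement. The sketch below is not the published algorithm; it only illustrates, with assumed data structures and latency values, why incremental, latency-driven assignment touches few devices when a fog node disappears.

# Minimal sketch of incremental, latency-driven placement (illustrative only).
# Latencies are assumed to be measured elsewhere and given in milliseconds.
from typing import Dict

def place(devices: Dict[str, Dict[str, float]]) -> Dict[str, str]:
    """Assign every device to the fog node with the lowest measured latency."""
    return {dev: min(lat, key=lat.get) for dev, lat in devices.items()}

def on_node_removed(assignment: Dict[str, str],
                    devices: Dict[str, Dict[str, float]],
                    node: str) -> None:
    """Reassign only the devices that were placed on the removed node."""
    for dev, assigned in list(assignment.items()):
        if assigned == node:
            remaining = {n: l for n, l in devices[dev].items() if n != node}
            assignment[dev] = min(remaining, key=remaining.get)

# Example with two devices and two fog nodes.
latencies = {"dev-a": {"fog-1": 5.0, "fog-2": 9.0},
             "dev-b": {"fog-1": 12.0, "fog-2": 4.0}}
assignment = place(latencies)             # {'dev-a': 'fog-1', 'dev-b': 'fog-2'}
on_node_removed(assignment, latencies, "fog-1")
print(assignment)                         # {'dev-a': 'fog-2', 'dev-b': 'fog-2'}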