53 research outputs found

    Fog Device-as-a-Service (FDaaS): A Framework for Service Deployment in Public Fog Environments

    Full text link
    Meeting the requirements of future time-sensitive services and handling sudden load spikes in Fog computing environments are challenging tasks, due to the scarcity of publicly available Fog nodes and the nature of their characteristics. Researchers have assumed that traditional autoscaling techniques, combined with lightweight virtualisation technology (containers), can provide autoscaling in Fog computing environments, and a few have built platforms that exploit the default autoscaling mechanisms of container orchestration tools or systems. However, adopting these techniques alone in a publicly available Fog infrastructure does not guarantee Quality of Service (QoS), owing to the heterogeneity of Fog devices and their characteristics, such as frequent resource changes and high mobility. To tackle this challenge, this work develops a Fog as a Service (FaaS) framework that can create, configure and manage the containers running on Fog devices in order to deploy services. It presents the key techniques and algorithms responsible for handling sudden load spikes so that services meet the application's QoS, and it evaluates the framework against existing techniques under realistic scenarios. The experimental results show that the proposed approach increases the number of satisfied service requests by an average of 1.9 times across the different scenarios.

    Comment: 10 pages, 13 figures
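    The reactive scaling behaviour described above can be illustrated with a short sketch. Below is a minimal threshold-based autoscaler using the Docker SDK for Python; the service name, CPU thresholds, replica bounds, and the single-node assumption are illustrative, not the paper's actual algorithm.

```python
# Minimal sketch of a reactive container autoscaler, loosely in the spirit of
# the FDaaS framework above. All names and thresholds are assumptions.
import time
import docker

client = docker.from_env()

SERVICE = "fog-app"            # hypothetical Swarm service name
CPU_HIGH, CPU_LOW = 0.8, 0.2   # scale-out / scale-in thresholds
MIN_REPLICAS, MAX_REPLICAS = 1, 10

def avg_cpu(service):
    """Average normalized CPU load across the service's running tasks
    (single-node sketch: assumes task containers live on the local daemon)."""
    loads = []
    for task in service.tasks(filters={"desired-state": "running"}):
        cid = task["Status"].get("ContainerStatus", {}).get("ContainerID")
        if not cid:
            continue
        stats = client.containers.get(cid).stats(stream=False)
        cpu = stats["cpu_stats"]["cpu_usage"]["total_usage"]
        sys_ = stats["cpu_stats"]["system_cpu_usage"]
        pre_cpu = stats["precpu_stats"]["cpu_usage"]["total_usage"]
        pre_sys = stats["precpu_stats"].get("system_cpu_usage", 0)
        if sys_ > pre_sys:
            loads.append((cpu - pre_cpu) / (sys_ - pre_sys))
    return sum(loads) / len(loads) if loads else 0.0

while True:
    service = client.services.get(SERVICE)
    replicas = service.attrs["Spec"]["Mode"]["Replicated"]["Replicas"]
    load = avg_cpu(service)
    if load > CPU_HIGH and replicas < MAX_REPLICAS:
        service.scale(replicas + 1)   # react to a load spike
    elif load < CPU_LOW and replicas > MIN_REPLICAS:
        service.scale(replicas - 1)   # release fog resources
    time.sleep(10)
```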

    Experimental Study and Performance Analysis of Cloud Computing Architectures for Industrial Control Systems

    Get PDF
    This thesis proposes a cloud computing architecture for industrial control systems based on the Open-Source Cloud Computing Infrastructure (OpenStack), called the OpenStack-supported virtualized controller. The underlying virtualization technology is QEMU with the Real-Time Kernel-based Virtual Machine (KVM-rt). Through literature research, practical integration, and systematic experiments and evaluation, the feasibility of the OpenStack-supported virtualized controller has been verified. The Key Performance Indicator (KPI) used in the verification is the control-loop latency. Communication between the virtualized controller and the control target is carried over a UDP-based industrial control protocol called Network Variables, covering both wired networks (e.g., Industrial Ethernet) and wireless networks (e.g., Wi-Fi 6). Analysis of the experimental results identified three factors that can significantly impact the performance of the OpenStack-supported virtualized controller: the network medium, the number of Virtual Central Processing Units (vCPUs) of the OpenStack Virtual Machine (VM), and the cycle time set for the controller.

    Furthermore, a more advanced architecture is envisioned: an OpenStack- and Kubernetes-based cloud computing architecture called, in this thesis, the OpenStack-supported containerized controller. It applies both virtualization and containerization technologies; the virtualization components are QEMU and KVM-rt, and the containerization tool is Docker Engine. As the software Programmable Logic Controller (PLC) used in this thesis does not officially support containerization, some strategies were used to bypass the restrictions. Preliminary experiments were conducted to verify the feasibility of the containerized controller, using the same KPI (control-loop latency), the same UDP-based Network Variables protocol, and the same wired and wireless network coverage as for the virtualized controller. The experimental results confirm the feasibility of applying containerization to industrial control systems, so the OpenStack-supported containerized controller could be put into practice once the software PLC officially supports containerization.
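    The KPI used throughout the thesis, control-loop latency over UDP, can be sampled with a short sketch. The echo-based measurement below stands in for one controller cycle; the target address, port, payload layout, and cycle time are assumptions, and the proprietary Network Variables protocol is not reproduced.

```python
# Minimal sketch: sample round-trip control-loop latency over UDP, assuming
# the control target echoes each datagram back. Address and framing are
# illustrative, not the Network Variables protocol.
import socket
import statistics
import struct
import time

TARGET = ("192.0.2.10", 47808)   # hypothetical control-target address/port
CYCLE_TIME = 0.004               # illustrative 4 ms controller cycle

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.1)

latencies = []
for seq in range(1000):
    t_send = time.perf_counter()
    sock.sendto(struct.pack("!Id", seq, t_send), TARGET)   # setpoint out
    try:
        data, _ = sock.recvfrom(64)                        # feedback back
        rx_seq, _ = struct.unpack("!Id", data)
        if rx_seq == seq:
            latencies.append((time.perf_counter() - t_send) * 1e3)
    except socket.timeout:
        pass                                               # lost cycle
    time.sleep(CYCLE_TIME)

print(f"mean {statistics.mean(latencies):.3f} ms, "
      f"p99 {statistics.quantiles(latencies, n=100)[98]:.3f} ms")
```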

    Engineering and Experimentally Benchmarking a Container-based Edge Computing System

    Full text link
    While edge computing is envisioned to superbly serve latency-sensitive applications, implementation-based studies benchmarking its performance are few and far between. To address this gap, we engineer a modular edge cloud computing system architecture built on the latest advances in containerization techniques: Kafka for data streaming, Docker as the application platform, and Firebase Cloud as the real-time database system. We benchmark the performance of the system in terms of scalability, resource utilization, and latency by comparing three scenarios: cloud-only, edge-only, and combined edge-cloud. The measurements show that the edge-only solution outperforms the other scenarios only when deployed with data located at a single edge, i.e., without edge-wide data synchronization. For applications requiring data synchronization through the cloud, edge-cloud scales roughly 10 times better than cloud-only up to a certain number of concurrent users in the system; above that point, cloud-only scales better. In terms of resource utilization, we observe that whereas mean utilization increases linearly with the number of user requests, the maximum values for memory and network I/O increase sharply with growing amounts of data.
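    The data-streaming leg of such an edge pipeline is simple to sketch with the kafka-python client. The broker address, topic name, and JSON payload below are illustrative assumptions, not the paper's actual configuration; the producer-side timestamp is what lets a downstream consumer compute end-to-end latency.

```python
# Minimal sketch of an edge sensor publishing events to Kafka for streaming.
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="edge-broker:9092",   # hypothetical edge broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for i in range(100):
    event = {"sensor_id": "edge-01", "seq": i, "ts": time.time()}
    # Timestamping at the producer lets the consumer measure end-to-end latency.
    producer.send("sensor-events", event)

producer.flush()   # block until all buffered records are acknowledged
```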

    The Fog Makes Sense: Enabling Social Sensing Services With Limited Internet Connectivity

    Full text link
    Social sensing services use humans as sensor carriers, sensor operators, and sensors themselves in order to provide situation awareness to applications. This promises a multitude of benefits to users, for example in the management of natural disasters or in community empowerment. However, current social sensing services depend on Internet connectivity, since they are deployed on central Cloud platforms. In many circumstances, Internet connectivity is constrained, for instance when a natural disaster causes Internet outages or when people lack Internet access for economic reasons. In this paper, we propose the emerging Fog Computing infrastructure as a key enabler of social sensing services in situations of constrained Internet connectivity. To this end, we develop a generic architecture and API for Fog-enabled social sensing services, and we exemplify their use in a number of concrete use cases from two different scenarios.

    Comment: Ruben Mayer, Harshit Gupta, Enrique Saurez, and Umakishore Ramachandran. 2017. The Fog Makes Sense: Enabling Social Sensing Services With Limited Internet Connectivity. In Proceedings of The 2nd International Workshop on Social Sensing (SocialSens'17), Pittsburgh, PA, USA, April 21, 2017, 6 pages.
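    To make the idea of a Fog-hosted social sensing API concrete, here is a minimal sketch of a report-collection service that runs on a Fog node and keeps working during Internet outages. The endpoint names, report schema, and in-memory store are hypothetical, not the paper's actual API.

```python
# Minimal sketch of a Fog-node HTTP API for social sensing reports.
from flask import Flask, jsonify, request

app = Flask(__name__)
reports = []   # in-memory store; a real Fog node would persist locally

@app.route("/reports", methods=["POST"])
def submit_report():
    """Accept a human-sensed report (e.g., flooding observed at a location)."""
    report = request.get_json()
    reports.append(report)
    return jsonify({"id": len(reports) - 1}), 201

@app.route("/reports", methods=["GET"])
def list_reports():
    """Serve situation-awareness data to nearby clients, Internet or not."""
    topic = request.args.get("topic")
    hits = [r for r in reports if topic is None or r.get("topic") == topic]
    return jsonify(hits)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)   # reachable on the local Fog network
```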

    Fast Docker Container Deployment in Fog Computing Infrastructures

    Get PDF
    Software containers create virtual environments in which multiple applications can run without the risk of interfering with one another. The efficiency and simplicity of this approach have contributed to the sharp rise in the popularity of containers and, among the available implementations, Docker is by far the most widespread. Unfortunately, because of their large size, deploying a container from a remote registry to a local machine tends to take a long time. The slowness of this operation is particularly disadvantageous in a Fog computing architecture, where services must move from one node to another in response to user mobility; moreover, the low-performance servers typical of this paradigm risk aggravating the delays even further. This thesis presents FogDocker, a system that takes an original approach to downloading Docker images with the goal of reducing the time needed to start a container. The core idea of the work is to download only the content essential for running the container and start it immediately; later, while the application is already at work, the system can continue retrieving the rest of the image. The experimental results confirm that FogDocker achieves a substantial reduction in container start-up time, an optimization that proves especially pronounced in contexts with limited computational resources. The results obtained by our system promise to ease the adoption of software containers in Fog computing architectures, where deployment speed is a factor of vital importance.
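    FogDocker's start-then-finish-downloading idea can be approximated with standard Docker primitives. In the sketch below, a minimal "boot" tag and a "full" tag stand in for the essential and remaining image content; real per-layer scheduling as in FogDocker would need registry-level support, and the image names are hypothetical.

```python
# Conceptual sketch: start a container from minimal content, fetch the rest
# in the background. Assumes the boot image has a long-running default command.
import threading
import docker

client = docker.from_env()

BOOT_IMAGE = "registry.example.com/app:boot"   # essential files only
FULL_IMAGE = "registry.example.com/app:full"   # complete image

# 1. Pull only the minimal content and start serving immediately.
client.images.pull(BOOT_IMAGE)
container = client.containers.run(BOOT_IMAGE, detach=True, name="app")
print("container running; full image still downloading")

# 2. Meanwhile, retrieve the remaining image content in the background.
def fetch_rest():
    client.images.pull(FULL_IMAGE)   # shared layers are not re-downloaded
    print("full image available for the next (re)start or migration")

t = threading.Thread(target=fetch_rest)
t.start()
t.join()   # joined here only so the demo script exits cleanly
```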

    Distributed Computing Framework Based on Software Containers for Heterogeneous Embedded Devices

    Get PDF
    The Internet of Things (IoT) is represented by millions of everyday objects, enhanced with sensing and actuation capabilities, that are connected to the Internet. Traditional approaches for IoT applications involve sending data to cloud servers for processing and storage, and then relaying commands back to the devices. However, this approach is no longer feasible given the rapid growth of the IoT: the vast number of devices causes network congestion; latency and security requirements demand that data be processed close to the devices that produce and consume it; and the processing and storage resources of the devices themselves remain underutilized. Fog Computing has emerged as a new paradigm in which multiple end-devices form a shared pool of resources on which distributed applications are deployed, taking advantage of local capabilities. These devices are highly heterogeneous, with varying hardware and software platforms, and resource-constrained, with limited processing and storage. Realizing the Fog requires a software framework that simplifies the deployment of distributed applications while overcoming these constraints. In Cloud-based deployments, software containers provide a lightweight solution that simplifies the deployment of distributed applications; Cloud hardware, however, is mostly homogeneous and abundant in resources. This work establishes the feasibility of using Docker Swarm -- an existing container-based software framework -- for the deployment of distributed applications on IoT devices. This is realized with custom tools that enable minimal-size applications compatible with heterogeneous devices, automatic configuration and formation of the device Fog, and remote management and provisioning of devices. The proposed framework has significant advantages over the state of the art: it supports Fog-based distributed applications, it overcomes device heterogeneity, and it simplifies device initialization.
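    Deploying a distributed application onto such a heterogeneous device Fog with Docker Swarm is short to sketch with the Docker SDK for Python. The multi-arch image name, node label, and replica count below are illustrative assumptions; a multi-arch manifest is what lets each device pull the variant matching its CPU under one image name.

```python
# Minimal sketch: deploy a replicated service onto labelled edge devices.
import docker
from docker.types import ServiceMode

client = docker.from_env()   # run on the Swarm manager node

service = client.services.create(
    image="registry.example.com/sensor-app:latest",   # hypothetical multi-arch image
    name="sensor-app",
    mode=ServiceMode("replicated", replicas=5),
    constraints=["node.labels.tier == edge"],         # place only on edge devices
)
print("service id:", service.id)
```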

    Applying Docker Swarm to the Management of Edge Computing Software

    Get PDF
    The purpose of edge computing is to move data processing closer to the data source, since the computing capacity of centralized servers will not suffice to analyze all data simultaneously in the future. The Internet of Things is one of the use cases of edge computing. Edge computing systems are fairly complex and increasingly require the application of agile DevOps practices, and suitable technologies must be found to implement these practices. The first research question was: What kinds of technical solutions have been applied to delivering edge computing applications? This was answered by examining industry solutions, i.e., those of cloud service providers. The technical solutions revealed that either containers or packaged directories are used as the delivery vehicle for edge applications, while lightweight communication protocols or a VPN connection are used for communication between the edge and the server. In the literature review, container clusters were identified as a possible management tool for edge computing.

    From the results of the first research question, a second research question was derived: Can Docker Swarm be utilized in operating edge computing applications? This question was answered with an empirical case study. A centralized delivery process for edge computing applications was built using the Docker Swarm container-cluster software, cloud servers, and Raspberry Pi single-board computers. In addition to delivery, the study considered runtime monitoring of the software, rollback to the previous software version, grouping of cluster devices, attachment of physical peripherals, and support for different processor architectures. The results showed that Docker Swarm can be used as-is for managing edge computing software: it is suitable for delivery, monitoring, rolling back to the previous version, and grouping, and it can create clusters that run the same software on processors with different architectures. However, Docker Swarm proved unsuitable for controlling peripherals attached to an edge device. The abundance of industrial edge computing offerings indicated broad interest in the practical application of containers. Based on this study, container clusters in particular proved to be a promising technology for managing edge computing applications; to obtain further evidence, broader empirical follow-up studies using a similar framework are needed.
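    Two of the management operations the case study exercised, grouping devices with node labels and reverting a service to its previous version, can be sketched with the Docker SDK for Python. The service name, label, image tags, and the naive health check are illustrative assumptions; rollback is approximated here by re-pinning the previous image tag rather than by Swarm's built-in rollback.

```python
# Minimal sketch: label edge nodes for grouping, then revert on failure.
import docker

client = docker.from_env()   # run on the Swarm manager

# Group edge devices: label Raspberry Pi worker nodes so services can target them.
for node in client.nodes.list(filters={"role": "worker"}):
    spec = node.attrs["Spec"]
    spec.setdefault("Labels", {})["device"] = "rpi"
    node.update(spec)

# Deploy v2, then roll back to v1 if runtime monitoring reports failures.
service = client.services.get("sensor-app")   # hypothetical existing service
service.update(image="registry.example.com/sensor-app:v2")

def unhealthy(svc):
    """Naive health check: any task not actually in the 'running' state."""
    return any(t["Status"]["State"] != "running"
               for t in svc.tasks(filters={"desired-state": "running"}))

service.reload()   # refresh the service spec version before updating again
if unhealthy(service):
    service.update(image="registry.example.com/sensor-app:v1")   # revert
```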