5 research outputs found

    Fast Docker Container Deployment in Fog Computing infrastructures

    Get PDF
    Software containers create virtual environments in which multiple applications can run without the risk of interfering with one another. The efficiency and simplicity of this approach have contributed to the sharp rise in container popularity, and, among the available implementations, Docker is by far the most widespread. Unfortunately, because of their large size, deploying a container from a remote registry to a local machine tends to take a long time. This slowness is particularly detrimental in a fog computing architecture, where services must move from one node to another in response to user mobility. Moreover, the low-performance servers typical of this paradigm risk aggravating the delays even further. This thesis presents FogDocker, a system that takes an original approach to downloading Docker images with the goal of reducing the time needed to start a container. The central idea of the work is to download only the content essential for running the container and proceed immediately with start-up; later, while the application is already at work, the system can continue retrieving the remaining part of the image. Experimental results confirm that FogDocker achieves a substantial reduction in container start-up time. This optimization proves particularly pronounced when applied in a context with limited computational resources. The results obtained by our system promise to ease the adoption of software containers in fog computing architectures, where deployment speed is a factor of vital importance.
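The core idea described in this abstract, starting the container as soon as the essential content is available and fetching the rest in the background, can be sketched as follows. This is a minimal illustration, not FogDocker's actual code; the file lists and the `fetch` stub are invented for the example.

```python
import threading
import time

# Hypothetical split of an image's contents (invented for illustration):
# a minimal set of files needed to start, and everything else.
ESSENTIAL = ["bin/app", "lib/libc.so"]
REMAINING = ["usr/share/docs", "usr/share/locale"]

def fetch(path):
    """Stand-in for pulling one file or layer from a remote registry."""
    time.sleep(0.01)  # simulate network transfer
    return path

def deploy():
    # Phase 1: fetch only the essential content, then start right away.
    fetched = [fetch(p) for p in ESSENTIAL]
    print(f"container started after fetching {len(fetched)} essential files")

    # Phase 2: while the application is already running, keep retrieving
    # the remaining parts of the image in a background thread.
    bg = threading.Thread(target=lambda: [fetch(p) for p in REMAINING])
    bg.start()
    return bg

deploy().join()
```

The point of the sketch is the ordering: start-up latency depends only on the essential subset, while the full image still arrives eventually.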

    Evaluating Container Deployment Implementations for Foglets

    Get PDF
    In recent years, the number of devices connected to local networks has rapidly expanded to create a new internet known as the Internet of Things. The applications run on these devices often require lower-latency solutions than cloud computing can provide in order to perform time-sensitive interactions with other devices near the network’s edge. One solution to this problem is fog computing, a geo-distributed architecture that provides computational resources closer to the edge of the network. This proximity yields low-latency connections among such devices. In order to implement a powerful fog computing network, applications must be able to deploy and migrate quickly throughout the geo-distributed resources. In the Foglets project, containers are used to efficiently deploy applications. The Foglets project currently contains two platforms that handle container deployment: one that utilizes system calls, and another that uses the well-established Docker API. In this work, we evaluate the latency and throughput of the two deployment platforms, as well as the impact of container commands and size on these metrics. We found that while serving many simultaneous deployments through multithreading, the Docker API yields lower latency and higher throughput. We also found that the size of the container and the commands run on the container had a negligible impact on the deployment’s latency and throughput.
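The kind of measurement this abstract describes, latency and throughput of many simultaneous deployments, can be sketched generically. The `deploy` function below is a placeholder for a deployment call (it is not the Foglets API); the worker count and job count are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def deploy(container_id):
    """Placeholder for one container deployment; returns its latency."""
    start = time.perf_counter()
    time.sleep(0.02)  # stand-in for the actual deployment work
    return time.perf_counter() - start

N = 20
wall_start = time.perf_counter()
# Serve many simultaneous deployments through multithreading, as in the study.
with ThreadPoolExecutor(max_workers=8) as pool:
    latencies = list(pool.map(deploy, range(N)))
wall = time.perf_counter() - wall_start

mean_latency = sum(latencies) / N   # per-deployment latency
throughput = N / wall               # deployments per second
print(f"mean latency {mean_latency * 1000:.1f} ms, "
      f"throughput {throughput:.1f} deployments/s")
```

With concurrent workers, throughput exceeds what the mean per-deployment latency alone would suggest, which is why the two metrics are reported separately.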

    Docker Container Deployment in Fog Computing Infrastructures

    Get PDF
    The transition from virtual machine-based infrastructures to container-based ones brings the promise of swift and efficient software deployment in large-scale computing infrastructures. However, in fog computing environments, which are often made of very small computers such as Raspberry Pis, deploying even a very simple Docker container may take multiple minutes. We demonstrate that Docker makes inefficient usage of the available hardware resources, essentially using different hardware subsystems (network bandwidth, CPU, disk I/O) sequentially rather than simultaneously. We therefore propose three optimizations which, once combined, reduce container deployment times by a factor of up to 4. These optimizations also speed up deployment time by about 30% on datacenter-grade servers.
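The key observation above, that download (network), decompression (CPU), and extraction (disk I/O) run sequentially rather than simultaneously, suggests a pipelined design. The sketch below is an illustration of that idea only, not the paper's implementation; the layer names and sleep durations are invented.

```python
import queue
import threading
import time

layers = ["layer1", "layer2", "layer3"]
downloaded = queue.Queue()
decompressed = queue.Queue()
extracted = []

def download():
    for layer in layers:
        time.sleep(0.01)          # network-bound work
        downloaded.put(layer)
    downloaded.put(None)          # end-of-stream marker

def decompress():
    while (layer := downloaded.get()) is not None:
        time.sleep(0.01)          # CPU-bound work
        decompressed.put(layer)
    decompressed.put(None)

def extract():
    while (layer := decompressed.get()) is not None:
        time.sleep(0.01)          # disk-bound work
        extracted.append(layer)

# Run the three stages as a pipeline: while layer N is being extracted,
# layer N+1 is decompressing and layer N+2 is downloading, so the three
# hardware subsystems are busy at the same time instead of one after another.
threads = [threading.Thread(target=f) for f in (download, decompress, extract)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("extracted:", extracted)
```

Sequentially, the three stages would cost the sum of their times per layer; pipelined, the total approaches the time of the slowest stage, which is where the reported speedup comes from.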

    Docker-pi: Docker Container Deployment in Fog Computing Infrastructures

    No full text
    The transition from virtual machine-based infrastructures to container-based ones brings the promise of swift and efficient software deployment in large-scale computing infrastructures. However, in fog computing environments, which are often made of very small computers such as Raspberry Pis, deploying even a very simple Docker container may take multiple minutes. We demonstrate that Docker makes inefficient usage of the available hardware resources, essentially using different hardware subsystems (network bandwidth, CPU, disk I/O) sequentially rather than simultaneously. We therefore propose three optimizations which, once combined, reduce container deployment times by a factor of up to 4. These optimizations also speed up deployment time by about 30% on datacenter-grade servers.