
    Optimal Blind and Adaptive Fog Orchestration under Local Processor Sharing

    This paper studies the tradeoff between running cost and processing delay in order to optimally orchestrate multiple fog applications. Fog applications process batches of objects' data along chains of containerised microservice modules, which can run either for free on a local fog server or in the cloud at a cost. Processor sharing techniques, in turn, affect an application's processing delay on a local edge server depending on the number of application modules running on the same server. The fog orchestrator copes with local server congestion by offloading part of the computation to the cloud, trading off processing delay against a finite budget. This problem can be described in a convex optimisation framework valid for a large class of processor sharing techniques. The optimal solution is in threshold form and depends solely on the order induced by the marginal delays of the N fog applications. This reduces the original multidimensional problem to a one-dimensional one, which can be solved in O(N^2) by a parallelised search algorithm under complete system information. Finally, an online learning procedure based on a primal-dual stochastic approximation algorithm is designed to drive optimal reconfiguration decisions in the dark, requiring only unbiased estimates of the marginal delays. Extensive numerical results characterise the structure of the optimal solution, the system performance, and the advantage attained with respect to baseline algorithmic solutions.
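
    Below is a minimal sketch of the threshold structure described above, assuming each application exposes a marginal delay and a per-unit cloud cost (the input names and the local-delay approximation are illustrative, not taken from the paper). It enumerates the N+1 prefix thresholds of the marginal-delay ordering, for an O(N^2) search, rather than reproducing the authors' exact parallelised search or the online primal-dual procedure.

        # Hypothetical sketch of a threshold-form offloading search.
        # Assumed inputs (not from the paper):
        #   marginal_delays[i] : marginal processing delay of application i locally
        #   unit_costs[i]      : cost of offloading application i's work to the cloud
        #   budget             : total cloud budget
        def threshold_offload(marginal_delays, unit_costs, budget):
            n = len(marginal_delays)
            # Order applications by decreasing marginal delay; the threshold-form
            # result suggests the optimum offloads a prefix of this ordering.
            order = sorted(range(n), key=lambda i: marginal_delays[i], reverse=True)

            best_delay, best_set = float("inf"), []
            # N + 1 candidate thresholds, each evaluated in O(N): O(N^2) overall.
            for k in range(n + 1):
                offloaded = order[:k]
                cost = sum(unit_costs[i] for i in offloaded)
                if cost > budget:
                    break  # longer prefixes only cost more
                # Crude stand-in for the processor-sharing delay: sum of the
                # marginal delays of the modules kept on the local server.
                delay = sum(marginal_delays[i] for i in order[k:])
                if delay < best_delay:
                    best_delay, best_set = delay, offloaded
            return best_set, best_delay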

    DR-Cache: Distributed Resilient Caching with Latency Guarantees

    The dominant application in today's Internet is content streaming, which increasingly relies on caches to meet stringent latency requirements between content servers and end-users. These systems routinely face limited bandwidth capacities and network server failures, which degrade caching performance. In this paper, we study the problem of optimally allocating content over a resilient caching network, in which each cache may fail in some situations. Given content request rates and multiple routing paths, we formulate an optimization problem to maximize the expected caching gain, i.e., the reduction of latency due to intermediate caching. The offline version of this problem is NP-hard. We first propose a centralized, offline algorithm and show that it constructs a solution within a (1-1/e) approximation ratio of the optimal. We then propose a distributed ascent algorithm based on the concave relaxation of the expected gain. Informed by the results of our analysis, we finally propose a distributed resilient caching algorithm (DR-Cache) that is simple and adaptive to network failures. We show numerically that DR-Cache significantly outperforms other candidate algorithms under synthetic requests, as well as real-world traces, over a class of network topologies.
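
    As a rough illustration of the expected-caching-gain objective, the sketch below greedily adds the (node, content) placement with the largest marginal gain subject to per-node capacities. The node set, capacity map, and gain callback are assumptions for the example; a plain greedy like this does not by itself achieve the paper's (1-1/e) guarantee, which relies on the concave relaxation and rounding, so treat it only as a structural sketch.

        import itertools

        # Hypothetical greedy placement for an expected-caching-gain objective.
        # gain(placement) is assumed to return the expected latency reduction of a
        # set of (node, content) pairs, given request rates, routing paths, and
        # cache failure probabilities (all modelled outside this sketch).
        def greedy_placement(nodes, contents, capacity, gain):
            placement = set()
            candidates = set(itertools.product(nodes, contents))
            while candidates:
                best, best_delta = None, 0.0
                for (v, c) in candidates:
                    # Skip nodes whose cache is already full.
                    if sum(1 for (u, _) in placement if u == v) >= capacity[v]:
                        continue
                    delta = gain(placement | {(v, c)}) - gain(placement)
                    if delta > best_delta:
                        best, best_delta = (v, c), delta
                if best is None:
                    break  # no placement improves the expected gain
                placement.add(best)
                candidates.discard(best)
            return placement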