3 research outputs found

    Differentiated service/data migration for edge services leveraging container characteristics

    No full text
    The Multi-access Edge Computing (MEC) and Fog Computing paradigms make it possible to deploy middleboxes, either statically or dynamically, at network edges, where they act as local proxies with virtualized resources that support and enhance service provisioning in edge localities. However, migrating edge-enabled services poses significant challenges in this environment. In this paper, we propose an edge computing platform architecture that supports service migration with different granularity options (entire service/data migration or proactive, application-aware data migration) across heterogeneous edge devices (MEC-based servers or resource-poor Fog devices) that host virtualized resources (Docker containers). The most innovative elements of our technical contribution include i) the ability to select either an application-agnostic or an application-aware approach, ii) the ability to choose the most appropriate application-aware approach (e.g., based on data access frequencies), iii) automatic placement support for edge services aimed at finding more effective, low-energy placements, and iv) in-lab experimentation of the performance achieved over rapidly deployable environments with resource-limited edges such as Raspberry Pi devices.
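    A minimal sketch of the proactive, application-aware migration option described above, assuming a key-value view of the service's data at the source edge node; the `AccessTracker` class, `plan_migration` function, and the 80% hot-data threshold are illustrative assumptions, not details taken from the paper.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class AccessTracker:
    """Counts per-item accesses observed at the source edge node."""
    counts: Counter = field(default_factory=Counter)

    def record(self, key: str) -> None:
        self.counts[key] += 1

    def hottest(self, fraction: float) -> list[str]:
        """Return the most frequently accessed keys covering `fraction` of all accesses."""
        total = sum(self.counts.values())
        selected, covered = [], 0
        for key, n in self.counts.most_common():
            if total and covered / total >= fraction:
                break
            selected.append(key)
            covered += n
        return selected

def plan_migration(tracker: AccessTracker, application_aware: bool, hot_fraction: float = 0.8):
    """Choose between migrating everything (application-agnostic) or proactively
    migrating only the hot subset of data first (application-aware)."""
    if not application_aware:
        return {"mode": "full", "keys": None}  # entire service/data migration
    return {"mode": "proactive", "keys": tracker.hottest(hot_fraction)}

# Example: the keys that account for ~80% of accesses migrate first; the remaining
# cold data can follow lazily once the service is running at the destination edge.
if __name__ == "__main__":
    t = AccessTracker()
    for key in ["sensor42"] * 50 + ["sensor7"] * 30 + ["log", "cfg"] * 5:
        t.record(key)
    print(plan_migration(t, application_aware=True))
```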

    Edge Computing for Extreme Reliability and Scalability

    Get PDF
    The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all of these data at the central cloud server is inefficient, and may even be infeasible or unnecessary. Hence, the processing task is pushed to the network edges, introducing the concept of Edge Computing. Processing the information closer to the data source (e.g., on gateways and edge micro-servers) not only reduces the heavy workload of the central cloud but also decreases the latency for real-time applications by avoiding the unreliable and unpredictable network latency of communicating with the central cloud.
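    A minimal sketch of that idea, assuming a gateway that observes a stream of numeric sensor readings; `edge_pipeline`, `handle_locally`, `send_to_cloud`, and the window/threshold values are hypothetical placeholders for whatever local processing and upstream transport a real deployment would use.

```python
import statistics
from typing import Iterable

def summarize_window(readings: Iterable[float]) -> dict:
    """Reduce a window of raw sensor readings to a compact summary at the gateway."""
    values = list(readings)
    return {
        "count": len(values),
        "mean": statistics.fmean(values),
        "max": max(values),
        "min": min(values),
    }

def edge_pipeline(raw_stream, window_size: int = 100, alert_threshold: float = 75.0):
    """Process data near its source: react locally to urgent readings and forward
    only per-window summaries upstream instead of every raw sample."""
    window = []
    for value in raw_stream:
        if value > alert_threshold:
            handle_locally(value)  # low-latency local reaction, no cloud round trip
        window.append(value)
        if len(window) == window_size:
            send_to_cloud(summarize_window(window))  # far fewer messages upstream
            window.clear()

def handle_locally(value: float) -> None:
    print(f"local actuation for reading {value}")

def send_to_cloud(summary: dict) -> None:
    print(f"uploading summary {summary}")

# Example: stream 250 synthetic readings through the gateway pipeline.
if __name__ == "__main__":
    import random
    edge_pipeline((random.uniform(20.0, 80.0) for _ in range(250)), window_size=100)
```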