477 research outputs found
A survey on mobility-induced service migration in the fog, edge, and related computing paradigms
The final publication is available at ACM via http://dx.doi.org/10.1145/3326540. With the advent of fog and edge computing paradigms, computation capabilities have been moved toward the edge of the network to support the requirements of highly demanding services. To ensure that the quality of such services is still met in the event of users’ mobility, migrating services across different computing nodes becomes essential. Several studies have emerged recently to address service migration in different edge-centric research areas, including fog computing, multi-access edge computing (MEC), cloudlets, and vehicular clouds. Since existing surveys in this area focus on either VM migration in general or migration in a single research field (e.g., MEC), the objective of this survey is to bring together studies from different, yet related, edge-centric research fields while capturing the different facets they address. More specifically, we examine the diversity characterizing the landscape of migration scenarios at the edge, present an objective-driven taxonomy of the literature, and highlight contributions that focus instead on architectural design and implementation. Finally, we identify a list of gaps and research opportunities based on the observation of the current state of the literature. One such opportunity lies in joining efforts from both the networking and computing research communities to facilitate future research in this area. Peer Reviewed. Preprint.
Modeling industry 4.0 based fog computing environments for application analysis and deployment
The extension of the Cloud to the Edge of the network through Fog Computing can have a significant impact on the reliability and latencies of deployed applications. Recent papers have suggested a shift from VM- and container-based deployments to an environment shared among applications to better utilize resources. Unfortunately, existing deployment and optimization methods pay little attention to developing and identifying complete models of such systems, which may cause large inaccuracies between simulated and physical run-time parameters. Existing models do not account for application interdependence or the locality of application resources, which causes extra communication and processing delays. This paper addresses these issues by carrying out experiments in both cloud and edge systems at various scales and with various applications. It analyses the outcomes to derive a new reference model with data-driven parameter formulations and representations that help understand the effect of migration on these systems. As a result, we obtain a more complete characterization of the fog environment. This, together with tailored optimization methods that can handle the heterogeneity and scale of the fog, can improve overall system run-time parameters and constraint satisfaction. An Industry 4.0 based case study with different scenarios was used to analyze and validate the effectiveness of the proposed model. Tests were deployed on physical and virtual environments at different scales, and the advantages of the model-based optimization methods were validated in real physical environments. Based on these tests, we have found that our model is 90% accurate on load and delay predictions for application deployments in both cloud and edge systems.
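The paper's data-driven parameter formulations are not reproduced in the abstract, but the general idea of deriving a run-time reference model from measurements can be illustrated with a minimal least-squares fit of delay against load. The numbers and function below are purely illustrative assumptions, not taken from the paper:

```python
def fit_line(loads, delays):
    """Ordinary least-squares fit of delay = a + b * load."""
    n = len(loads)
    mean_x, mean_y = sum(loads) / n, sum(delays) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(loads, delays))
         / sum((x - mean_x) ** 2 for x in loads))
    a = mean_y - b * mean_x
    return a, b

# Illustrative measurements: delay grows roughly linearly with load.
a, b = fit_line([1, 2, 3, 4], [3.1, 5.0, 6.9, 9.0])
print(f"predicted delay at load 5: {a + b * 5:.1f}")
```

A real data-driven model would of course use richer features (application interdependence, resource locality) rather than a single load variable.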
Cost and availability aware resource allocation and virtual function placement for CDNaaS provision
We address the fundamental tradeoff between deployment cost and service availability in the context of on-demand content delivery service provision over a telecom operator's network functions virtualization infrastructure. In particular, given a specific set of preferences and constraints with respect to deployment cost, availability and computing resource capacity, we provide polynomial-time heuristics for the problem of jointly deriving an appropriate assignment of computing resources to a set of virtual instances and the placement of the latter in a subset of the available physical hosts. We capture the conflicting criteria of service availability and deployment cost by proposing a multi-objective optimization problem formulation. Our algorithms are experimentally shown to outperform state-of-the-art solutions in terms of both execution time and optimality, while providing the system operator with the necessary flexibility to balance between conflicting objectives and reflect the relevant preferences of the customer in the produced solutions. This work was supported in part by the French FUI-18 DVD2C project and by the European Union’s Horizon 2020 research and innovation program under the 5G-Transformer project (grant no. 761536).
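The paper's own heuristics are not described in the abstract; as a rough illustration of how a cost/availability tradeoff can drive placement, a greedy sketch might score each feasible host with a weighted sum of cost and unavailability. The host names, capacities, and scoring rule below are hypothetical, not the authors' method:

```python
def place_instances(instances, hosts, alpha=0.5):
    """Greedy placement sketch.
    instances: list of CPU demands, one per virtual instance
    hosts: dict name -> {"cap": free CPU, "cost": deploy cost, "avail": 0..1}
    alpha: weight on cost; (1 - alpha) weighs unavailability
    """
    free = {h: spec["cap"] for h, spec in hosts.items()}
    placement = {}
    for i, demand in enumerate(instances):
        feasible = [h for h in hosts if free[h] >= demand]
        if not feasible:
            raise RuntimeError("no host with enough capacity")
        # Lower score is better: cheap AND highly available hosts win.
        best = min(feasible, key=lambda h: alpha * hosts[h]["cost"]
                   + (1 - alpha) * (1 - hosts[h]["avail"]))
        free[best] -= demand
        placement[i] = best
    return placement

hosts = {"h1": {"cap": 4, "cost": 1.0, "avail": 0.99},
         "h2": {"cap": 8, "cost": 0.4, "avail": 0.90}}
print(place_instances([2, 2, 4], hosts, alpha=0.8))
```

Sweeping `alpha` from 0 to 1 traces out the same kind of cost-versus-availability balance the multi-objective formulation exposes to the operator, though the paper solves the joint problem with more sophisticated heuristics.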
A Migration Scheduling Algorithm for Virtual Machines to Simultaneously Optimize Energy Consumption and Pollutant Generation in Supercomputing Networks
In this research, we focus on the migration of virtual machines in cloud data centers using a genetic algorithm. The simulation results confirm the feasibility and effectiveness of this scheduling algorithm, which leads to a significant reduction in total energy consumption compared to other strategies. Since our focus is on the operational energy of the data centers, the reduction in operational energy also reduces carbon emissions, which plays a significant role in reducing user costs. Keywords: migration, car, super computing energy consumption. DOI: 10.7176/CEIS/10-6-03. Publication date: July 31st 201
Software-Defined Networks for Future Networks and Services: Main Technical Challenges and Business Implications
In 2013, the IEEE Future Directions Committee (FDC) formed an SDN work group to explore the amount of interest in forming an IEEE Software-Defined Network (SDN) Community. To this end, a Workshop on "SDN for Future Networks and Services" (SDN4FNS'13) was organized in Trento, Italy (Nov. 11th-13th, 2013). Following the results of the workshop, in this paper we have further analyzed scenarios, prior art, the state of standardization, and the main technical challenges and socio-economic aspects of SDN and virtualization in future networks and services. A number of research and development directions have been identified in this white paper, along with a comprehensive analysis of the technical feasibility and business availability of those fundamental technologies. A radical industry transition towards the "economy of information through softwarization" is expected in the near future.
A multi-criteria decision making approach for scaling and placement of virtual network functions
This paper investigates the joint scaling and placement problem of network services made up of virtual network functions (VNFs) that can be provided inside a cluster managing multiple points of presence (PoPs). Aiming to increase VNF service satisfaction rates and minimize deployment cost, we use both transport- and cloud-aware VNF scaling as well as multi-attribute decision making (MADM) algorithms for VNF placement inside the cluster. The original joint scaling and placement problem is known to be NP-hard, so we separate the scaling and placement problems and solve them individually. The experiments use a dataset containing the information of a deployed digital-twin network service. They show that scaling and placement algorithms that consider both transport and cloud parameters perform more efficiently than cloud-only or transport-only scaling followed by placement. One of the MADM algorithms, the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS), is shown to yield the lowest deployment cost and the highest VNF request satisfaction rates compared to transport-only or cloud-only scaling and the other investigated MADM algorithms. Our simulation results indicate that considering both transport and cloud parameters in various availability scenarios of cloud and transport resources has significant potential to increase request satisfaction rates when VNF scaling and placement are performed using the TOPSIS scheme. This work was partially funded by the EC H2020 5GPPP 5Growth Project (Grant 856709), Spanish MINECO Grant TEC2017-88373-R (5G-REFINE), Generalitat de Catalunya Grant 2017 SGR 1195, and the National Program on Equipment and Scientific and Technical Infrastructure, EQC2018-005257-P, under the European Regional Development Fund (FEDER).
We would also like to thank Milan Groshev and Carlos Guimarães for providing the dataset for scaling of the robot-manipulator-based digital twin service.
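TOPSIS itself is a standard MADM procedure: vector-normalize the decision matrix, apply criterion weights, and rank alternatives by relative closeness to an ideal point. A minimal sketch follows; the PoP attributes, weights, and criteria below are made up for illustration and are not the paper's dataset:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: rows = alternatives (e.g. candidate PoPs), cols = criteria
    weights: one weight per criterion (should sum to 1)
    benefit: True if higher is better for that criterion, else False
    """
    n_crit = len(matrix[0])
    # Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # Relative closeness to the ideal solution: d- / (d+ + d-).
    return [math.dist(row, worst) / (math.dist(row, best) + math.dist(row, worst))
            for row in v]

# Hypothetical PoPs scored on deployment cost (lower is better) and
# available CPU (higher is better), equally weighted.
scores = topsis([[10.0, 8.0], [6.0, 5.0], [9.0, 9.0]],
                weights=[0.5, 0.5], benefit=[False, True])
print(max(range(len(scores)), key=scores.__getitem__))
```

In a placement setting, the criteria columns would carry both transport and cloud parameters, which is the combination the paper finds most effective.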
Energy-efficient Nature-Inspired techniques in Cloud computing datacenters
Cloud computing is the systematic delivery of computing resources as services to consumers via the Internet. Infrastructure as a Service (IaaS) is the capability provided to the consumer that enables smarter access to processing, storage, networks, and other fundamental computing resources, where the consumer can deploy and run arbitrary software including operating systems and applications. The resources are often made available in the form of Virtual Machines (VMs). Cloud services are provided to consumers on demand and are billed accordingly. Usually, the VMs run in various datacenters, which comprise several computing resources that consume large amounts of energy, resulting in hazardous levels of carbon emissions into the atmosphere. Several researchers have proposed energy-efficient methods for reducing the energy consumption in datacenters; one such solution is the family of Nature-Inspired algorithms. Towards this end, this paper presents a comprehensive review of the state-of-the-art Nature-Inspired algorithms suggested for solving the energy issues in Cloud datacenters. A taxonomy is introduced focusing on three key dimensions in the literature: virtualization, consolidation, and energy-awareness. A qualitative review of each technique is carried out considering its key goal, method, advantages, and limitations. The Nature-Inspired algorithms are compared based on their features to indicate their utilization of resources and their level of energy efficiency. Finally, potential research directions are identified for energy optimization in datacenters. This review enables researchers and professionals in Cloud computing datacenters to understand the evolution of the literature and to explore better energy-efficient methods for Cloud computing datacenters.
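As one concrete flavor of such metaheuristics, a toy simulated-annealing loop can consolidate VMs onto fewer hosts, using the count of active hosts as a simple proxy for energy use. The demands, capacities, cooling schedule, and acceptance rule below are illustrative assumptions, not a method from the surveyed literature:

```python
import random

def active_hosts(assign, demands, cap):
    """Return the number of hosts in use, or None if any host is overloaded."""
    load = {}
    for vm, host in enumerate(assign):
        load[host] = load.get(host, 0) + demands[vm]
        if load[host] > cap:
            return None
    return len(load)

def anneal(demands, n_hosts, cap, steps=5000, seed=0):
    """Toy simulated annealing: move one VM at a time, occasionally
    accepting non-improving states early on, to minimize active hosts."""
    rng = random.Random(seed)
    state = list(range(len(demands)))  # start with one VM per host
    cost = best = active_hosts(state, demands, cap)
    for step in range(steps):
        temp = 1.0 - step / steps  # linear cooling: explore less over time
        cand = state[:]
        cand[rng.randrange(len(demands))] = rng.randrange(n_hosts)
        c = active_hosts(cand, demands, cap)
        if c is None:
            continue  # candidate overloads a host: reject it
        if c <= cost or rng.random() < 0.1 * temp:
            state, cost = cand, c
            best = min(best, cost)
    return best

# Four 1-CPU VMs and 4-CPU hosts: annealing should consolidate them.
print(anneal([1, 1, 1, 1], n_hosts=4, cap=4))
```

Genetic, swarm, and other Nature-Inspired algorithms reviewed in the paper follow the same pattern of stochastic search over placements, differing mainly in how candidate solutions are generated and selected.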