
    A survey on mobility-induced service migration in the fog, edge, and related computing paradigms

    The final publication is available at ACM via http://dx.doi.org/10.1145/3326540
    With the advent of fog and edge computing paradigms, computation capabilities have been moved toward the edge of the network to support the requirements of highly demanding services. To ensure that the quality of such services is still met in the event of users' mobility, migrating services across different computing nodes becomes essential. Several studies have emerged recently to address service migration in different edge-centric research areas, including fog computing, multi-access edge computing (MEC), cloudlets, and vehicular clouds. Since existing surveys in this area focus on either VM migration in general or migration in a single research field (e.g., MEC), the objective of this survey is to bring together studies from different, yet related, edge-centric research fields while capturing the different facets they address. More specifically, we examine the diversity characterizing the landscape of migration scenarios at the edge, present an objective-driven taxonomy of the literature, and highlight contributions that focus primarily on architectural design and implementation. Finally, we identify a list of gaps and research opportunities based on the current state of the literature. One such opportunity lies in joining efforts from both the networking and computing research communities to facilitate future research in this area.
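
    As a hedged illustration of the mobility-induced migration problem this survey covers (not an algorithm from the survey itself), the following minimal Python sketch shows a latency-threshold trigger: a user's service is migrated to a closer edge node once the current node violates a latency target and another node offers a sufficiently large improvement. All names, the toy latency model, and the thresholds are assumptions made for the example.

        # Minimal sketch: a mobility-driven migration trigger (illustrative only).
        from dataclasses import dataclass
        import math

        @dataclass
        class EdgeNode:
            name: str
            x: float
            y: float

        def estimated_latency_ms(user_xy, node, base_ms=2.0, ms_per_km=2.0):
            """Toy latency model: a fixed base plus a distance-proportional term."""
            return base_ms + ms_per_km * math.dist(user_xy, (node.x, node.y))

        def should_migrate(user_xy, current, candidates, slo_ms=10.0, gain_ms=2.0):
            """Return the migration target, or None. Migrate only when the current
            node violates the latency target and the best candidate improves
            latency by at least gain_ms (a small hysteresis margin)."""
            current_lat = estimated_latency_ms(user_xy, current)
            best = min(candidates, key=lambda n: estimated_latency_ms(user_xy, n))
            if current_lat > slo_ms and current_lat - estimated_latency_ms(user_xy, best) >= gain_ms:
                return best
            return None

        # Example: the user has moved away from node A, so the service migrates to B.
        nodes = [EdgeNode("A", 0.0, 0.0), EdgeNode("B", 5.0, 0.0)]
        print(should_migrate((6.0, 0.0), nodes[0], nodes))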

    Proactive virtual machine migration in fog computing environments for mobile users

    Advisors: Edmundo Roberto Mauro Madeira, Luiz Fernando Bittencourt. Master's thesis, Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Fog computing provides low-latency access to computational and network resources at the edge of the network for resource-constrained Internet of Things devices. The high mobility of some of these devices, such as wearables or devices embedded in vehicles, bikes, or trains, brings great challenges for resource allocation and management. Keeping users' applications as close to them as possible is a way to meet the Quality of Service they expect. Given the widely different resource demands of applications and the high mobility of users in a smart-city scenario, optimizing the virtual machine migration process is a way to ensure such Quality of Service requirements. Recent work in a variety of contexts has shown that adding user mobility prediction data can improve the management of computing resources. Based on the current state of the art on virtual machine migration in a fog computing environment for mobile users, this work proposes mechanisms based on user mobility prediction to optimize the choice of the destination node during the virtual machine migration process. Simulations of smart-city scenarios suggest that using knowledge about the user's future path can improve the resource management of fog ecosystems, keeping the user's virtual machine on fog devices as close as possible to the user's path. The simulations also suggest that the proposed policies reduce the number of migrations along the user's path without degrading the response time of the virtual machines allocated in the fog.
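
    The core idea above, choosing the migration destination from the user's predicted trajectory rather than only the current position, can be illustrated with a minimal Python sketch. This is an assumption-laden toy, not the thesis's mechanism or simulator: fog nodes and waypoints are plain 2-D coordinates, and the policy simply picks the node closest to any predicted waypoint so that fewer follow-up migrations are needed as the user keeps moving.

        # Minimal sketch: destination selection guided by a predicted path (illustrative only).
        import math

        def node_to_path_distance(node_xy, predicted_path):
            """Smallest distance from a fog node to any predicted future waypoint."""
            return min(math.dist(node_xy, waypoint) for waypoint in predicted_path)

        def choose_destination(fog_nodes, predicted_path):
            """fog_nodes: {name: (x, y)}; predicted_path: list of (x, y) waypoints."""
            return min(fog_nodes, key=lambda name: node_to_path_distance(fog_nodes[name], predicted_path))

        # Example: "f2" lies along the predicted trajectory, so it is chosen even
        # though "f1" is closer to the user's current (first) waypoint.
        fog_nodes = {"f1": (0.0, 1.0), "f2": (4.0, 0.5)}
        predicted_path = [(1.0, 0.0), (3.0, 0.0), (5.0, 0.0)]
        print(choose_destination(fog_nodes, predicted_path))  # -> f2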

    Cloud Computing in VANETs: Architecture, Taxonomy, and Challenges

    Cloud Computing in VANETs (CC-V) has been investigated along two major research themes: Vehicular Cloud Computing (VCC) and Vehicle using Cloud (VuC). VCC is the realization of an autonomous cloud among vehicles to share their abundant resources. VuC is the efficient usage of the conventional cloud by on-road vehicles via a reliable Internet connection. Recently, a number of advancements have been made to address the issues and challenges in VCC and VuC. This paper qualitatively reviews CC-V with an emphasis on layered architecture, network components, taxonomy, and future challenges. Specifically, a four-layered architecture for CC-V is proposed, comprising perception, coordination, artificial intelligence, and smart application layers. Three network components of CC-V, namely vehicle, connection, and computation, are explored together with their cooperative roles. A taxonomy for CC-V is presented covering the major research themes in the area, including architectural design, data dissemination, security, and applications. Related literature on each theme is critically investigated with a comparative assessment of recent advances. Finally, open research challenges are identified as future issues; these challenges are the outcome of the critical and qualitative assessment of the literature on CC-V.
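
    As a purely illustrative aid (these type names are not from the paper), the four-layer CC-V architecture and the three network components described above can be written down as simple Python enums, which makes the taxonomy easy to reference from prototype code.

        # Minimal sketch: the CC-V layers and network components as enums (illustrative only).
        from enum import Enum, auto

        class CCVLayer(Enum):
            PERCEPTION = auto()               # sensing by vehicles and roadside units
            COORDINATION = auto()             # networking and resource coordination
            ARTIFICIAL_INTELLIGENCE = auto()  # analytics and decision making
            SMART_APPLICATION = auto()        # end-user vehicular cloud applications

        class NetworkComponent(Enum):
            VEHICLE = auto()      # mobile resource provider and consumer
            CONNECTION = auto()   # V2V/V2I links and Internet connectivity
            COMPUTATION = auto()  # on-board, edge, and conventional cloud resources

        print([layer.name for layer in CCVLayer])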

    Computation offloading in mobile edge computing: an optimal stopping theory approach

    In recent years, new mobile devices and applications with different functionalities and uses, such as drones, Autonomous Vehicles (AVs), and highly advanced smartphones, have emerged. Such devices are now able to run applications such as augmented and virtual reality, intensive contextual data processing, intelligent vehicle control, traffic management, data mining, and interactive applications. Although these mobile nodes have the computing and communication capabilities to run such applications, they remain unable to handle them efficiently, mainly due to the significant processing required over relatively short timescales; they also consume a considerable amount of battery power. Such limitations have motivated the idea of computation offloading, where computing tasks are sent to the Cloud instead of being executed locally at the mobile node. The technical concept behind this idea is referred to as Mobile Cloud Computing (MCC). However, using the Cloud for computational task offloading of mobile applications introduces significant latency and adds load to the radio and backhaul of mobile networks. To cope with these challenges, Cloud resources are being deployed near the users at the edge of the network, for example at mobile-network Base Stations (BSs) or at indoor locations such as Wi-Fi and 3G/4G access points. This architecture is referred to as Mobile Edge Computing or Multi-access Edge Computing (MEC). Computation offloading in such a setting faces the challenge of deciding when, and to which server, computational tasks should be offloaded. This dissertation aims at designing time-optimised task offloading decision-making algorithms in MEC environments that find the optimal time for task offloading. The random variables that can influence the expected processing time at the MEC server are investigated using various probability distributions and representations. In the context considered, while the mobile node sequentially roams through (connects to) a set of MEC servers, it has to decide locally and autonomously which server should be used for offloading the computing task. To deal with this sequential problem, the offloading decision is modelled as an optimal stopping time problem, adopting the principles of Optimal Stopping Theory (OST). Three assessment approaches, including simulation, real data sets, and an actual implementation on real devices, are used to evaluate the performance of the models. The results indicate that OST-based offloading strategies can play an important role in optimising the task offloading decision. In particular, in the simulation approach, the average processing time achieved by the proposed models is higher than the optimal by only 10%. On the real data sets, the models are still near-optimal, with only a 25% difference compared to the optimal, while in the real implementation the models select, most of the time, the optimal node for processing the task. Furthermore, the presented algorithms are lightweight and local, and can hence be implemented on mobile nodes (for instance, vehicles or smartphones).
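
    To make the optimal-stopping framing concrete, the sketch below applies one classic stopping rule (the 1/e "secretary" rule) to sequential server selection; it is a hedged illustration under simplifying assumptions, not necessarily one of the OST models developed in the dissertation. The roaming node observes the expected processing times of roughly the first n/e servers without offloading, then offloads to the first later server that beats everything observed so far, falling back to the last server if none does.

        # Minimal sketch: a 1/e ("secretary") stopping rule for offloading (illustrative only).
        import math
        import random

        def offload_by_secretary_rule(expected_times):
            """expected_times: expected processing times of the servers, in the
            order the roaming node encounters them. Returns the chosen index."""
            n = len(expected_times)
            k = max(1, int(n / math.e))          # length of the observation phase
            best_observed = min(expected_times[:k])
            for i in range(k, n):
                if expected_times[i] < best_observed:
                    return i                     # stop: better than all observed servers
            return n - 1                         # forced to take the last server

        random.seed(0)
        times = [random.uniform(5.0, 50.0) for _ in range(10)]  # toy estimates (ms)
        idx = offload_by_secretary_rule(times)
        print(f"offload to server {idx} with expected time {times[idx]:.1f} ms")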

    Vehicle as a Service (VaaS): Leverage Vehicles to Build Service Networks and Capabilities for Smart Cities

    Smart cities demand resources for rich immersive sensing, ubiquitous communications, powerful computing, large storage, and high intelligence (SCCSI) to support various kinds of applications, such as public safety, connected and autonomous driving, smart and connected health, and smart living. At the same time, it is widely recognized that vehicles such as autonomous cars, equipped with significantly powerful SCCSI capabilities, will become ubiquitous in future smart cities. Observing the convergence of these two trends, this article advocates the use of vehicles to build a cost-effective service network, called the Vehicle as a Service (VaaS) paradigm, where vehicles empowered with SCCSI capabilities form a web of mobile servers and communicators that provide SCCSI services in smart cities. Towards this direction, we first examine the potential use cases in smart cities and the upgrades required for the transition from traditional vehicular ad hoc networks (VANETs) to VaaS. Then, we introduce the system architecture of the VaaS paradigm and discuss how it can provide SCCSI services in future smart cities. Finally, we identify the open problems of this paradigm and future research directions, including architectural design, service provisioning, incentive design, and security & privacy. We expect that this paper paves the way towards developing a cost-effective and sustainable approach for building smart cities.
    Comment: 32 pages, 11 figures