409 research outputs found

    Collaborative Vehicular Edge Computing Networks: Architecture Design and Research Challenges

    The emergence of augmented reality (AR), autonomous driving and other new applications has greatly enriched the functionality of vehicular networks. However, these applications usually require complex computation and large amounts of storage, which puts tremendous pressure on traditional vehicular networks. Mobile edge computing (MEC) has been proposed as a promising technique to extend computing and storage resources to the edge of the network. Combined with MEC, the computing and storage capabilities of vehicular networks can be further enhanced. Therefore, in this paper, we explore a novel collaborative vehicular edge computing network (CVECN) architecture. We first review the work related to MEC and vehicular networks. Then we discuss the design principles of CVECN. Based on these principles, we present the detailed CVECN architecture and introduce the corresponding functional modules, communication process, and installation and deployment ideas. Furthermore, the main technical challenges, including collaborative coalition formation, collaborative task offloading and mobility management, are presented, and some potential issues for future research are highlighted. Finally, simulation results verify that the proposed CVECN can significantly improve network performance.
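
    The abstract does not give the offloading algorithm itself; as a rough illustration of the kind of decision a CVECN node faces, the sketch below picks, for each task, the collaborator (local vehicle, coalition vehicle, or edge server) with the lowest estimated completion time. The latency model and all names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: choosing where to run a task in a collaborative
# vehicular edge computing network (CVECN). The latency model
# (transmission + queuing + compute) and all parameters are assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_cycles_per_s: float   # available compute capacity
    uplink_bps: float         # data rate from the task owner to this node (0 = local)
    queue_delay_s: float      # current queuing delay estimate

@dataclass
class Task:
    input_bits: float
    cpu_cycles: float

def completion_time(task: Task, node: Node) -> float:
    """Estimated end-to-end time if `task` is offloaded to `node`."""
    tx = task.input_bits / node.uplink_bps if node.uplink_bps else 0.0
    return tx + node.queue_delay_s + task.cpu_cycles / node.cpu_cycles_per_s

def offload_decision(task: Task, candidates: list[Node]) -> Node:
    """Greedy collaborative offloading: pick the fastest candidate."""
    return min(candidates, key=lambda n: completion_time(task, n))

if __name__ == "__main__":
    local = Node("local vehicle", 1e9, 0.0, 0.0)          # no transmission cost
    neighbor = Node("coalition vehicle", 3e9, 20e6, 0.02)
    edge = Node("edge server", 10e9, 10e6, 0.05)
    task = Task(input_bits=4e6, cpu_cycles=2e9)
    print("offload to:", offload_decision(task, [local, neighbor, edge]).name)
```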

    A survey on intelligent computation offloading and pricing strategy in UAV-Enabled MEC network: Challenges and research directions

    The limited resources of edge servers make it difficult to serve a large number of Mobile Devices' (MDs) requests simultaneously. The Mobile Network Operator (MNO) must therefore decide how to delegate MD requests to its Mobile Edge Computing (MEC) server in order to maximize the overall benefit of admitted requests with varying latency needs. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence (AI) can increase MNO performance thanks to the deployment flexibility and high mobility of UAVs and the efficiency of AI algorithms. There is a trade-off between the cost incurred by the MD and the profit received by the MNO. Intelligent computation offloading to UAV-enabled MEC, on the other hand, is a promising way to bridge the gap between MDs' limited processing resources and the high computing demands of upcoming applications. This study reviews research on the benefits of the computation offloading process in the UAV-MEC network, as well as the intelligent models that are utilized for computation offloading in such networks. In addition, this article examines several intelligent pricing techniques applied in different UAV-MEC network structures. Finally, this work highlights some important open research issues and future research directions for AI in computation offloading and in applying intelligent pricing strategies in the UAV-MEC network.
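
    As a toy illustration of the admission and pricing trade-off the survey discusses, the sketch below has a hypothetical MNO admit offloading requests in order of offered price per unit of server demand until the MEC server is full, while respecting each request's deadline. The request and latency models are assumptions for illustration only, not the survey's formulation.

```python
# Hypothetical sketch: an MNO admitting offloading requests to a
# capacity-limited MEC server, greedily by value density
# (offered price per CPU cycle). Illustrative assumption only.
from dataclasses import dataclass

@dataclass
class Request:
    device: str
    cpu_cycles: float      # demand on the MEC server
    deadline_s: float      # latency requirement
    offered_price: float   # what the MD is willing to pay

def admit_requests(requests: list[Request],
                   server_capacity_cycles: float,
                   serve_time_s) -> tuple[list[Request], float]:
    """Greedy admission: highest price per cycle first, subject to
    capacity and each request's deadline. Returns (admitted, revenue)."""
    admitted, revenue, used = [], 0.0, 0.0
    for req in sorted(requests, key=lambda r: r.offered_price / r.cpu_cycles,
                      reverse=True):
        if used + req.cpu_cycles > server_capacity_cycles:
            continue
        if serve_time_s(req, used) > req.deadline_s:
            continue
        admitted.append(req)
        used += req.cpu_cycles
        revenue += req.offered_price
    return admitted, revenue

if __name__ == "__main__":
    reqs = [Request("md1", 2e9, 0.5, 3.0),
            Request("md2", 1e9, 0.2, 2.5),
            Request("md3", 4e9, 1.0, 4.0)]
    # Simple latency model: delay grows with cycles already admitted.
    latency = lambda r, used: (used + r.cpu_cycles) / 10e9
    admitted, revenue = admit_requests(reqs, server_capacity_cycles=6e9,
                                       serve_time_s=latency)
    print([r.device for r in admitted], revenue)
```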

    DeepBrain: Experimental Evaluation of Cloud-Based Computation Offloading and Edge Computing in the Internet-of-Drones for Deep Learning Applications

    Unmanned Aerial Vehicles (UAVs) have been very effective in collecting aerial image data for various Internet-of-Things (IoT)/smart-city applications such as search and rescue, surveillance, vehicle detection and counting, and intelligent transportation systems, to name a few. However, the real-time processing of the collected data at the edge, in the context of the Internet-of-Drones, remains an open challenge because UAVs have limited energy capabilities, while computer vision techniques consume excessive energy and require abundant resources. This is even more critical when deep learning algorithms, such as convolutional neural networks (CNNs), are used for classification and detection. In this paper, we first propose a system architecture of computation offloading for Internet-connected drones. Then, we conduct a comprehensive experimental study to evaluate the performance, in terms of energy, bandwidth, and delay, of the cloud computation offloading approach versus the edge computing approach for deep learning applications in the context of UAVs. In particular, we experimentally investigate the trade-off between the communication cost and the computation cost of the two candidate approaches. The main results demonstrate that the computation offloading approach provides much higher throughput (i.e., frames per second) than the edge computing approach, despite the larger communication delays.
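
    The paper's evaluation is experimental; as a back-of-the-envelope illustration of the communication/computation trade-off it investigates, the sketch below compares estimated per-frame latency for on-board (edge) CNN inference versus offloading the frame to a cloud server. All numbers and the simple model are illustrative assumptions, not measurements from the paper.

```python
# Hypothetical sketch: per-frame latency of on-board (edge) CNN inference
# vs. offloading the frame to a cloud GPU over a wireless link.
# All numbers are illustrative assumptions.
def edge_latency_s(inference_time_edge_s: float) -> float:
    """On-board inference: no transmission, just (slower) local compute."""
    return inference_time_edge_s

def cloud_latency_s(frame_bytes: int, uplink_bps: float,
                    rtt_s: float, inference_time_cloud_s: float) -> float:
    """Offloading: upload the frame, pay the RTT, run (faster) cloud compute."""
    return frame_bytes * 8 / uplink_bps + rtt_s + inference_time_cloud_s

if __name__ == "__main__":
    frame = 200_000            # ~200 KB compressed frame (assumed)
    edge = edge_latency_s(inference_time_edge_s=0.40)        # embedded-GPU class device
    cloud = cloud_latency_s(frame, uplink_bps=10e6,
                            rtt_s=0.05, inference_time_cloud_s=0.02)
    print(f"edge: {edge:.3f}s ({1/edge:.1f} fps), "
          f"cloud: {cloud:.3f}s ({1/cloud:.1f} fps)")
```

    Under these assumed numbers, offloading yields higher throughput even though it adds transmission delay, which is consistent with the qualitative conclusion stated in the abstract.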

    Sustainable Task Offloading in UAV Networks via Multi-Agent Reinforcement Learning

    The recent growth of IoT devices, along with edge computing, has revealed many opportunities for novel applications. Among them, Unmanned Aerial Vehicles (UAVs), which are deployed for surveillance and environmental monitoring, are attracting increasing attention. In this context, typical solutions must deal with events that may change the state of the network while providing a service that continuously maintains a high level of performance. In this paper, we address this problem by proposing a distributed architecture that leverages a Multi-Agent Reinforcement Learning (MARL) technique to dynamically offload tasks from UAVs to the edge cloud. Nodes of the system cooperate to jointly minimize the overall latency perceived by the user and the energy usage on UAVs by continuously learning the best action from the environment: whether to offload and, if so, which transmission technology to use, i.e., Wi-Fi or cellular. Results validate our distributed architecture and show the effectiveness of the approach in reaching the above targets.
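
    The abstract does not detail the MARL algorithm; as a minimal sketch of the kind of agent involved, the code below implements a tabular Q-learning update for a single UAV agent choosing among local execution, Wi-Fi offload, or cellular offload, with a reward that penalizes both latency and energy. The action set, state encoding, and reward weights are assumptions, not the paper's exact formulation.

```python
# Minimal sketch of one UAV agent in a multi-agent Q-learning setup,
# choosing between processing locally, offloading over Wi-Fi, or
# offloading over cellular. State/reward model is an illustrative assumption.
import random
from collections import defaultdict

ACTIONS = ["local", "offload_wifi", "offload_cellular"]

class UAVAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        """Epsilon-greedy action selection."""
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, latency_s, energy_j, next_state,
              w_latency=1.0, w_energy=0.5):
        """Q-learning update; reward jointly penalizes latency and energy."""
        reward = -(w_latency * latency_s + w_energy * energy_j)
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

if __name__ == "__main__":
    agent = UAVAgent()
    state = ("queue_high", "wifi_good")       # coarse environment observation
    a = agent.act(state)
    # Feedback would come from the environment after executing the action.
    agent.learn(state, a, latency_s=0.3, energy_j=0.8,
                next_state=("queue_low", "wifi_good"))
    print(a, agent.q[(state, a)])
```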

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to be different in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services.
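
    As a small illustration of the four-perspective taxonomy described above, the sketch below models how a surveyed work could be classified along resource type, management objective, resource location, and resource use. The enumerated values are assumptions loosely inferred from the abstract, not the paper's exact category names.

```python
# Illustrative sketch of the four-perspective taxonomy as a data model.
# The enumeration values are assumptions inferred from the abstract.
from dataclasses import dataclass
from enum import Enum

class ResourceType(Enum):
    COMPUTATION = "computation"
    COMMUNICATION = "communication"
    DATA = "data"
    STORAGE = "storage"
    ENERGY = "energy"

class Objective(Enum):
    ALLOCATION = "allocation"
    ESTIMATION = "estimation"
    DISCOVERY = "discovery"
    SHARING = "sharing"

class Location(Enum):
    END_DEVICE = "end device"
    EDGE_DEVICE = "edge device"
    CLOUD = "cloud"

@dataclass
class SurveyedWork:
    title: str
    resource_types: set[ResourceType]
    objectives: set[Objective]
    locations: set[Location]
    resource_use: str          # free-text description of how the resource is used

example = SurveyedWork(
    title="Example offloading scheme",
    resource_types={ResourceType.COMPUTATION, ResourceType.COMMUNICATION},
    objectives={Objective.ALLOCATION},
    locations={Location.EDGE_DEVICE, Location.CLOUD},
    resource_use="task offloading",
)
print(example.title, [r.value for r in example.resource_types])
```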

    Mobility-aware mechanisms for fog node discovery and selection

    The recent development of delay-sensitive applications has led to the emergence of the fog computing paradigm. Within this paradigm, computation nodes present at the edge of the network can act as fog nodes (FNs) capable of processing users' tasks, thus resulting in latency reductions compared to the existing cloud-based execution model. In order to realize the full potential of fog computing, new research questions have arisen, mainly due to the dynamic and heterogeneous fog computing context. This thesis focuses on the following questions in particular: How can a user detect the presence of a nearby FN? How should a user on the move adapt its FN discovery strategy according to its changing context? How should an FN be selected in the case of user mobility and FN mobility? These questions are addressed throughout the different contributions of this thesis. The first contribution consists in proposing a discovery solution allowing a user to become aware of the existence of a nearby FN. Using our solution, the FN advertises its presence using custom WiFi beacons, which are detected by the user via a scan process. An implementation of this approach has been developed, and its evaluation has shown that it incurs non-negligible energy consumption given its use of WiFi. This has led to our second contribution, which aims at improving the WiFi scan performed in our discovery approach, especially in the case of user mobility. In a first stage, this improvement consisted in embedding information about the topology of the FNs in the beacons the user receives from previous FNs. We have shown that by adapting the scan behavior based on this information, considerable energy savings can be achieved while guaranteeing a high discovery rate. However, as this approach is tied to a restrictive FN topology structure, we proposed a different alternative in a second stage. This alternative leverages the history of cellular context information as an indicator allowing the user to infer whether an FN may be present in its current location: if so, the scan is enabled; otherwise, it is disabled. Simulation results comparing different classification algorithms have shown that a sequence-based model, such as a hidden Markov model, is able to effectively predict FN presence in the user's current location. While the previous approaches have focused on a sparse FN deployment, our third contribution considers a high density of FNs. Consequently, as there are multiple nearby FNs that can process the user's tasks, it is important to derive a suitable FN selection strategy. This strategy should consider the time-varying set of FNs caused by the user's mobility. In addition, it should minimize the number of switches from one FN to another, in order to maintain a good quality of service. With these considerations in mind, we have shown that an adaptive greedy approach that selects an FN with a good-enough delay estimate achieves the best results. Finally, unlike the previous contribution, where the focus has been on FN selection when the user is mobile, our final contribution deals with mobile vehicular FNs (VFNs). Given the mobility of such VFNs, it is important to make the most of their resources, since they are only available for a short time in a given area. We therefore propose that, in order to select an appropriate VFN for a given task, a reference roadside unit (RSU) responsible for task assignment can use advice from a neighbor RSU.
This advice identifies the VFN expected to yield the lowest delay for the current task, based on the experience of the neighbor RSU. The results have shown that, using the provided advice, the reference RSU can achieve significant delay reductions. All in all, the proposed contributions address various problems that may arise in a fog computing context, and the obtained results can be used to guide the development of the building blocks of future fog computing solutions.
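
    The thesis abstract mentions an adaptive greedy selection strategy that favors a good-enough FN while limiting switches; the sketch below is one plausible interpretation, keeping the current FN as long as its estimated delay stays within a tolerance of the best available one. The threshold rule and parameter names are assumptions, not the thesis' exact algorithm.

```python
# Hypothetical sketch of a "good-enough" greedy fog-node (FN) selector
# with hysteresis: switch away from the current FN only when another
# FN's estimated delay is better by more than a tolerance.
from typing import Optional

def select_fn(delay_estimates: dict[str, float],
              current_fn: Optional[str],
              tolerance_s: float = 0.02) -> str:
    """Return the FN to use for the next task.

    delay_estimates: FN id -> estimated end-to-end delay (seconds) for the
                     FNs currently in range (changes as the user moves).
    current_fn:      FN used for the previous task, or None.
    """
    best_fn = min(delay_estimates, key=delay_estimates.get)
    if current_fn in delay_estimates:
        # Stick with the current FN if it is still "good enough",
        # to avoid frequent switches that degrade quality of service.
        if delay_estimates[current_fn] <= delay_estimates[best_fn] + tolerance_s:
            return current_fn
    return best_fn

if __name__ == "__main__":
    in_range = {"fn_a": 0.050, "fn_b": 0.045, "fn_c": 0.120}
    print(select_fn(in_range, current_fn="fn_a"))   # stays on fn_a (within tolerance)
    in_range["fn_b"] = 0.020
    print(select_fn(in_range, current_fn="fn_a"))   # switches to fn_b
```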