13 research outputs found

    Adaptive Q-learning-supported Resource Allocation Model in Vehicular Fogs

    Urban computing has become a significant driver in supporting the delivery and sharing of services, and a strong ally of intelligent transportation. Smart vehicles have computing and communication capabilities that enable many autonomous vehicular safety and infotainment applications. Vehicular Cloud Computing (VCC) has already proven to be a paradigm-shifting technology, harnessing the computation resources of vehicles' on-board units to form clustered computing units that solve real-world computing problems. However, with the rise of vehicular application use and intermittent network conditions, VCC exhibits many drawbacks. Vehicular Fog computing has emerged as a new paradigm for enabling and facilitating efficient service and resource sharing in urban environments. Several vehicular resource management works have attempted to deal with the highly dynamic vehicular environment following diverse approaches, e.g. MDP, SMDP, and policy-based greedy techniques. However, high vehicular mobility poses several challenges that compromise consistency, efficiency, and quality of service. RL-enabled adaptive vehicular Fogs can cope with this mobility by properly distributing load and resources over Fogs. Thus, we propose a mobility-based cloudlet dwell time (CDT) estimation method for accurately estimating the vehicular resources available in a Fog. Leveraging the CDT estimation model, we devise an adaptive and highly dynamic resource allocation model that uses a mathematical formulation for Fog selection and reinforcement learning as an iterative review-and-feedback mechanism to generate an optimal resource allocation policy.
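
    The review-and-feedback loop described above can be pictured as a tabular Q-learning agent whose actions are candidate Fogs. The sketch below is illustrative only: the state encoding (e.g. a CDT bucket plus current Fog load), the reward signal, and all identifiers are assumptions rather than the paper's exact formulation.

        import random
        from collections import defaultdict

        class FogSelector:
            """Tabular Q-learning agent that picks a Fog for an incoming request."""
            def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
                self.q = defaultdict(float)   # Q[(state, fog_id)] -> value estimate
                self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

            def select(self, state, fogs):
                # epsilon-greedy choice among the candidate Fogs
                if random.random() < self.epsilon:
                    return random.choice(fogs)
                return max(fogs, key=lambda f: self.q[(state, f)])

            def update(self, state, fog, reward, next_state, next_fogs):
                # one-step Q-learning backup after observing the allocation outcome
                best_next = max((self.q[(next_state, f)] for f in next_fogs), default=0.0)
                self.q[(state, fog)] += self.alpha * (reward + self.gamma * best_next - self.q[(state, fog)])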

    Mobility-aware mechanisms for fog node discovery and selection

    The recent development of delay-sensitive applications has led to the emergence of the fog computing paradigm. Within this paradigm, computation nodes present at the edge of the network can act as fog nodes (FNs) capable of processing users' tasks, thus resulting in latency reductions compared to the existing cloud-based execution model. In order to realize the full potential of fog computing, new research questions have arisen, mainly due to the dynamic and heterogeneous fog computing context. This thesis focuses on the following questions in particular: How can a user detect the presence of a nearby FN? How should a user on the move adapt its FN discovery strategy according to its changing context? How should an FN be selected, in the case of user mobility and FN mobility? These questions are addressed throughout the different contributions of this thesis. The first contribution consists of a discovery solution allowing a user to become aware of the existence of a nearby FN. Using our solution, the FN advertises its presence using custom WiFi beacons, which are detected by the user via a scan process. An implementation of this approach has been developed, and its evaluation results have shown that it incurs non-negligible energy consumption given its use of WiFi. This has led to our second contribution, which aims at improving the WiFi scan performed in our discovery approach, especially in the case of user mobility. In a first stage, this improvement consisted of embedding information about the topology of the FNs in the beacons the user receives from previous FNs. We have shown that by adapting the scan behavior based on this information, considerable energy savings can be achieved while guaranteeing a high discovery rate. However, as this approach requires a restrictive FN topology structure, we proposed a different alternative in a second stage. This alternative leverages the history of cellular context information as an indicator allowing the user to infer whether an FN may be present in its current location. If so, the scan is enabled; otherwise, it is disabled. Simulation results comparing different classification algorithms have shown that a sequence-based model, such as a hidden Markov model, is able to effectively predict the FN presence at the current user location. While the previous approaches have focused on a sparse FN deployment, our third contribution considers a high density of FNs. Consequently, as there are multiple nearby FNs that can process the user's tasks, it is important to derive a suitable FN selection strategy. This strategy should consider the time-varying set of FNs caused by the user's mobility. Besides, it should minimize the number of switches from one FN to another in order to maintain a good quality of service. With these considerations in mind, we have shown that an adaptive greedy approach that selects an FN with a good-enough delay estimate achieves the best results. Finally, unlike the previous contribution, where the focus was on FN selection when the user is mobile, our final contribution deals with mobile vehicular FNs (VFNs). Given the mobility of such VFNs, it is important to make the most of their resources, since they are only available for a short time in a given area. So, we propose that, in order to select an appropriate VFN for a given task, a reference roadside unit (RSU) responsible for task assignment can use advice from a neighboring RSU.
    This advice identifies the VFN expected to result in the lowest delay for the current task, based on the experience of the neighboring RSU. The results have shown that, using the provided advice, the reference RSU can observe significant delay reductions. All in all, the proposed contributions address various problems that may arise in a fog computing context, and the obtained results can be used to guide the development of the building blocks of future fog computing solutions.
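
    The "good-enough" selection rule from the third contribution can be illustrated with a small sketch. The tolerance factor, function names, and the exact switching criterion below are assumptions, not the thesis's precise policy.

        def select_fn(current_fn, candidates, delay_estimate, tolerance=1.2):
            """Stay on the current FN while its estimated delay is within `tolerance`
            times the best available estimate; switch only when clearly beneficial."""
            best_fn = min(candidates, key=delay_estimate)
            if current_fn in candidates and \
               delay_estimate(current_fn) <= tolerance * delay_estimate(best_fn):
                return current_fn      # good enough: avoid an unnecessary switch
            return best_fn             # switch to the better FN

    Keeping the current FN while it remains within a small factor of the best estimate directly trades a little delay for fewer FN switches, which is the balance the thesis highlights.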

    Do we all really know what a fog node is? Current trends towards an open definition

    Fog computing has emerged as a promising technology that can bring cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, how data centers can build the cloud infrastructure, and how applications can make use of this infrastructure, there is no common picture of what fog computing, and particularly a fog node as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a “mini-cloud” located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety of, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing the said mini-cloud. Other proposals have their own definition of what a fog node is, usually in relation to a specific edge device, a specific use case, or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes, paying special attention to the contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts and the lessons learned from their implementation, and show how a conceptual framework is emerging towards a unifying fog node definition. We focus on the core functionalities of a fog node, as well as on the accompanying opportunities and challenges towards their practical realization in the near future.

    A survey on intelligent computation offloading and pricing strategy in UAV-Enabled MEC network: Challenges and research directions

    The limited resources of edge servers make it difficult to serve a large number of Mobile Devices’ (MDs) requests simultaneously. The Mobile Network Operator (MNO) must then decide how to delegate MD requests to its Mobile Edge Computing (MEC) server in order to maximize the overall benefit of admitted requests with varying latency needs. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence (AI) can increase MNO performance thanks to the flexible deployment and high mobility of UAVs and the efficiency of AI algorithms. There is a trade-off between the cost incurred by the MD and the profit received by the MNO. Intelligent computation offloading to UAV-enabled MEC, on the other hand, is a promising way to bridge the gap between MDs' limited processing resources and the high computing demands of upcoming applications. This study reviews research on the benefits of the computation offloading process in the UAV-MEC network, as well as the intelligent models utilized for computation offloading in this network. In addition, this article examines several intelligent pricing techniques applied in different UAV-MEC network structures. Finally, this work highlights important open research issues and future research directions for AI in computation offloading and in applying intelligent pricing strategies in the UAV-MEC network.
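
    As a toy illustration of the MD-cost versus MNO-profit trade-off surveyed here, the sketch below greedily admits offloading requests by profit per unit of MEC capacity while respecting latency deadlines. The request fields, pricing rule, and admission criterion are illustrative assumptions, not a method taken from the survey.

        def admit_requests(requests, mec_capacity):
            """Greedily admit offloading requests by profit per unit of MEC capacity,
            skipping requests whose latency deadline the MEC server cannot meet."""
            admitted, used = [], 0.0
            # requests: dicts with 'cpu' (> 0), 'price', and 'meets_deadline' fields
            for req in sorted(requests, key=lambda r: r["price"] / r["cpu"], reverse=True):
                if req["meets_deadline"] and used + req["cpu"] <= mec_capacity:
                    admitted.append(req)
                    used += req["cpu"]
            return admitted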

    Game Theory-based Allocation Management in VCC Networks

    Vehicular Ad-hoc Networks (VANETs) have contributed significantly towards improving road traffic management and safety. VANETs, integrated with Vehicular Clouds, make underutilized vehicular resources available for efficient resource management and the fulfillment of service requests. However, due to the frequently changing network topology of vehicular cloud networks, vehicles often move out of the coverage area of roadside units (RSUs), disconnecting from the RSUs and interrupting the fulfillment of ongoing service requests. In addition, working with heterogeneous vehicles makes it difficult to match service requests with the varying resources of individual vehicles. Therefore, to address these challenges, this work introduces the concept of clustering resources from nearby vehicles to form Combined Resource Units (CRUs). These units help maximize the rate of fulfillment of service requests. CRU composition is especially helpful given the heterogeneity of vehicles, since it allows the varying resources of vehicles to be clustered into a single unit. The vehicle resources are clustered into CRUs based on three different-sized pools, making the service matching process more time-efficient. Previous works have adopted stochastic models for resource clustering configurations; this work instead adopts distinct search algorithms for CRU composition, which are computationally less complex. Results showed that lightweight search algorithms, such as the selective search algorithm (SSA), achieved close to 80% resource availability without over-assembling CRUs in higher-density scenarios. Following CRU composition, a game-theoretic approach is adopted for allocating CRUs to service requests. Under this approach, the CRUs play a non-cooperative game to maximize their utility, contributing to factors such as fairness, efficiency, improved system performance, and reduced system overhead. The utility value takes into account the RSS (Received Signal Strength) value of each CRU and the resources required to fulfill a request. Results of the game model showed that the proposed approach to CRU composition obtained a 90% success rate in matching and fulfilling service requests.
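
    To make the utility-driven allocation concrete, here is a simplified sketch that scores each CRU by a weighted mix of its received signal strength and how tightly its resources fit the request, then matches requests greedily. The weights, field names, and the greedy matching (rather than the full non-cooperative game) are illustrative assumptions.

        def utility(cru, request, w_rss=0.5, w_fit=0.5):
            # cru["rss"] is assumed to be a normalized signal-strength score in [0, 1];
            # a tighter resource fit (less over-provisioning) also scores higher
            if cru["resources"] < request["resources"]:
                return 0.0
            fit = request["resources"] / cru["resources"]
            return w_rss * cru["rss"] + w_fit * fit

        def allocate(crus, requests):
            """Assign each request the free CRU with the highest utility."""
            assignment, taken = {}, set()
            for req in requests:
                candidates = [c for c in crus if c["id"] not in taken and utility(c, req) > 0]
                if candidates:
                    best = max(candidates, key=lambda c: utility(c, req))
                    assignment[req["id"]] = best["id"]
                    taken.add(best["id"])
            return assignment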