
    Code offloading in opportunistic computing

    With the advent of cloud computing, applications are no longer tied to a single device: they can be migrated to a high-performance machine located in a distant data center. The key advantage is enhanced performance and, consequently, a better user experience. This activity is commonly referred to as computational offloading, and it has been investigated intensively in recent years. The natural candidate for computational offloading is the cloud, but recent results point out the hidden costs of cloud reliance in terms of latency and energy; Cuervo et al. illustrate the limitations of cloud-based computational offloading caused by WAN latency. This dissertation confirms the results of Cuervo et al. and illustrates further use cases where the cloud may not be the right choice. The dissertation addresses the following question: is it possible to build a novel approach to offloading computation that overcomes the limitations of the state of the art? In other words, is it possible to create a computational offloading solution that can use local resources when the cloud is not usable, and that removes the strong bond with the local infrastructure? To this end, I propose a novel paradigm for computation offloading named anyrun computing, whose goal is to use any piece of higher-end hardware (locally or remotely accessible) to offload a portion of the application. With anyrun computing I remove the boundaries that tie the solution to an infrastructure by adding locally available devices to increase the chances of offloading successfully.

    To achieve the goals of the dissertation, it is fundamental to have a clear view of all the steps that take part in the offloading process. To this end, I first provide a categorization of these activities, together with their interactions, and assess their impact on the system. The outcome of this analysis is a mapping of the problem to a combinatorial optimization problem that is known to be NP-hard. There is a set of well-known approaches to solving such problems, but in this scenario they cannot be used, because they require a global view that can only be maintained by a centralized infrastructure; thus, local solutions are needed. To tackle the anyrun computing paradigm empirically, I propose the anyrun computing framework (ARC), a novel software framework whose objective is to decide whether offloading to any resource-rich device willing to lend assistance is advantageous compared to local execution, with respect to a rich array of performance dimensions. The core of ARC is the inference model, which receives a rich set of information about the available remote devices from the SCAMPI opportunistic computing framework (developed within the European project SCAMPI) and uses this information to profile a given device; in other words, it decides whether offloading is advantageous compared to local execution, i.e. whether it can reduce the local footprint in the dimensions of interest (CPU and RAM usage, execution time, and energy consumption).

    To evaluate ARC empirically, I present a set of experimental results in the cloud, cloudlet, and opportunistic domains. In the cloud domain, I used the state of the art in cloud solutions over a set of significant benchmark problems and with three WAN access technologies (3G, 4G, and high-speed WAN). The main outcome is that the cloud is an appealing solution for a wide variety of problems, but there is a set of circumstances under which the cloud performs poorly. Moreover, I have empirically shown the limitations of cloud-based approaches: problems with high transmission costs tend to perform poorly unless they also have high computational needs. The second part of the evaluation concerns opportunistic and cloudlet scenarios, where I used a custom-made testbed to compare ARC with MAUI, the state of the art in computation offloading. I performed two distinct experiments: the first in a cloudlet environment and the second in an opportunistic environment. The key outcome is that ARC virtually matches the performance of MAUI (in terms of energy savings) in the cloudlet environment, and improves on it by 50% to 60% in the opportunistic domain.
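
    To make the ARC-style decision concrete, the following sketch shows one way such a multi-dimensional offload-vs-local comparison could look; the weighting scheme and all numbers are illustrative assumptions, not the dissertation's actual model:

```python
# Minimal sketch of a multi-dimensional offload-vs-local check in the spirit
# of ARC's inference model; weights and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Footprint:
    # Each dimension is expressed relative to local execution (local == 1.0),
    # so the values are unitless and can be combined with simple weights.
    cpu: float
    ram: float
    time: float
    energy: float

def should_offload(remote: Footprint,
                   weights: Footprint = Footprint(1, 1, 1, 1)) -> bool:
    """Offload only if the weighted remote footprint (transfer overheads
    already folded in) is smaller than local execution's baseline of 1.0."""
    total_w = weights.cpu + weights.ram + weights.time + weights.energy
    score = (weights.cpu * remote.cpu + weights.ram * remote.ram +
             weights.time * remote.time + weights.energy * remote.energy) / total_w
    return score < 1.0

# Example: a cloudlet halves time and energy, but shipping the input costs CPU.
print(should_offload(Footprint(cpu=1.2, ram=0.3, time=0.5, energy=0.4)))  # True
```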

    Edge Offloading in Smart Grid

    The energy transition supports the shift towards more sustainable energy alternatives, paving the way towards decentralized smart grids, where energy is generated closer to the point of use. Decentralized smart grids foresee novel data-driven, low-latency applications for improving resilience and responsiveness, such as peer-to-peer energy trading, microgrid control, fault detection, and demand response. However, traditional cloud-based smart grid architectures are unable to meet the requirements of these emerging applications, such as low latency and high reliability; thus, alternative architectures such as edge, fog, or hybrid models need to be adopted. Moreover, edge offloading can play a pivotal role in next-generation smart grid AI applications, because it enables the efficient use of computing resources and addresses the challenge of the growing volume of data generated by IoT devices, optimizing response time, energy consumption, and network performance. However, a comprehensive overview of the current state of research is needed to support sound decisions about offloading energy-related applications from the cloud to the fog or edge, focusing on open challenges and potential impacts in the smart grid. In this paper, we delve into smart grid and computational distribution architectures, including edge-fog-cloud models, orchestration architecture, and serverless computing, and analyze the decision-making variables and optimization algorithms used to assess the efficiency of edge offloading. Finally, the work contributes to a comprehensive understanding of edge offloading in the smart grid, providing a SWOT analysis to support decision making.
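
    As a toy illustration of the latency-oriented decision variables such surveys analyze, the check below compares end-to-end offloading latency against local execution; the formula and all parameters are assumptions for illustration, not taken from the paper:

```python
# Minimal latency-driven offloading check with hypothetical parameters.
def offload_latency_s(input_bytes: int, uplink_bps: float,
                      cycles: float, edge_hz: float, rtt_s: float) -> float:
    """End-to-end offloading latency: upload transfer + edge compute + RTT."""
    return input_bytes * 8 / uplink_bps + cycles / edge_hz + rtt_s

def local_latency_s(cycles: float, device_hz: float) -> float:
    return cycles / device_hz

# Example: a fault-detection task of 2e9 CPU cycles over 10 kB of sensor data.
t_edge = offload_latency_s(10_000, 50e6, 2e9, 8e9, 0.01)
t_local = local_latency_s(2e9, 1.5e9)
print(f"edge={t_edge:.3f}s local={t_local:.3f}s -> offload={t_edge < t_local}")
```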

    Towards Proactive Mobility-Aware Fog Computing

    A common approach for many Internet of Things (IoT) and business applications is to rely on distant cloud services for the processing of data. Several of these applications collect data from a multitude of proximity-based ubiquitous resources to provide various real-time services to their users. However, this has the downside of significant latency, which becomes especially problematic when the application requires a rapid response in the edge network. Researchers have therefore proposed the Fog computing architecture, which distributes computational data processing tasks to edge network nodes located in the vicinity of the data sources and end-users in order to reduce latency. Although the Fog computing architecture is promising, it still faces challenges in many areas, especially regarding support for mobile users. Utilizing Fog for real-time mobile applications faces the new challenge of ensuring seamless accessibility of Fog services on the move. Fog computing also faces a mobility challenge when tasks originate from mobile ubiquitous applications in which the data sources are moving objects. In this thesis, a proactive approach to Fog computing is proposed, which supports proactive Fog service discovery and process migration using a Mobile Ad hoc Social Network in proximity, enabling Fog-assisted ubiquitous service provisioning in proximity without distant cloud services. Moreover, a proactive approach is also applied to the Fog service provisioning itself, in order to hasten the task distribution process in Mobile Fog use cases and to provide an optimization scheme based on runtime context information (e.g. the current capacity of the computing devices).
    In addition, a case study regarding the use of Fog computing to enhance a Mobile Mesh Social Network is presented, along with a resource-aware Cost-Performance Index scheme to assist in choosing the approach used for data transmission. The proposed elements have been evaluated using a combination of real devices and simulators, providing a proof of concept.
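
    The abstract does not spell out the Cost-Performance Index formula, so the sketch below is a hedged guess at what a resource-aware index for choosing a transmission approach could look like; the metric definition, option names, and numbers are all assumptions:

```python
# Hypothetical resource-aware cost-performance index (CPI) for picking a
# data-transmission approach; not the thesis's actual metric.
def cpi(throughput_bps: float, energy_j_per_mb: float, latency_s: float,
        w_perf: float = 1.0, w_cost: float = 1.0) -> float:
    """Higher is better: performance (throughput) divided by resource cost
    (energy per MB, penalized further by latency)."""
    return (w_perf * throughput_bps) / (w_cost * energy_j_per_mb * (1.0 + latency_s))

options = {
    "wifi-direct": cpi(250e6, 2.0, 0.005),
    "bluetooth":   cpi(2e6,  0.5, 0.030),
    "cellular":    cpi(50e6, 5.0, 0.050),
}
print(max(options, key=options.get))  # picks the approach with the best index
```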

    Agent-Based System for Mobile Service Adaptation Using Online Machine Learning and Mobile Cloud Computing Paradigm

    An important aspect of modern computer systems is their ability to adapt. This is particularly important in the context of mobile devices, which have limited resources and can work longer and more efficiently through adaptation. One possibility for adapting mobile service execution is the Mobile Cloud Computing (MCC) paradigm, which allows such services to run in computational clouds, with only the result returned to the mobile device. At the same time, machine learning is increasingly being used to optimize various computer systems. The novel concept proposed by the authors extends the MCC paradigm with the ability to run services on a PC (e.g. at home). The proposed solution utilizes agent-based concepts in order to create a system that operates in a heterogeneous environment. Machine learning algorithms are used to optimize the performance of mobile services online, on the mobile devices themselves, which guarantees scalability and privacy. As a result, the solution makes it possible to reduce service execution time and the power consumption of mobile devices. In order to evaluate the proposed concept, an agent-based system for mobile service adaptation was implemented and experiments were performed. The solution developed demonstrates that extending the MCC paradigm with the simultaneous use of machine learning and agent-based concepts allows for the effective adaptation and optimization of mobile services.
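
    As a minimal stand-in for the online machine learning component, the sketch below uses an epsilon-greedy learner that adapts where a service runs (device, home PC, or cloud) from observed execution times; the abstract does not specify the actual algorithm, so this is an assumed illustration:

```python
# Illustrative epsilon-greedy learner for choosing a service execution venue
# online; the system's real learning algorithm may differ.
import random

class VenueLearner:
    def __init__(self, venues=("device", "pc", "cloud"), epsilon=0.1):
        self.epsilon = epsilon
        self.avg = {v: 0.0 for v in venues}   # running mean execution time
        self.n = {v: 0 for v in venues}

    def choose(self) -> str:
        untried = [v for v, c in self.n.items() if c == 0]
        if untried:
            return random.choice(untried)     # try every venue at least once
        if random.random() < self.epsilon:    # explore occasionally
            return random.choice(list(self.avg))
        return min(self.avg, key=self.avg.get)  # exploit the fastest venue

    def record(self, venue: str, exec_time_s: float) -> None:
        # Incrementally update the running mean for the chosen venue.
        self.n[venue] += 1
        self.avg[venue] += (exec_time_s - self.avg[venue]) / self.n[venue]
```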

    Multisite adaptive computation offloading for mobile cloud applications

    The sheer number of mobile devices and their fast adaptability have contributed to the proliferation of advanced mobile applications. These applications are latency-critical and demand high availability. They also often require intensive computational resources and consume substantial energy, while a mobile device has limited computation and energy capacity because of its physical size constraints. The heterogeneous mobile cloud environment consists of different computing resources, such as remote cloud servers in faraway data centres, cloudlets whose goal is to bring the cloud closer to the users, and nearby mobile devices that can be utilised to offload mobile tasks. Heterogeneity across mobile devices and the different sites includes software, hardware, and technology variations. Resource-constrained mobile devices can leverage this shared resource environment to offload their intensive tasks, conserving battery life and improving overall application performance. However, in such a loosely coupled network dominated by mobile devices, new challenges arise: how to seamlessly integrate mobile devices with all the offloading sites, how to simplify the deployment of runtime environments for serving offloading requests from mobile devices, how to identify which parts of the mobile application to offload, how to decide whether to offload them, and how to select the most suitable candidate offloading site, among others. To overcome these challenges, this research work contributes the design and implementation of MAMoC, a loosely coupled end-to-end mobile computation offloading framework. Mobile applications can be adapted to use the framework's client library, while the server components are deployed to the offloading sites to serve offloading requests. The evaluation of the offloading decision engine demonstrates the viability of the proposed solution for managing seamless and transparent offloading in distributed and dynamic mobile cloud environments. All the implemented components of this work are publicly available at the following URL: https://github.com/mamoc-repo
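
    A minimal sketch of the kind of candidate-site selection such a decision engine performs is shown below; the scoring function and its weights are assumptions for illustration, not MAMoC's published model:

```python
# Hypothetical scoring of heterogeneous offloading sites (remote cloud,
# cloudlet, nearby device); lower score is better.
def site_score(rtt_ms: float, bandwidth_mbps: float, speedup: float) -> float:
    """Penalize round-trip latency, reward bandwidth and compute speedup."""
    return rtt_ms / (bandwidth_mbps * speedup)

candidates = {
    "remote-cloud": site_score(rtt_ms=120, bandwidth_mbps=40,  speedup=8.0),
    "cloudlet":     site_score(rtt_ms=10,  bandwidth_mbps=100, speedup=4.0),
    "nearby-phone": site_score(rtt_ms=5,   bandwidth_mbps=30,  speedup=1.2),
}
best = min(candidates, key=candidates.get)
print(best)  # cloudlet: low RTT combined with a decent speedup wins here
```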

    Time-Optimized Task Offloading Decision Making in Mobile Edge Computing

    Mobile Edge Computing application domains such as vehicular networks, unmanned aerial vehicles, data analytics tasks at the edge, and augmented reality have recently emerged. In these domains, as mobile nodes move and have tasks to offload to edge servers, choosing an appropriate time and a well-suited server to guarantee quality of service can be challenging. We tackle the offloading decision-making problem by adopting the principles of Optimal Stopping Theory to minimize execution delay in a sequential decision manner. A performance evaluation is provided using real data sets, with results compared against the optimal solution. The results show that our approach significantly reduces task execution delay and comes very close to the optimal solution.
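
    As a concrete stand-in for the sequential decision rule, the sketch below applies the classic 1/e ("secretary") stopping rule to sequentially observed server delays; the paper's actual stopping rule may differ:

```python
# Classic 1/e optimal-stopping rule applied to sequentially observed delays;
# an illustrative stand-in, not necessarily the paper's exact rule.
import math
import random

def stop_and_pick(delays: list[float]) -> float:
    """Observe the first n/e candidates without committing, then accept the
    first delay better (lower) than everything seen in the learning phase."""
    n = len(delays)
    k = max(1, int(n / math.e))
    threshold = min(delays[:k])
    for d in delays[k:]:
        if d < threshold:
            return d
    return delays[-1]  # forced to accept the last candidate

random.seed(0)
observed = [random.uniform(5, 50) for _ in range(20)]  # ms, one per server
print(stop_and_pick(observed), min(observed))  # chosen delay vs. true optimum
```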

    MAMoC: Multisite Adaptive offloading framework for Mobile Cloud applications

    This paper presents MAMoC, a framework which brings together a diverse range of infrastructure types, including mobile devices, cloudlets, and remote cloud resources, under one unified API. MAMoC allows mobile applications to leverage the power of multiple offloading destinations. MAMoC's intelligent offloading decision engine adapts to contextual changes in this heterogeneous environment in order to reduce the overall runtime for both single-site and multi-site offloading scenarios. MAMoC is evaluated through a set of offloading experiments that measure the performance of our offloading decision engine. The results show that offloading computation using our framework can reduce the overall task completion time for both single-site and multi-site offloading scenarios.
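
    For the multi-site case, a simple way to picture the scheduling problem is greedy earliest-finish assignment of independent tasks across sites; the task sizes and site speedups below are invented for illustration, and this is not MAMoC's actual algorithm:

```python
# Greedy earliest-finish assignment of independent tasks across offloading
# sites; an illustrative toy scheduler with invented numbers.
import heapq

def schedule(tasks_s: list[float], site_speedups: dict[str, float]) -> dict[str, float]:
    """Return the finish time per site after greedy assignment: each task
    goes to the site that would currently finish it earliest."""
    heap = [(0.0, site) for site in site_speedups]        # (busy-until, site)
    heapq.heapify(heap)
    for t in sorted(tasks_s, reverse=True):               # longest task first
        busy, site = heapq.heappop(heap)
        heapq.heappush(heap, (busy + t / site_speedups[site], site))
    return {site: busy for busy, site in heap}

print(schedule([4, 3, 3, 2, 1], {"cloud": 8.0, "cloudlet": 4.0, "phone": 1.5}))
```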