
    Code offloading in opportunistic computing

    With the advent of cloud computing, applications are no longer tied to a single device: they can be migrated to a high-performance machine located in a distant data center. The key advantage is improved performance and, consequently, a better user experience. This activity is commonly referred to as computational offloading, and it has been intensively investigated in recent years. The natural candidate for computational offloading is the cloud, but recent results point out the hidden costs of cloud reliance in terms of latency and energy; Cuervo et al. illustrate the limitations of cloud-based computational offloading caused by WAN latency. This dissertation confirms the results of Cuervo et al. and illustrates further use cases where the cloud may not be the right choice. The dissertation addresses the following question: is it possible to build a novel approach to computation offloading that overcomes the limitations of the state of the art? In other words, is it possible to create a computational offloading solution that can use local resources when the cloud is not usable, and remove the strong bond with the local infrastructure? To this end, I propose a novel paradigm for computation offloading named anyrun computing, whose goal is to use any piece of higher-end hardware (locally or remotely accessible) to offload a portion of the application. With anyrun computing I remove the boundaries that tie the solution to a fixed infrastructure by adding locally available devices, which increases the chances of offloading successfully.

    To achieve the goals of the dissertation it is fundamental to have a clear view of all the steps that take part in the offloading process. To this end, I first provide a categorization of these activities, together with their interactions, and assess their impact on the system. The outcome of this analysis is a mapping of the problem to a combinatorial optimization problem that is notoriously NP-hard. There is a set of well-known approaches to solving such problems, but in this scenario they cannot be used because they require a global view that can only be maintained by a centralized infrastructure; thus, local solutions are needed. To tackle the anyrun computing paradigm empirically, I propose the anyrun computing framework (ARC), a novel software framework whose objective is to decide whether offloading to any resource-rich device willing to lend assistance is advantageous compared to local execution, with respect to a rich array of performance dimensions. The core of ARC is the inference model, which receives a rich set of information about the available remote devices from the SCAMPI opportunistic computing framework (developed within the European project SCAMPI) and uses this information to profile a given device; in other words, it decides whether offloading is advantageous compared to local execution, i.e. whether it can reduce the local footprint in the dimensions of interest (CPU and RAM usage, execution time, and energy consumption).

    To empirically evaluate ARC I present a set of experimental results in the cloud, cloudlet, and opportunistic domains. In the cloud domain, I used the state of the art in cloud solutions over a set of significant benchmark problems and with three WAN access technologies (3G, 4G, and high-speed WAN).
The main outcome is that the cloud is an appealing solution for a wide variety of problems, but there is a set of circumstances where the cloud performs poorly. Moreover, I have empirically shown the limitations of cloud-based approaches: in some circumstances, problems with high transmission costs tend to perform poorly unless they also have high computational needs. The second part of the evaluation considers the opportunistic and cloudlet scenarios, where I used my custom-made testbed to compare ARC with MAUI, the state of the art in computation offloading. To this end, I performed two distinct experiments: the first in a cloudlet environment and the second in an opportunistic environment. The key outcome is that ARC virtually matches the performance of MAUI (in terms of energy savings) in the cloudlet environment, but improves on it by 50% to 60% in the opportunistic domain.
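
The decision step described above can be illustrated with a minimal sketch: a hypothetical inference step that compares estimated local and remote footprints across the four dimensions the abstract names (CPU, RAM, execution time, energy). The `Footprint` type, the weights, and the numbers below are assumptions for illustration, not ARC's actual inference model.

```python
# Minimal sketch of an ARC-style offload decision (illustrative only: the
# Footprint fields, weights, and example numbers are assumptions, not the
# dissertation's actual inference model).
from dataclasses import dataclass

@dataclass
class Footprint:
    cpu: float       # estimated CPU load on the mobile device
    ram: float       # estimated RAM usage on the mobile device
    time_s: float    # estimated completion time (s), incl. transfer if remote
    energy_j: float  # estimated energy drained from the battery (J)

def should_offload(local: Footprint, remote: Footprint,
                   weights=(0.25, 0.25, 0.25, 0.25)) -> bool:
    """Offload only if the weighted remote/local ratio over all
    dimensions of interest is below 1 (i.e. remote is cheaper overall)."""
    dims = ("cpu", "ram", "time_s", "energy_j")
    ratio = sum(w * getattr(remote, d) / getattr(local, d)
                for w, d in zip(weights, dims))
    return ratio < 1.0

# Example: a nearby cloudlet with cheap transfer beats local execution.
local = Footprint(cpu=0.9, ram=0.7, time_s=12.0, energy_j=30.0)
cloudlet = Footprint(cpu=0.1, ram=0.2, time_s=4.0, energy_j=8.0)
print(should_offload(local, cloudlet))  # True
```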

    A framework for the mobile Web of Things

    Over the years, the Internet has evolved into its next stage: the Internet of Things (IoT). IoT is not a single technology; it enables all kinds of entities, from computers, mobile phones, cars, and appliances to animals and virtual sensors, to connect and interact with each other over the Internet without constant human configuration and control. Mobile devices such as smartphones and tablet PCs have become essential to everyday life, and their extended capabilities have motivated research on the mobile Internet of Things. Although recent smartphones enjoy powerful processors and high-speed 3G/4G mobile Internet connectivity, operating them continuously at full capability quickly drains the battery. This thesis presents an energy-efficient, lightweight mobile Web service provisioning framework for mobile sensing, built on lightweight protocols designed for constrained IoT environments, which provide an energy-efficient way of communicating. The thesis also examines, in depth, the energy savings achieved by the developed mobile Web service provisioning framework. Several case studies using the proposed framework were implemented on real devices and thoroughly tested as a proof of concept.
    https://www.ester.ee/record=b522498
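
The abstract does not name the lightweight protocols it relies on; CoAP is a typical choice for constrained IoT environments, so the sketch below assumes it and uses the aiocoap Python library to expose a single sensor reading as a Web resource. The resource path and payload are made up for illustration and are not taken from the thesis.

```python
# Hypothetical sketch: exposing one sensor reading over CoAP with aiocoap.
# The protocol choice (CoAP), the resource path, and the payload are
# illustrative assumptions, not the thesis's actual framework.
import asyncio
import aiocoap
import aiocoap.resource as resource

class TemperatureResource(resource.Resource):
    async def render_get(self, request):
        # A real provider would read the device's sensor here.
        return aiocoap.Message(payload=b"21.5")

async def main():
    site = resource.Site()
    site.add_resource(['sensors', 'temperature'], TemperatureResource())
    await aiocoap.Context.create_server_context(site)
    await asyncio.get_running_loop().create_future()  # serve forever

if __name__ == "__main__":
    asyncio.run(main())
```

Compared with an HTTP/TCP stack, CoAP runs over UDP with a 4-byte fixed header, which is the kind of overhead reduction that makes such protocols attractive on battery-powered devices.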

    From MANET to people-centric networking: Milestones and open research challenges

    In this paper, we discuss the state of the art of (mobile) multi-hop ad hoc networking, with the aim of presenting the current status of research activities and identifying both the consolidated research areas, where research opportunities are limited, and the hot and emerging areas where further research is required. We start by briefly discussing the MANET paradigm and why research on MANET protocols is now a cold research topic. Then we analyze the active research areas. Specifically, after discussing wireless-network technologies, we analyze four successful ad hoc networking paradigms that emerged from the MANET world: mesh networks, opportunistic networks, vehicular networks, and sensor networks. We also present an emerging research direction in the multi-hop ad hoc networking field: people-centric networking, triggered by the increasing penetration of smartphones in everyday life, which is generating a people-centric revolution in computing and communications.

    Do we all really know what a fog node is? Current trends towards an open definition

    Fog computing has emerged as a promising technology that can bring cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, how data centers can build the cloud infrastructure, and how applications can make use of this infrastructure, there is no common picture of what fog computing, and particularly a fog node as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a "mini-cloud" located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety of, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing the said mini-cloud. Other proposals have their own definition of what a fog node is, usually in relation to a specific edge device, a specific use case, or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes, paying special attention to the contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts and the lessons learned from their implementation, and show how a conceptual framework is emerging towards a unifying fog node definition. We focus on the core functionalities of a fog node as well as on the accompanying opportunities and challenges towards its practical realization in the near future.

    Mobilouds: An Energy Efficient MCC Collaborative Framework With Extended Mobile Participation for Next Generation Networks

    With the emergence of mobile cloud computing (MCC), its associated energy implications are being witnessed at a larger scale. Since offloading computationally intensive tasks to cloud datacentres is the basic concept behind MCC, most of the mobile terminal resources participating in the MCC collaborative execution are wasted, as they remain idle until the mobile terminals receive responses from the datacentres. This wastage comes on top of that of the cloud resources, which are already recognized as massive energy consumers. Though the energy consumed by idle mobile resources is insignificant in comparison with its cloud counterpart, such consumption has a drastic impact on mobile devices, causing unnecessary battery drain. To this end, this paper proposes Mobilouds, which encompasses a multi-tier processing architecture with various levels of process cluster capacity and a software application that manages the energy-efficient utilization of these process clusters. The proposed Mobilouds framework encourages mobile device participation in MCC collaborative execution, thereby reducing the presence of idle mobile resources and utilizing them in the actual task execution. Our performance evaluation demonstrates that the Mobilouds framework selects the most energy-time balanced process clusters for task execution by effectively utilizing the available resources, compared with an entire-cloud offloading strategy over 5G/4G networks.
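
A toy version of the cluster-selection idea described above: each candidate process cluster gets an estimated (energy, completion time) pair, and the most energy-time balanced one is chosen via a weighted cost. The cluster names, estimates, and weighting are assumptions for illustration, not the actual Mobilouds policy.

```python
# Illustrative sketch of picking the most energy-time balanced "process
# cluster" among hypothetical tiers; names, numbers, and the weighting are
# assumptions, not the Mobilouds framework's actual selection logic.

def balance_cost(energy_j: float, time_s: float, alpha: float = 0.5) -> float:
    """Weighted energy-time cost; alpha trades battery drain against latency."""
    return alpha * energy_j + (1 - alpha) * time_s

# Estimated (energy in joules, completion time in seconds) per tier,
# with network transfer already folded into the offloaded tiers.
clusters = {
    "mobile_only":       (40.0, 25.0),
    "mobile_plus_peers": (22.0, 14.0),
    "full_cloud_4g":     (30.0, 9.0),
}

best = min(clusters, key=lambda c: balance_cost(*clusters[c]))
print(best)  # -> "mobile_plus_peers" for these numbers and alpha = 0.5
```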

    Mobile Big Data Analytics in Healthcare

    Mobile and ubiquitous devices are everywhere around us, generating a considerable amount of data. The concept of mobile computing and analytics is expanding because we use mobile devices day in and day out without even realizing it. These devices use Wi-Fi, Bluetooth, or mobile data to stay intermittently connected to the world, generating, sending, and receiving data on the move. The latest mobile applications, incorporating graphics, video, and audio, are the main sources of load on mobile devices, consuming battery, memory, and processing power. Mobile Big Data analytics covers, for instance, big health data, big location data, big social media data, and big heterogeneous data. Healthcare is undoubtedly one of the most data-intensive industries nowadays, and the challenge lies not only in acquiring, storing, processing, and accessing data, but also in generating useful insights from it. Insights generated from health data may reduce health monitoring costs, enrich disease diagnosis, therapy, and care, and even save human lives. The challenge in mobile data and Big Data analytics is how to meet the growing performance demands of these activities while minimizing mobile resource consumption. This thesis proposes a scalable architecture for mobile Big Data analytics that implements three new algorithms (mobile resources optimization, mobile analytics customization, and mobile offloading) for the effective usage of resources in performing mobile data analytics. The mobile resources optimization algorithm monitors the available resources and switches off unused network connections and application services whenever resources are limited. The analytics customization algorithm attempts to save energy by customizing the analytics process using data-aware techniques. Finally, the mobile offloading algorithm decides on the fly whether to process data locally or delegate it to a cloud back-end server. The ultimate goal of this research is to provide healthcare decision makers with the advancements in mobile Big Data analytics and to support them in handling large and heterogeneous health datasets effectively on the move.
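
A minimal sketch of the third algorithm's on-the-fly choice between local processing and cloud delegation, assuming battery level, data size, uplink bandwidth, and cloud reachability as inputs; the thresholds are invented for illustration and are not the thesis's actual algorithm.

```python
# Hypothetical sketch of an on-the-fly local-vs-cloud decision; the inputs
# and thresholds are illustrative assumptions, not the thesis's algorithm.

def offload_decision(data_mb: float, battery_pct: float,
                     uplink_mbps: float, cloud_reachable: bool) -> str:
    """Return 'cloud' when shipping the data is cheaper than local analytics,
    'local' otherwise."""
    if not cloud_reachable:
        return "local"
    transfer_s = (data_mb * 8) / max(uplink_mbps, 0.1)  # rough upload time
    if battery_pct < 20 and transfer_s < 30:
        return "cloud"   # preserve battery: push work out if it is cheap to send
    if data_mb > 50 and transfer_s > 60:
        return "local"   # large dataset on a slow link: analyze in place
    return "cloud" if transfer_s < 10 else "local"

print(offload_decision(data_mb=12, battery_pct=15,
                       uplink_mbps=20, cloud_reachable=True))  # 'cloud'
```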