
    Leveraging Resources on Anonymous Mobile Edge Nodes

    Smart devices have become an essential component of everyday life. The rapid rise of smartphones, IoT devices, and wearables has enabled applications that were not possible a few years ago, e.g., health monitoring and online banking. Meanwhile, smart sensing laid the infrastructure for smart homes and smart cities. The intrusive nature of smart devices granted access to huge amounts of raw data, and researchers seized the moment with complex algorithms and data models to process the data over the cloud and extract as much information as possible. However, the pace and amount of data generation, together with the networking protocols transmitting data to cloud servers, fell short of touching more than 20% of what was generated at the edge of the network. On the other hand, smart devices carry a large set of resources, e.g., CPU, memory, and camera, that sit idle most of the time. Studies showed that much of the time resources are either idle, e.g., while the user is sleeping or eating, or underutilized, e.g., inertial sensors during phone calls. These findings articulate a problem: large data sets go unprocessed while idle resources sit in close proximity.

In this dissertation, we propose harvesting underutilized edge resources and using them to process the huge amount of data that is generated, and currently wasted, by applications running at the edge of the network. We propose flipping the concept of cloud computing: instead of sending massive amounts of data for processing over the cloud, we distribute lightweight applications that process data on users' smart devices. We envision this approach to enhance the network's bandwidth utilization, grant access to larger datasets, provide low-latency responses, and, more importantly, involve up-to-date user context in processing. However, such benefits come with a set of challenges: How to locate suitable resources? How to match resources with data providers? How to inform resources what to do, and when? How to orchestrate applications' execution on multiple devices? And how to communicate between devices at the edge?

Communication between devices at the edge has different parameters in terms of device mobility, topology, and data rate. Standard protocols, e.g., Wi-Fi or Bluetooth, were not designed for edge computing and hence do not offer a perfect match. Edge computing requires a lightweight protocol that provides quick device discovery, a decent data rate, and multicasting to devices in the proximity. Bluetooth enjoys wide acceptance within the IoT community; however, its low data rate and unicast communication limit its use at the edge. Despite being the most suitable communication protocol for edge computing, and unlike other protocols, Bluetooth has a closed-source implementation that keeps the lower layers out of reach of research study, enhancement, and customization. Hence, we offer an open-source version of Bluetooth and then customize it for edge computing applications.

In this dissertation, we propose Leveraging Resources on Anonymous Mobile Edge Nodes (LAMEN), a three-tier framework in which edge devices are clustered by proximity. On having an application to execute, LAMEN clusters discover and allocate resources, share the application's executable with those resources, and estimate incentives for each participating resource. In a cluster, a single head node, i.e., the mediator, is responsible for resource discovery and allocation. Mediators orchestrate cluster resources and present them as a virtually large homogeneous resource.
For example, two devices, each offering either a camera or a speaker, are presented outside the cluster as a single device with both camera and speaker; this can be extended to any combination of resources. The mediator then handles application distribution within the cluster as needed. We also provide a communication protocol that is customizable to the edge environment and the application's needs. Pushing lightweight applications that end devices can execute over their locally generated data has the following benefits: first, it avoids sharing user data with cloud servers, which is a privacy concern for many users; second, it introduces mediators as local cloud controllers closer to the edge; third, it hides the user's identity behind mediators; and finally, it enhances bandwidth utilization by keeping raw data at the edge and transmitting only processed information. Our evaluation shows optimized resource lookup and application assignment schemes, as well as scalability in handling networks with a large number of devices. To overcome the communication challenges, we provide an open-source communication protocol customized for edge computing applications, which can nevertheless be used beyond the scope of LAMEN. Finally, we present three applications that show how LAMEN enables various application domains at the edge of the network. In summary, we propose a framework to orchestrate underutilized resources at the edge of the network towards processing data generated in their proximity. Using the approaches explained later in the dissertation, we show how LAMEN enhances the performance of applications and enables a new set of applications that were not previously feasible.
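
To make the mediator's role concrete, the following is a minimal illustrative sketch (not LAMEN's actual implementation) of how a cluster head might aggregate member resources into one virtual node and greedily match an application's resource requirements; the class and method names (Device, Mediator, allocate) are assumptions introduced purely for illustration.

# Illustrative sketch of a mediator presenting a cluster as one virtual resource.
# All names are hypothetical; this is not LAMEN's codebase.
class Device:
    def __init__(self, dev_id, resources):
        self.dev_id = dev_id
        self.resources = set(resources)      # e.g. {"camera", "cpu"}

class Mediator:
    def __init__(self):
        self.cluster = []                    # devices discovered in proximity

    def register_device(self, device):
        self.cluster.append(device)

    def virtual_resources(self):
        # Union of member resources, exposed as one homogeneous pool.
        pool = set()
        for dev in self.cluster:
            pool |= dev.resources
        return pool

    def allocate(self, required):
        # Greedy matching: pick devices until all requirements are covered.
        assignment, remaining = {}, set(required)
        for dev in self.cluster:
            offered = dev.resources & remaining
            if offered:
                assignment[dev.dev_id] = offered
                remaining -= offered
        return assignment if not remaining else None

mediator = Mediator()
mediator.register_device(Device("phone-1", ["camera", "cpu"]))
mediator.register_device(Device("phone-2", ["speaker"]))
print(mediator.virtual_resources())              # {'camera', 'cpu', 'speaker'}
print(mediator.allocate({"camera", "speaker"}))  # {'phone-1': {'camera'}, 'phone-2': {'speaker'}}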

    Service-oriented and evidence-aware mobile cloud computing

    Mobile and cloud computing are two of the biggest forces in computer science. While the cloud provides the user with a ubiquitous computational and storage platform to process complex tasks, the smartphone grants the user mobility features to process simple tasks anytime and anywhere. Smartphones, driven by their need for processing power, storage space, and energy saving, are looking towards remote cloud infrastructure to solve these problems. As a result, the main research question of this work is how to bring the cloud infrastructure closer to the mobile user. In this thesis, we investigated how mobile cloud services can be integrated within mobile apps. We found that outsourcing a task to the cloud requires integrating and considering multiple aspects of the clouds, such as resource-intensive processing, asynchronous communication with the client, programmatic provisioning of resources (Web APIs), and cloud intercommunication.
Hence, we proposed a Mobile Cloud Middleware (MCM) framework that uses declarative service composition to outsource tasks from the mobile to multiple clouds with minimal data transfer. On the other hand, it has been demonstrated that computational offloading is a key strategy to extend the battery life of the device and improve the performance of mobile apps. We also investigated the issues that prevent the adoption of computational offloading and proposed a framework, namely Evidence-aware Mobile Computational Offloading (EMCO), which uses a community of devices to capture all the possible contexts of code execution as evidence. By analyzing the evidence, EMCO aims to determine the suitable conditions to offload. EMCO models the evidence in terms of distribution rates for both the local and remote cases. By comparing those distributions, EMCO infers the right properties under which to offload. EMCO proves to be more effective than other state-of-the-art computational offloading frameworks. Finally, we investigated how computational offloading can be utilized to enhance the perception that the user has of an app. Our main motivation behind accelerating the perception at multiple response-time levels is to provide adaptive quality of experience (QoE), which can be used as an engagement strategy that increases the lifetime of a mobile app.
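
As a hedged illustration of the kind of evidence-based decision described above, the sketch below compares crowd-sourced local and remote execution-time samples and offloads only when the remote path, including an assumed network round trip, wins by a safety margin; the function name, margin, and figures are assumptions, not EMCO's actual model.

from statistics import mean

# Illustrative evidence-based offloading check in the spirit of EMCO.
# Thresholds and sample data are assumed values for demonstration only.
def should_offload(local_times_s, remote_times_s, rtt_s=0.05, margin=1.1):
    """Offload when remote execution plus the network round trip is
    expected to beat local execution by a safety margin."""
    expected_local = mean(local_times_s)
    expected_remote = mean(remote_times_s) + rtt_s
    return expected_local > margin * expected_remote

# Evidence collected from a community of devices (seconds).
local_evidence = [1.9, 2.3, 2.1, 2.6]
remote_evidence = [0.7, 0.9, 0.8, 0.6]
print(should_offload(local_evidence, remote_evidence))   # True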

    Adaptive Process Distribution at the Edge of IoT using the Integration of BPMS and Containerization

    Emerging cloud-centric Internet of Things (IoT) systems rely on distant data centers to manage their processes, which raises the issue of latency. To address this issue, researchers have introduced edge computing methodologies that carry out computation closer to the edge network of the IoT system. Among the numerous edge computing approaches, the Mist computing paradigm emphasizes mechanisms that move the computation further toward the front-end IoT devices. Although the architecture of Mist computing is promising, it raises a new challenge: how should a Business Process Management System for IoT (BPMS4IoT) distribute business process workflows to heterogeneous IoT devices? In general, executing business process workflows relies on a common platform for executing customized tasks. For example, if the management server defines a Python script task in a workflow that has been allocated to an IoT device, the workflow engine of that device must have a compatible execution method. Such a requirement is inflexible given the heterogeneity of IoT devices. Therefore, in this thesis, the author proposes a framework that decouples the workflow task execution method from the workflow engine using containerization technology. A proof-of-concept prototype has been developed and tested on several single-board-computer-based IoT devices. Further, a case study has been performed to demonstrate the performance of the proposed framework compared to a cloud-centric system.
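
A minimal sketch, under stated assumptions, of the decoupling idea described above: the workflow engine knows only the container image that implements a task and delegates execution to the container runtime, so the device needs no task-specific interpreter. The image name and the use of the Docker CLI via subprocess are illustrative choices, not the thesis prototype.

import subprocess

# Illustrative sketch: run a workflow task inside a container instead of
# executing a script natively in the engine. The image name is hypothetical.
def run_task_in_container(image, args=(), timeout_s=300):
    cmd = ["docker", "run", "--rm", image, *args]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
    if result.returncode != 0:
        raise RuntimeError(f"task failed: {result.stderr.strip()}")
    return result.stdout

# Example (hypothetical image): aggregate one minute of sensor readings.
# output = run_task_in_container("example/aggregate-readings:latest", ["--window", "60"])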

    Device discovery in D2D communication: A survey

    Device-to-Device (D2D) communication was first considered out-band to manage energy issues in wireless sensor networks. The primary target was to obtain information about the system topology for subsequent communication. D2D communication has now been legitimated in-band by the 3rd Generation Partnership Project (3GPP). Device Discovery (DD) is the primary task in initiating D2D communication, and every D2D application benefits from DD for end-to-end link maintenance and data relay when the direct path is obstructed. DD faces new difficulties because devices are mobile rather than static, and this mobility makes D2D communication more challenging. For in-band D2D, discovery in single-cell, multi-cell, and dense-area scenarios is not properly standardized, causing latency, inaccuracy, and energy consumption. Among extensive studies on limiting energy consumption and latency, DD is one of the essential parts, concentrating on access and communication. In this paper, a comprehensive survey of DD challenges, e.g., single-cell/multi-cell and dense-area discovery, energy consumption during discovery, discovery delay, and discovery security, is presented toward an effective paradigm of D2D networks. To address device (user) needs, an architecture is projected that promises to overcome the various implementation challenges of DD. The paper mainly focuses on DD taxonomy and classification, with an emphasis on discovery procedures and algorithms, a summary of advances and issues, and ways for potential enhancements. To ensure secure DD and D2D communication, promising research directions are proposed based on the taxonomy.
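
To make the energy/delay tradeoff in discovery concrete, here is a toy, back-of-the-envelope sketch (not tied to any 3GPP procedure) of periodic beaconing: shorter beacon intervals reduce expected discovery latency but raise the energy spent per observation window; all figures are assumed.

# Toy model of the discovery latency vs. energy tradeoff for periodic
# beaconing; every number here is an illustrative assumption.
def discovery_cost(beacon_interval_s, beacon_energy_mj=0.5, window_s=60):
    beacons_sent = window_s / beacon_interval_s
    energy_mj = beacons_sent * beacon_energy_mj
    expected_latency_s = beacon_interval_s / 2    # uniform-arrival assumption
    return energy_mj, expected_latency_s

for interval in (0.5, 2.0, 10.0):
    energy, latency = discovery_cost(interval)
    print(f"interval={interval:>4}s  energy={energy:6.1f} mJ  latency~{latency:4.2f} s")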

    A comprehensive survey on Fog Computing: State-of-the-art and research challenges

    Cloud computing with its three key facets (i.e., Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service) and its inherent advantages (e.g., elasticity and scalability) still faces several challenges. The distance between the cloud and the end devices can be an issue for latency-sensitive applications such as disaster management and content delivery. Service level agreements (SLAs) may also impose processing at locations where the cloud provider has no data centers. Fog computing is a novel paradigm to address such issues. It enables provisioning resources and services outside the cloud, at the edge of the network, closer to end devices, or eventually at locations stipulated by SLAs. Fog computing is not a substitute for cloud computing but a powerful complement: it enables processing at the edge while still offering the possibility to interact with the cloud. This paper presents a comprehensive survey of fog computing. It critically reviews the state of the art in the light of a concise set of evaluation criteria, covering both the architectures and the algorithms that make up fog systems. Challenges and research directions are also introduced. In addition, the lessons learned are reviewed, and the prospects are discussed in terms of the key role fog is likely to play in emerging technologies such as the tactile Internet.
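
As a hedged sketch of the placement reasoning the survey motivates, the snippet below chooses between a nearby fog node and the distant cloud based on an application's latency budget; the round-trip times and the decision rule are assumptions for illustration only.

# Illustrative fog-vs-cloud placement decision driven by a latency budget.
# Round-trip times are assumed values, not measurements.
def choose_target(latency_budget_ms, fog_rtt_ms=10, cloud_rtt_ms=120):
    """Prefer the cloud for its elasticity unless the latency budget
    forces processing at the nearby fog node."""
    if cloud_rtt_ms <= latency_budget_ms:
        return "cloud"
    if fog_rtt_ms <= latency_budget_ms:
        return "fog"
    return "reject"

print(choose_target(200))   # cloud
print(choose_target(50))    # fog
print(choose_target(5))     # reject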

    Internet of Things and data mining: from applications to techniques and systems

    The Internet of Things (IoT) is the result of the convergence of sensing, computing, and networking technologies, allowing devices of varying sizes and computational capabilities (things) to intercommunicate. This communication can be achieved locally, enabling what is known as edge and fog computing, or through the well-established Internet infrastructure, exploiting the computational resources in the cloud. The IoT paradigm enables a new breed of applications in various areas, including health care, energy management, and smart cities. This paper starts off by reviewing these applications and their potential benefits. Challenges facing the realization of such applications are then discussed. The sheer amount of data stemming from the devices forming the IoT requires new data mining systems and techniques, which are discussed and categorized later in this paper. Finally, the paper concludes with future research directions.