6 research outputs found

    SCCOF: smart cooperative computation offloading framework for mobile cloud computing services

    Virtual reality games and image-processing Apps are examples of mobile cloud computing services (MCCS) now common on Smartphones (SPs), requiring intensive processing and/or wireless networking. The consequences are slow execution and heavy battery drain. Offloading the intensive computations of such Apps to a cloud-based server can overcome these consequences, but it introduces time delays and communication overheads. This paper proposes instead offloading to nearby computing resources in a cooperative computation-sharing network formed over short-range wireless connectivity. The proposed SCCOF reduces both offloading response time and energy-consumption overheads. SCCOF is supported by an intelligent cloud-located controller that forms the cooperative resource-sharing network on the go, when needed, from the devices available in the vicinity, falling back to the cloud if necessary. Upon initiation of the MCCS service via the App, the controller devises both the offloaded VMs and the offloading network. A test-scenario study evaluating the performance of SCCOF showed savings of up to 16.2x in execution time and 57.25% in energy consumption.
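    The abstract does not spell out the controller's decision rule, but the trade-off it describes (remote speed-up versus transfer delay and radio energy) can be captured in a few lines. The Python sketch below is illustrative only; the function name and all parameter values are assumptions, not part of SCCOF.

    ```python
    # Minimal sketch of the offload-or-not decision an SCCOF-style controller
    # must make. All names and values are illustrative assumptions.

    def offload_gain(cycles, data_bits, f_local_hz, f_peer_hz,
                     bandwidth_bps, p_compute_w, p_radio_w):
        """Compare local execution against offloading to a nearby peer.

        Returns (time_saved_s, energy_saved_j); offload when both are positive.
        """
        t_local = cycles / f_local_hz                  # local execution time
        e_local = p_compute_w * t_local                # local energy cost

        t_tx = data_bits / bandwidth_bps               # short-range transfer time
        t_offload = t_tx + cycles / f_peer_hz          # transfer + remote execution
        e_offload = p_radio_w * t_tx                   # host only pays for the radio

        return t_local - t_offload, e_local - e_offload

    # Example: a 2 GCycle task, 1 MB payload, peer 4x faster, 100 Mbit/s link.
    dt, de = offload_gain(2e9, 8e6, 1e9, 4e9, 100e6, 0.9, 0.3)
    print(f"time saved: {dt:.2f} s, energy saved: {de:.2f} J")
    ```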

    Mobility-aware fog computing in dynamic networks with mobile nodes: A survey

    Fog computing is an evolving paradigm that addresses the latency and spatio-temporal issues of cloud services by extending cloud computing and storage into the vicinity of the service requester. In dynamic networks, where both the mobile fog nodes and the end users exhibit time-varying characteristics, including dynamic changes in network topology, there is a need for mobility-aware fog computing, which is challenging due to these various dynamics and has not yet been systematically explored. This paper presents a comprehensive survey of fog computing compliant with the OpenFog (IEEE 1934) standardised concept, in which the mobility of fog nodes constitutes an integral part. It reviews the state-of-the-art research in fog computing implemented with mobile nodes, identifying several models of the fog computing concept built on the principles of opportunistic networking, social communities, temporal networks, and vehicular ad-hoc networks. For each of these models, the contributing research studies are critically examined to provide insight into the open issues and future directions in mobile fog computing research.
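    As a concrete illustration of why mobility-awareness matters (not taken from any surveyed system), the sketch below scores candidate fog nodes by measured latency but first filters out nodes whose predicted remaining contact time is too short to finish the task; all names and numbers are assumptions.

    ```python
    # Illustrative sketch: picking a mobile fog node in a time-varying topology.
    # A node is only useful if it stays in radio range long enough to finish
    # the task, so predicted contact time gates the choice.

    from dataclasses import dataclass

    @dataclass
    class FogNode:
        name: str
        rtt_ms: float              # measured round-trip latency
        residual_contact_s: float  # predicted remaining time in radio range

    def pick_node(nodes, task_runtime_s):
        # Keep nodes expected to stay reachable for the whole task (+ margin),
        # then take the lowest-latency survivor; None means fall back to the cloud.
        viable = [n for n in nodes if n.residual_contact_s > 1.5 * task_runtime_s]
        return min(viable, key=lambda n: n.rtt_ms, default=None)

    nodes = [FogNode("bus-12", 8.0, 4.0), FogNode("kiosk-3", 15.0, 600.0)]
    print(pick_node(nodes, task_runtime_s=10.0))  # kiosk-3: bus-12 leaves too soon
    ```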

    A Smart Edge Computing Resource, formed by On-the-go Networking of Cooperative Nearby Devices using an AI-Offloading Engine, to Solve Computationally Intensive Sub-tasks for Mobile Cloud Services

    The latest Mobile Smart Devices (MSDs) and IoT deployments have encouraged running “Computation Intensive Applications/Services” onboard MSDs to perform the on-the-go sub-tasks required by these Apps/Services, such as analysis, banking, navigation, social media and gaming. Doing this requires that the MSD have powerful processing resources to reduce execution time, high connectivity throughput to minimise latency, and a high-capacity battery, so as not to impact the MSD's availability/usability between charges. Offloading such Apps from the host-MSD to a Cloud server helps, but introduces network traffic and connectivity overheads, even with 5G. Offloading to an Edge server also helps, but Edge servers are part of a pre-planned computing infrastructure whose demand and rollout are hard to predict, being generated by a push from MSD/App makers and a pull from users.

    To address this, this research work has developed a “Smart Edge Computing Resource”, formed on the go by networking cooperative MSDs/Servers in the vicinity of the host-MSD running the computing-intensive App. This solution is achieved by:

    (1) Developing an intelligent engine, hosted in the Cloud, that profiles “computing-intensive Apps/Services” and partitions the overall task into suitable sub-task chunks for execution on the host-MSD together with other available nearby computing resources, which can include other MSDs, PCs, iPads and local servers. This is realised through an “Edge-side Computing Resource engine” that intelligently divides the processing of Apps/Services among several MSDs in parallel.

    (2) Adding a second, “Cloud-side AI-engine” that recruits any available cooperative MSDs and provides the host-MSD with the best scenario for partitioning and offloading the overall App/Service. It uses a performance-scoring algorithm (see the sketch below) to schedule each sub-task onto the assisting device with the most powerful processor and the most battery capacity. We built a dataset of 600 scenarios, used with a Deep Neural Network model, to speed up the offloading decision for subsequent executions.

    (3) Dynamically forming the on-the-go resource network between the chosen assisting devices and the App/Service host-MSD, based on the best wireless connectivity available between them. An Importance Priority Weighting cost estimator calculates the overhead cost and efficiency gain of processing the sub-tasks on each available assisting device, and a local peer-to-peer connectivity protocol (“Nearby API and/or Post API”) handles communication. Sub-tasks are offloaded and processed among the participating devices in parallel, with results retrieved on completion.

    The results show that our solution achieved, on average, 40.2% faster processing, 28.8% less battery consumption and 33% lower latency than other methods of executing the same Apps/Services.
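    The thesis's actual scoring formula and Importance Priority Weighting coefficients are not given in the abstract, so the following Python sketch only illustrates the general idea of weighted device scoring followed by greedy sub-task assignment; the feature set, weights and normalisation are assumptions.

    ```python
    # Hedged sketch of a weighted device-scoring step in the spirit of the
    # "Importance Priority Weighting" cost estimator. Weights, features and
    # normalisation below are assumptions; the abstract does not publish them.

    WEIGHTS = {"cpu": 0.4, "battery": 0.3, "link": 0.3}  # importance priorities

    def score(device):
        """Score a candidate assisting device on normalised [0,1] features."""
        return sum(WEIGHTS[k] * device[k] for k in WEIGHTS)

    def assign_subtasks(subtasks, devices):
        """Greedily map the heaviest sub-task chunks to the best-scoring devices."""
        ranked = sorted(devices, key=score, reverse=True)
        ordered = sorted(subtasks, key=lambda s: s["cycles"], reverse=True)
        # Round-robin over ranked devices so every participant runs in parallel.
        return {s["id"]: ranked[i % len(ranked)]["name"]
                for i, s in enumerate(ordered)}

    devices = [
        {"name": "tablet", "cpu": 0.9, "battery": 0.8, "link": 0.7},
        {"name": "phone-2", "cpu": 0.5, "battery": 0.4, "link": 0.9},
    ]
    subtasks = [{"id": "t1", "cycles": 4e9}, {"id": "t2", "cycles": 1e9}]
    print(assign_subtasks(subtasks, devices))  # {'t1': 'tablet', 't2': 'phone-2'}
    ```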

    Energy and Delay Efficient Computation Offloading Solutions for Edge Computing

    This thesis collects a selected set of outcomes of a PhD course in Electronics, Telecommunications, and Information Technologies Engineering, focused on designing techniques to optimize computational resources in different wireless communication environments. Mobile Edge Computing (MEC) is a novel, distributed computational paradigm that has emerged to address the high user demand of 5G. In MEC, edge devices can share their resources and collaborate in terms of storage and computation. One such sharing technique is computation offloading, which brings many advantages to the network edge, from lower communication latency to lower energy consumption for computation. However, communication among the devices must be managed so that resources are exploited efficiently. To this aim, this dissertation analyzes computation offloading in wireless environments that differ in the number of users, network traffic, resource availability and device locations, in order to optimize resource allocation at the network edge. The studies are organized into four main sections. The first introduces computational-sharing technologies, defines the computation-offloading problem and presents its challenges. The second proposes two partial-offloading techniques: the first work proposes centralized and distributed architectures, while the second proposes an Evolutionary Algorithm for task offloading. The third views the offloading problem from a different perspective, in which end users can harvest energy either from renewable sources or through Wireless Power Transfer. The fourth studies MEC in vehicular environments: one work introduces a heuristic for computation offloading in the Internet of Vehicles, and the other proposes a learning-based approach grounded in bandit theory.
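    A bandit view of offloading treats each edge server as an arm whose reward reflects observed task delay. The sketch below uses textbook UCB1 as a stand-in; it is not necessarily the variant developed in the thesis, and the environment model is invented for illustration.

    ```python
    # Sketch of bandit-based server selection for offloading: each edge server
    # is an arm, and the reward is negative task delay. UCB1 is a standard
    # algorithm; the thesis may use a different bandit formulation.

    import math
    import random

    def ucb1_offload(n_servers, rounds, measure_delay):
        counts = [0] * n_servers
        means = [0.0] * n_servers
        for t in range(1, rounds + 1):
            if t <= n_servers:                  # play every arm once first
                arm = t - 1
            else:                               # then exploit + exploration bonus
                arm = max(range(n_servers),
                          key=lambda a: means[a]
                          + math.sqrt(2 * math.log(t) / counts[a]))
            reward = -measure_delay(arm)        # lower delay => higher reward
            counts[arm] += 1
            means[arm] += (reward - means[arm]) / counts[arm]
        return means, counts

    # Toy environment: server 1 is consistently the fastest.
    delays = [0.30, 0.12, 0.25]
    means, counts = ucb1_offload(3, 500, lambda a: random.gauss(delays[a], 0.02))
    print(counts)  # most pulls should concentrate on server 1
    ```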

    Designing Scalable Mechanisms for Geo-Distributed Platform Services in the Presence of Client Mobility

    Situation-awareness applications require low-latency response and high network bandwidth, and hence benefit from geo-distributed Edge infrastructures. The developers of these applications typically rely on several platform services, such as Kubernetes, Apache Cassandra and Pulsar, to manage their compute and data components across the geo-distributed Edge infrastructure. Situation-awareness applications impose peculiar requirements on the compute and data placement policies of these platform services. First, the processing logic of these applications is closely tied to the physical environment it interacts with, so the access pattern to compute and data exhibits strong spatial affinity. Second, the network topology of Edge infrastructure is heterogeneous, and communication latency forms a significant portion of the end-to-end compute and data access latency. The placement of compute and data components therefore has to be cognizant of the spatial affinity and latency requirements of the applications.

    However, clients of situation-awareness applications, such as vehicles and drones, are typically mobile, which makes the compute and data access pattern dynamic and complicates the management of data and compute components. Constant changes in network connectivity and in the spatial locality of clients render the current placement of compute and data components unsuitable for meeting the latency and spatial affinity requirements of the application. Client mobility therefore necessitates that client location and the latency offered by the platform services be continuously monitored, to detect when application requirements are violated and to adapt the compute and data placement. The control and monitoring modules of off-the-shelf platform services lack the primitives to incorporate spatial affinity and network topology awareness into their compute and data placement policies: the spatial location of clients is not an input to decision-making in their control modules, and they do not perform fine-grained end-to-end monitoring of observed latency to detect and adapt to performance degradations caused by client mobility.

    This dissertation presents three mechanisms that inform the compute and data placement policies of platform services so that application requirements can be met:
    M1: Dynamic Spatial Context Management for system entities (clients, and data and compute components) to ensure spatial affinity requirements are satisfied.
    M2: Network Proximity Estimation to provide topology-awareness to the data and compute placement policies of platform services (see the sketch below).
    M3: End-to-End Latency Monitoring to enable collection, aggregation and analysis of per-application metrics in a geo-distributed manner, providing end-to-end insight into application performance.

    The thesis of our work is that these mechanisms are fundamental building blocks for the compute and data management policies of platform services, and that by incorporating them, platform services can meet application requirements at the Edge. Furthermore, the mechanisms can be implemented in a way that scales to high levels of client activity. We demonstrate by construction the efficacy and scalability of the proposed mechanisms for building dynamic compute and data orchestration policies by incorporating them into the control and monitoring modules of three different platform services: a topic-based publish-subscribe system (ePulsar), an application orchestration platform (OneEdge), and a key-value store (FogStore). Extensive performance evaluation of these enhanced platform services shows how the new mechanisms help dynamically adapt compute/data orchestration decisions to satisfy the performance requirements of applications.
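    The abstract names Network Proximity Estimation (M2) without detailing its algorithm, so the sketch below illustrates one common approach, latency-predicting network coordinates in the style of Vivaldi, feeding a nearest-site placement policy; the 2-D coordinate space, update rule and site names are all assumptions.

    ```python
    # Minimal sketch of M2's goal: estimate client-to-site proximity and feed
    # it to a placement policy. Real systems often use network coordinates
    # (e.g. Vivaldi); the coordinates and update rule here are illustrative.

    import math

    def estimate_rtt(coord_a, coord_b):
        # Predicted RTT = Euclidean distance in the latency coordinate space (ms).
        return math.dist(coord_a, coord_b)

    def update_coord(coord, peer_coord, measured_rtt, step=0.25):
        # Nudge our coordinate so predicted RTT moves toward the measured value.
        predicted = estimate_rtt(coord, peer_coord)
        if predicted == 0:
            return coord
        error = measured_rtt - predicted
        ux = (coord[0] - peer_coord[0]) / predicted
        uy = (coord[1] - peer_coord[1]) / predicted
        return (coord[0] + step * error * ux, coord[1] + step * error * uy)

    def place(client_coord, sites):
        # Placement policy: put the client's compute/data on the nearest edge site.
        return min(sites, key=lambda s: estimate_rtt(client_coord, sites[s]))

    sites = {"edge-atl": (0.0, 0.0), "edge-nyc": (12.0, 5.0)}
    client = (2.0, 1.0)
    client = update_coord(client, sites["edge-nyc"], measured_rtt=11.0)
    print(place(client, sites))  # "edge-atl"
    ```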