    Extending the battery life of mobile device by computation offloading

    Doctor of Philosophy, Computing and Information Sciences, Daniel A. Andresen

    The need for increased performance of mobile devices directly conflicts with the desire for longer battery life. Offloading computation to resourceful servers is an effective way to reduce energy consumption and enhance performance for mobile applications. Today, most mobile devices have fast wireless links such as 4G and Wi-Fi, making computation offloading a reasonable way to extend the battery life of mobile devices. Android provides mechanisms for creating mobile applications but lacks a native scheduling system for determining where code should be executed. We present Jade, a system that adds sophisticated, energy-aware computation offloading capabilities to Android applications. Jade monitors device and application status and automatically decides where code should be executed, dynamically adjusting its offloading strategy to adapt to workload variation, communication costs, and device status. Jade minimizes the burden on developers of building applications with computation offloading by providing an easy-to-use API. Evaluation shows that Jade reduces the average power consumption of mobile devices by up to 37% while improving application performance.
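    The abstract does not spell out Jade's decision rule. As a rough illustration of the kind of energy-aware offloading decision it describes, the Kotlin sketch below compares an estimated local-execution energy cost against an estimated transmit-and-wait cost; the data classes, field names, and linear energy model are illustrative assumptions, not Jade's actual API.

        // Hypothetical sketch of an energy-aware offload decision in the spirit
        // of Jade. All names and the linear energy model are assumptions.

        data class DeviceStatus(
            val cpuPowerW: Double,    // average power while computing locally (W)
            val txPowerW: Double,     // average radio power while transmitting (W)
            val idlePowerW: Double,   // average power while waiting for a result (W)
            val bandwidthBps: Double  // current uplink bandwidth (bytes/s)
        )

        data class TaskProfile(
            val localTimeS: Double,   // estimated local execution time (s)
            val remoteTimeS: Double,  // estimated server execution time (s)
            val payloadBytes: Double  // input + result size to transfer (bytes)
        )

        // True when offloading is predicted to cost less energy than running locally.
        fun shouldOffload(dev: DeviceStatus, task: TaskProfile): Boolean {
            val localEnergy = dev.cpuPowerW * task.localTimeS
            val txTime = task.payloadBytes / dev.bandwidthBps
            val remoteEnergy = dev.txPowerW * txTime + dev.idlePowerW * task.remoteTimeS
            return remoteEnergy < localEnergy
        }

        fun main() {
            val dev = DeviceStatus(2.0, 1.2, 0.3, 2.5e6)
            val task = TaskProfile(localTimeS = 4.0, remoteTimeS = 0.8, payloadBytes = 1.5e6)
            println(if (shouldOffload(dev, task)) "offload" else "run locally")
        }

    A scheduler of this shape re-evaluates the decision at dispatch time, which is how a system like Jade can adapt to changing bandwidth and device status rather than fixing the execution site at build time.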

    A Smart Edge Computing Resource, formed by On-the-go Networking of Cooperative Nearby Devices using an AI-Offloading Engine, to Solve Computationally Intensive Sub-tasks for Mobile Cloud Services

    The latest Mobile Smart Devices (MSDs) and IoT deployments have encouraged running “Computation Intensive Applications/Services” onboard MSDs to perform on-the-go sub-tasks required by these Apps/Services, such as analysis, banking, navigation, social media, and gaming. Doing this requires that the MSD have powerful processing resources to reduce execution time, high connectivity throughput to minimise latency, and a high-capacity battery, so that power consumption does not impact the MSD's availability/usability between charges. Offloading such Apps from the host-MSD to a Cloud server helps but introduces network traffic and connectivity overhead, even with 5G. Offloading to an Edge server also helps, but Edge servers are part of a pre-planned computing infrastructure whose rollout is hard to match to demand, which is generated by a push from MSD/App makers and a pull from users. To address this, this research developed a “Smart Edge Computing Resource”, formed on-the-go by networking cooperative MSDs/servers in the vicinity of the host-MSD that is running the computing-intensive App. This solution is achieved by:

    - Developing an intelligent engine, hosted in the Cloud, that profiles “computing-intensive Apps/Services” and partitions the overall task into suitable sub-task chunks to be executed on the host-MSD in association with other available nearby computing resources, which can include other MSDs, PCs, iPads, and local servers. This comprises an “Edge-side Computing Resource engine” that divides the processing of Apps/Services among several MSDs in parallel, and a second “Cloud-side AI-engine” that recruits any available cooperative MSDs and provides the host-MSD with the best scenario for partitioning and offloading the overall App/Services. A performance-scoring algorithm schedules each sub-task onto the assisting device with the most powerful processor and the highest remaining battery capacity. We built a dataset of 600 scenarios and trained a Deep Neural Network model on it to speed up offloading decisions for subsequent executions.

    - Dynamically forming the on-the-go resource network between the chosen assisting devices and the App/Service host-MSD, based on the best wireless connectivity available between them. An Importance Priority Weighting cost estimator calculates the overhead cost and efficiency gain of processing the sub-tasks on the available assisting devices. A local peer-to-peer connectivity protocol (“Nearby API and/or Post API”) is used to communicate; sub-tasks are offloaded and processed among the participating devices in parallel, and results are retrieved upon completion.

    The results show that our solution achieved, on average, 40.2% faster processing time, 28.8% less battery power consumption, and 33% less latency than other methods of executing the same Apps/Services.
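    The abstract summarises, rather than specifies, the performance-scoring step. A minimal Kotlin sketch of weighted device scoring and rank-based sub-task assignment of that general shape might look as follows; the weights, normalisation ceilings, and field names are illustrative assumptions, not the published Importance Priority Weighting algorithm.

        // Hypothetical sketch of scoring nearby assisting devices and spreading
        // sub-task chunks across them in rank order. Weights and ceilings are
        // assumptions, not the thesis's published values.

        data class NearbyDevice(
            val name: String,
            val cpuGflops: Double,  // processing capability
            val batteryPct: Double, // remaining battery (0..100)
            val linkMbps: Double    // measured throughput to the host MSD
        )

        // Higher score = better offload target (assumed 10 GFLOPS / 100 Mbps ceilings).
        fun score(d: NearbyDevice): Double =
            0.5 * (d.cpuGflops / 10.0) +
            0.3 * (d.batteryPct / 100.0) +
            0.2 * (d.linkMbps / 100.0)

        // Assigns chunks to devices in descending score order, cycling through the
        // ranking so that chunks are processed in parallel across all participants.
        fun schedule(chunks: List<String>, devices: List<NearbyDevice>): Map<String, String> {
            require(devices.isNotEmpty()) { "no assisting devices available" }
            val ranked = devices.sortedByDescending { score(it) }
            return chunks.withIndex().associate { (i, c) -> c to ranked[i % ranked.size].name }
        }

        fun main() {
            val devices = listOf(
                NearbyDevice("tablet", cpuGflops = 6.0, batteryPct = 80.0, linkMbps = 40.0),
                NearbyDevice("laptop", cpuGflops = 9.0, batteryPct = 55.0, linkMbps = 70.0)
            )
            println(schedule(listOf("chunk-1", "chunk-2", "chunk-3"), devices))
        }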
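    For the peer-to-peer transport, the abstract names the “Nearby API”. Assuming this refers to Google's Nearby Connections API on Android, the following sketch ships one serialized sub-task chunk to an already-connected assisting endpoint and collects results; discovery and the connection handshake are omitted, and the chunk encoding and helper functions are hypothetical.

        import android.content.Context
        import com.google.android.gms.nearby.Nearby
        import com.google.android.gms.nearby.connection.Payload
        import com.google.android.gms.nearby.connection.PayloadCallback
        import com.google.android.gms.nearby.connection.PayloadTransferUpdate

        class SubTaskTransport(private val context: Context) {

            // Sends one serialized sub-task chunk to the chosen assisting endpoint.
            fun offloadChunk(endpointId: String, chunk: ByteArray) {
                Nearby.getConnectionsClient(context)
                    .sendPayload(endpointId, Payload.fromBytes(chunk))
                    .addOnFailureListener { e ->
                        // Fall back to local execution if the peer link fails.
                        runLocally(chunk, e)
                    }
            }

            // Receives completed results back from the assisting devices.
            val resultCallback = object : PayloadCallback() {
                override fun onPayloadReceived(endpointId: String, payload: Payload) {
                    payload.asBytes()?.let { mergeResult(endpointId, it) }
                }
                override fun onPayloadTransferUpdate(endpointId: String, update: PayloadTransferUpdate) {
                    // Transfer progress could feed back into re-scheduling decisions.
                }
            }

            private fun runLocally(chunk: ByteArray, cause: Exception) { /* omitted */ }
            private fun mergeResult(endpointId: String, bytes: ByteArray) { /* omitted */ }
        }

    Falling back to local execution on a send failure matches the cooperative model described above, in which the host-MSD always remains a valid executor of its own sub-tasks.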