
    Mobile Cloud Computing Architecture Model for Multi-Tasks Offloading

    Modern cell phones are the product of significant technological advancement, yet they remain limited multi-tasking devices. Many people now use mobile devices instead of PCs, but cell phones have constrained resources: limited storage, battery life, and processing power. Cloud computing offloading addresses these limitations. Cloud computing is attractive because it reduces cost and saves time; businesses of all sizes that cannot afford to purchase hardware and software can instead use cloud resources to execute multiple tasks, access their data, and exercise control at each level of the cloud stack. Offloading preserves a smartphone's capabilities, but it also introduces communication cost between the cloud and the device. The main advantage of cloud computing is that it offers a range of capabilities at different prices, and offloaded applications aim to meet versatile performance objectives. In this research work, an architecture model for multi-task offloading is designed to overcome this problem. For this purpose, the CloudSim simulator is used with NetBeans to implement the MCOP algorithm, which resolves the execution-timing issue and enhances mobile system performance: tasks are partitioned into two parts and executed either on the cloud or locally, reducing response time as well as communication and task execution cost. Keywords: Mobile Cloud Computing, Mobile Computing Offloading, Smart Mobile Devices, Optimal Partitioning Algorithm
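    As an illustration of the kind of two-way partitioning decision described above, here is a minimal Python sketch that exhaustively splits a handful of tasks between local and cloud execution and keeps the split with the lowest combined execution and communication time. All task attributes, speeds, and bandwidth figures are hypothetical assumptions; this is not the paper's MCOP implementation or its CloudSim setup.

```python
from itertools import product

# Each task: (CPU cycles required, bytes to transfer if offloaded). Values are invented.
tasks = [(4e8, 2e5), (9e8, 5e5), (2e8, 1e5), (6e8, 3e5)]

LOCAL_CPS = 1e9    # assumed local CPU speed (cycles/s)
CLOUD_CPS = 8e9    # assumed cloud CPU speed (cycles/s)
UPLINK_BPS = 1e6   # assumed uplink bandwidth (bytes/s)

def total_cost(assignment):
    """Response-time cost of one partition; True means offload, False means run locally."""
    local_time = sum(c / LOCAL_CPS for (c, _), off in zip(tasks, assignment) if not off)
    cloud_time = sum(c / CLOUD_CPS for (c, _), off in zip(tasks, assignment) if off)
    comm_time = sum(d / UPLINK_BPS for (_, d), off in zip(tasks, assignment) if off)
    return local_time + cloud_time + comm_time

# Exhaustively evaluate every local/cloud split (fine for a handful of tasks).
best = min(product([False, True], repeat=len(tasks)), key=total_cost)
print("offload flags:", best, "-> cost (s):", round(total_cost(best), 3))
```

    For more than a few dozen tasks, the exhaustive search would have to be replaced by an optimal partitioning heuristic of the kind the paper proposes.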

    New cloud offloading algorithm for better energy consumption and process time

    Abstract. Offloading in cloud computing is a way to execute large files quickly by exploiting the processing resources available on core computers. In some cases, however, it is preferable to execute a file locally on the node, in particular when its size is below a threshold. Because the node's power is limited, there is a trade-off; this paper therefore proposes a novel algorithm in which the file size is measured in each case and a decision is taken either to execute the file on the node or to send it for processing in the core cloud. The primary aim is to save execution time; a second, equally important aim is to conserve the node's limited energy for large files, where the node's power consumption would otherwise be very high. The file size, execution time, and power consumption for both the local node and the core cloud are measured and serve as inputs to the execution decision. Yousef, S., Yaghi, M., Tapaswi, S., Pattanaik, K. K. and Col
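    A decision rule of this kind can be sketched as follows; the size threshold, processing rates, and power figures are assumptions made up for illustration and are not taken from the paper.

```python
# Illustrative local-vs-cloud decision based on file size, estimated execution
# time, and node energy. All constants below are hypothetical.
SIZE_THRESHOLD_MB = 5.0     # assumed: below this size, local execution is preferred
LOCAL_RATE_MBPS = 2.0       # assumed local processing rate (MB/s)
CLOUD_RATE_MBPS = 50.0      # assumed core-cloud processing rate (MB/s)
UPLOAD_RATE_MBPS = 4.0      # assumed upload rate to the core cloud (MB/s)
LOCAL_POWER_W = 3.0         # assumed node power while processing
RADIO_POWER_W = 1.2         # assumed node power while transmitting

def decide(file_size_mb: float) -> str:
    """Return 'local' or 'cloud' for a file, preferring the cheaper option."""
    if file_size_mb < SIZE_THRESHOLD_MB:
        return "local"                        # small files stay on the node
    local_time = file_size_mb / LOCAL_RATE_MBPS
    cloud_time = file_size_mb / UPLOAD_RATE_MBPS + file_size_mb / CLOUD_RATE_MBPS
    local_energy = LOCAL_POWER_W * local_time
    cloud_energy = RADIO_POWER_W * (file_size_mb / UPLOAD_RATE_MBPS)
    # Offload when it saves both time and node energy; otherwise keep it local.
    return "cloud" if cloud_time < local_time and cloud_energy < local_energy else "local"

for size in (1, 8, 40):
    print(f"{size} MB -> {decide(size)}")
```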

    Wireless body area network mobility-aware task offloading scheme

    The increasing amount of user equipment (UE) and the rapid advances in wireless body area networks bring revolutionary changes to healthcare systems. However, due to the strict requirements on the size, reliability, and battery lifetime of UE devices, it is difficult for them to execute latency-sensitive or computation-intensive tasks effectively. In this paper, we aim to enhance UE computation capacity by utilizing small-size coordinator-based mobile edge computing (C-MEC) servers. In this way, system complexity, computation resources, and energy consumption are largely transferred from the UE to the C-MEC, which is a practical approach since the C-MEC is power-charged, in contrast to the UE. First, the system architecture and the mobility model are presented. Second, several transmission mechanisms are analyzed along with the proposed mobility-aware cooperative task offloading scheme. Selected performance metrics are investigated: the number of executed tasks, the percentage of failed tasks, the average service time, and the energy consumption of each MEC. The results validate the advantage of task offloading schemes over the traditional relay-based technique in terms of the number of executed tasks. Moreover, the proposed scheme achieves noteworthy benefits, such as low latency, and efficiently balances the energy consumption of the C-MECs.
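    A rough sketch of a mobility-aware offloading choice in this spirit: a C-MEC is only considered if the UE is expected to remain in its coverage for the duration of the task, and among the candidates the one with the most residual energy is selected so that energy use is balanced. The coverage/mobility model, the selection criterion, and all numbers below are hypothetical and are not the paper's scheme.

```python
import math
from dataclasses import dataclass

@dataclass
class CMec:
    name: str
    x: float                  # coordinator position (m)
    y: float
    radius: float             # coverage radius (m)
    residual_energy_j: float  # remaining energy budget (J)

def residence_time(ue_xy, speed, mec):
    """Worst-case dwell time in coverage: remaining distance to the edge / UE speed."""
    dist = math.hypot(ue_xy[0] - mec.x, ue_xy[1] - mec.y)
    return max(mec.radius - dist, 0.0) / max(speed, 1e-9)

def pick_cmec(ue_xy, speed, service_time, mecs):
    """Offload to the feasible C-MEC with the most residual energy, else None (relay/local)."""
    feasible = [m for m in mecs if residence_time(ue_xy, speed, m) >= service_time]
    return max(feasible, key=lambda m: m.residual_energy_j, default=None)

mecs = [CMec("mec-1", 0.0, 0.0, 30.0, 500.0), CMec("mec-2", 40.0, 0.0, 30.0, 800.0)]
chosen = pick_cmec(ue_xy=(10.0, 0.0), speed=1.5, service_time=8.0, mecs=mecs)
print(chosen.name if chosen else "no suitable C-MEC; fall back to relay/local execution")
```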

    Profiling Performance of Application Partitioning for Wearable Devices in Mobile Cloud and Fog Computing

    Wearable devices have become essential in our daily activities. Due to battery constraints, their use of computing, communication, and storage resources is limited. Mobile Cloud Computing (MCC) and the recently emerged Fog Computing (FC) paradigms unleash unprecedented opportunities to augment the capabilities of wearable devices. Partitioning mobile applications and offloading computationally heavy tasks for execution to the cloud or the edge of the network is the key. Offloading prolongs battery lifetime and allows wearable devices to gain access to the rich and powerful set of computing and storage resources of the cloud/edge. In this paper, we experimentally evaluate and discuss the rationale of application partitioning for MCC and FC. To this end, we develop an Android-based application and benchmark the energy and execution-time performance of multiple partitioning scenarios. The results unveil architectural trade-offs that exist between the paradigms and provide guidelines for proper power management of service-centric Internet of Things (IoT) applications.
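    The kind of measurement described above can be mimicked with a small benchmarking harness that times each partitioning scenario end to end and derives an energy proxy from an assumed device power draw. The workload, the "offloaded" stub, and the power figures are placeholders, not the paper's Android application or its measurement setup.

```python
import time

def heavy_task(n=200_000):
    return sum(i * i for i in range(n))       # stand-in for a computation-heavy method

def run_locally():
    return heavy_task()

def run_offloaded():
    time.sleep(0.05)                          # placeholder for a network round trip
    return heavy_task(n=0)                    # the remote side does the work in this sketch

SCENARIOS = {"all-local": run_locally, "all-offloaded": run_offloaded}
ACTIVE_POWER_W = {"all-local": 2.5, "all-offloaded": 1.0}   # assumed device power draw

for name, fn in SCENARIOS.items():
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    energy = ACTIVE_POWER_W[name] * elapsed   # energy proxy: power * time
    print(f"{name:14s} time={elapsed*1000:7.1f} ms  energy~{energy*1000:6.1f} mJ")
```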

    VirtFogSim: A parallel toolbox for dynamic energy-delay performance testing and optimization of 5G Mobile-Fog-Cloud virtualized platforms

    It is expected that the pervasive deployment of multi-tier 5G-supported Mobile-Fog-Cloud technological computing platforms will constitute an effective means to support the real-time execution of future Internet applications by resource- and energy-limited mobile devices. Increasing interest in this emerging networking-computing technology demands the optimization and performance evaluation of several parts of the underlying infrastructure. However, field trials are challenging due to their operational costs, and in any case the obtained results can be difficult to repeat and customize. These emerging Mobile-Fog-Cloud ecosystems still lack customizable software tools for the performance simulation of their computing-networking building blocks. Motivated by these considerations, in this contribution we present VirtFogSim, a MATLAB-supported software toolbox that allows the dynamic joint optimization and tracking of the energy and delay performance of Mobile-Fog-Cloud systems for the execution of applications described by general Directed Application Graphs (DAGs). In a nutshell, the main features of the proposed VirtFogSim toolbox are that: (i) it allows the joint dynamic energy-aware optimization of the placement of application tasks and the allocation of the needed computing-networking resources under hard constraints on acceptable overall execution times; (ii) it allows the repeatable and customizable simulation of the resulting energy-delay performance of the overall system; (iii) it allows the dynamic tracking of the performed resource allocation under time-varying operational environments, such as those typically featuring mobile applications; (iv) it is equipped with a user-friendly Graphical User Interface (GUI) that supports a number of graphic formats for data rendering; and (v) its MATLAB code is optimized for running atop multi-core parallel execution platforms. To check both the actual optimization and scalability capabilities of the VirtFogSim toolbox, a number of experimental setups featuring different use cases and operational environments are simulated, and their performance is compared.
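    To illustrate the joint placement problem this kind of toolbox targets, the toy sketch below maps each task of a small DAG onto a Mobile, Fog, or Cloud tier and keeps the mapping with the lowest mobile-side energy that still meets a deadline. The DAG, speeds, powers, and link delays are invented for the example; the actual VirtFogSim toolbox is MATLAB-based and uses far richer models and optimization machinery.

```python
from itertools import product

TIERS = ("mobile", "fog", "cloud")
SPEED = {"mobile": 1.0, "fog": 4.0, "cloud": 10.0}          # relative CPU speeds (assumed)
LINK_DELAY = {("mobile", "fog"): 0.05, ("fog", "cloud"): 0.02,
              ("mobile", "cloud"): 0.08}                     # seconds per cross-tier hop (assumed)
MOBILE_POWER = {"compute": 2.0, "radio": 1.0}                # watts (assumed)

# DAG: task -> (work units, list of predecessor tasks); listed in topological order.
DAG = {"A": (0.5, []), "B": (4.0, ["A"]), "C": (3.0, ["A"]), "D": (1.0, ["B", "C"])}
DEADLINE_S = 2.0

def link_delay(a, b):
    if a == b:
        return 0.0
    return LINK_DELAY.get((a, b)) or LINK_DELAY.get((b, a))

def evaluate(placement):
    """Return (makespan, mobile-side energy) for a task->tier mapping."""
    finish, energy = {}, 0.0
    for task in DAG:
        work, preds = DAG[task]
        ready = 0.0
        for p in preds:
            hop = link_delay(placement[p], placement[task])
            ready = max(ready, finish[p] + hop)
            if hop > 0 and "mobile" in (placement[p], placement[task]):
                energy += MOBILE_POWER["radio"] * hop        # radio cost touching the device
        exec_time = work / SPEED[placement[task]]
        if placement[task] == "mobile":
            energy += MOBILE_POWER["compute"] * exec_time
        finish[task] = ready + exec_time
    return max(finish.values()), energy

best = None
for combo in product(TIERS, repeat=len(DAG)):
    placement = dict(zip(DAG, combo))
    if placement["A"] != "mobile":           # entry task captures input on the device
        continue
    makespan, energy = evaluate(placement)
    if makespan <= DEADLINE_S and (best is None or energy < best[0]):
        best = (energy, makespan, placement)

print("best (energy, makespan, placement):", best)
```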