
    Multi-User Computation Partitioning for Latency Sensitive Mobile Cloud Applications

    Elastic partitioning of computations between mobile devices and the cloud is an important and challenging research topic in mobile cloud computing. Existing works focus on single-user computation partitioning, which aims to optimize the application completion time for one particular user. These works assume that the cloud always has enough resources to execute the computations immediately when they are offloaded. However, this assumption does not hold for large-scale mobile cloud applications, where competition for cloud resources among a large number of users means that offloaded computations may be executed with a certain scheduling delay on the cloud. Single-user partitioning that does not take this scheduling delay into account may yield significant performance degradation. In this paper, we study, for the first time, the Multi-user Computation Partitioning Problem (MCPP), which considers the partitioning of multiple users' computations together with the scheduling of offloaded computations on the cloud resources. Instead of pursuing the minimum application completion time for every single user, we aim to achieve the minimum average completion time for all users, based on the number of provisioned resources on the cloud. We show that MCPP is different from, and more difficult than, classical job scheduling problems. We design an offline heuristic algorithm, namely SearchAdjust, to solve MCPP. We demonstrate through benchmarks that SearchAdjust outperforms both single-user partitioning approaches and classical job scheduling approaches by 10% on average in terms of application delay. Based on SearchAdjust, we also design an online algorithm for MCPP that can be easily deployed in practical systems. We validate the effectiveness of our online algorithm using real-world load traces. Index Terms—mobile cloud computing; offloading; computation partitioning; job scheduling
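    As a rough illustration of the objective MCPP targets (not the paper's SearchAdjust algorithm), the sketch below models each user's job as a local part plus an offloaded part that must wait for one of a fixed number of cloud servers, and computes the average completion time across users. The job shapes, FCFS queueing, and server count are hypothetical simplifications.

```python
import heapq

def average_completion_time(jobs, num_cloud_servers):
    """Toy model of the MCPP objective.

    Each job is (local_time, offload_time): the offloaded part must wait for
    a free cloud server (FCFS), so its finish time includes cloud scheduling
    delay. Returns the average completion time over all users.
    Hypothetical illustration only -- not the paper's SearchAdjust algorithm.
    """
    # min-heap of the times at which each cloud server becomes free
    servers = [0.0] * num_cloud_servers
    heapq.heapify(servers)

    completions = []
    for local_time, offload_time in jobs:          # jobs are assumed to arrive at time 0
        free_at = heapq.heappop(servers)           # earliest available server
        cloud_done = free_at + offload_time        # offloaded part runs after the queueing delay
        heapq.heappush(servers, cloud_done)
        # a job finishes when both its local and offloaded parts are done
        completions.append(max(local_time, cloud_done))

    return sum(completions) / len(completions)

# Example: 4 users sharing 2 cloud servers -- contention delays the later offloads.
print(average_completion_time([(1.0, 3.0), (0.5, 2.0), (2.0, 4.0), (1.5, 1.0)], 2))
```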

    TUCANA: A platform for using local processing power of edge devices for building data-driven services

    In the age of mobile cloud computing, web-based systems are often designed to transfer data to large-scale online storage facilities in order to persistently save and analyze it with complex algorithms such as those used in machine learning. These systems often require a reliable network connection, which is not available for a variety of mobile business applications. As an alternative to traditional cloud-based systems, the TUCANA approach makes use of the local processing power of mobile edge devices to run highly complex AI pipelines that process data in real time. By applying the idea of TUCANA to our service use case called "nPotato", we developed an artificial, nociceptive potato that frequently measures and analyses acceleration data during the potato harvesting process. In the given scenario, sensor data is processed locally in real time using the device's computing power to gain higher productivity in the area of precision farming.
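    The abstract does not describe the nPotato pipeline in detail; the sketch below is only a hypothetical example of on-device, real-time analysis of acceleration samples (threshold, window size, and sample format are assumptions), illustrating the kind of local processing TUCANA advocates instead of cloud upload.

```python
import math
from collections import deque

SHOCK_THRESHOLD_G = 2.5   # hypothetical impact threshold, in g
WINDOW = 32               # samples in the moving-average baseline window

def detect_shocks(samples):
    """Process (x, y, z) acceleration samples locally and yield the indices of
    samples whose magnitude exceeds a moving baseline by the threshold.

    Hypothetical sketch of on-device, real-time analysis; the actual nPotato
    pipeline is not described in the abstract.
    """
    window = deque(maxlen=WINDOW)
    for i, (x, y, z) in enumerate(samples):
        magnitude = math.sqrt(x * x + y * y + z * z)
        baseline = sum(window) / len(window) if window else magnitude
        if magnitude - baseline > SHOCK_THRESHOLD_G:
            yield i, magnitude
        window.append(magnitude)

# Example: a calm stream with one spike (values in g).
stream = [(0.0, 0.0, 1.0)] * 50 + [(3.0, 2.0, 1.0)] + [(0.0, 0.0, 1.0)] * 10
for index, mag in detect_shocks(stream):
    print(f"possible impact at sample {index}: {mag:.2f} g")
```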

    Online placement of multi-component applications in edge computing environments

    Mobile edge computing is a new cloud computing paradigm which makes use of small-sized edge-clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the coexistence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard, and therefore heuristic algorithms without performance guarantees are generally employed in practice, which may unknowingly suffer from poor performance compared to the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands/availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes.
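    For intuition on the linear-application-graph case, the sketch below is a generic dynamic program that places a chain of components on physical nodes to minimize the sum of node and link costs. It folds resource capacities into the cost functions, which is a simplification relative to the paper's joint node-and-link assignment, and the example cost values are hypothetical.

```python
def place_chain(num_components, nodes, node_cost, link_cost):
    """Dynamic program placing a linear (chain) application graph on physical nodes.

    Minimizes sum_i node_cost(i, v_i) + sum_i link_cost(v_i, v_{i+1}).
    Simplified sketch: capacities are folded into the cost functions, which is
    weaker than the joint node-and-link assignment treated in the paper.
    """
    # best[v]: minimum cost of placing components 0..i with component i on node v
    best = {v: node_cost(0, v) for v in nodes}
    choice = [{v: None for v in nodes}]            # back-pointers for reconstruction
    for i in range(1, num_components):
        new_best, new_choice = {}, {}
        for v in nodes:
            prev, cost = min(
                ((u, best[u] + link_cost(u, v)) for u in nodes),
                key=lambda t: t[1],
            )
            new_best[v] = cost + node_cost(i, v)
            new_choice[v] = prev
        best = new_best
        choice.append(new_choice)
    # backtrack from the cheapest node for the last component
    last = min(best, key=best.get)
    placement = [last]
    for i in range(num_components - 1, 0, -1):
        placement.append(choice[i][placement[-1]])
    placement.reverse()
    return placement, best[last]

# Example: 3 components placed over hypothetical "edge" and "cloud" nodes.
nodes = ["edge", "cloud"]
node_cost = lambda i, v: {"edge": [1, 5, 5], "cloud": [4, 2, 2]}[v][i]
link_cost = lambda u, v: 0 if u == v else 3
print(place_chain(3, nodes, node_cost, link_cost))   # -> (['edge', 'cloud', 'cloud'], 8)
```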