
    A Framework for Energy-efficient Mobile Cloud Offloading

    Emerging smartphone technologies have experienced rapid growth and are still on the rise. People use smartphones for their day-to-day activities, such as sending emails and sharing photos and videos through various peer-to-peer social network hubs. In the last few years, the smartphone has seen massive technological advancement and innovation in its processing capabilities and can now perform complex, resource-intensive tasks in advanced applications such as video editing and processing, and object recognition. Although most smartphones have been greatly augmented to handle advanced applications with complex computational needs, they are still limited in terms of their energy resources, i.e., battery life. Battery technology has not evolved as rapidly as other areas of the smartphone, so executing computation-intensive tasks depletes the battery quickly, as evidenced by the need to constantly charge the device. Many techniques have been proposed to maximize energy conservation on mobile devices, such as slowing down the CPU or shutting off the screen when the device is idle. Among these, the most notable technique for conserving smartphone energy is computation offloading: transferring the processing of certain tasks from a resource-constrained smartphone to a remote, resource-rich device, thereby conserving energy on the smartphone. This is a fairly large research area and numerous contributions have been made towards its advancement. However, much work remains to be done on energy conservation through offloading during recurrent resource-intensive processing. In this research study we aim to reduce energy consumption during continuous, energy-intensive processing. We consider context-awareness in proposing a scheduling model that could minimize the rapid depletion of mobile device energy, thus achieving our aim. We propose a service-oriented framework towards enabling energy-optimal task execution through a task-scheduling offload algorithm. We develop a proof-of-concept prototype on an Android device to demonstrate and evaluate the framework's energy-conserving capabilities.
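    The offloading trade-off described above is often captured with a simple energy model: offloading pays off when the energy needed to transmit a task's input is lower than the energy needed to compute the task locally. The sketch below illustrates only that general reasoning, under assumed power and speed figures; it is not the scheduling algorithm proposed in the study.

```python
# Minimal sketch of an energy-based offload decision (assumed model,
# illustrative numbers): offload when transmitting the task's data is
# expected to cost less energy than computing it on the device.

def should_offload(cycles, data_bytes,
                   cpu_power_w=2.0,      # assumed local CPU power draw
                   cpu_speed_hz=1.5e9,   # assumed local CPU clock rate
                   radio_power_w=1.0,    # assumed radio power draw
                   bandwidth_bps=5e6):   # measured uplink bandwidth
    """Return True if remote execution is expected to save energy."""
    e_local = cpu_power_w * (cycles / cpu_speed_hz)               # joules
    e_transfer = radio_power_w * (data_bytes * 8 / bandwidth_bps)  # joules
    return e_transfer < e_local

# Example: a 3e9-cycle video-processing task with 2 MB of input data.
print(should_offload(cycles=3e9, data_bytes=2e6))  # True: 3.2 J to send < 4 J to compute
```

    A context-aware scheduler would additionally feed live context (bandwidth, battery level, server availability) into such a decision rather than using fixed constants.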

    Multisite adaptive computation offloading for mobile cloud applications

    The sheer number of mobile devices and their rapid adoption have contributed to the proliferation of modern advanced mobile applications. These applications are latency-critical and demand high availability. They also often require intensive computation and considerable energy for processing, while a mobile device has limited computation and energy capacity because of physical size constraints. The heterogeneous mobile cloud environment consists of different computing resources: remote cloud servers in faraway data centres, cloudlets whose goal is to bring the cloud closer to the users, and nearby mobile devices that can be utilised to offload mobile tasks. Heterogeneity across mobile devices and offloading sites spans software, hardware, and technology variations. Resource-constrained mobile devices can leverage this shared resource environment to offload their intensive tasks, conserving battery life and improving overall application performance. However, such a loosely coupled, mobile-device-dominated network raises new challenges and problems: how to seamlessly integrate mobile devices with all the offloading sites, how to simplify deploying the runtime environment that serves offloading requests, how to identify which parts of the mobile application to offload, how to decide whether to offload them, and how to select the most suitable candidate offloading site, among others. To overcome these challenges, this research work contributes the design and implementation of MAMoC, a loosely coupled end-to-end mobile computation offloading framework. Mobile applications can be adapted to the client library of the framework, while the server components are deployed to the offloading sites for serving offloading requests. The evaluation of the offloading decision engine demonstrates the viability of the proposed solution for managing seamless and transparent offloading in distributed and dynamic mobile cloud environments. All the implemented components of this work are publicly available at the following URL: https://github.com/mamoc-repo
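    One way to picture the "most suitable candidate offloading site" decision is a weighted score over measured site properties. The sketch below is purely illustrative: the Site fields, the weights, and the rank_sites function are assumptions for exposition, not MAMoC's actual decision engine.

```python
# Hypothetical ranking of heterogeneous offloading sites (remote cloud,
# cloudlet, nearby device). Weights and metrics are invented.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    rtt_ms: float          # measured round-trip time to the site
    bandwidth_mbps: float  # measured link bandwidth
    cpu_score: float       # normalized compute benchmark, higher is faster

def rank_sites(sites, w_rtt=0.4, w_bw=0.3, w_cpu=0.3):
    """Rank sites: prefer low latency, high bandwidth, fast CPUs."""
    def score(s):
        return -w_rtt * s.rtt_ms + w_bw * s.bandwidth_mbps + w_cpu * s.cpu_score
    return sorted(sites, key=score, reverse=True)

sites = [Site("remote-cloud", rtt_ms=80, bandwidth_mbps=50,  cpu_score=100),
         Site("cloudlet",     rtt_ms=10, bandwidth_mbps=100, cpu_score=60),
         Site("nearby-phone", rtt_ms=5,  bandwidth_mbps=40,  cpu_score=20)]
print(rank_sites(sites)[0].name)  # "cloudlet" under these illustrative weights
```

    Under these made-up numbers the cloudlet wins: its low latency and high bandwidth outweigh the remote cloud's faster CPUs, which matches the motivation for bringing the cloud closer to users.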

    Improving efficiency, scalability and efficacy of adaptive computation offloading in pervasive computing environments

    As computing becomes more mobile and pervasive, there is a growing demand for increasingly rich, and therefore more computationally heavy, applications to run in mobile spaces. However, there exists a disparity between mobile platforms and the desktop environments upon which computationally heavy applications have traditionally run, and this disparity is likely to persist as both domains evolve at a competing pace. Consequently, an active research area is Adaptive Computation Offloading, or cyber foraging, which dynamically distributes application functionality to available peer devices according to resource availability and application behaviour. Integral to any offloading strategy is an adaptive decision-making algorithm that computes the optimal placement of application components on remote devices based on changing environmental context. As this decision is typically computed by constrained devices and may occur frequently in dynamic environments, such algorithms should be both resource-efficient and yield efficacious adaptation results. However, existing adaptive offloading approaches incur a number of overheads, which limit their applicability in mobile and pervasive spaces. This thesis is concerned with improving upon these limitations by specifically focusing on the efficiency, scalability and efficacy aspects of two major sub-processes of adaptation: 1) Adaptive Candidate Device Selection and 2) Adaptive Object Topology Computation. To this end, three novel approaches are proposed. Firstly, a distributed approach to candidate device selection is proposed, which reduces the need to communicate collaboration metrics and allows for the partial distribution of adaptation decision-making. The approach is shown to reduce network consumption by over 90% and power consumption by as much as 96%, while maintaining linear memory complexity in contrast to the quadratic complexity of an existing approach. Hence, the approach presents a more efficient and scalable alternative for candidate device selection in mobile and pervasive environments. Secondly, with regard to the efficacy of adaptive object topology computation, a new type of adaptation granularity is proposed that combines the efficacy of fine-grained adaptation with the efficiency of coarse-level approaches. The approach is shown to improve the efficacy of adaptation decisions by reducing network overheads by a minimum of 17% to as much as 99%, while maintaining decision-making efficiency comparable to coarse-level adaptation. Thirdly, with regard to the efficiency and scalability of object topology computation, a novel distributed approach to computing adaptation decisions is proposed, in which each device maintains a distributed local application sub-graph consisting only of components in its own memory space. The approach is shown to reduce network cost by 100%, collaboration-wide memory cost by between 37% and 50%, battery usage by between 63% and 93%, and adaptation time by between 19% and 98%. Lastly, since improving the utility of adaptation in mobile and pervasive environments requires the simultaneous improvement of its sub-processes, an adaptation engine is proposed which consolidates the individual approaches presented above. The consolidated adaptation engine is shown to improve the overall efficiency, scalability and efficacy of adaptation under a varying range of environmental conditions that simulate dynamic and heterogeneous mobile environments.
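    To make the first contribution concrete: one general way to communicate fewer collaboration metrics is to let each device evaluate its own fitness locally and advertise itself only when it is a plausible candidate. The sketch below is an assumed illustration of that idea, not the thesis's actual protocol; the metric weights and threshold are invented.

```python
# Assumed sketch of locally gated candidate advertisement: metrics are
# scored on-device, and only devices clearing a threshold broadcast,
# which avoids flooding collaboration metrics to every peer.

def local_fitness(free_mem_mb, battery_pct, cpu_idle_pct):
    """Cheap, locally computable fitness; the weights are assumptions."""
    return (0.3 * free_mem_mb / 1024
            + 0.3 * battery_pct / 100
            + 0.4 * cpu_idle_pct / 100)

def maybe_advertise(device_id, fitness, threshold=0.5, broadcast=print):
    # Below the threshold, the advertisement (and its network cost) is skipped.
    if fitness >= threshold:
        broadcast(f"CANDIDATE {device_id} fitness={fitness:.2f}")

maybe_advertise("tablet-1", local_fitness(2048, 80, 70))  # advertised (1.12)
maybe_advertise("watch-2", local_fitness(128, 30, 20))    # suppressed (0.21)
```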

    AdaMEC: Towards a Context-Adaptive and Dynamically-Combinable DNN Deployment Framework for Mobile Edge Computing

    With the rapid development of deep learning, recent research on intelligent and interactive mobile applications (e.g., health monitoring, speech recognition) has attracted extensive attention. These applications necessitate the mobile edge computing scheme, i.e., offloading partial computation from mobile devices to edge devices for inference acceleration and transmission load reduction. Current practice relies on collaborative DNN partition and offloading to satisfy predefined latency requirements, which makes it intractable to adapt to the dynamic deployment context at runtime. AdaMEC, a context-adaptive and dynamically-combinable DNN deployment framework, is proposed to meet these requirements for mobile edge computing; it consists of three novel techniques. First, once-for-all DNN pre-partition divides the DNN at the primitive operator level and stores the partitioned modules as executable files, defined as pre-partitioned DNN atoms. Second, context-adaptive DNN atom combination and offloading introduces a graph-based decision algorithm to quickly search for a suitable combination of atoms and adaptively make the offloading plan under dynamic deployment contexts. Third, a runtime latency predictor provides timely latency feedback for DNN deployment, considering both DNN configurations and dynamic contexts. Extensive experiments demonstrate that AdaMEC outperforms state-of-the-art baselines, reducing latency by up to 62.14% and saving 55.21% of memory on average.
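    The underlying calculation in DNN partition-and-offload can be shown in a simplified, layer-wise form: choose the point after which layers run on the edge, trading local compute against uplink transfer of the intermediate tensor. The sketch below illustrates only that baseline idea; AdaMEC itself partitions at the operator level with a graph-based search, and every timing and size here is invented.

```python
# Illustrative layer-wise split-point search (invented numbers).
# xfer_kb[k] is the data uploaded if layers k.. run on the edge;
# for k == 0 that is the raw input tensor.

def best_split(local_ms, edge_ms, xfer_kb, bandwidth_kbps=10_000):
    """Return (k, latency_ms): run layers [0, k) locally, [k, n) on the edge."""
    n = len(local_ms)
    best_k, best_lat = n, float(sum(local_ms))        # baseline: fully local
    for k in range(n):
        lat = (sum(local_ms[:k])                          # local prefix
               + xfer_kb[k] * 8 * 1000 / bandwidth_kbps   # uplink transfer, ms
               + sum(edge_ms[k:]))                        # edge suffix
        if lat < best_lat:
            best_k, best_lat = k, lat
    return best_k, best_lat

# Four made-up layers: the edge is ~4x faster and tensors shrink with depth.
local = [40, 60, 80, 30]
edge = [10, 15, 20, 8]
xfer = [500, 200, 50, 10]  # KB at each candidate split point
print(best_split(local, edge, xfer))  # -> (2, 168.0)
```

    With these numbers the search picks k=2: by that depth the intermediate tensor has shrunk enough that shipping it to the faster edge beats both fully local execution and earlier splits. A runtime latency predictor, as in the paper, would replace the fixed timing tables with context-dependent estimates.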

    From reactive to proactive load balancing for task‐based parallel applications in distributed memory machines

    Load balancing is often a challenge in task-parallel applications. Balancing problems are divided into static and dynamic. “Static” means that we have some prior knowledge about load information and perform balancing before execution, while “dynamic” must rely on partial information about the execution status to balance the load at runtime. Conventionally, work stealing is a practical approach used in almost all shared memory systems. In distributed memory systems, however, communication overhead can cause stolen tasks to arrive too late. To improve on this, a reactive approach has been proposed to relax communication when balancing load. The approach leaves one dedicated thread per process to monitor the queue status and offload tasks reactively from a slow to a fast process. However, reactive decisions can be mistaken in highly imbalanced cases. First, this article proposes a performance model to analyze reactive balancing behaviors and understand the bound leading to incorrect decisions. Second, we introduce a proactive approach to further improve task balancing at runtime. The approach likewise exploits task-based programming models with a dedicated thread per process. The main idea is that this thread does not only monitor load; it also characterizes tasks and trains load prediction models by online learning. “Proactive” indicates offloading tasks before each execution phase, with an appropriate number of tasks sent at once to a potential victim (an underloaded/fast process). The experimental results confirm speedup improvements over the previous solutions in important use cases. Furthermore, this approach can support co-scheduling tasks across multiple applications.
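    The proactive idea, predicting task costs online and shipping work before imbalance materializes, can be sketched as follows. This is assumed illustrative logic, not the paper's model: the single task-size feature, the SGD predictor, and the balancing rule are all invented for exposition.

```python
# Assumed sketch of proactive offloading: an online linear predictor
# learns task runtimes from a size feature as tasks complete, and tasks
# are shipped before execution until the local queue's predicted load
# no longer exceeds the victim's.

class OnlinePredictor:
    """One-feature linear model trained by SGD on completed tasks."""
    def __init__(self, lr=1e-3):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def predict(self, size):
        return self.w * size + self.b

    def update(self, size, runtime):
        err = self.predict(size) - runtime
        self.w -= self.lr * err * size
        self.b -= self.lr * err

def tasks_to_offload(my_sizes, victim_sizes, model):
    """Pick tasks to send so the two predicted loads roughly balance."""
    mine = sum(map(model.predict, my_sizes))
    theirs = sum(map(model.predict, victim_sizes))
    moved = []
    for s in sorted(my_sizes, reverse=True):  # biggest savings first
        if mine <= theirs:
            break
        mine -= model.predict(s)
        theirs += model.predict(s)
        moved.append(s)
    return moved

m = OnlinePredictor()
for size, runtime in [(10, 1.0), (20, 2.1), (30, 2.9)] * 50:  # completion feedback
    m.update(size, runtime)
print(tasks_to_offload([40, 25, 10, 5], [5], m))  # typically ships just the largest task
```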

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing a greater number of applications and related data on computing platforms has resulted in reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms must also be energy-efficient and reliable, and they need to perform secure computations in the interest of the whole community. This book provides perspectives on these aspects from leading researchers, in terms of state-of-the-art contributions and upcoming trends.

    Taking advantage of hybrid systems for sparse direct solvers via task-based runtimes

    The ongoing hardware evolution exhibits an escalation in the number, as well as in the heterogeneity, of computing resources. The pressure to maintain reasonable levels of performance and portability forces application developers to leave traditional programming paradigms and explore alternative solutions. PaStiX is a parallel sparse direct solver based on a dynamic scheduler for modern hierarchical manycore architectures. In this paper, we study the benefits and limits of replacing the highly specialized internal scheduler of the PaStiX solver with two generic runtime systems: PaRSEC and StarPU. The task graph of the factorization step is made available to the two runtimes, providing them the opportunity to process and optimize its traversal in order to maximize the algorithm's efficiency on the targeted hardware platform. A comparative study of the performance of the PaStiX solver on top of its native internal scheduler and of the PaRSEC and StarPU frameworks, in different execution environments, is performed. The analysis highlights that these generic task-based runtimes achieve results comparable to the application-optimized embedded scheduler on homogeneous platforms. Furthermore, they are able to significantly speed up the solver in heterogeneous environments by taking advantage of the accelerators while hiding the complexity of their efficient manipulation from the programmer. (Comment: Heterogeneity in Computing Workshop, 2014)
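    The core abstraction both runtimes consume, a task graph whose nodes fire once their dependencies complete, can be illustrated with a toy executor. This sketch is conceptual only; PaRSEC and StarPU are mature C runtimes whose schedulers also handle data movement, accelerators, and priorities.

```python
# Toy dependency-driven task executor (illustrative only): each task
# runs as soon as all of its prerequisite tasks have completed.

import time
from concurrent.futures import ThreadPoolExecutor

def run_graph(tasks, deps, workers=4):
    """tasks: name -> callable; deps: name -> set of prerequisite names."""
    done, running = set(), {}
    with ThreadPoolExecutor(workers) as pool:
        while len(done) < len(tasks):
            for name, fn in tasks.items():           # submit every ready task
                if name not in running and deps[name] <= done:
                    running[name] = pool.submit(fn)
            for name, fut in list(running.items()):  # collect completions
                if name not in done and fut.done():
                    fut.result()                     # re-raise worker errors
                    done.add(name)
            time.sleep(0.001)

# Diamond-shaped graph: b and c may run concurrently once a finishes.
tasks = {k: (lambda k=k: print("ran", k)) for k in "abcd"}
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
run_graph(tasks, deps)
```

    Exposing the factorization as such a graph is what lets a generic scheduler reorder and place tasks freely, which is the opportunity the paper evaluates.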