5 research outputs found

    A Framework for Energy-efficient Mobile Cloud Offloading

    Emerging smartphone technologies have experienced geometric growth and are still on the rise. People use smartphones for day-to-day activities such as sending emails and sharing photos and videos through various peer-to-peer social network hubs. In the last few years, the smartphone has seen massive technological advancement and innovation in its processing capabilities, and can now be used to perform complex, resource-intensive tasks in advanced applications such as video editing and processing, and object recognition. Although most smartphones have been greatly augmented to handle advanced applications with complex computational needs, they are still limited by their energy resources, i.e. battery life. Battery technology has not evolved as rapidly as other areas of the smartphone, so executing computation-intensive tasks depletes the battery quickly, as evidenced by the need to constantly recharge the device. Many techniques have been proposed to maximize energy conservation on mobile devices, such as slowing down the CPU or shutting off the screen when idle. Among these, the most notable technique for conserving smartphone energy is computation offloading. This involves transferring the processing of certain tasks from a resource-constrained smartphone to a remote, resource-rich device, thereby conserving energy on the smartphone. This is a fairly large research area and numerous contributions have been made towards its advancement. However, much work remains to be done with regard to energy conservation through offloading during recurrent resource-intensive processing. In this research study we aim to reduce energy consumption during continuous, energy-intensive processing. We consider context-awareness in proposing a scheduling model that could potentially minimize the rapid depletion of mobile device energy, thus achieving our aim. We propose a service-oriented framework for enabling energy-optimal task execution through a task scheduling offload algorithm. We develop a proof-of-concept prototype on an Android device to demonstrate and evaluate the framework's energy-conserving capabilities.
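
    The central decision such a scheduler has to make is whether running a task remotely costs the device less energy than running it locally. The Java sketch below illustrates that comparison under a very simple energy model; the class and parameter names (OffloadDecision, localPowerWatts, uplinkJoulesPerByte, etc.) and the numbers in main are illustrative assumptions, not the paper's actual model or API.

    // Minimal, hypothetical sketch of a context-aware offload decision:
    // offload a task only if the estimated energy to transmit its input and
    // wait for the result is lower than the energy of executing it locally.
    public final class OffloadDecision {

        /** Estimated energy (joules) to run the task on the device CPU. */
        static double localEnergy(double cpuSeconds, double localPowerWatts) {
            return cpuSeconds * localPowerWatts;
        }

        /** Estimated energy (joules) to ship the input and idle while waiting. */
        static double offloadEnergy(long inputBytes, double uplinkJoulesPerByte,
                                    double remoteSeconds, double idlePowerWatts) {
            return inputBytes * uplinkJoulesPerByte + remoteSeconds * idlePowerWatts;
        }

        /** Context-aware rule: offload only when a network is available and remote is cheaper. */
        static boolean shouldOffload(boolean networkAvailable, double localJ, double remoteJ) {
            return networkAvailable && remoteJ < localJ;
        }

        public static void main(String[] args) {
            double local = localEnergy(12.0, 1.5);                    // 18.0 J if run locally
            double remote = offloadEnergy(2_000_000, 3e-6, 4.0, 0.8); // 9.2 J if offloaded
            System.out.println("Offload? " + shouldOffload(true, local, remote)); // true
        }
    }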

    Code offloading on real-time multimedia systems: a framework for handling code mobility and code offloading in a QoS aware environment

    Smartphones and other mobile devices are becoming more powerful and are capable of executing several applications concurrently. Although the hardware capabilities of mobile devices are increasing in an unprecedented way, they still do not possess the same features and resources as a common desktop or laptop PC. A potential solution to this limitation is to distribute an application by running some of its parts locally while running the remaining parts on other devices. Additionally, several types of applications in domains such as multimedia, gaming or immersive environments impose soft real-time constraints that have to be guaranteed. In this work we target highly dynamic distributed systems with Quality of Service (QoS) constraints, where traditional models of computation are not sufficient to handle the users' or applications' requests, so new models of computation are needed to satisfy those requirements. Code offloading techniques allied with resource management are promising here, as each node may ask neighbour nodes for help in performing demanding computations that cannot be done locally. In this demanding context, a full-fledged framework was developed with the objective of integrating code offloading techniques on top of a middleware framework that provides QoS and real-time guarantees to applications. This paper describes the implementation of the above-mentioned framework on the Android platform, as well as a proof-of-concept application that demonstrates the most important concepts of code offloading, QoS and real-time scheduling.
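
    For tasks with soft real-time requirements, the offloading decision has to respect the task deadline as well as the resource situation. As an illustration only, the Java sketch below shows the kind of admission test such a QoS-aware middleware might perform before sending a job to a neighbour node; the class name, the timing model and the numbers are assumptions, not the framework's actual API.

    // Hypothetical sketch of a deadline-aware offload test for a soft
    // real-time task in a QoS-constrained distributed system.
    public final class RtOffloadCheck {

        /** Estimated completion time (ms) if the task runs on a remote node. */
        static double remoteCompletionMs(double networkRttMs, double remoteExecMs, double queueingMs) {
            return networkRttMs + remoteExecMs + queueingMs;
        }

        /** Offload only if the remote estimate still respects the task's deadline. */
        static boolean offloadMeetsDeadline(double deadlineMs, double networkRttMs,
                                            double remoteExecMs, double queueingMs) {
            return remoteCompletionMs(networkRttMs, remoteExecMs, queueingMs) <= deadlineMs;
        }

        public static void main(String[] args) {
            // A periodic multimedia task with a 40 ms soft deadline.
            boolean ok = offloadMeetsDeadline(40.0, 12.0, 18.0, 5.0);
            System.out.println("Safe to offload: " + ok); // true (35 ms <= 40 ms)
        }
    }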

    Improving efficiency, scalability and efficacy of adaptive computation offloading in pervasive computing environments

    As computing becomes more mobile and pervasive, there is a growing demand for increasingly rich, and therefore more computationally heavy, applications to run in mobile spaces. However, there exists a disparity between mobile platforms and the desktop environments upon which computationally heavy applications have traditionally run, which is likely to persist as both domains evolve at a competing pace. Consequently, an active research area is Adaptive Computation Offloading, or cyber foraging, which dynamically distributes application functionality to available peer devices according to resource availability and application behaviour. Integral to any offloading strategy is an adaptive decision-making algorithm that computes the optimal placement of application components onto remote devices based on changing environmental context. As this decision is typically computed by constrained devices and may occur frequently in dynamic environments, such algorithms should be both resource-efficient and yield efficacious adaptation results. However, existing adaptive offloading approaches incur a number of overheads, which limit their applicability in mobile and pervasive spaces. This thesis is concerned with improving upon these limitations by specifically focusing on the efficiency, scalability and efficacy aspects of two major sub-processes of adaptation: 1) Adaptive Candidate Device Selection and 2) Adaptive Object Topology Computation. To this end, three novel approaches are proposed. Firstly, a distributed approach to candidate device selection is proposed, which reduces the need to communicate collaboration metrics and allows for the partial distribution of adaptation decision-making. The approach is shown to reduce network consumption by over 90% and power consumption by as much as 96%, while maintaining linear memory complexity in contrast to the quadratic complexity of an existing approach. Hence, the approach presents a more efficient and scalable alternative for candidate device selection in mobile and pervasive environments. Secondly, with regard to the efficacy of adaptive object topology computation, a new type of adaptation granularity is proposed that combines the efficacy of fine-grained adaptation with the efficiency of coarse-level approaches. The approach is shown to improve the efficacy of adaptation decisions by reducing network overheads by a minimum of 17% to as much as 99%, while maintaining decision-making efficiency comparable to coarse-level adaptation. Thirdly, with regard to the efficiency and scalability of object topology computation, a novel distributed approach to computing adaptation decisions is proposed, in which each device maintains a distributed local application sub-graph consisting only of components in its own memory space. The approach is shown to reduce network cost by 100%, collaboration-wide memory cost by between 37% and 50%, battery usage by between 63% and 93%, and adaptation time by between 19% and 98%. Lastly, since improving the utility of adaptation in mobile and pervasive environments requires the simultaneous improvement of its sub-processes, an adaptation engine is proposed which consolidates the individual approaches presented above. The consolidated adaptation engine is shown to improve the overall efficiency, scalability and efficacy of adaptation under a varying range of environmental conditions, which simulate dynamic and heterogeneous mobile environments.
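
    As a rough illustration of the candidate device selection step described above, the Java sketch below scores each advertised peer from a few normalised resource metrics and picks the top-ranked devices as offload targets. The Device record, the metric names and the weights are illustrative assumptions, not the selection algorithm proposed in the thesis.

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch of candidate device selection: rank peers by a weighted
    // score of available CPU, free memory and battery level, then pick the top k.
    public final class CandidateSelection {

        /** Snapshot of a peer's advertised resources, all normalised to [0, 1]. */
        record Device(String id, double cpuFree, double memFree, double battery) {
            double score() {
                return 0.5 * cpuFree + 0.3 * memFree + 0.2 * battery; // illustrative weights
            }
        }

        static List<Device> selectTargets(List<Device> peers, int k) {
            return peers.stream()
                    .sorted(Comparator.comparingDouble(Device::score).reversed())
                    .limit(k)
                    .toList();
        }

        public static void main(String[] args) {
            List<Device> peers = List.of(
                    new Device("phone-A", 0.2, 0.5, 0.9),
                    new Device("laptop-B", 0.8, 0.7, 1.0),
                    new Device("tablet-C", 0.6, 0.4, 0.3));
            System.out.println(selectTargets(peers, 2)); // laptop-B, then tablet-C
        }
    }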

    Transparent and Dynamic Code Offloading for Java Applications

    Code offloading is a promising approach for embedded systems and load balancing. Embedded systems will be able to offload computation to nearby computers, and large-scale applications will be able to load-balance computation during high load. This paper presents a runtime infrastructure that transparently distributes computation between interconnected workstations. Application source code is not modified: instead, dynamic aspect weaving within an extended virtual machine makes it possible to monitor and distribute entities dynamically. Runtime policies for distribution can be dynamically adapted depending on the environment. A first evaluation of the system shows that our technique increases the transaction rate of a Web server during high load by 73%.
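
    The paper achieves transparency by weaving aspects inside an extended virtual machine. As a coarse, self-contained approximation of that interception idea, the sketch below uses a standard java.lang.reflect.Proxy to intercept calls on a service interface and decide per invocation whether to execute locally or hand the call to a stubbed remote executor; the interface, the load figure and the threshold are assumptions for illustration only.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Hypothetical sketch of transparent call interception with a dynamic proxy:
    // the caller uses the plain interface while each invocation is routed either
    // to the local object or to a stand-in for remote execution.
    public final class TransparentOffload {

        interface ImageService {
            int countEdges(int[] pixels);
        }

        static class LocalImageService implements ImageService {
            public int countEdges(int[] pixels) { return pixels.length / 10; } // dummy workload
        }

        /** Stand-in for the remote side; a real system would marshal and ship the call. */
        static Object executeRemotely(Method m, Object[] args) {
            System.out.println("-> offloading " + m.getName() + " to a peer node");
            return new LocalImageService().countEdges((int[]) args[0]);
        }

        static ImageService wrap(ImageService local, double loadThreshold) {
            InvocationHandler handler = (proxy, method, args) -> {
                double currentLoad = 0.9; // pretend the host is under high load
                return currentLoad > loadThreshold
                        ? executeRemotely(method, args)
                        : method.invoke(local, args);
            };
            return (ImageService) Proxy.newProxyInstance(
                    ImageService.class.getClassLoader(),
                    new Class<?>[]{ImageService.class}, handler);
        }

        public static void main(String[] args) {
            ImageService svc = wrap(new LocalImageService(), 0.7);
            System.out.println("edges = " + svc.countEdges(new int[1000])); // 100
        }
    }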

    Transparent and adaptive application partitioning using mobile objects

    The dynamic nature and heterogeneity of modern execution environments such as mobile, ubiquitous and grid computing present major challenges for the development and efficient execution of applications targeted at these environments. In particular, applications tailored to run in a specific environment will show different, and most likely sub-optimal, behaviour when executed in a different and/or dynamic environment. Consequently, there has been growing interest in the area of application adaptation, which aims to enable applications to cope with varying execution environments. Adaptive application partitioning, a specific form of non-functional adaptation involving the distribution of mobile objects across multiple host machines, is of particular interest to this thesis due to the diversity of its uses. In this approach, certain runtime information (known as context) is used to allow an object-oriented application to adaptively (re)adjust the placement of its objects during execution, for purposes such as improving application performance and reliability as well as balancing resource utilisation across machines. Promoting the adoption of such adaptation requires a process that involves minimal human effort in both the execution and the development of the relevant application. These challenges establish the main goals and contributions of this work, which include: 1) proposing an effective application partitioning solution via the adoption of a decentralised adaptation strategy known as local adaptation; 2) enabling adaptive application partitioning that does not require human intervention, through automatic collection of required information/context; and 3) proposing a solution for transparently injecting the required adaptation functionality into regular object-oriented applications, allowing a significant reduction in the associated development cost/effort. The proposed solutions have been implemented in a Java-based adaptation framework called MobJeX. This implementation, which was used as a test bed for the empirical experiments undertaken in this study, can be used to facilitate future research relevant to this particular study.
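
    A minimal sketch of the local-adaptation idea behind such mobile-object partitioning is given below: each host looks only at its own objects and context, and decides whether to migrate its heaviest object to a sufficiently less loaded peer. The record types, the load margin and the migration trigger are illustrative assumptions, not MobJeX's actual API.

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch of decentralised ("local") adaptation: a host inspects
    // its own mobile objects and migrates the heaviest one when its CPU load
    // exceeds that of the least loaded peer by a margin.
    public final class LocalAdaptation {

        record ObjectInfo(String name, double cpuShare) {}  // fraction of host CPU used
        record Host(String name, double cpuLoad) {}         // normalised load in [0, 1]

        /** Pick the migration target: the least loaded peer, if usefully lighter than this host. */
        static Host chooseTarget(double myLoad, List<Host> peers, double margin) {
            Host best = peers.stream()
                    .min(Comparator.comparingDouble(Host::cpuLoad))
                    .orElse(null);
            return (best != null && myLoad - best.cpuLoad() > margin) ? best : null;
        }

        public static void main(String[] args) {
            double myLoad = 0.85;
            List<Host> peers = List.of(new Host("desktop-1", 0.30), new Host("phone-2", 0.70));
            List<ObjectInfo> mine = List.of(new ObjectInfo("Renderer", 0.40),
                                            new ObjectInfo("Cache", 0.05));

            Host target = chooseTarget(myLoad, peers, 0.2);
            if (target != null) {
                ObjectInfo heaviest = mine.stream()
                        .max(Comparator.comparingDouble(ObjectInfo::cpuShare))
                        .orElseThrow();
                System.out.println("Migrating " + heaviest.name() + " to " + target.name());
            }
        }
    }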