26 research outputs found

    Programming models for mobile environments

    Get PDF
    Premi extraordinari doctorat UPC curs 2017-2018. Àmbit d'Enginyeria de les TIC (Extraordinary Doctorate Award, UPC, academic year 2017-2018, ICT Engineering field).
    For the last decade, mobile devices have grown in popularity and have become the best-selling computing devices. Despite their strong support for user interaction and network connectivity, their computing power is low and the lifetime of the applications running on them is limited by the battery. Mobile Cloud Computing (MCC) is a technology that tackles the limitations of mobile devices by bringing together their mobility with the vast computing power of the Cloud. Programming applications for MCC environments is not as straightforward as coding monolithic applications. Developers have to deal with the issues of parallel programming for distributed infrastructures while considering the battery lifetime and the network variability produced by the high mobility of these devices. As with any other distributed environment, developers turn to programming models to improve their productivity, avoiding the complexity of handling these issues manually by delegating their management to the model. This thesis contributes to the current state of the art with an adaptation of the COMPSs programming model for MCC environments. COMPSs allows application programmers to code their applications in a sequential, infrastructure-agnostic fashion, without calls to any COMPSs-specific API, using the native language of the target platform as if the application were to run entirely on the mobile device. At execution time, a runtime system automatically partitions the application into tasks and orchestrates their execution on top of the available resources. This thesis contributes an extension to the programming model that allows task polymorphism and lets the runtime exploit computational resources other than the CPU. Besides, the runtime architecture has been redesigned with the characteristics of MCC in mind: it runs as a common service that all the applications running simultaneously on the mobile device contact to submit the execution of their tasks. To exploit local and remote resources collaboratively, the runtime clusters the computational devices into Computing Platforms according to the mechanisms required to provide the processing elements with the necessary input values, launch the task execution while avoiding resource oversubscription, and fetch the results back from them. The CPU Platform runs tasks on the cores of the CPU. The GPU Platform leverages OpenCL to run tasks as kernels on GPUs or other accelerators embedded in the mobile device. Finally, the Cloud Platform offloads the execution of tasks onto remote resources. To decide holistically whether it is worth running a task on embedded or on remote resources, the runtime considers the costs (time, energy and money) of running the computation on each of the platforms and picks the best one. Each platform manages its resources internally and orchestrates the execution of tasks on them using different scheduling policies. Using local and remote computing devices forces the runtime to share data values among the nodes of the infrastructure. These data are potentially privacy-sensitive, and the runtime exposes them to possible attackers when transferring them through the network. To protect the application user from data leaks, the runtime has to provide communications with secrecy, integrity and authenticity.
In the extreme case of a network breakdown that isolates the mobile device from the remote nodes, the runtime has to ensure that the execution continues and provides the application user with the expected result even if the connection is never re-established. The mobile device has to respond using only its embedded resources, which may incur the re-execution of computations already run on the remote resources. Remote workers have to continue with the execution so that, in case of reconnection, both parts can synchronize their progress and reduce the impact of the disruption. Award-winning. Postprint (published version).
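    As a rough illustration of the cost-based decision described above, the hypothetical Java sketch below scores each Computing Platform by assumed time, energy and monetary cost estimates for a task and picks the cheapest one. The class names, weights and cost figures are illustrative assumptions, not the actual COMPSs runtime code or its scheduling policy.

    // Hypothetical sketch (not the COMPSs API): pick the Computing Platform with
    // the lowest weighted cost for a task, given per-platform cost estimates.
    import java.util.Comparator;
    import java.util.List;

    public class PlatformSelector {

        // Each platform reports its estimated costs for a given task; how these
        // estimates are obtained (profiling, history, network state) is out of scope.
        record Platform(String name, double timeSec, double energyJ, double moneyEur) {}

        // Assumed weights trading off responsiveness, battery life and budget.
        static double score(Platform p, double wTime, double wEnergy, double wMoney) {
            return wTime * p.timeSec() + wEnergy * p.energyJ() + wMoney * p.moneyEur();
        }

        static Platform pickBest(List<Platform> platforms,
                                 double wTime, double wEnergy, double wMoney) {
            return platforms.stream()
                    .min(Comparator.comparingDouble(p -> score(p, wTime, wEnergy, wMoney)))
                    .orElseThrow();
        }

        public static void main(String[] args) {
            List<Platform> candidates = List.of(
                    new Platform("CPU",   4.0, 12.0, 0.0),   // local cores
                    new Platform("GPU",   1.5,  8.0, 0.0),   // OpenCL kernel on the embedded GPU
                    new Platform("Cloud", 2.0,  3.0, 0.002)  // offloaded execution, network included
            );
            Platform best = pickBest(candidates, 1.0, 0.5, 100.0);
            System.out.println("Run task on: " + best.name());
        }
    }

    A weighted sum is only the simplest way to combine the three costs; as the abstract notes, each platform additionally applies its own internal scheduling policy once a task is assigned to it.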

    Elastic computation placement in edge-based environments

    Get PDF
    Today, technologies such as machine learning, virtual reality, and the Internet of Things are increasingly integrated into end-user applications. These technologies demand high computational capabilities. Mobile devices in particular have limited resources in terms of execution performance and battery life. The offloading paradigm provides a solution to this problem and transfers computationally intensive parts of applications to more powerful resources, such as servers or cloud infrastructure. Recently, a new computation paradigm arose which exploits the huge number of end-user devices in the modern computing landscape: edge computing. These devices encompass smartphones, tablets, microcontrollers, and PCs. In edge computing, devices cooperate with each other while avoiding cloud infrastructure. Due to the proximity among the participating devices, the communication latencies for offloading are reduced. However, edge computing brings new challenges in the form of device fluctuation, unreliability, and heterogeneity, which negatively affect resource elasticity. As a solution, this thesis proposes a computation placement framework that provides an abstraction for computation and resource elasticity in edge-based environments. The design is middleware-based, encompasses heterogeneous platforms, and supports easy integration of existing applications. It is composed of two parts: the Tasklet system and the edge support layer. The Tasklet system is a flexible framework for computation placement on heterogeneous resources. It introduces closed units of computation that can be tailored to generic applications. The edge support layer handles the characteristics of edge resources. It copes with fluctuation and unreliability by applying reactive and proactive task migration. Furthermore, performance heterogeneity and the consequent bottlenecks are handled by two edge-specific task partitioning approaches. As a proof of concept, the thesis presents a fully-fledged prototype of the design, which is evaluated comprehensively in a real-world testbed. The evaluation shows that the design is able to substantially improve resource elasticity in edge-based environments
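    The reactive task migration mentioned above can be sketched roughly as follows, assuming a simple heartbeat timeout and invented names (Device, Task, ReactiveMigration); this is not the Tasklet system's actual API, and proactive migration and the two partitioning approaches are not shown.

    // Hypothetical sketch of reactive migration: when an edge device stops
    // answering heartbeats, its unfinished tasks are re-queued on a live device.
    import java.util.ArrayDeque;
    import java.util.List;
    import java.util.Queue;

    public class ReactiveMigration {

        record Task(String id) {}

        static class Device {
            final String name;
            final Queue<Task> assigned = new ArrayDeque<>();
            long lastHeartbeatMs;

            Device(String name, long now) { this.name = name; this.lastHeartbeatMs = now; }
            boolean isAlive(long now, long timeoutMs) { return now - lastHeartbeatMs < timeoutMs; }
        }

        static final long TIMEOUT_MS = 5_000; // assumed heartbeat deadline

        // Detect devices that missed their heartbeat deadline and move their
        // pending tasks to the first device that is still alive.
        static void migrateFromDeadDevices(List<Device> devices, long now) {
            for (Device dead : devices) {
                if (dead.isAlive(now, TIMEOUT_MS) || dead.assigned.isEmpty()) continue;
                devices.stream()
                       .filter(d -> d != dead && d.isAlive(now, TIMEOUT_MS))
                       .findFirst()
                       .ifPresent(target -> {
                           target.assigned.addAll(dead.assigned);
                           dead.assigned.clear();
                       });
            }
        }

        public static void main(String[] args) {
            long now = System.currentTimeMillis();
            Device phone  = new Device("phone", now);
            Device tablet = new Device("tablet", now - 60_000); // silent for a minute
            tablet.assigned.add(new Task("t1"));
            migrateFromDeadDevices(List.of(phone, tablet), now);
            System.out.println("phone queue after migration: " + phone.assigned);
        }
    }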

    High-Performance Modelling and Simulation for Big Data Applications

    Get PDF
    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to achieve a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction between High Performance Computing and Modelling and Simulation is therefore needed to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016)

    Get PDF
    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016), Timisoara, Romania, February 8-11, 2016. The PhD Symposium was a very good opportunity for young researchers to share information and knowledge, to present their current research, and to discuss topics with other students in order to look for synergies and common research topics. The idea was very successful and the assessment made by the PhD students was very good. It also helped to achieve one of the major goals of the NESUS Action: to establish an open European research network targeting sustainable solutions for ultrascale computing, aiming at cross-fertilisation among HPC, large-scale distributed systems, big data management, and training, contributing to bring together disparate researchers working across different areas and to provide a meeting ground for researchers in these separate areas to exchange ideas, identify synergies, and pursue common activities in research topics such as sustainable software solutions (applications and system software stack), data management, energy efficiency, and resilience. European Cooperation in Science and Technology (COST)

    System Support For Energy Efficient Mobile Computing

    Get PDF
    Mobile devices have developed rapidly and have become an integral part of our daily life. With the blooming of the Internet of Things, mobile computing will become more and more important. However, battery drain is a critical issue that hurts the user experience. High-performance devices require more power, while battery capacity only increases by about 5% per year on average. Researchers are working on many kinds of energy-saving approaches: for example, hardware components provide different power states to save idle power, and operating systems provide power management APIs to better control power dissipation. However, system energy efficiency is still too low to meet users' expectations. To improve energy efficiency, we studied how to provide system support for mobile computing in four different aspects. First, we focused on the influence of user behavior on system energy consumption. We monitored and analyzed users' application usage information. From the results, we built a battery prediction model to estimate the battery time based on user behavior and hardware component usage. By adjusting user behavior, we can at most double the battery time. To understand why different applications can cause such a huge energy difference, we built a power profiler, Bugu, to figure out where the power goes. Bugu analyzes power and event information for applications; it has high accuracy and low overhead. We analyzed the power behavior of almost 100 mobile applications and derived several implications for saving energy in applications and systems. In addition, to understand the energy behavior of modern hardware architectures, we analyzed the energy consumption and performance of heterogeneous platforms and compared them with homogeneous platforms. The results show that heterogeneous platforms indeed have great potential for energy saving, which mostly comes from idle and low-workload situations. However, a wrong scheduling decision may cause up to 30% more energy consumption; scheduling becomes the key point for energy-efficient computing. At last, as increased power density leads to high device temperature, we investigated the thermal management system and developed an ambient-temperature-aware thermal control policy, Falcon. It can save 4.85% of total system power and is more adaptive to various environments compared with the default approach. Finally, we discussed several potential directions for future research in this field
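    As a toy illustration of the battery prediction idea, the sketch below combines per-component utilisation (derived from observed user behaviour) with assumed per-component power figures to estimate the remaining battery time; all names and wattage constants are made up and do not come from the thesis.

    // Hypothetical sketch: estimate current power draw from component utilisation
    // and divide the remaining battery energy by it to predict battery time.
    import java.util.Map;

    public class BatteryPredictor {

        // Assumed average power draw per component at full utilisation, in watts.
        static final Map<String, Double> FULL_LOAD_WATTS = Map.of(
                "screen", 1.2,
                "cpu",    2.0,
                "wifi",   0.8,
                "gps",    0.5
        );

        static final double IDLE_WATTS = 0.3; // baseline draw when everything is idle

        // utilisation: component -> fraction of time it is active (0.0 .. 1.0),
        // derived from observed user behaviour.
        static double estimatedDrawWatts(Map<String, Double> utilisation) {
            double draw = IDLE_WATTS;
            for (var e : utilisation.entrySet()) {
                draw += FULL_LOAD_WATTS.getOrDefault(e.getKey(), 0.0) * e.getValue();
            }
            return draw;
        }

        // Remaining battery time in hours for a given residual energy (watt-hours).
        static double predictedHours(double remainingWh, Map<String, Double> utilisation) {
            return remainingWh / estimatedDrawWatts(utilisation);
        }

        public static void main(String[] args) {
            Map<String, Double> heavyUse = Map.of("screen", 0.9, "cpu", 0.6, "wifi", 0.4);
            Map<String, Double> lightUse = Map.of("screen", 0.2, "cpu", 0.1);
            double remainingWh = 8.0; // roughly a half-charged phone battery
            System.out.printf("heavy use: %.1f h%n", predictedHours(remainingWh, heavyUse));
            System.out.printf("light use: %.1f h%n", predictedHours(remainingWh, lightUse));
        }
    }

    Comparing the two usage profiles shows how a change in user behaviour alone can roughly double the predicted battery time, which is the kind of observation the abstract reports.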
