12 research outputs found

    Adapted Cloudlet for Mobile Distance Learning: Design, Prototype and Evaluation

    The Open and Distance Learning (ODL) programmes currently operated by some institutions in Nigeria suffer from poor accessibility from remote locations due to bandwidth and latency constraints. To address this accessibility problem caused by network delays, cloudlet computing is introduced. With cloudlet technology, distant mobile learners can connect to a nearby cloudlet and access the learning content. Instead of relying on a distant cloud with high access latency, a nearby resource-rich cloudlet can go a long way toward addressing the resource poverty of a mobile device. This paper presents an architectural design and a prototype implementation of an adapted cloudlet for mobile distance learning. The paper proposes a feature in which the learning content in the cloud or cloudlet platform is adapted to a nearby mobile learner depending on which platform is reachable with minimal delays in terms of bandwidth and latency.
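
    As a rough illustration of the adaptation idea described above, the sketch below (not taken from the paper) shows how a mobile learner's client might probe a nearby cloudlet and fall back to the distant cloud when the cloudlet is unreachable or too slow; the host names, port numbers, and latency budget are assumptions.

import socket
import time

# Hypothetical endpoints; the paper does not name concrete hosts or ports.
CLOUDLET = ("cloudlet.local", 8080)
CLOUD = ("cloud.example.org", 443)
LATENCY_BUDGET_S = 0.05  # assumed acceptable connection delay

def probe_latency(host, port, timeout=1.0):
    """Use TCP connect time as a rough latency estimate; return None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def choose_content_source():
    """Prefer the nearby cloudlet when it answers within the latency budget."""
    cloudlet_rtt = probe_latency(*CLOUDLET)
    if cloudlet_rtt is not None and cloudlet_rtt <= LATENCY_BUDGET_S:
        return "cloudlet", cloudlet_rtt
    return "cloud", probe_latency(*CLOUD)

if __name__ == "__main__":
    source, rtt = choose_content_source()
    print(f"Serving learning content from the {source} (probe delay: {rtt})")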

    Little Boxes: A Dynamic Optimization Approach for Enhanced Cloud Infrastructures

    The increasing demand for diverse mobile applications with varying Quality of Service (QoS) requirements meets the increasing elasticity of on-demand resource provisioning in virtualized cloud computing infrastructures. This paper provides a dynamic optimization approach for enhanced cloud infrastructures, based on the concept of cloudlets located at hotspot areas throughout a metropolitan area. In conjunction, we consider classical remote data centers that are rigid with respect to QoS but provide nearly abundant computation resources. Given fluctuating user demands, we optimize the cloudlet placement over a finite time horizon from a cloud infrastructure provider's perspective. By means of a custom-tailored heuristic approach, we reduce the computational effort compared to the exact approach by at least three orders of magnitude, while maintaining high solution quality with a moderate cost increase of 5.8% or less.
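
    The following toy sketch illustrates the kind of per-slot placement trade-off the paper optimizes, using a naive greedy rule rather than the authors' custom-tailored heuristic; the capacities, costs, and demand figures are invented for illustration.

# Illustrative greedy placement for one time slot; not the paper's heuristic.
CLOUDLET_CAPACITY = 100      # requests a cloudlet can absorb per slot (assumed)
CLOUDLET_COST = 10.0         # cost of operating one cloudlet for one slot (assumed)
REMOTE_COST_PER_REQ = 0.2    # cost of serving one request from the remote data center (assumed)

def place_cloudlets(demand_per_site):
    """Open a cloudlet at a site whenever serving locally is cheaper than remote offload."""
    placement, total_cost = [], 0.0
    for site, demand in demand_per_site.items():
        local = CLOUDLET_COST + max(0, demand - CLOUDLET_CAPACITY) * REMOTE_COST_PER_REQ
        remote = demand * REMOTE_COST_PER_REQ
        if local < remote:
            placement.append(site)
            total_cost += local
        else:
            total_cost += remote
    return placement, total_cost

# One time slot of fluctuating demand at metropolitan hotspot areas (toy data).
demand = {"station": 180, "stadium": 40, "campus": 90}
opened, cost = place_cloudlets(demand)
print(f"Open cloudlets at {opened}, slot cost {cost:.1f}")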

    Mobile Cloud Computing Model and Big Data Analysis for Healthcare Applications

    Mobile devices are increasingly becoming an indispensable part of people's daily life, helping them perform a variety of useful tasks. Mobile cloud computing integrates mobile and cloud computing to expand their capabilities and benefits and to overcome limitations such as limited memory, CPU power, and battery life. Big data analytics technologies enable extracting value from data characterized by four Vs: volume, variety, velocity, and veracity. This paper discusses networked healthcare and the role of mobile cloud computing and big data analytics in enabling it. The motivation for and development of networked healthcare applications and systems are presented, along with the adoption of cloud computing in healthcare. A cloudlet-based mobile cloud computing infrastructure for healthcare big data applications is described. The techniques, tools, and applications of big data analytics are reviewed. Conclusions are drawn concerning the design of networked healthcare systems using big data and mobile cloud computing technologies. An outlook on networked healthcare is given.

    Mobile Edge Cloud Network Design Optimization

    Major interest is currently given to the integration of clusters of virtualization servers, also referred to as 'cloudlets' or 'edge clouds', into the access network to allow higher performance and reliability in the access to mobile edge computing services. We tackle the edge cloud network design problem for mobile access networks. In our model, virtual machines (VMs) are associated with mobile users and are allocated to cloudlets. Designing an edge cloud network implies first determining where to install cloudlet facilities among the available sites, then assigning sets of access points, such as base stations, to cloudlets, while supporting VM orchestration and considering partial user mobility information as well as the satisfaction of service-level agreements (SLAs). We present link-path formulations supported by heuristics to compute solutions in reasonable time. We quantify the advantage of considering mobility for both users and VMs: up to 20% fewer users have their SLA violated, with only a small increase in the number of opened facilities. We compare two VM mobility modes, bulk and live migration, as a function of mobile cloud service requirements, determining that a strong preference should be given to live migration, while bulk migration seems to be a feasible alternative for delay-stringent, tiny-disk services, such as augmented reality support, and only with further relaxation of network constraints.
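
    As a loose illustration of the design problem (cloudlet site opening plus base-station assignment under a latency bound standing in for the SLA), the sketch below uses a simple greedy rule, not the paper's link-path formulation; all latencies and costs are assumed.

SITE_OPEN_COST = 5.0      # assumed cost of opening one cloudlet site
LATENCY_BOUND_MS = 10.0   # assumed latency bound standing in for the SLA

# Assumed access latencies from base stations to candidate cloudlet sites.
latency_ms = {
    "bs1": {"siteA": 3.0, "siteB": 12.0},
    "bs2": {"siteA": 8.0, "siteB": 4.0},
    "bs3": {"siteA": 15.0, "siteB": 6.0},
}

def design(latencies):
    """Greedily open sites and assign each base station to a site within the latency bound."""
    opened, assignment = set(), {}
    for bs, lat in latencies.items():
        feasible = {s: l for s, l in lat.items() if l <= LATENCY_BOUND_MS}
        if not feasible:
            assignment[bs] = None  # the latency bound cannot be met for this base station
            continue
        # Reuse an already-open feasible site if possible; otherwise open the best one.
        open_feasible = {s: l for s, l in feasible.items() if s in opened}
        site = min(open_feasible or feasible, key=feasible.get)
        opened.add(site)
        assignment[bs] = site
    return opened, assignment, len(opened) * SITE_OPEN_COST

print(design(latency_ms))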

    Design of a Collaborative Architectural Model for Pervasive Computing at the Edge of Mobile Networks

    Advances in peer-to-peer and wireless communication technologies have increasingly enabled the integration of mobile and pervasive devices into distributed systems and computing architectures within the Internet of Things paradigm. These devices are also subject to continuous technological development and tend to be further miniaturized with each generation, during which they come to be regarded as de facto computing devices. The outcome of this progress is the emergence of collaborative mobile and pervasive computing, notably integrated into Internet of Things architectural models. The most important benefit of this form of computing is the ease of connecting large numbers of pervasive and portable devices on the move across the different networks available. Despite continual advances, mobile and pervasive intelligent systems (networks, devices, software, and connection technologies) still suffer from limitations at several levels, such as maintaining connectivity, computing power, data storage capacity, communication throughput, the lifetime of power sources, and the efficiency of processing large tasks in terms of partitioning, scheduling, and load balancing. The accelerated technological development of the equipment in these mobile models is always accompanied by its intensive use. Given this reality, more effort is needed both in structural design, hardware and software alike, and in the way these systems are managed. This involves improving, on the one hand, the architecture of these models and their communication technologies and, on the other hand, the scheduling and load-balancing algorithms that let them carry out their work efficiently. Our goal is to make these pervasive models more autonomous, intelligent, and collaborative by strengthening the capabilities of their devices, their connectivity technologies, and the applications that perform their tasks. To that end, we establish an autonomous, pervasive, and collaborative architectural model for the network edge. The model relies on various modern connection technologies such as wireless and peer-to-peer radio communication, and on the technologies offered by Pycom's LoPy4 such as LoRa, BLE, Wi-Fi, Wi-Fi radio, and Bluetooth; integrating these technologies makes it possible to maintain communication continuity in diverse environments, even the most severe ones. Within this model, we design and evaluate a load-balancing and scheduling algorithm to strengthen and improve its efficiency and quality of service (QoS) in different environments. The evaluation of this architectural model shows benefits such as improved connectivity and more efficient task execution.
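
    A minimal sketch of the load-balancing idea, assigning tasks to the least-loaded collaborating edge device; this is an illustrative least-loaded scheduler, not the thesis's algorithm, and the device names and task costs are assumptions.

import heapq

def schedule(tasks, devices):
    """Assign each (task, cost) pair to the currently least-loaded collaborating device."""
    heap = [(0.0, name) for name in devices]  # (accumulated load, device name)
    heapq.heapify(heap)
    plan = []
    for task, cost in tasks:
        load, device = heapq.heappop(heap)   # least-loaded device so far
        plan.append((task, device))
        heapq.heappush(heap, (load + cost, device))
    return plan

# Toy workload distributed across three hypothetical LoPy4-based nodes.
tasks = [("sense", 1.0), ("aggregate", 3.0), ("compress", 2.0), ("upload", 2.5)]
print(schedule(tasks, ["lopy4-a", "lopy4-b", "lopy4-c"]))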

    Online algorithms for content caching: an economic perspective

    Content caching at intermediate nodes, so that future requests can be served without going back to the origin of the content, is an effective way to optimize the operation of computer networks: it reduces delivery delay and improves the users' Quality of Experience (QoE). The current literature either proposes offline algorithms that require complete a priori knowledge of the request profile, or heuristics without provable performance guarantees. In this dissertation, online algorithms are presented for content caching in three different network settings: the current Internet, collaborative multi-cell coordinated networks, and future Content Centric Networks (CCN). Because content popularities are difficult to know in advance in real scenarios, an algorithm has to decide whether or not to cache a content item at the moment it is requested, without knowledge of any future requests. The performance of the online algorithms is measured through competitive ratio analysis, comparing the online algorithm to an omniscient optimal offline algorithm. Theoretical analyses show that the proposed online algorithms achieve either the optimal or a close-to-optimal competitive ratio. Moreover, the algorithms have low complexity and can be implemented in a distributed way. The theoretical analyses are complemented with simulation-based experiments, which show that the online algorithms outperform state-of-the-art caching schemes.
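
    For context, the classic LRU rule below is the textbook example of an online caching algorithm analyzed through a competitive ratio (it is k-competitive for paging, where k is the cache capacity); it is shown only as a reference point and is not one of the dissertation's algorithms.

from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: evict the item whose last request is oldest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, content_id):
        """Return True on a hit; on a miss, cache the content and evict the LRU item if full."""
        if content_id in self.store:
            self.store.move_to_end(content_id)   # mark as most recently used
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)       # evict least recently used
        self.store[content_id] = True
        return False

cache = LRUCache(capacity=2)
hits = sum(cache.request(c) for c in ["a", "b", "a", "c", "b"])
print(f"hits: {hits} / 5")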

    Architecture of Mobile Applications in the Health Care Domain

    In recent years, the world of information technology has witnessed a spectacular breakthrough in mobile computing. With these advances, more and more services are being offered not only to users of mobile computing directly, but also to other sectors looking for solutions to problems that had so far remained unresolved; health care is a good example. Computing today allows a doctor, equipped only with a mobile device, to consult a patient located anywhere in the world without travelling, and the results of these consultations, once collected, can be used instantly. Even more interesting, the doctor can attach biosensors to the patient to continuously monitor any change in the patient's condition. This way of assisting patients remotely also has weaknesses: mobile devices have limited resources, and some medical algorithms require substantial computing power. The most popular solution to this problem is to offload the most demanding tasks to powerful external servers. Thanks to the arrival of Cloud Computing, which makes it possible to exploit large servers with virtually unlimited resources in a scalable way, resource-hungry requests are sent to the Cloud and the results are then used on the mobile device. In this thesis, we propose an architecture subject to a set of performance constraints that allows us to take into account the best place to execute each task. To achieve this, we propose a mechanism that manages a timeout for requests sent to the Cloud, limiting the waiting time on the mobile device; this gives better control over offloaded requests, since the environment in which mobile devices operate is exposed to changes that can occur at any time and for many reasons. To decide where a task should be executed, the application code is partitioned into two parts: a first part that always runs on the phone, and a second part that can run either on the phone or in the Cloud. The developer is responsible for identifying these two parts and must implement the second partition both in the Cloud and on the phone. Since the execution environment changes constantly, we query the environmental parameters each time a task is a candidate for offloading. For this purpose, we designed an expert system, composed of a knowledge base and an inference engine, which decides where our tasks should run; the inference engine decides, based on a set of rules and facts, exactly where a given task should be executed. To validate our architecture, a set of tests was conducted. We obtained positive results that allow us to conclude that the Cloud can indeed be used to improve the performance of applications on mobile devices. These tests were performed using Amazon's EC2 solution, which lets us offload our tasks to a data centre located in the United States.
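
    A minimal sketch of the decision logic described above: a tiny rule table standing in for the knowledge base and inference engine, plus a timeout budget for Cloud requests; the rules, thresholds, and context fields are illustrative assumptions, not the thesis's expert system.

from dataclasses import dataclass

@dataclass
class Context:
    """Assumed environmental facts queried before each offloadable task."""
    network_up: bool
    bandwidth_mbps: float
    battery_pct: float
    task_complexity: float  # abstract cost units

# Assumed rules; the first condition that matches decides the execution place.
RULES = [
    (lambda c: not c.network_up,          "device"),   # no connectivity: run locally
    (lambda c: c.task_complexity < 1.0,   "device"),   # cheap tasks stay local
    (lambda c: c.battery_pct < 15.0,      "cloud"),    # preserve battery
    (lambda c: c.bandwidth_mbps >= 2.0,   "cloud"),    # good link: offload
]

def decide(context, default="device"):
    """Fire the first matching rule; a tiny stand-in for a rule-based inference engine."""
    for condition, target in RULES:
        if condition(context):
            return target
    return default

def run_task(context, timeout_s=2.0):
    target = decide(context)
    # If a Cloud request exceeded timeout_s, the task would fall back to local
    # execution; the actual Cloud call is omitted in this sketch.
    return f"run on {target} (cloud timeout budget: {timeout_s}s)"

print(run_task(Context(network_up=True, bandwidth_mbps=5.0, battery_pct=60.0, task_complexity=4.0)))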

    Efficient Context-aware Real-time Processing of Personal Data Streams

    In this dissertation, we propose a framework for developing innovative mobile applications that process real-time personal data streams in a context-aware manner. By taking the resource limitations of mobile devices into account, the framework enables efficient processing of real-time sensor data on mobile devices for various use cases.
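
    A minimal sketch of the resource-aware, context-aware idea, assuming a hypothetical battery sensor and a simple downsampling policy; it is not the dissertation's framework.

import random

def battery_level():
    """Stand-in for a real battery sensor; returns a fraction between 0 and 1."""
    return random.uniform(0.0, 1.0)

def process(sample, downsampled=False):
    """Placeholder processing step; a real use case would run feature extraction here."""
    return sample * (0.5 if downsampled else 1.0)

def handle_stream(samples):
    """Check the device context before each sample and degrade gracefully on low battery."""
    results = []
    for sample in samples:
        downsample = battery_level() < 0.2
        results.append(process(sample, downsampled=downsample))
    return results

print(handle_stream([1.0, 2.0, 3.0]))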