
    Split and Migrate: Resource-Driven Placement and Discovery of Microservices at the Edge

    Microservices architectures combine fine-grained, independently scalable services with lightweight communication protocols, such as REST calls over HTTP. Microservices bring flexibility to the development and deployment of application back-ends in the cloud. Applications such as collaborative editing tools require frequent interactions between the front-end running on users' machines and a back-end formed of multiple microservices. User-perceived latencies depend on users' connections to the microservices, but also on the interaction patterns between these services and their databases. Placing services at the edge of the network, closer to the users, is necessary to reduce user-perceived latencies. It is, however, difficult to place a complete stateful microservice at one specific core or edge location without trading a latency reduction for some users against a latency increase for the others. We present how to dynamically deploy microservices on a combination of core and edge resources to systematically reduce user-perceived latencies. Our approach enables the splitting of stateful microservices and the placement of the resulting splits on appropriate core and edge sites. Koala, a decentralized and resource-driven service discovery middleware, enables REST calls to reach and use the appropriate split with only minimal changes to a legacy microservices application. Locality awareness based on network coordinates further enables service splits to be migrated automatically, following the location of the users. We confirm the effectiveness of our approach with a full prototype and an application to ShareLatex, a microservices-based collaborative editing application.
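
    The abstract describes Koala's locality-aware discovery only at a high level. As a rough illustration, and not Koala's actual interface, the Python sketch below picks, for a given split of a service, the replica whose network coordinate is closest to the user's, using coordinate distance as a proxy for round-trip latency; the SPLITS registry, URLs, and coordinates are all hypothetical.

        import math

        # Hypothetical registry: each split of a stateful microservice maps to
        # the replicas (core or edge) that host it, each annotated with a
        # network coordinate (e.g. from a Vivaldi-style coordinate system).
        SPLITS = {
            "docs:a-m": [
                {"url": "http://core-1.example.org/docs", "coord": (0.0, 0.0)},
                {"url": "http://edge-paris.example.org/docs", "coord": (3.2, 1.1)},
            ],
            "docs:n-z": [
                {"url": "http://core-1.example.org/docs", "coord": (0.0, 0.0)},
                {"url": "http://edge-tokyo.example.org/docs", "coord": (-2.5, 4.0)},
            ],
        }

        def route(split_key, user_coord):
            """Return the URL of the replica of `split_key` whose network
            coordinate is closest to the user, i.e. the lowest-latency copy."""
            replicas = SPLITS[split_key]
            return min(replicas,
                       key=lambda r: math.dist(r["coord"], user_coord))["url"]

        # A user near the Paris edge site is routed to the edge split, while a
        # user close to the core site keeps using the core replica.
        print(route("docs:a-m", (3.0, 1.0)))   # http://edge-paris.example.org/docs
        print(route("docs:a-m", (0.1, 0.2)))   # http://core-1.example.org/docs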

    Scaling Virtualized Smartphone Images in the Cloud

    One of the goals of this Bachelor's thesis was to deploy the Android-x86 smartphone platform in a cloud environment and to find out whether the chosen instance type is sufficient for running a virtualized smartphone platform and how much load it can bear. The work used Amazon's M1 Small instance, which was sufficient to deploy the virtualized Android platform but performed worse than the mobile phone on which the tests were run. The M1 Medium instance type was more suitable and showed better results than the phone. Load tests were carried out with the tool Tsung to see how many concurrent users an instance can withstand. To run the tests we installed a Tomcat server on the Dalvik instance. After the single-instance tests we attached Elastic Load Balancing and the Amazon Auto Scaling tool. The former distributed the load between instances; we used the auto-scaling tool to apply horizontal scaling to our Android-x86 instances. When CPU usage stayed above 60% for longer than one minute, an instance identical to the previous one was created and subsequent load was sent to it, repeating this procedure as needed up to a maximum of ten instances. Our implementation had setbacks, because the Elastic Load Balancer timed out after 60 seconds and we did not receive responses to all of the requests sent out: writing and compiling a file sent to the server were costly operations, so not all of them finished within 60 seconds. The tests run with the Load Balancer did not yield enough data to conclude whether the virtualized Android smartphone platform scales well or poorly.

    In this thesis we deployed a smartphone image on an Amazon EC2 instance and ran stress tests on it to determine how many users one instance can bear and how scalable it is. We measured how long a method takes to run on a physical Android device and on a cloud instance. We deployed CyanogenMod and Dalvik on a single instance and used Tsung for stress testing. For those tests we also set up a Tomcat server on the Dalvik instance that accepted an incoming file; the file was compiled with Java and its class file was wrapped into dex, a Dalvik executable format, which was then executed with Dalvik. Three instances formed a Tsung cluster that sent load to a Dalvik Virtual Machine instance. For scaling we used the Amazon Auto Scaling tool and an Elastic Load Balancer that divided the incoming load between the instances.
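
    The scaling rule described above (clone an instance when CPU stays above 60% for more than a minute, up to ten instances) was realized with Amazon Auto Scaling in the thesis; the Python sketch below merely restates that policy as a control loop, with get_cpu and launch_clone standing in for the provider's monitoring and launch APIs, which are not part of the original work.

        import time

        MAX_INSTANCES = 10       # upper bound used in the thesis
        CPU_THRESHOLD = 60.0     # per cent CPU utilization
        BREACH_DURATION = 60.0   # seconds the threshold must be exceeded

        def scale_loop(instances, get_cpu, launch_clone, poll_interval=5.0):
            # `instances` is the current fleet; `get_cpu` and `launch_clone`
            # are placeholders for the cloud provider's monitoring/launch APIs.
            breach_start = None
            while True:
                if get_cpu(instances) > CPU_THRESHOLD:
                    if breach_start is None:
                        breach_start = time.monotonic()
                    elif (time.monotonic() - breach_start > BREACH_DURATION
                          and len(instances) < MAX_INSTANCES):
                        # Launch an instance identical to the previous one and
                        # start a fresh measurement window.
                        instances.append(launch_clone(instances[-1]))
                        breach_start = None
                else:
                    breach_start = None
                time.sleep(poll_interval)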

    Orchestrating Service Migration for Low Power MEC-Enabled IoT Devices

    Multi-Access Edge Computing (MEC) is a key enabling technology for Fifth Generation (5G) mobile networks. MEC provides distributed cloud computing capabilities and an information technology service environment for applications and services at the edges of mobile networks. This architectural modification serves to reduce congestion and latency and to improve the performance of edge-colocated applications and devices. In this paper, we demonstrate how reactive service migration can be orchestrated for low-power MEC-enabled Internet of Things (IoT) devices. We use open-source Kubernetes as the container orchestration system. Our demo is based on a traditional client-server system reaching from the user equipment (UE) over Long Term Evolution (LTE) to the MEC server. As the use case scenario, we post-process live video received over web real-time communication (WebRTC). We then integrate Kubernetes orchestration with S1 handovers, demonstrating a MEC-based software-defined network (SDN): edge applications can reactively follow the UE within the radio access network (RAN), preserving low latency. The collected data is used to analyze the benefits of the low-power MEC-enabled IoT device scheme, in which the end-to-end (E2E) latency and power requirements of the UE are improved. We further discuss the challenges of implementing such schemes and future research directions.
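
    The abstract does not detail the migration mechanism. As a minimal sketch of how such reactive migration could be expressed, the Python snippet below uses the official Kubernetes client to re-pin a Deployment's nodeSelector to the edge site serving the UE's new cell after an S1 handover, letting the scheduler move the pod there; the label key, cell-to-node mapping, and deployment name are assumptions, not the paper's implementation.

        from kubernetes import client, config

        # Hypothetical mapping from eNodeB/cell identifiers to MEC node labels.
        CELL_TO_NODE = {"cell-17": "edge-site-a", "cell-42": "edge-site-b"}

        def migrate_on_handover(deployment, namespace, target_cell):
            """Patch the deployment's nodeSelector so Kubernetes reschedules
            the edge application onto the node serving the UE's new cell."""
            config.load_kube_config()   # or load_incluster_config()
            apps = client.AppsV1Api()
            patch = {"spec": {"template": {"spec": {
                "nodeSelector": {"mec.example.org/site": CELL_TO_NODE[target_cell]}
            }}}}
            apps.patch_namespaced_deployment(name=deployment,
                                             namespace=namespace,
                                             body=patch)

        # e.g. invoked by an SDN controller callback on UE handover:
        # migrate_on_handover("webrtc-postprocessor", "default", "cell-42")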