Machine Learning Interference Modelling for Cloud-native Applications

Abstract

Modern cloud-native applications follow microservice architecture patterns, in which fine-grained software components are deployed in lightweight containers that run inside cloud virtual machines. To utilize resources more efficiently, containers belonging to different applications are often co-located on the same virtual machine. Co-location can result in software performance degradation due to interference among components competing for resources. In this thesis, we propose techniques to detect and model performance interference. To detect interference at runtime, we train Machine Learning (ML) models prior to deployment using interfering benchmarks and show that the models generalize to detect runtime interference from different types of applications. Experimental results in public clouds show that our approach outperforms existing interference detection techniques by 1.35%-66.69%. To quantify the interference impact, we further propose an ML interference quantification technique. The technique constructs ML models for response time prediction and can dynamically account for changing runtime conditions through the use of a sliding window method. Our technique outperforms baseline and competing techniques by 1.45%-92.04%. These contributions can be beneficial to software architects and software operators when designing, deploying, and operating cloud-native applications.
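
As a rough illustration of the sliding-window idea mentioned in the abstract, the sketch below keeps only the most recent runtime observations and periodically refits a regression model on them so that response time predictions track changing conditions. The metric names, window size, and choice of a random-forest regressor are illustrative assumptions, not the configuration used in the thesis.

```python
# Minimal sketch (not the thesis implementation) of sliding-window
# response time prediction: retain the most recent observations and
# periodically refit a regression model on that window.
from collections import deque

import numpy as np
from sklearn.ensemble import RandomForestRegressor

WINDOW_SIZE = 500  # number of most recent samples kept for training (assumed)

window_X = deque(maxlen=WINDOW_SIZE)  # recent resource-metric vectors
window_y = deque(maxlen=WINDOW_SIZE)  # corresponding response times (ms)
model = RandomForestRegressor(n_estimators=100)


def observe(metrics: dict, response_time_ms: float) -> None:
    """Record one runtime observation (metric names are hypothetical)."""
    features = [metrics["cpu_util"], metrics["mem_util"],
                metrics["net_io"], metrics["disk_io"]]
    window_X.append(features)
    window_y.append(response_time_ms)


def retrain() -> None:
    """Refit the model on the current window of observations."""
    if len(window_y) >= 50:  # wait for a minimally useful sample size
        model.fit(np.array(window_X), np.array(window_y))


def predict(metrics: dict) -> float:
    """Predict response time for the current resource metrics."""
    features = [[metrics["cpu_util"], metrics["mem_util"],
                 metrics["net_io"], metrics["disk_io"]]]
    return float(model.predict(features)[0])
```

In use, `observe` would be called on each monitoring sample and `retrain` on a periodic schedule, so older samples age out of the window and the model adapts to the current interference level.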
