
    ATOM: model-driven autoscaling for microservices

    Microservices based architectures are increasingly widespread in the cloud software industry. Still, there is a shortage of auto-scaling methods designed to leverage the unique features of these architectures, such as the ability to independently scale a subset of microservices, as well as the ease of monitoring their state and reciprocal calls. We propose to address this shortage with ATOM, a model-driven autoscaling controller for microservices. ATOM instantiates and solves at run-time a layered queueing network model of the application. Computational optimization is used to dynamically control the number of replicas for each microservice and its associated container CPU share, overall achieving a fine-grained control of the application capacity at run-time. Experimental results indicate that for heavy workloads ATOM offers around 30%-37% higher throughput than baseline model-agnostic controllers based on simple static rules. We also find that model-driven reasoning reduces the number of actions needed to scale the system, as it reduces the number of bottleneck shifts that we observe with model-agnostic controllers.
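    The abstract describes the control idea only at a high level: solve a performance model at run-time, then optimize replica counts and container CPU shares against it. The sketch below is a minimal illustration of that loop under simplifying assumptions, not ATOM's implementation: it replaces the layered queueing network with a crude single-tier capacity formula, and the function names, metrics, and parameter values (predicted_throughput, choose_configuration, service_demand, cpu_budget) are hypothetical.

    ```python
    import itertools

    def predicted_throughput(arrival_rate, replicas, cpu_share, service_demand=0.02):
        """Very rough stand-in for a queueing-model solver: each replica's
        effective service rate scales with its container CPU share, and the
        tier's throughput saturates near its aggregate capacity."""
        per_replica_rate = cpu_share / service_demand
        capacity = replicas * per_replica_rate
        return min(arrival_rate, 0.95 * capacity)

    def choose_configuration(arrival_rate, max_replicas=10,
                             cpu_shares=(0.25, 0.5, 1.0), cpu_budget=8.0):
        """Enumerate (replicas, cpu_share) pairs within the CPU budget and keep
        the cheapest one that maximizes predicted throughput."""
        best = None
        for replicas, share in itertools.product(range(1, max_replicas + 1), cpu_shares):
            cost = replicas * share
            if cost > cpu_budget:
                continue
            tput = predicted_throughput(arrival_rate, replicas, share)
            key = (tput, -cost)  # prefer higher throughput, then lower cost
            if best is None or key > best[0]:
                best = (key, replicas, share)
        _, replicas, share = best
        return replicas, share

    if __name__ == "__main__":
        # One step of the control loop: observe load, re-solve, apply new config.
        replicas, share = choose_configuration(arrival_rate=120.0)
        print(f"scale to {replicas} replicas at {share} CPU share each")
    ```

    A real model-driven controller would re-run this decision step whenever monitoring detects a workload change, and would feed the model with measured service demands rather than constants.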

    A deep recurrent Q network towards self-adapting distributed microservice architecture

    One desired aspect of microservice architecture is the ability to self-adapt its own architecture and behavior in response to changes in the operational environment. To achieve the desired high levels of self-adaptability, this research implements a distributed microservice architecture model running on a swarm cluster, as informed by the Monitor, Analyze, Plan, and Execute over a shared Knowledge (MAPE-K) model. The proposed architecture employs multiple adaptation agents supported by a centralized controller, which can observe the environment and execute a suitable adaptation action. The adaptation planning is managed by a deep recurrent Q-learning network (DRQN). It is argued that such integration between DRQN and Markov decision process (MDP) agents in a MAPE-K model offers distributed microservice architecture with self-adaptability and high levels of availability and scalability. Integrating DRQN into the adaptation process improves the effectiveness of the adaptation and reduces adaptation risks, including resource overprovisioning and thrashing. The performance of DRQN is evaluated against deep Q-learning and policy gradient algorithms, including (1) a deep Q-learning network (DQN), (2) a dueling DQN (DDQN), (3) a policy gradient neural network, and (4) deep deterministic policy gradient. The DRQN implementation in this paper outperforms the aforementioned algorithms, achieving a higher total reward, shorter adaptation time, lower error rates, and faster convergence and training. We argue that DRQN is more suitable for driving the adaptation in a distributed service-oriented architecture and offers better performance than other dynamic decision-making algorithms.
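    To make the DRQN-in-MAPE-K idea concrete, here is a minimal sketch of the Plan phase only, assuming PyTorch: an LSTM over a recent window of monitored metrics feeds a linear head that scores candidate adaptation actions, and the planner picks one epsilon-greedily. The action set, metric window, and all names are hypothetical, and training machinery (experience replay, target network, reward shaping) is omitted entirely.

    ```python
    import random
    import torch
    import torch.nn as nn

    # Hypothetical adaptation actions a MAPE-K planner might issue for a service.
    ACTIONS = ["no_op", "add_replica", "remove_replica", "migrate_container"]

    class DRQN(nn.Module):
        """Recurrent Q-network: an LSTM over the recent window of monitored
        metrics, followed by a linear head scoring each adaptation action."""
        def __init__(self, n_metrics=4, hidden=64, n_actions=len(ACTIONS)):
            super().__init__()
            self.lstm = nn.LSTM(n_metrics, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_actions)

        def forward(self, metric_window):
            # metric_window: (batch, time, n_metrics)
            out, _ = self.lstm(metric_window)
            return self.head(out[:, -1])  # Q-values from the last hidden state

    def plan(q_net, metric_window, epsilon=0.1):
        """Plan phase of MAPE-K: epsilon-greedy action choice from Q-values."""
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        with torch.no_grad():
            q_values = q_net(metric_window.unsqueeze(0))
        return ACTIONS[int(q_values.argmax())]

    if __name__ == "__main__":
        q_net = DRQN()
        # Monitor: last 8 samples of (cpu_util, mem_util, latency, request_rate), normalized.
        window = torch.rand(8, 4)
        # Analyze + Plan: pick an adaptation; Execute would apply it to the cluster.
        print("planned adaptation:", plan(q_net, window))
    ```

    The recurrent layer is what distinguishes a DRQN from a plain DQN: by conditioning on a sequence of observations rather than a single snapshot, the agent can act under partial observability of the cluster state.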