6 research outputs found

    A Comprehensive Review of AI Applications in Automated Container Orchestration, Predictive Maintenance, Security and Compliance, Resource Optimization, and Continuous Deployment and Testing

    Get PDF
    Artificial intelligence (AI) is a rapidly growing field, and containerization is one area where it can play a significant role. This review surveys applications of AI in container orchestration, where algorithms increasingly automate predictive maintenance, dynamic resource optimization, and continuous deployment and testing. The benefits include improved performance and efficiency, reduced downtime and failures, and stronger security and compliance. Predictive maintenance is one of the key areas where AI can improve container orchestration: algorithms analyze logs and performance data from containers to predict and prevent failures, identifying and addressing performance issues proactively so that the risk of downtime is reduced and applications keep running at optimal performance. Its benefits include improved reliability and stability, reduced downtime, and better system performance. Dynamic resource optimization enables organizations to allocate resources more efficiently and effectively, improving resource utilization and system performance while reducing waste, although it can be a complex and challenging process. Continuous deployment and testing enable organizations to deploy and test applications quickly, without introducing new bugs or performance regressions; it likewise improves reliability and reduces downtime, but can also be complex and challenging to implement.
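The predictive-maintenance idea above (analyzing container performance data to surface problems before they cause downtime) can be pictured with a minimal sketch. This is our illustration, not a method from the review: the function name, window size, and threshold are all assumptions, and real systems would consume live metrics rather than a list.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Steady container CPU readings followed by a spike that a
# predictive-maintenance pipeline would flag before it escalates.
cpu = [50, 51, 49, 50, 52, 48, 50, 51, 49, 50, 95]
print(detect_anomalies(cpu))  # -> [10]
```

A production system would feed the same logic with metrics scraped from the container runtime and trigger remediation (restart, reschedule, alert) instead of printing indices.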

    KUBERNETES CLUSTER MANAGEMENT FOR CLOUD COMPUTING PLATFORM: A SYSTEMATIC LITERATURE REVIEW

    Get PDF
    Kubernetes is designed to automate the deployment, scaling, and operation of containerized applications. With its scalability features, container automation can be adjusted to the number of concurrent users accessing an application. This research therefore focuses on how Kubernetes is implemented as cluster management on several cloud computing platforms, using a standard literature review method with a manual search of journals and conference proceedings. Of 15 relevant studies, 5 addressed Kubernetes performance and scalability, 7 addressed Kubernetes deployments, 2 compared Kubernetes with other systems, and the rest addressed Kubernetes in IoT. Regarding the cloud computing cluster management challenges that must be overcome with Kubernetes: all configuration and management required for Docker containers must be successfully set up before deploying to the cloud or on-premises, and data from Kubernetes deployments can be leveraged to support capacity planning and to design Kubernetes-based elastic applications.
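The elastic scaling mentioned above is driven in Kubernetes by the Horizontal Pod Autoscaler, whose documented scaling rule (desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric)) can be sketched in a few lines. The helper name below is ours, not part of any reviewed study:

```python
from math import ceil

def desired_replicas(current_replicas, current_metric, target_metric):
    """Replica count per the Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric)."""
    return ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 80% CPU against a 50% target scale out to 7 pods;
# at 25% against the same target, the deployment scales in to 2.
print(desired_replicas(4, 80, 50))  # -> 7
print(desired_replicas(4, 25, 50))  # -> 2
```

The real controller adds damping (tolerance bands, stabilization windows) around this core ratio so that noisy metrics do not cause replica counts to flap.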

    Evaluation of Container Orchestration Systems for Deploying and Managing NoSQL Database Clusters

    No full text
    © 2018 IEEE. Container orchestration systems, such as Docker Swarm, Kubernetes, and Mesos, provide automated support for deployment and management of distributed applications as sets of containers. While these systems were initially designed for running load-balanced stateless services, they have also been used for running database clusters because of improved resilience attributes, such as fast auto-recovery of failed database nodes and location transparency at the level of TCP/IP connections between database instances. In this paper we evaluate the performance overhead of Docker Swarm and Kubernetes for deploying and managing NoSQL database clusters, with MongoDB as the database case study. As the baseline for comparison, we use an OpenStack IaaS cloud, which also allows attaining these improved resilience attributes, although in a less automated manner.
    Location: San Francisco, USA. Keywords: Docker Swarm, Kubernetes, OpenStack IaaS cloud, MongoDB, performance evaluation. Status: published

    Plataforma colaborativa, distribuida, escalable y de bajo costo basada en microservicios, contenedores, dispositivos móviles y servicios en la Nube para tareas de cómputo intensivo

    Get PDF
    When solving compute-intensive tasks in a distributed and parallel manner, x86 hardware resources (CPU/GPU) and specialized infrastructure (Grid, Cluster, Cloud) are typically used to achieve high performance. Early x86 processors, coprocessors, and chips were developed to solve complex problems without regard for their energy consumption. Given the direct impact on costs and the environment, optimizing usage, cooling, and energy expenditure, as well as analyzing alternative architectures, became a primary concern for organizations. As a result, companies and institutions have proposed different architectures to deliver scalability, flexibility, and concurrency. With the goal of proposing an alternative to traditional schemes, this thesis proposes executing processing tasks by reusing the idle capacity of mobile devices. These devices integrate ARM processors which, in contrast to traditional x86 architectures, were developed with energy efficiency as a founding pillar, since they are mostly battery-powered. In recent years, these devices have grown in capacity, efficiency, stability, and power, as well as in ubiquity and market reach, while retaining a low price, small size, and reduced energy consumption. They also have idle periods while charging, which represents significant potential for reuse. To properly manage and exploit these resources, and turn them into a data center for intensive processing, a distributed, collaborative, elastic, low-cost platform was designed, developed, and evaluated, based on an architecture of microservices and containers orchestrated with Kubernetes in cloud and local environments, integrated with DevOps tools, methodologies, and practices.
The microservices paradigm allowed the developed functions to be split into small services with bounded responsibilities. DevOps practices made it possible to build automated processes for test execution, traceability, monitoring, integration of changes, and development of new service versions. Finally, packaging the functions with all their dependencies and libraries into containers helped keep services small, immutable, portable, secure, and standardized, allowing them to run independently of the underlying architecture. Adopting Kubernetes as the container orchestrator allowed services to be managed, deployed, and scaled in an integrated and transparent way, both locally and in the cloud, guaranteeing efficient use of infrastructure, cost, and energy. To validate the system's performance, scalability, energy consumption, and flexibility, various concurrent video-transcoding scenarios were executed. This made it possible to test, on the one hand, the behavior and performance of various mobile and x86 devices under different stress conditions, and on the other, to show how, under a variable task load, the architecture adjusts, flexes, and scales to meet processing needs. The experimental results, across the various performance, load, and saturation scenarios considered, show useful improvements over the study's baseline and that the developed architecture is robust enough to be considered a scalable, economical, and elastic alternative to traditional models. Facultad de Informática