279 research outputs found

    DeepScaler: Holistic Autoscaling for Microservices Based on Spatiotemporal GNN with Adaptive Graph Learning

    Autoscaling provides the foundation for elasticity in the modern cloud computing paradigm: it dynamically provisions or de-provisions resources for cloud software services and applications, without human intervention, to adapt to workload fluctuations. However, autoscaling microservices is challenging due to various factors. In particular, complex, time-varying service dependencies are difficult to quantify accurately and can lead to cascading effects when allocating resources. This paper presents DeepScaler, a deep-learning-based holistic autoscaling approach for microservices that focuses on coping with service dependencies to optimize service-level agreement (SLA) assurance and cost efficiency. DeepScaler employs (i) an expectation-maximization-based learning method to adaptively generate affinity matrices revealing service dependencies and (ii) an attention-based graph convolutional network to extract spatio-temporal features of microservices by aggregating neighbors' information in the graph-structured data. DeepScaler can thus capture more potential service dependencies and accurately estimate the resource requirements of all services under dynamic workloads. This allows it to reconfigure the resources of interacting services simultaneously in a single provisioning operation, avoiding the cascading effects caused by service dependencies. Experimental results demonstrate that our method implements a more effective autoscaling mechanism for microservices that not only allocates resources accurately but also adapts to dependency changes, significantly reducing SLA violations by an average of 41% at lower cost.
    Comment: To be published in the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE 2023).
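    The core aggregation idea, attention weights derived from a learned affinity matrix deciding how much each neighbour's state contributes to a service's representation, can be sketched in a few lines. This is a minimal plain-Python illustration of a graph-attention-style aggregation step, not DeepScaler's actual implementation; the function name, feature vectors, and affinity values are hypothetical, and each row of the affinity matrix is assumed to contain a positive self-loop.

```python
import math

def attention_aggregate(features, affinity):
    """One attention-weighted neighbour-aggregation step over a
    service-dependency graph, in the spirit of a graph attention layer.

    features: per-service feature vectors, e.g. recent CPU/memory readings
    affinity: N x N matrix; affinity[i][j] > 0 means service j is a learned
              dependency of service i (rows are assumed to include a
              positive self-loop affinity[i][i])
    Returns new per-service features as convex combinations of neighbours.
    """
    n = len(features)
    out = []
    for i in range(n):
        # Mask out non-neighbours, then softmax the remaining affinities
        # to obtain attention weights over the neighbourhood.
        scores = [affinity[i][j] if affinity[i][j] > 0 else float("-inf")
                  for j in range(n)]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Aggregate: weighted sum of neighbour features.
        dim = len(features[i])
        out.append([sum(weights[j] * features[j][d] for j in range(n))
                    for d in range(dim)])
    return out
```

    A full model would stack such layers with learned transformations and feed the result to a resource estimator; the sketch only shows why an accurate affinity matrix matters: services with zero affinity contribute nothing to each other's estimates.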

    Scalable Software Platform Architecture for the Power Distribution Protection and Analysis

    This thesis explores the benefits of microservice architecture over a traditional monolithic application architecture and traditional environments for deploying software to the cloud or the edge. A microservice architecture consists of multiple services, each serving a single purpose, with every separate function of the application stored in its own container. Containers are isolated environments based on the Linux kernel. This thesis was done for ABB (ASEA Brown Boveri) Distribution Solutions to modernize one of their existing applications. The main goal of this thesis is to describe the transition from a monolithic application architecture to a microservice architecture. However, during the case study we encountered problems that prevented us from completing the project, the most significant of which was the monolithic application's high degree of dependence between different parts of the program. The end result of the project was to be proof-of-concept-level software, which we could not achieve. We used design science as a methodology to guide our decision-making, choosing Action Design Research (ADR) because we found it supported interactive work; this fit our situation very well, as we were conducting this research daily at ABB's office. Design science primarily aims at the end result, which in our case would have been transplanting the old application into the new architecture. One of our most important results is that we were able to identify critical issues that need to be addressed before moving from a monolithic to a microservice architecture. These findings included technological debt accumulated over the years, incomplete knowledge of the legacy application, and internal system dependencies; these dependencies represent a significant challenge in restructuring the monolith into a microservice architecture.
    As a fourth finding, the available resources, such as time, experts, and funding, must be sufficient to produce an appropriate result. As a theoretical contribution, we produced our own version of the Action Design Research method: we combined its first two steps so that while the customer organization was defining the problem, our research team proposed solutions, from which the client organization chose the one that suited it best. This process was possible because we had an open and continuing discussion with ABB's development unit.

    Towards an Automatic Microservices Manager for Hybrid Cloud Edge Environments

    Cloud computing made computing resources easier to access, enabling faster deployment of applications and services that benefit from the scalability provided by service providers. The volume of data received by the cloud has grown exponentially, because almost every device used in everyday life is connected to the internet and shares information on a global scale (e.g. smartwatches, clocks, cars, industrial equipment). Increasing data volume results in increased latency for client applications, degrading the quality of service (QoS). These problems gave rise to hybrid systems, which integrate cloud resources with the various edge devices between the cloud and the edge: Fog/Edge computing. These devices are very heterogeneous, with different resource capabilities (such as memory and computational power), and geographically distributed. Software architectures also evolved, and the microservice architecture emerged to make application development more flexible and to increase scalability. The microservice architecture consists of decomposing monolithic applications into small services, each with a specific functionality, that can be independently developed, deployed, and scaled. Due to their small size, microservices are adequate for deployment on hybrid Cloud/Edge infrastructures. However, the heterogeneity of those deployment locations makes microservices' management and monitoring rather complex. Monitoring, in particular, is essential when considering that microservices may be replicated and migrated across the cloud/edge infrastructure. The main problem this dissertation aims to contribute to is building an automatic microservices management system that can be deployed on hybrid cloud/fog infrastructures.
    Such an automatic system will allow edge-enabled applications to adapt their deployment at runtime in response to variations in workloads and available computational resources. Towards this end, this work is a first step in integrating two existing projects that, combined, may support such an automatic system. One project does automatic management of microservices but uses only a heavyweight monitor, Prometheus, as a cloud monitor; the second project is a light adaptive monitor. This thesis integrates the light monitor into the automatic manager of microservices.

    Serverless Strategies and Tools in the Cloud Computing Continuum

    Thesis by compendium. In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity led to new services that solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model. FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract the management of servers from the users, allowing them to focus their efforts solely on the development of applications.
    The problem with FaaS is that it focuses on microservices and tends to have limitations regarding execution time and computing capabilities (e.g. lack of support for acceleration hardware such as GPUs). However, it has been demonstrated that the self-provisioning capability and high degree of parallelism of these services can be well suited to broader applications. In addition, their inherent event-driven triggering makes functions perfectly suitable to be defined as steps in file-processing workflows (e.g. scientific computing workflows). Furthermore, the rise of smart and embedded devices (IoT), innovations in communication networks, and the need to reduce latency in challenging use cases have led to the concept of Edge computing, which consists of conducting the processing on devices close to the data sources to improve response times. The coupling of this paradigm with Cloud computing, involving architectures with devices at different levels depending on their proximity to the source and their compute capability, has been coined the Cloud Computing Continuum (or Computing Continuum). This PhD thesis therefore aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved.
    Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013

    An elastic software architecture for extreme-scale big data analytics

    This chapter describes a software architecture for big-data analytics that considers the complete compute continuum, from the edge to the cloud. The new generation of smart systems requires processing a vast amount of diverse information from distributed data sources. The software architecture presented in this chapter addresses two main challenges. On the one hand, a new elasticity concept enables smart systems to satisfy the performance requirements of extreme-scale analytics workloads. By extending the elasticity concept (known on the cloud side) across the compute continuum in a fog computing environment, combined with the use of advanced heterogeneous hardware architectures on the edge side, the capabilities of extreme-scale analytics can increase significantly, integrating both responsive data-in-motion and latent data-at-rest analytics into a single solution. On the other hand, the software architecture also focuses on fulfilling the non-functional properties inherited from smart systems, such as real-time behaviour, energy efficiency, communication quality, and security, which are of paramount importance for many application domains such as smart cities, smart mobility, and smart manufacturing. The research leading to these results has received funding from the European Union's Horizon 2020 Programme under the ELASTIC Project (www.elastic-project.eu), grant agreement No 825473.

    Resource management in a containerized cloud : status and challenges

    Cloud computing heavily relies on virtualization, as virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is no longer limited to centrally hosted data center infrastructure: new deployment models such as fog and mobile edge computing have gained maturity, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art in resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to the recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.
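    The kind of VM-era allocation strategy the survey says must be adapted for containers can be illustrated with a classic placement heuristic. The sketch below is a generic first-fit-decreasing bin-packing example in plain Python (demands in hypothetical milli-CPU units), not a strategy proposed by the survey.

```python
def first_fit_decreasing(demands, node_capacity):
    """Place container resource demands (e.g. milli-CPU) onto nodes of a
    fixed capacity using the first-fit-decreasing heuristic: sort demands
    in descending order, put each one on the first node with enough
    remaining capacity, and open a new node when none fits.

    Returns a list of nodes, each a list of the demands placed on it.
    """
    nodes = []  # each entry: [remaining_capacity, [placed demands]]
    for demand in sorted(demands, reverse=True):
        for node in nodes:
            if node[0] >= demand:
                node[0] -= demand
                node[1].append(demand)
                break
        else:  # no existing node fits: open a new one
            nodes.append([node_capacity - demand, [demand]])
    return [placed for _, placed in nodes]
```

    For example, demands of 700, 500, 400, 300, and 100 milli-CPU pack onto two 1000 milli-CPU nodes. Container-specific strategies additionally weigh factors such as startup latency, image locality, and migration cost, which is where the adaptations the survey discusses come in.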

    Microservice Architecture for a Remote Management Platform for Pastured Poultry Farming Using Amazon Web Services and Wireless Mesh Sensor Networks

    Introduction: A variety of innovative solutions known as Precision Livestock Farming (PLF) technologies have been developed for the management of animal production industries, including Wireless Sensor Networks (WSN) for poultry farming. Problem: Current WSN-based systems for poultry farming lack robust yet flexible software architectures that ensure the integrity and proper delivery of data. Objective: To design a microservice-based software architecture (MSA) for a multi-platform remote environmental management system based on Wireless Mesh Sensor Networks (WMSN), to be deployed in pastured poultry farming spaces. Methodology: A review of MSAs designed for animal farming was conducted to synthesize the key factors considered in the design of the system data flow, the microservice definitions, and the technology selection for the environmental monitoring system. Results: A multi-layered cloud MSA using the Amazon Web Services (AWS) platform was developed, validating the persistence of environmental data transmitted from WMSN prototype nodes to be deployed in mobile chicken coops. Conclusion: Defining an end-to-end data flow facilitates the organization of tasks by domain, allowing efficient event communication between components and network reliability at both the hardware and software levels. Originality: This study presents a novel design for a WMSN-based remote environmental monitoring system for mobile coops used in pastured poultry farming, together with a multi-layered cloud MSA management platform for this specific type of food production industry. Limitations: The software architecture technology selection was based only on services offered, at the date of the study, in the free tier of the Amazon Web Services platform.

    Modular architecture providing convergent and ubiquitous intelligent connectivity for networks beyond 2030

    The transition of networks to support forthcoming beyond-5G (B5G) and 6G services introduces a number of important architectural challenges that force an evolution of existing operational frameworks. Current networks have introduced technical paradigms such as network virtualization, programmability, and slicing, a trend known as network softwarization. Forthcoming B5G and 6G services, imposing stringent requirements, will motivate a new radical change, augmenting those paradigms with the idea of smartness and pursuing an overall optimization of the usage of network and compute resources in a zero-trust environment. This paper presents a modular architecture under the concept of Convergent and UBiquitous Intelligent Connectivity (CUBIC), conceived to facilitate the aforementioned transition. CUBIC intends to investigate, and innovate on, the usage, combination, and development of novel technologies to accompany the migration of existing networks towards CUBIC solutions, leveraging Artificial Intelligence (AI) mechanisms and Machine Learning (ML) tools in a totally secure environment.