Component-aware Orchestration of Cloud-based Enterprise Applications, from TOSCA to Docker and Kubernetes
Enterprise IT is currently facing the challenge of coordinating the
management of complex, multi-component applications across heterogeneous cloud
platforms. Containers and container orchestrators provide a valuable solution
to deploy multi-component applications over cloud platforms, by coupling the
lifecycle of each application component to that of its hosting container. We
hereby propose a solution for going beyond such a coupling, based on the OASIS
standard TOSCA and on Docker. Specifically, we propose a novel approach for
deploying multi-component applications on top of existing container
orchestrators, which makes it possible to manage each component independently
of the container used to run it. We also present prototype tools implementing
our approach, and we show how we exploited them to carry out a concrete case
study.
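The dependency-driven part of such component-aware orchestration can be sketched as a topological ordering of components. This is a minimal illustration under invented names (the topology below is not the paper's case study, and the tool's actual mechanism is not shown in the abstract):

```python
# Hypothetical sketch: derive a deployment order for a multi-component
# application from its dependencies, so that each component can be managed
# as its own entity rather than being tied to one container's lifecycle.

def deployment_order(dependencies):
    """Topologically sort components; `dependencies` maps a component to
    the components that must be running before it."""
    order, visiting, done = [], set(), set()

    def visit(node):
        if node in done:
            return
        if node in visiting:
            raise ValueError(f"dependency cycle at {node}")
        visiting.add(node)
        for dep in dependencies.get(node, ()):
            visit(dep)
        visiting.discard(node)
        done.add(node)
        order.append(node)

    for node in dependencies:
        visit(node)
    return order

# Example topology: the API runs in a Docker container but is modeled as a
# separate component, hosted on the container and connected to the database.
app = {
    "api": ["api_container", "database"],
    "api_container": [],
    "database": ["db_container"],
    "db_container": [],
}
```

Because the application component ("api") and its hosting container ("api_container") are distinct nodes, the orchestrator can, for example, reconfigure the component without recreating its container.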
Characterizing and providing interoperability to function as a service platforms
Dissertation for the degree of Master in Informatics and Computer Engineering (Engenharia Informática e de Computadores).
Serverless computing hides infrastructure management from developers and runs code on demand, automatically scaled and billed only for the code's execution time. One of the most popular serverless backend services is Function-as-a-Service (FaaS), in which developers are often confronted with cloud-specific requirements. Function signature requirements and the usage of custom libraries unique to a cloud provider were identified as the two main causes of portability issues in FaaS applications. This reduced control over the infrastructure and tight coupling with cloud services amplify various vendor lock-in problems.
In this work, we introduce QuickFaaS, a multi-cloud interoperability desktop tool targeting cloud-agnostic function development and FaaS deployments. QuickFaaS substantially improves developers' productivity, flexibility and agility when creating serverless solutions for multiple cloud providers, without requiring the installation of extra software. The proposed cloud-agnostic approach enables developers to reuse their serverless functions across different cloud providers with no need to rewrite code. The solution aims to minimize vendor lock-in in FaaS platforms by increasing the portability of serverless functions, thereby encouraging developers and organizations to target different providers in exchange for a functional benefit.
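The cloud-agnostic idea can be illustrated with a short sketch: the business function is written once with a provider-neutral signature, and thin adapters bridge provider-specific calling conventions. All names here are assumptions for illustration; the abstract does not describe QuickFaaS's actual wrapping mechanism:

```python
# Illustrative sketch (hypothetical adapter names): write the business
# logic once, then wrap it per provider so the same function can target
# different FaaS platforms without being rewritten.

def greet(payload):
    """Cloud-agnostic function: plain dict in, plain dict out."""
    return {"message": f"Hello, {payload.get('name', 'world')}!"}

def aws_lambda_handler(event, context):
    """Adapter matching the AWS Lambda-style (event, context) signature."""
    return greet(event)

def azure_style_handler(req):
    """Adapter for an Azure-Functions-style request object (assumed here
    to expose a .get_json() method)."""
    return greet(req.get_json())
```

Only the adapters are provider-specific; `greet` itself contains no cloud SDK imports, which is what makes it portable.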
Orchestrator conversation: distributed management of cloud applications
Managing cloud applications is complex, and the current state of the art does not address this issue. The ever-growing software ecosystem continues to increase the knowledge required to manage cloud applications at a time when there is already an IT skills shortage. Solving this issue requires capturing IT operations knowledge in software so that this knowledge can be reused by system administrators who do not have it. The presented research tackles this issue by introducing a new and fundamentally different way to approach cloud application management: a hierarchical collection of independent software agents, collectively managing the cloud application. Each agent encapsulates knowledge of how to manage specific parts of the cloud application, is driven by sending and receiving cloud models, and collaborates with other agents by communicating using conversations. The entirety of communication and collaboration in this collection is called the orchestrator conversation. A thorough evaluation shows that the orchestrator conversation makes it possible to encapsulate IT operations knowledge that current solutions cannot, reduces the complexity of managing a cloud application, and is inherently concurrent. The evaluation also shows that the conversation figures out how to deploy a single big data cluster in less than 100 milliseconds, which scales linearly to less than 10 seconds for 100 clusters, resulting in a minimal overhead compared with the deployment time of at least 20 minutes with the state of the art.
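A minimal sketch of the agent-collaboration idea follows; every class, agent, and model name is invented, and this compresses one "conversation" round into a single function call, whereas the thesis describes a richer, message-driven protocol:

```python
# Invented-name sketch: independent agents each manage one part of a cloud
# application and collaborate by exchanging model fragments, rather than
# one monolithic orchestrator holding all the knowledge.

class Agent:
    def __init__(self, name, handles):
        self.name = name
        self.handles = handles  # component types this agent knows how to manage

    def propose(self, desired):
        # Return the part of the desired model this agent can realize.
        return {c: spec for c, spec in desired.items()
                if spec["type"] in self.handles}

def orchestrate(desired_model, agents):
    """One conversation round: each agent claims the components it can
    manage; the union of claims must cover the whole desired model."""
    plan = {}
    for agent in agents:
        for comp, spec in agent.propose(desired_model).items():
            plan.setdefault(comp, agent.name)
    unmanaged = set(desired_model) - set(plan)
    if unmanaged:
        raise RuntimeError(f"no agent for: {sorted(unmanaged)}")
    return plan

agents = [Agent("compute-agent", {"vm"}), Agent("data-agent", {"database"})]
model = {"web": {"type": "vm"}, "db": {"type": "database"}}
```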
BPMN4sML: A BPMN Extension for Serverless Machine Learning. Technology Independent and Interoperable Modeling of Machine Learning Workflows and their Serverless Deployment Orchestration
Machine learning (ML) continues to permeate all layers of academia, industry
and society. Despite its successes, mental frameworks to capture and represent
machine learning workflows in a consistent and coherent manner are lacking. For
instance, the de facto process modeling standard, Business Process Model and
Notation (BPMN), managed by the Object Management Group, is widely accepted and
applied. However, it is short of specific support to represent machine learning
workflows. Further, the number of heterogeneous tools for deployment of machine
learning solutions can easily overwhelm practitioners. Research is needed to
align the process from modeling to deploying ML workflows.
We analyze requirements for standards-based conceptual modeling of machine
learning workflows and their serverless deployment. Confronting the
shortcomings with respect to consistent and coherent modeling of ML workflows
in a technology-independent and interoperable manner, we extend BPMN's
Meta-Object Facility (MOF) metamodel and the corresponding notation and
introduce BPMN4sML (BPMN for serverless machine learning). Our extension
BPMN4sML follows the same outline referenced by the Object Management Group
(OMG) for BPMN. We further address the heterogeneity in deployment by proposing
a conceptual mapping to convert BPMN4sML models to corresponding deployment
models using TOSCA.
BPMN4sML allows technology-independent and interoperable modeling of machine
learning workflows of various granularity and complexity across the entire
machine learning lifecycle. It aids in arriving at a shared and standardized
language to communicate ML solutions. Moreover, it takes the first steps toward
enabling conversion of ML workflow model diagrams to corresponding deployment
models for serverless deployment via TOSCA. (Comment: 105 pages, 3 tables, 33 figures.)
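The model-to-deployment conversion can be sketched as a lookup from workflow task kinds to deployment node templates. The task kinds and node types below are illustrative assumptions, not BPMN4sML's actual metamodel or TOSCA type names:

```python
# Hypothetical sketch of the mapping idea: each machine-learning task in a
# workflow model becomes a node template in a TOSCA-like deployment model.

TASK_TO_NODE_TYPE = {
    "DataPreparation": "serverless.Function",
    "ModelTraining": "serverless.Function",
    "ModelServing": "serverless.Endpoint",
}

def workflow_to_deployment(tasks):
    """Convert an ordered list of (name, task_kind) pairs into a minimal
    deployment model with one node template per task."""
    node_templates = {}
    for name, kind in tasks:
        node_type = TASK_TO_NODE_TYPE.get(kind)
        if node_type is None:
            raise ValueError(f"no mapping for task kind {kind!r}")
        node_templates[name] = {"type": node_type}
    return {"topology_template": {"node_templates": node_templates}}
```

A real mapping would also carry over data-flow edges as relationship templates; only the node-level translation is shown here.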
Investigations into Elasticity in Cloud Computing
The pay-as-you-go model supported by existing cloud infrastructure providers
is appealing to most application service providers to deliver their
applications in the cloud. Within this context, elasticity of applications has
become one of the most important features in cloud computing. This elasticity
enables real-time acquisition/release of compute resources to meet application
performance demands. In this thesis we investigate the problem of delivering
cost-effective elasticity services for cloud applications.
Traditionally, application-level elasticity addresses the question of how
to scale applications up and down to meet their performance requirements, but
does not adequately address issues relating to minimising the costs of using
the service. With this current limitation in mind, we propose a scaling
approach that makes use of cost-aware criteria to detect the bottlenecks within
multi-tier cloud applications, and scale these applications only at bottleneck
tiers to reduce the costs incurred by consuming cloud infrastructure resources.
Our approach is generic for a wide class of multi-tier applications, and we
demonstrate its effectiveness by studying the behaviour of an example
electronic commerce site application.
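The cost-aware scaling criterion can be sketched as follows; the threshold, costs, and tie-breaking rule are invented for illustration and are not the thesis's actual detection criteria:

```python
# Sketch of the cost-aware idea: find the bottleneck tier of a multi-tier
# application and scale only that tier, instead of scaling every tier and
# paying for resources the non-bottleneck tiers do not need.

def bottleneck_tier(tiers):
    """`tiers` maps tier name -> (utilization in [0, 1], cost per instance).
    The bottleneck is the most utilized tier above the scaling threshold;
    ties are broken in favor of the cheaper tier to scale."""
    threshold = 0.8  # assumed scaling trigger, illustrative only
    candidates = [(util, cost, name)
                  for name, (util, cost) in tiers.items() if util > threshold]
    if not candidates:
        return None  # no tier currently needs scaling
    # Highest utilization first, then lowest per-instance cost.
    candidates.sort(key=lambda t: (-t[0], t[1]))
    return candidates[0][2]

# Example three-tier application: only "app" would be scaled, even though
# "web" is also busy, because "app" is the tightest bottleneck.
tiers = {"web": (0.85, 0.05), "app": (0.95, 0.20), "db": (0.60, 0.40)}
```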
Furthermore, we consider the characteristics of the algorithm for
implementing the business logic of cloud applications, and investigate the
elasticity at the algorithm level: when dealing with large-scale data under
resource and time constraints, the algorithm's output should be elastic with
respect to the resources consumed. We propose a novel framework to guide the
development of elastic algorithms that adapt to the available budget while
guaranteeing that the quality of the output result, e.g. prediction accuracy
for classification tasks, improves monotonically with the budget used.
(Comment: 211 pages, 27 tables, 75 figures.)
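The algorithm-level property can be illustrated with any iterative refinement whose accuracy only improves as more budget is spent. The example below (Newton iteration for a square root) is a stand-in chosen for its deterministic monotone convergence, not an algorithm from the thesis:

```python
# Illustration of an "elastic algorithm": output quality improves
# monotonically with the budget (here, the number of iterations), and a
# usable partial result exists at every budget level.

def elastic_sqrt(x, budget):
    """Approximate sqrt(x) with `budget` Newton iterations. Starting from
    a guess >= sqrt(x), each extra iteration can only tighten the result,
    so accuracy is monotone in the budget spent."""
    if x < 0:
        raise ValueError("x must be non-negative")
    guess = x if x > 1 else 1.0  # upper bound on sqrt(x) for any x >= 0
    for _ in range(budget):
        guess = 0.5 * (guess + x / guess)
    return guess
```

Under the thesis's framework the "quality" would instead be a task metric such as prediction accuracy, but the contract is the same: spending more budget never degrades the result.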
Dynamic cloud provisioning based on TOSCA
Cloud computing, today, is a ubiquitous paradigm. Its features, such as the availability of a practically infinite pool of computing resources on demand under a pay-per-use model, have led to its adoption by industry for the realization of modern, sophisticated, and highly scalable IT applications. Such applications are often composed of various components and services offered by different cloud service providers. This, in turn, raises two significant challenges: (i) automated provisioning and management, and (ii) interoperability and portability of the applications in a multi-cloud environment. To address these, the Topology and Orchestration Specification for Cloud Applications (TOSCA) standard was introduced by OASIS. This standard provides a metamodel to describe the topology of complex applications, along with all their components, artifacts, and services, in a single template that allows deploying the application in an interoperable and portable manner. In this Master thesis, we propose a concept that generates small and reusable TOSCA provisioning plans which can be orchestrated to deploy the overall application, as opposed to using a monolithic provisioning plan. This goal is achieved in three steps: (i) splitting the application topology into a set of smaller sub-topologies, (ii) generating smaller plans, called partial plans, for each sub-topology, and (iii) orchestrating the partial plans to provision an instance of the application. Additionally, this concept enables the reuse of these plans for tasks such as scaling out individual components of the application. Finally, the feasibility of the proposed concept is demonstrated by a prototypical implementation developed using the OpenTOSCA framework.
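The three steps above can be sketched as follows; the plan steps and topology are invented placeholders, not the thesis's OpenTOSCA implementation:

```python
# Sketch: instead of one monolithic provisioning plan, generate one small,
# reusable "partial plan" per component and orchestrate them in dependency
# order. Reuse then falls out naturally, e.g. for scaling out one component.

def partial_plan(component):
    """A small, reusable plan for one component (a stand-in for a generated
    partial provisioning plan)."""
    return [f"create:{component}", f"configure:{component}", f"start:{component}"]

def provision(topology):
    """Orchestrate the partial plans: a component's plan runs only after
    the plans of all components it depends on."""
    done, steps = [], []
    remaining = dict(topology)  # component -> list of dependencies
    while remaining:
        ready = [c for c, deps in remaining.items() if all(d in done for d in deps)]
        if not ready:
            raise ValueError("cyclic topology")
        for comp in sorted(ready):
            steps.extend(partial_plan(comp))
            done.append(comp)
            del remaining[comp]
    return steps

def scale_out(component):
    """Reuse: scaling out a single component just re-runs its partial plan."""
    return partial_plan(component)
```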
A Self-managed Mesos Cluster for Data Analytics with QoS Guarantees
This article describes the development of an automated configuration of a software platform for Data Analytics that supports horizontal and vertical elasticity to guarantee meeting a specific deadline. It specifies all the components, software dependencies and configurations required to build up the cluster, and analyses the deployment times of different instances, as well as the horizontal and vertical elasticity. The approach followed builds up self-managed hybrid clusters that can deal with different workloads and network requirements. The article describes the structure of the recipes, points to the public repositories where the code is available, and discusses the limitations of the approach as well as the results of several experiments.
The work presented in this article has been partially funded by a research grant from the regional government of the Comunitat Valenciana (Spain), co-funded by the European Union ERDF funds (European Regional Development Fund) of the Comunitat Valenciana 2014-2020, with reference IDIFEDER/2018/032 (High-Performance Algorithms for the Modelling, Simulation and early Detection of diseases in Personalized Medicine). The authors would also like to thank the Spanish "Ministerio de Economia, Industria y Competitividad" for the project "BigCLOE" with reference number TIN2016-79951-R.
López-Huguet, S.; Pérez-González, A. M.; Calatrava Arroyo, A.; Alfonso Laguna, C. D.; Caballer Fernández, M.; Moltó, G.; Blanquer Espert, I. (2019). A Self-managed Mesos Cluster for Data Analytics with QoS Guarantees. Future Generation Computer Systems, 96:449-461. https://doi.org/10.1016/j.future.2019.02.047
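The deadline-driven sizing decision behind such horizontal elasticity can be sketched with an idealized linear-speedup assumption; this back-of-the-envelope formula is an illustration, not the article's actual elasticity policy:

```python
import math

# Illustrative sketch: given a queue of pending work and a deadline, pick
# the smallest number of cluster nodes that can finish in time, assuming
# tasks parallelize perfectly across nodes (an idealization).

def nodes_for_deadline(pending_tasks, secs_per_task, deadline_secs,
                       max_nodes=100):
    """Smallest node count that drains the queue before the deadline,
    clamped to the cluster's capacity limit."""
    if deadline_secs <= 0:
        raise ValueError("deadline must be positive")
    total_work = pending_tasks * secs_per_task  # serial seconds of work
    needed = math.ceil(total_work / deadline_secs)
    return max(1, min(needed, max_nodes))
```

Vertical elasticity (resizing each node's resources) would add a second dimension to this decision; only the horizontal case is sketched.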
Detection of microservice smells through static analysis
The microservices architecture stands as a beacon of promise in the software landscape,
drawing developers and companies towards its compelling principles. Its appeal lies in the
potential for improved scalability, flexibility, and agility, aligning with the ever-evolving
demands of the digital age. However, navigating the intricacies of microservices can be a
challenging task, especially as this field continues to evolve.
A key challenge arises from the inherent complexity of microservices, where their sheer
number and interdependencies can introduce new layers of intricacy. Furthermore, the rapid
expansion of microservices, coupled with the need to harness their advantages effectively,
demands a deeper understanding of the potential pitfalls and issues that may emerge. To
truly unlock the benefits of microservices, it is essential to address these challenges head-on
and ensure a successful journey in the world of microservices development and adoption.
The present document explores the area of microservice architecture smells, which play an
important role in the technical debt associated with microservices.
It embarks on a comprehensive research exploration, delving into the realm of microservice
smells. This research serves as the cornerstone for enhancing a microservice smell catalogue.
It draws data from two primary sources: a systematic mapping study and an industry survey.
The latter involves 31 seasoned professionals with substantial experience in the field of
microservices.
Moreover, the development and enhancement of a tool specifically designed to identify and
address issues related to microservices is described. This tool is aimed at improving
developers' performance throughout the development and implementation of microservices
architecture.
Finally, the document includes an evaluation of the tool's performance. This involves a
comparative analysis conducted before and after the tool's enhancements. The tool's
effectiveness is assessed using the same microservice benchmark as previously employed,
in addition to another benchmark to ensure a comprehensive evaluation.
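A static smell detector of this kind can be sketched over a service dependency graph. The two smells and the threshold below are well-known examples chosen for illustration; they are assumptions, not the thesis's actual catalogue or detection rules:

```python
# Toy static check: flag two common microservice smells from a service
# dependency graph -- cyclic dependencies, and "hub" services that too
# many other services depend on.

def detect_smells(calls, hub_threshold=3):
    """`calls` maps a service to the services it calls.
    Returns a list of (smell_name, services) findings."""
    findings = []

    # Hub smell: a service with too many inbound dependencies.
    inbound = {}
    for src, targets in calls.items():
        for dst in targets:
            inbound[dst] = inbound.get(dst, 0) + 1
    for svc, n in sorted(inbound.items()):
        if n >= hub_threshold:
            findings.append(("hub-like service", [svc]))

    # Cyclic dependency smell: a service reachable from itself.
    def reachable(start):
        seen, stack = set(), list(calls.get(start, []))
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(calls.get(cur, []))
        return seen

    for svc in sorted(calls):
        if svc in reachable(svc):
            findings.append(("cyclic dependency", [svc]))
    return findings
```

A real tool would parse deployment descriptors or source code to build the graph; here it is taken as given input.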
End-to-end network slices: from network function profiling to granular SLAs
Advisor: Christian Rodolfo Esteve Rothenberg. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
In the last ten years, network softwarisation processes have been continuously diversified and gradually incorporated into production, mainly through the paradigms of Software Defined Networks (e.g., programmable network flow rules) and Network Functions Virtualization (e.g., orchestration of virtualized network functions). Based on this process, the concept of network slice emerges as a way of defining end-to-end programmable network paths, possibly over shared network infrastructures, requiring strict performance metrics associated with a particular business case.
This thesis investigates the hypothesis that the disaggregation of network function performance metrics impacts and composes a network slice footprint, incurring diverse slicing feature options, which when realized should have their Service Level Agreement (SLA) life cycle management transparently implemented in correspondence to their end-to-end communication business case. The validation of this assertion takes place in three aspects: the degrees of freedom by which the performance of virtualized network functions can be expressed; the methods of rationalizing the footprint of network slices; and transparent ways to track and manage network assets among multiple administrative domains. To achieve these goals, this thesis makes a series of contributions, among them: the construction of a platform for automating methodologies for performance testing of virtualized network functions; the elaboration of a methodology for the analysis of footprint features of network slices based on a machine learning classifier algorithm and a multi-criteria analysis algorithm; and the construction of a prototype using blockchain to carry out smart contracts involving service level agreements between administrative domains. Through experiments and analysis we suggest that: performance metrics of virtualized network functions depend on resource allocation, internal configurations, and test traffic stimulus; network slices can have their resource allocations consistently analyzed and classified by different criteria; and agreements between administrative domains can be performed transparently and at various granularities through blockchain smart contracts.
At the end of this thesis, through a wide-ranging discussion, the research questions associated with the hypothesis are answered, so that the hypothesis is evaluated in view of the thesis's contributions and future work. (Doutorado, Engenharia de Computação; Doutor em Engenharia Elétrica; FUNCAM)
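The multi-criteria analysis step can be sketched as a weighted scoring of candidate slice allocations. The criteria, weights, and candidate options below are invented for illustration and do not reproduce the thesis's classifier or criteria:

```python
# Minimal multi-criteria sketch: score candidate network-slice resource
# allocations against weighted criteria and rank them best-first, in the
# spirit of classifying slice footprint options by different criteria.

def rank_allocations(candidates, weights):
    """`candidates` maps an option name to per-criterion scores in [0, 1];
    `weights` maps each criterion to its importance. Higher weighted sum
    ranks first."""
    def score(metrics):
        return sum(weights[c] * metrics.get(c, 0.0) for c in weights)
    return sorted(candidates, key=lambda name: score(candidates[name]),
                  reverse=True)

# Invented example: two allocation options for one end-to-end slice.
weights = {"throughput": 0.5, "latency": 0.3, "cost": 0.2}
candidates = {
    "edge-heavy": {"throughput": 0.6, "latency": 0.9, "cost": 0.4},
    "core-heavy": {"throughput": 0.9, "latency": 0.5, "cost": 0.7},
}
```

Changing the weights models different business cases: a latency-critical slice would weight "latency" higher and could flip the ranking.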