Multi-Cloud Portable Application Deployment with VEP
Leveraging the plethora of available Infrastructure-as-a-Service (IaaS) solutions proves to be a hard task for users, who face the complexity of dealing with heterogeneous systems, both in terms of resources and APIs. Application portability is the means to reduce the burden of adapting applications to specific IaaS types and to escape potential vendor lock-in. The Virtual Execution Platform (VEP) is cloud middleware that interfaces with multiple IaaS clouds and presents end users with an interface facilitating the deployment and life-cycle management of distributed applications made up of several inter-networked virtual machines. This paper presents the design of VEP and experimental results that evaluate its scalability in deploying applications on OpenNebula and OpenStack clouds.
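The backend-neutral deployment idea could be sketched as an adapter layer; the class and method names below are invented for illustration and are not VEP's actual API:

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Common interface hiding one IaaS backend (names are illustrative)."""
    @abstractmethod
    def start_vm(self, image: str) -> str: ...

class OpenNebulaDriver(CloudDriver):
    def start_vm(self, image: str) -> str:
        # A real driver would issue an OpenNebula XML-RPC call here.
        return f"one-vm:{image}"

class OpenStackDriver(CloudDriver):
    def start_vm(self, image: str) -> str:
        # A real driver would call the OpenStack compute API here.
        return f"os-vm:{image}"

def deploy(vm_images: list[str], driver: CloudDriver) -> list[str]:
    """Deploy every VM of an application through one backend driver."""
    return [driver.start_vm(img) for img in vm_images]
```

Under this sketch, porting an application between clouds amounts to swapping the driver object, which is the kind of decoupling the abstract describes.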
Using Open Standards for Interoperability - Issues, Solutions, and Challenges facing Cloud Computing
Virtualization offers several benefits for optimal resource utilization over traditional non-virtualized server farms. With improvements in internetworking technologies and increases in network bandwidth, a new era of computing has been ushered in: that of grids and clouds. With several commercial cloud providers emerging, each with its own APIs, application description formats, and varying support for SLAs, vendor lock-in has become a serious issue for end users. This article describes the problem, issues, possible solutions, and challenges in achieving cloud interoperability. These issues are analyzed in the context of the European project Contrail, which aims to combine open standards with available virtualization solutions to enhance users' trust in the clouds by preventing vendor lock-in, supporting and enforcing SLAs, and providing adequate data protection for sensitive data.
Including Security Monitoring in Cloud SLA
One of the risks of moving to a public cloud is losing full control of the information-system infrastructure. The service provider is in charge of monitoring the actual infrastructure and providing the required service to clients. In our work, we aim to allow providers to give customers guarantees on the security monitoring of their outsourced information system.
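A minimal sketch of what checking such a guarantee might look like, assuming a hypothetical event schema and an illustrative 60-second detection-latency bound (neither is from the cited work):

```python
def latency_violations(events, max_latency_s=60.0):
    """Return the ids of monitoring events whose detection latency exceeds
    the bound agreed in the SLA. The event schema and the 60 s default
    are illustrative assumptions."""
    return [e["id"] for e in events
            if e["detected_at"] - e["occurred_at"] > max_latency_s]
```

A provider could periodically run such a check over its monitoring log and report the violation list to the customer as evidence of SLA compliance.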
Quality-of-service-based application control for cloud software platforms
Cloud computing is a new computing model: infrastructure, applications, and data are moved from local machines to the Internet and provided as services. Cloud users, such as application owners, can greatly reduce their budgets thanks to the elasticity of cloud services, i.e., their "pay as you go" and on-demand characteristics. The goal of this thesis is to manage the Quality of Service (QoS) of applications running in cloud environments. Cloud services provide application owners with great flexibility to assign a "suitable" amount of resources according to changing needs, for example those caused by a fluctuating request rate. What counts as "suitable" needs to be clearly documented in a Service Level Agreement (SLA) when this resource-demanding task is hosted by a third party, such as a Platform-as-a-Service (PaaS) provider. In this thesis, we propose and formally describe PSLA, an SLA description language for PaaS. PSLA is based on WS-Agreement, which is extensible and widely accepted as an SLA description language. Before the SLA contract is signed, negotiations are unavoidable. During negotiations, the PaaS provider needs to evaluate whether the SLA drafts are feasible. These evaluations are based on an analysis of the behavior of the application deployed in the cloud infrastructure, for instance the throughput of served requests, the response time, etc. Application-dependent analysis, such as benchmarking, is therefore needed. Benchmarks are relatively costly, and a precise feasibility study usually implies a large number of benchmarks. In this thesis, we propose a benchmark-based SLA feasibility study method to evaluate whether an SLA expressed in PSLA, including QoS targets, resource constraints, cost constraints, and workload constraints, can be achieved. This method trades off the accuracy of the SLA feasibility study against benchmark costs.
The intermediate results of this benchmark-based feasibility study are used as the workload-resource mapping model of our runtime control method. When the application is running in a cloud infrastructure, the scalability of the infrastructure allows us to allocate and release resources according to changing needs. These resource-provisioning activities are called runtime control. We propose RCSREPRO, a Runtime Control method based on Schedule, REactive and PROactive methods. For most applications running in the cloud, changing needs are mainly caused by a fluctuating workload. Detailed workload information, for example the request arrival rates at scheduled points in time, is difficult to know before running the application. Moreover, the workload information listed in a PSLA document is too coarse to derive a fitted resource-provisioning schedule before runtime, so runtime control decisions must be made in real time. Since resource-provisioning actions usually take several minutes, RCSREPRO performs proactive runtime control: it predicts future needs and assigns resources in advance so that they are ready when needed. Workload prediction and workload-resource mapping are thus the two problems involved in proactive runtime control. The workload-resource mapping model, initially derived from the benchmarks of the SLA feasibility study, is continuously improved at runtime through feedback, increasing the accuracy of the control. To sum up, we make three contributions to the QoS management of applications running in the cloud: PSLA, a PaaS-level SLA description language; a benchmark-based SLA feasibility study method; and a runtime control method, RCSREPRO, to ensure the SLA while the application is running.
The work described in this thesis was motivated and funded by the FSN OpenCloudware project (www.opencloudware.org).
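The proactive prediction and workload-resource mapping steps could be sketched as follows; the moving-average forecast and the per-instance capacity figure are simplifying assumptions, not the thesis's actual model:

```python
import math

def predict_next_rate(rates, window=3):
    """Naive proactive forecast: moving average of recent request rates."""
    recent = rates[-window:]
    return sum(recent) / len(recent)

def instances_needed(rate, per_instance_capacity):
    """Workload-to-resource mapping; in the thesis the capacity figure
    would come from the benchmark-derived model, refined at runtime."""
    return max(1, math.ceil(rate / per_instance_capacity))
```

A proactive controller would call these at each control step, provisioning `instances_needed(predict_next_rate(history), capacity)` instances a few minutes ahead of the predicted demand.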
Cloud computing interoperability through application programming interfaces
The cloud computing paradigm is being adopted by an increasing number of organizations owing to significant financial savings. On the other hand, some issues hinder cloud adoption. One of the most important is vendor lock-in and the resulting lack of interoperability. The ability to move data and applications from one cloud offering to another, and to use the resources of multiple clouds, is very important for cloud consumers. The focus of this dissertation is the interoperability of commercial Platform-as-a-Service (PaaS) providers. This cloud model was chosen because of the many incompatibilities among vendors and the lack of existing solutions. The main aim of the dissertation is to identify and address interoperability issues of platform as a service; automated data migration between different PaaS providers is also an objective of this study. The dissertation makes the following main contributions. First, a detailed ontology of the resources and remote API operations of PaaS providers was developed. This ontology is used to semantically annotate the web services that connect to providers' remote APIs and to define mappings between PaaS providers. A tool was developed that uses the defined semantic web services and AI planning techniques to detect, and attempt to resolve, interoperability problems. The automated migration of data between PaaS providers is also presented. Finally, a methodology for the detection of platform interoperability problems was proposed and evaluated in use cases.
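The semantic-annotation idea can be illustrated with a toy mapping; the provider names, operation names, and the flat dictionary standing in for the ontology are all invented for illustration:

```python
# Provider-specific API operations annotated with a shared ontology concept.
# Both providers and operations are hypothetical.
ANNOTATIONS = {
    "provider_a": {"apps.create": "CreateApplication",
                   "apps.delete": "DeleteApplication"},
    "provider_b": {"push": "CreateApplication",
                   "remove": "DeleteApplication"},
}

def translate_operation(op, src, dst):
    """Map an operation of provider src to the dst operation carrying
    the same semantic annotation."""
    concept = ANNOTATIONS[src][op]
    for dst_op, dst_concept in ANNOTATIONS[dst].items():
        if dst_concept == concept:
            return dst_op
    raise KeyError(f"{dst} has no operation for concept {concept}")
```

In the dissertation this lookup is performed over a real ontology and semantic web services, with AI planning filling in multi-step translations; the dictionary above only shows the core mapping idea.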
An architecture for managing virtual machine migration in cloud computing environments
Advisor: Carlos Alberto Maziero. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended in Curitiba, 29/10/2020. Includes references (p. 78-83). Area of concentration: Computer Science. Abstract: Virtualization makes a relevant contribution to the goals of cloud computing. Cloud Services (CSs) have emerged to serve a growing variety of consumer profiles. In this context, Virtual Machines (VMs) play a central role, given the flexibility that results from mapping physical resources onto virtual ones. However, the more widespread the use of CSs, the greater the tendency toward resource overload; as a consequence, violations of the objectives set out in Service Level Agreements (SLAs) tend to occur more frequently. Corrective actions only succeed when they are associated with resource-management efforts such as the migration of VMs. Hypervisors are essential in planning and executing migrations; yet, although they guarantee application isolation and hence control over resource usage, their scope of action is limited, and management capabilities that integrate and coordinate resource usage across cloud domains are still incipient. It is therefore relevant to consider a solution that strengthens the contribution of VM migration. Such a solution requires orchestration across the distinct decision levels involved in a migration: at the operational level, the actions of the hypervisors must be coordinated; at the decision level, decisions must be guided by the strategic support of policies outlined at a migration control and management level. This thesis presents a solution for the management of VM migration, proposing an architecture model that achieves this purpose. Simulations were used to evaluate the architecture, showing that adopting its management actions makes the results of a VM migration more lasting, reducing the new instabilities that would usually indicate the need for further migrations. Keywords: Cloud computing, Virtual machine migration, Virtual machines.
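A migration-management policy of the kind such an architecture coordinates might, in heavily simplified form, look like this; the thresholds and the single-metric load model are illustrative assumptions, not the thesis's actual policy:

```python
def pick_migration(hosts, high=0.85, low=0.60):
    """Propose a (source, destination) host pair when the busiest host is
    above the high-load threshold and the least-loaded host is below the
    low one; otherwise propose nothing, avoiding needless migrations."""
    src = max(hosts, key=lambda h: h["load"])
    dst = min(hosts, key=lambda h: h["load"])
    if src["load"] > high and dst["load"] < low:
        return src["name"], dst["name"]
    return None
```

The gap between the two thresholds is what makes the result of a migration more lasting: a host just below the trigger point does not immediately cause a new migration.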
Runtime monitoring of security SLAs for big data pipelines: design, implementation, and evaluation of a framework for monitoring security SLAs in big data pipelines with the assistance of runtime code instrumentation
The Big Data processing ecosystem has been growing constantly in recent years, significantly reinforced by the advent of cloud computing platforms, where Big Data analytics can be offered on an as-a-service basis. The ease with which users can leverage the capabilities of Big Data processing frameworks in the cloud has made them a popular solution with low up-front expenditure and a flexible deployment model. In spite of their cost benefits and flexibility of use, Big Data services in cloud platforms present an array of new challenges compared to traditional web services, especially in the domain of data security and privacy. Their distributed nature makes them more dynamic with regard to deployment and execution, but at the same time it exacerbates challenges related to data and operation security, since both data and operations are shared across multiple nodes; inevitably, distributing data and operations across multiple nodes increases the attack surface. Given the need for systems that react fast and produce results as quickly as possible, more emphasis has been placed on performance and less on security. That said, as the use of cloud computing becomes more widespread, concerns about non-functional properties such as data security are becoming more pronounced for users. Runtime security monitoring is a mechanism that can alleviate some of the issues that emerge around security monitoring for Big Data analytics services outsourced to the cloud. In this thesis we make the case for a monitoring framework in which monitoring events are collected and evaluated against a set of monitoring rules that describe monitorable security properties of the system. The framework we put forward can be used to assess the level of security of Big Data analytics pipelines at runtime. For our proof of concept we examine three security properties, namely the service response time, the location of execution of service operations, and the integrity of the intermediate data produced during service execution.
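A rough sketch of evaluating one monitoring event against those three properties; the event and rule schema below are assumptions made for illustration, not the framework's actual rule language:

```python
import hashlib

def event_satisfies(event, rules):
    """Check one monitoring event against three example properties:
    response time, execution location, and intermediate-data integrity
    (via a SHA-256 digest of the data)."""
    if event["response_s"] > rules["max_response_s"]:
        return False
    if event["location"] not in rules["allowed_locations"]:
        return False
    if hashlib.sha256(event["data"]).hexdigest() != rules["data_sha256"]:
        return False
    return True
```

In the framework described above, events like these would be emitted by runtime code instrumentation and streamed to a rule evaluator rather than checked one at a time.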
Managing OVF applications under SLA constraints on Contrail Virtual Execution Platform
The move of users and organizations to cloud computing will become possible when they are able to exploit, in a trustworthy way and on different cloud infrastructures, their own applications, the applications and services provided by cloud providers, and applications from third-party providers. To reach this goal, standard application formats must be enabled on the cloud to avoid vendor lock-in, and guarantees concerning protection, performance, and security must be supported. This article describes the VEP component developed by the Contrail project. VEP is in charge of managing the whole life cycle of OVF distributed applications under Service Level Agreement rules on different infrastructure providers.
Software-Defined Networking: A Comprehensive Survey
The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting.
In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms, with a focus on aspects such as resiliency, scalability, performance, security, and dependability, as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
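The match-action abstraction underlying SDN's data plane, which the survey's layered analysis builds on, can be illustrated in a few lines; this is a toy model, not any particular controller's or switch's API:

```python
def forward(flow_table, packet):
    """Match a packet's header fields against an ordered flow table and
    return the first matching action. Unmatched packets are simply
    dropped here, a simplification of real table-miss handling."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "drop"
```

The key SDN point this captures is that the table's contents are data installed by a logically centralized controller, so changing network behavior means rewriting entries rather than reconfiguring each device.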