15 research outputs found
A contribution to stimulating the use of Cloud Computing solutions: design of a Cloud service broker to foster trustworthy, interoperable, and law-compliant distributed digital ecosystems. Application in multi-cloud environments.
184 p. The goal of the research presented in this thesis is to help developers and operators of applications deployed across multiple Clouds to discover and manage the different Cloud services, supporting their reuse and combination, in order to build a network of interoperable services that comply with the law and whose service level agreements can be evaluated continuously. One contribution of this thesis is the design and development of a Cloud service broker called ACSmI (Advanced Cloud Services meta-Intermediator). ACSmI makes it possible to evaluate compliance with service level agreements, including legislation. ACSmI also provides an intermediate abstraction layer for Cloud services through which developers can easily access a catalogue of accredited services compatible with the established non-functional requirements. In addition, this research proposes a characterization of multi-cloud native applications and the concept of "extended DevOps", conceived specifically for this type of application. The "extended DevOps" concept aims to solve some of the current problems in the design, development, deployment, and adaptation of multi-cloud applications by providing a novel, extended DevOps approach that adapts current DevOps practices to the multi-cloud paradigm.
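The abstract describes a broker that matches accredited services against non-functional requirements. A minimal sketch of that kind of catalogue filtering is shown below; it is not the ACSmI implementation, and all service names, fields, and thresholds are invented for illustration.

```python
# Hypothetical sketch of the filtering a broker such as ACSmI might perform:
# select accredited services whose advertised properties satisfy a set of
# non-functional requirements. All names and values are illustrative.

from dataclasses import dataclass

@dataclass
class CloudService:
    name: str
    availability: float      # advertised availability, e.g. 0.999
    data_location: str       # region where data is stored (legal compliance)
    accredited: bool         # passed the broker's accreditation checks

def filter_catalogue(catalogue, min_availability, allowed_locations):
    """Return accredited services meeting the non-functional requirements."""
    return [
        s for s in catalogue
        if s.accredited
        and s.availability >= min_availability
        and s.data_location in allowed_locations
    ]

catalogue = [
    CloudService("storage-a", 0.9995, "EU", True),
    CloudService("storage-b", 0.99,   "US", True),
    CloudService("storage-c", 0.9999, "EU", False),   # not accredited
]

matches = filter_catalogue(catalogue, 0.999, {"EU"})
print([s.name for s in matches])  # only storage-a satisfies all requirements
```

A real broker would of course evaluate many more dimensions (SLA history, legislation, cost) and keep the catalogue up to date through continuous monitoring.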
SLA Violation Detection Model and SLA Assured Service Brokering (SLaB) in Multi-Cloud Architecture
Cloud brokering helps Cloud Service Users (CSUs) find cloud services that match their requirements. In current practice, CSUs or Cloud Service Brokers (CSBs) select cloud services according to the SLA committed by Cloud Service Providers (CSPs) on their websites. Our observation is that most CSPs do not fulfil the service commitments stated in their SLA agreements. Verifying cloud service performance against the CSPs' SLA commitments gives CSBs additional grounds for trust when recommending services to CSUs. In this thesis, we propose an SLA-assured service-brokering framework that considers both the SLA committed and the SLA actually delivered by CSPs when recommending cloud services to users.
To evaluate CSP performance, two techniques are proposed, Heat Map and IFL, which include both directly measurable and non-measurable parameters in the evaluation. Both techniques are implemented using real data measured from CSPs. The results show that the Heat Map technique is more transparent and consistent in CSP performance evaluation than the IFL technique. This work also analyzes the regulatory compliance of the CSPs and visualizes their legal status in the performance heat map table. Moreover, missing points in their terms of service and SLA documents are analyzed, with recommendations to add them to the contract documents. Under the revised European GDPR, a Data Protection Impact Assessment (DPIA) will become mandatory for all organizations and tools. The decision-recommendation tool developed with the above evaluation techniques could pose potential harm to individuals because it processes data from multiple CSPs, so a DPIA is carried out to assess those privacy risks and to identify the precautions the tool must take to minimize them. The tool also analyzes the service patterns and likely future performance of CSPs to help CSUs select an appropriate CSP.
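The core comparison behind such an evaluation, committed SLA values versus delivered values, can be sketched as follows. This is an illustrative toy, not the thesis's Heat Map or IFL technique, and the CSP names and figures are invented.

```python
# Illustrative sketch: compare the SLA values committed by each CSP with
# the values actually measured, and derive per-metric ratios that could
# feed a performance heat map. Data is invented for the example.

committed = {"cspA": {"availability": 0.999, "response_ms": 200},
             "cspB": {"availability": 0.999, "response_ms": 200}}
measured  = {"cspA": {"availability": 0.9992, "response_ms": 180},
             "cspB": {"availability": 0.995,  "response_ms": 260}}

def sla_scores(committed, measured):
    """Score each CSP per metric: a score >= 1.0 means the commitment was met."""
    scores = {}
    for csp, targets in committed.items():
        scores[csp] = {
            # higher is better for availability, lower is better for latency
            "availability": measured[csp]["availability"] / targets["availability"],
            "response_ms": targets["response_ms"] / measured[csp]["response_ms"],
        }
    return scores

for csp, s in sla_scores(committed, measured).items():
    violated = [metric for metric, score in s.items() if score < 1.0]
    print(csp, "violations:", violated)
# cspA violations: []
# cspB violations: ['availability', 'response_ms']
```

A heat map would then color each cell by its score, making systematic under-delivery visible at a glance.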
Adapting Scientific Computing Algorithms to Distributed Computing Frameworks
Scientific computing uses computers and algorithms to solve problems in various sciences such as genetics, biology and chemistry. Often the goal is to model and simulate different natural phenomena which would otherwise be very difficult to study in real environments.
For example, it is possible to create a model of a solar storm or a meteor strike and run computer simulations to assess the impact of the disaster on the environment. The more sophisticated and accurate the simulations are, the more computing power is required. It is often necessary to use a large number of computers, all working simultaneously on a single problem. These kinds of computations are called parallel or distributed computing.
However, creating distributed computing programs is complicated and requires much more time and resources, because the work done simultaneously on different computers has to be synchronized. A number of software frameworks have been created to simplify this process by automating part of the distributed programming.
The goal of this research was to assess the suitability of such distributed computing frameworks for complex scientific computing algorithms. The results showed that existing frameworks are very different from each other and none of them are suitable for all different types of algorithms. Some frameworks are only suitable for simple algorithms; others are not suitable when data does not fit into the computer memory. Choosing the most appropriate distributed computing framework for an algorithm can be a very complex task, because it requires studying and applying the existing frameworks.
While searching for a solution to this problem, it was decided to create a Dynamic Algorithms Modelling Application (DAMA), which is able to simulate the implementation of the algorithm in different distributed computing frameworks. DAMA helps to estimate which distributed framework is the most appropriate for a given algorithm, without actually implementing it in any of the available frameworks.
The main contribution of this study is simplifying the adoption of distributed computing frameworks for researchers who are not yet familiar with them. It should save significant time and resources, as it is no longer necessary to study and apply each of the available distributed computing frameworks in detail.
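The idea of estimating framework suitability without implementing the algorithm on each framework can be sketched with a simple cost model. This is only an illustration of the concept; DAMA's actual models, and the framework names and numbers below, are not from the thesis.

```python
# Toy illustration of the idea behind DAMA: instead of implementing an
# algorithm on every framework, estimate its cost from a per-framework
# model. Framework names and parameters are invented for the example.

def estimate_runtime(framework, n_iterations, data_mb, memory_mb):
    """Very rough cost model: startup + per-iteration overhead + I/O.
    Returns None when the framework cannot handle the workload."""
    if data_mb > memory_mb and not framework["spills_to_disk"]:
        return None  # data does not fit in memory and framework cannot spill
    return (framework["startup_s"]
            + n_iterations * framework["iteration_overhead_s"]
            + data_mb * framework["io_s_per_mb"])

frameworks = {
    "batch-oriented": {"startup_s": 30, "iteration_overhead_s": 20,
                       "io_s_per_mb": 0.05, "spills_to_disk": True},
    "in-memory":      {"startup_s": 10, "iteration_overhead_s": 0.5,
                       "io_s_per_mb": 0.01, "spills_to_disk": False},
}

# An iterative algorithm whose working set fits in memory strongly favors
# the in-memory framework; if the data outgrew memory, only the
# batch-oriented framework would remain viable.
for name, fw in frameworks.items():
    print(name, estimate_runtime(fw, n_iterations=100, data_mb=500, memory_mb=4000))
```

The point of such a model is exactly the one the abstract makes: a rough, cheap estimate is enough to rule frameworks in or out before any implementation effort is spent.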
Framework for Automated Partitioning of Scientific Workflows on the Cloud
Scientific workflows have become a standardized way for scientists to represent a set of tasks designed to solve a certain problem. These workflows usually consist of a large number of jobs, both CPU-heavy and I/O-intensive, that are executed with some kind of workflow management system on clouds, grids, supercomputers, etc. Previously, it has been shown that using a k-way partitioning algorithm to distribute a workflow's tasks between multiple machines in the cloud reduces the overall data communication and therefore lowers the cost of bandwidth usage. In this thesis, a framework was built to automate this process: to partition, with ease, any workflow submitted by a scientist that is meant to be run on the Pegasus workflow management system in the cloud.
The framework provisions the instances in the cloud using CloudML, configures and installs all the software needed for the execution, runs and partitions the scientific workflow, and finally shows a time estimate for the workflow, so that the user has an approximate guideline for how many cores to provision in order to finish an experiment within a certain time frame.
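Why partitioning reduces communication can be shown with a minimal sketch: in a workflow graph, only edges whose endpoints land on different machines incur data transfer. The workflow shape, task names, and data sizes below are invented, and real k-way partitioners (e.g. METIS-style algorithms) are far more sophisticated than this evaluation-only example.

```python
# Minimal sketch: given a workflow DAG and an assignment of tasks to
# machines, total communication is the data on machine-crossing edges.
# A good partition keeps heavy producer-consumer chains on one machine.

edges = [  # (producer, consumer, data_mb) -- invented example workflow
    ("split", "a1", 10), ("split", "b1", 10),
    ("a1", "a2", 100), ("b1", "b2", 100),   # heavy intra-chain edges
    ("a2", "merge", 10), ("b2", "merge", 10),
]

def communication_mb(edges, placement):
    """Total data moved between machines under a task placement."""
    return sum(mb for src, dst, mb in edges if placement[src] != placement[dst])

# Careless placement cuts both heavy chains across machines 0 and 1:
careless = {"split": 0, "a1": 0, "b1": 1, "a2": 1, "b2": 0, "merge": 0}
# Chain-preserving placement keeps each heavy chain on one machine:
partitioned = {"split": 0, "a1": 0, "a2": 0, "merge": 0, "b1": 1, "b2": 1}

print(communication_mb(edges, careless))     # 220
print(communication_mb(edges, partitioned))  # 20
```

Both placements use two machines, so parallelism is preserved; only the cut edges, and hence the bandwidth cost, differ.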
Envisage: Developing SLA-aware Deployed Services with Formal Methods
Insufficient scalability and bad resource management of software services can easily eat up any potential savings from cloud deployment. Failed service-level agreements (SLAs) cause penalties for the provider, while oversized SLAs waste resources on the customer's side. The IBM Systems Sciences Institute estimates that a defect which costs one unit to fix in design costs 15 units to fix in testing (system/acceptance) and 100 units or more to fix in production [6]; this estimate does not even consider the impact cost due to, for example, delayed time to market, lost revenue, lost customers, and bad public relations. The Envisage project aims at shifting deployment decisions from the end of the software engineering process to become an integral part of software design [2]. Deployment on the cloud gives software designers far-reaching control over the resource parameters of the execution environment, such as the number and kind of processors, the amount of memory and storage capacity, and the bandwidth. In this context, designers can also control their software's trade-offs between the incurred cost and the delivered quality of service. SLA-aware services, which are designed for scalability, can even change these parameters dynamically, at runtime, to meet their service contracts. Envisage makes it possible to design and validate these services by connecting executable models to formal service contracts and to an API that abstracts the cloud environment, see Fig. 1. This approach enables new kinds of analysis: – Simulation ("Early modeling"): the formally defined modeling language ABS [10] realizes a separation of concerns between the cost of execution and the capacity of dynamically provisioned cloud resources [11]. Models are executable; a simulation tool supports rapid prototyping and visualization.
– Formal methods ("Early analysis"): as ABS was designed for analysis, it enables a range of tool-supported formal techniques, including behavioral types for deadlock analysis and SLA compliance [8], worst-case cost analysis [1], deductive verification [7], and automated test-case generation [4]. – Monitoring ("Late analysis"): ABS supports code-generation backends [5] that preserve upper bounds on cost and permit performance monitoring of the provisioned cloud resources after deployment [13].
A Pattern-Based Method for Automating Application Management
Managing running business applications is one of the critical activities of IT operations, since careless actions can lead to faulty application states and the failure of entire application landscapes. Manual management in particular carries a steadily growing economic risk, owing to increasingly opaque application structures, unknown dependencies between components, insufficient documentation, and complex management tools. For this reason, new paradigms and a large number of technologies for automating application management have been developed in recent years. However, the growing complexity of architectures, driven by distributed systems, virtualization of components, Cloud Computing, and the emerging Internet of Things, increasingly requires combining several of these management technologies to realize higher-level management goals for a complex IT system. This raises both (i) conceptual and (ii) technical questions that must be analyzed and solved collaboratively by different groups of experts. Integrating these two levels of abstraction is a fundamental challenge that, in current application management, usually has to be tackled individually and at great expense because end-to-end automation is lacking. To close this automation gap between the two levels of abstraction, this thesis presents a hybrid, pattern-based management method called PALMA. The method combines the declarative and the imperative management paradigms and thereby enables the automated application of generic management patterns.
Frequently occurring management problems, such as migrating an application component to a cloud environment while preserving its availability, can be solved efficiently for individual applications by means of automated management patterns, and the associated processes can be executed automatically. The method supports collaboration among experts and can be applied manually, semi-automatically, or fully automatically. To realize the method, a declarative language called DMMN is introduced, which enables the modeling of management tasks at a high declarative level of abstraction while hiding technical execution details. When management patterns are executed automatically, declarative management models in this language are generated that abstractly specify the respective pattern solution for the affected application in the form of the management tasks to be executed. To execute them, a transformation approach is presented that translates declarative management models into executable, imperative process models. The generation of these process models is based on the orchestration of reusable management building blocks, which are modeled as subprocesses and are called management planlets. This transformation unites the strengths of both paradigms and enables comprehensive management of complex IT systems. In addition, a pattern-based approach is presented by which management tasks in declarative management models can be automatically analyzed for problems and corrected accordingly. This guarantees the correctness of the execution and supports the system administrator in modeling. The concepts presented in this thesis are prototypically implemented in the PALMA framework in order to validate the practical feasibility of the theoretical concepts and approaches.
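The essential move in such a declarative-to-imperative transformation, deriving an executable order from desired tasks and their dependencies, resembles a topological sort. The sketch below only illustrates that core idea; DMMN, planlets, and the actual transformation are far richer, and the task names are invented.

```python
# Illustration only: turning a declarative management model (desired tasks
# plus dependencies) into an imperative, executable order via topological
# sorting. Real planlet orchestration involves much more than ordering.

from graphlib import TopologicalSorter

# Desired management tasks, mapped to their prerequisites (invented names):
declarative_model = {
    "provision-vm": set(),
    "install-runtime": {"provision-vm"},
    "migrate-component": {"install-runtime"},
    "update-endpoints": {"migrate-component"},
}

plan = list(TopologicalSorter(declarative_model).static_order())
print(plan)  # an executable order that respects all dependencies
```

In PALMA's terms, each step of such a plan would be realized by a reusable subprocess (a planlet) rather than a bare task name.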
Orthogonal variability modeling to support multi-cloud application configuration
Cloud service providers serve a vast number of customers thanks to variability, and they profit from the commonalities between the cloud services they provide. Recently, the number of application configuration dimensions has increased dramatically due to the multi-tenant, multi-device and multi-cloud paradigm. This intrinsic variability challenges the configuration and customization of cloud-based software that is typically offered as a service. In this paper, we present a model-driven approach based on variability models originating from the software product line community to handle such multi-dimensional variability in the cloud. We exploit orthogonal variability models to systematically manage and create tenant-specific configurations and customizations. We also demonstrate how such variability models can be utilized to take already-deployed application parts into account, enabling harmonized deployments for new tenants in a multi-cloud setting. The approach considers both functional and non-functional application requirements to produce a set of valid multi-cloud configurations. We illustrate our approach through a case study.
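A variability model reduced to its simplest form is a set of variation points with alternatives, plus cross-cutting constraints; validity checking of a tenant configuration then becomes mechanical. The sketch below is a hedged toy, not the paper's orthogonal variability modeling notation, and all variation points, alternatives, and constraints are invented.

```python
# Toy validity check for a tenant configuration against a variability
# model: one alternative per variation point, plus cross-dimension
# "requires" constraints. All names and constraints are illustrative.

variation_points = {
    "database": {"mysql", "cosmosdb"},
    "provider": {"aws", "azure"},
    "tier": {"standard", "premium"},
}
# Requires-constraints across dimensions: choosing X requires (vp == Y).
requires = [("cosmosdb", ("provider", "azure"))]

def is_valid(config):
    """A configuration must pick one alternative per variation point and
    satisfy every cross-dimension 'requires' constraint."""
    for vp, alternatives in variation_points.items():
        if config.get(vp) not in alternatives:
            return False
    for choice, (vp, required) in requires:
        if choice in config.values() and config[vp] != required:
            return False
    return True

print(is_valid({"database": "cosmosdb", "provider": "aws", "tier": "standard"}))   # False
print(is_valid({"database": "cosmosdb", "provider": "azure", "tier": "premium"}))  # True
```

Enumerating all valid configurations, as the paper does for multi-cloud deployments, amounts to filtering the cross-product of alternatives through a check like this one.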
Towards Self-Protective Multi-Cloud Applications: MUSA – a Holistic Framework to Support the Security-Intelligent Lifecycle Management of Multi-Cloud Applications
The most challenging applications in heterogeneous cloud ecosystems are those that are able to maximise the benefits of combining the cloud resources in use: multi-cloud applications. They have to deal with the security of the individual components as well as with overall application security, including the communications and the data flow between the components. In this paper we present a novel approach currently in progress, the MUSA framework. The MUSA framework aims to support the security-intelligent lifecycle management of distributed applications over heterogeneous cloud resources. The framework includes security-by-design mechanisms to allow application self-protection at runtime, as well as methods and tools for integrated security assurance in both the engineering and operation of multi-cloud applications. The MUSA framework leverages security-by-design, agile and DevOps approaches to enable the security-aware development and operation of multi-cloud applications.
SOA2Cloud: A Framework for Migrating SOA Applications to the Cloud Following a Model-Driven Approach
Software applications are currently considered an essential and indispensable element of all business activity, for example information exchange and social networking. However, their construction and deployment draw on resources available in remote, network-accessible locations, which leads to inefficient development and deployment operations and large expenditures on IT equipment.
This master's thesis aims to improve on this situation by proposing SOA2Cloud, a framework for migrating SOA-based applications to Cloud environments using a Model-Driven Software Development (MDSD) approach. SOA2Cloud provides mechanisms for migrating SOA applications specified with the OMG SoaML standard, incorporating Service Level Agreements (SLAs), to Cloud Computing environments. The proposed framework takes a SOA application model, defined in conformance with the SoaML metamodel, and a service level agreement model, defined according to a generic SLA metamodel, and generates a model conforming to a Cloud metamodel through model transformations. This generated model undergoes a further model transformation to obtain a model of the Azure platform, conforming to a generic Azure metamodel built for this research. Once the model transformations are complete, the resulting model undergoes a model-to-text transformation to obtain source code, which can then be tested and deployed on the platform selected for this work, Windows Azure.
This proposal is supported by an extensive study of the state of the art, carried out through a systematic mapping of strategies for migrating SOA applications to Cloud Computing environments. The results contributed significantly to the definition of the migration process in the framework.
Finally, an application example was developed that shows the feasibility of our approach and demonstrates in detail how the proposed framework migrates SOA applications to Cloud environments. The results show that our proposal could improve the strategy mainly used by researchers and practitioners when migrating SOA applications to Cloud environments, by providing a migration framework that exploits the benefits of Model-Driven Software Development.
Botto Tobar, MÁ. (2014). SOA2Cloud: Un marco de trabajo para la migración de aplicaciones SOA a Cloud siguiendo una aproximación dirigida por modelos. http://hdl.handle.net/10251/47834