55 research outputs found

    A proposal for secured, efficient and scalable layer 2 network virtualisation mechanism

    The contents of chapters 3 and 4 are subject to confidentiality. 291 p. The Future Internet has emerged as a research effort to overcome the limitations identified in the current Internet. This calls for research into novel architectures and solutions (evolutionary or disruptive), and experimental facilities have arisen to provide a realistic environment for validating these new proposals at large scale. Given the need to share the same infrastructure and resources to test several network proposals simultaneously, network virtualisation is the key to success. A new taxonomy is proposed to analyse and compare the different proposals; three types are identified: the Virtual Node (vNode), SDN-enabled Virtualisation (SDNeV), and the overlay. In addition, the most relevant experimental facilities are presented, with a special focus on how each of them enables research on network proposals; none of them satisfies all of the imposed requirements: isolation, security, flexibility, scalability, stability, transparency, and support for research on network proposals. A new experimental facility, orthogonal to the experimentation itself, is therefore needed. The main contributions of this thesis, built on SDN and NFV technology, are also the key elements for building such a facility: Layer 2 Prefix-based Network Virtualisation (L2PNV), a MAC Address Configuration Protocol (MACP), and Flow-based Network Access Control (FlowNAC). As a result, a new experimental facility, the EHU OpenFlow Enabled Facility (EHU-OEF), has been deployed at the University of the Basque Country (UPV/EHU) to experiment with and validate these proposals.
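
    The abstract does not detail the L2PNV mechanism, but the idea suggested by the name, as a rough illustration, is that the upper bits of a MAC address identify the virtual network, much like an IP prefix identifies a subnet. The sketch below is a minimal, hypothetical Python illustration of such layer-2 prefix classification; the slice table, prefix lengths and identifiers are invented for the example and are not taken from the thesis.

```python
# Hypothetical sketch (not the thesis implementation): layer-2 prefix
# matching, where the upper bits of a MAC address identify the slice.

def mac_to_int(mac: str) -> int:
    """Convert 'aa:bb:cc:dd:ee:ff' to a 48-bit integer."""
    return int(mac.replace(":", ""), 16)

def matches_prefix(mac: str, prefix: str, prefix_len: int) -> bool:
    """True if the first prefix_len bits of mac equal those of prefix."""
    mask = ((1 << prefix_len) - 1) << (48 - prefix_len)
    return (mac_to_int(mac) & mask) == (mac_to_int(prefix) & mask)

# Invented slice table: MAC prefix -> experiment/slice identifier.
slices = {
    ("02:10:00:00:00:00", 16): "experiment-A",
    ("02:20:00:00:00:00", 16): "experiment-B",
}

def classify(mac: str) -> str:
    """Map a frame's source MAC to the virtual network it belongs to."""
    for (prefix, plen), slice_id in slices.items():
        if matches_prefix(mac, prefix, plen):
            return slice_id
    return "unknown"

print(classify("02:10:ab:cd:ef:01"))  # -> experiment-A
```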

    A service broker for Intercloud computing

    This thesis aims at assisting users in finding the most suitable Cloud resources, taking into account their functional and non-functional SLA requirements. A key feature of the work is a Cloud service broker acting as mediator between consumers and Clouds. The research involves the implementation and evaluation of two SLA-aware match-making algorithms in a simulation environment. The work also investigates the optimal deployment of Multi-Cloud workflows on Intercloud environments.
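
    As a rough illustration of what an SLA-aware match-making step can look like (the thesis's two algorithms are not reproduced here), the following Python sketch filters Cloud offers on hard functional requirements and ranks the survivors by non-functional SLA attributes; all offer data and field names are hypothetical.

```python
# Illustrative SLA-aware match-making: hard constraints filter offers,
# soft preferences rank what remains. All values are invented.

from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    vcpus: int
    memory_gb: int
    availability: float   # promised in the SLA, e.g. 0.999
    price_per_hour: float

def match(offers, min_vcpus, min_memory_gb, min_availability):
    # Hard constraints: discard offers that cannot satisfy the request.
    feasible = [o for o in offers
                if o.vcpus >= min_vcpus
                and o.memory_gb >= min_memory_gb
                and o.availability >= min_availability]
    # Soft ranking: cheapest first, then highest availability.
    return sorted(feasible, key=lambda o: (o.price_per_hour, -o.availability))

offers = [
    Offer("CloudA", 4, 16, 0.999, 0.20),
    Offer("CloudB", 8, 32, 0.995, 0.15),
    Offer("CloudC", 4, 16, 0.9999, 0.25),
]
best = match(offers, min_vcpus=4, min_memory_gb=16, min_availability=0.999)
print([o.provider for o in best])  # -> ['CloudA', 'CloudC']
```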

    An innovative approach to performance metrics calculus in cloud computing environments: a guest-to-host oriented perspective

    In virtualized systems, the task of profiling and resource monitoring is not straightforward. Many datacenters perform CPU overcommitment using hypervisors, running multiple virtual machines on a single computer where the total number of virtual CPUs exceeds the total number of physical CPUs available. From a customer's point of view, it is indeed interesting to know whether the purchased service levels are actually respected by the cloud provider. The innovative approach to performance profiling described in this work is based on the use of virtual performance counters, only recently made available by some hypervisors to their virtual machines, to implement guest-wide profiling. Although the virtual machine cannot access the Virtual Machine Monitor, with this method it is able to gather enough information to deduce the state of resource overcommitment of the virtualization host on which it is executed. Tests have been carried out inside the compute nodes of the FIWARE Genoa Node, an instance of a widely distributed federated community cloud based on OpenStack and KVM. AgiLab-DITEN, the laboratory I belonged to and where I conducted my studies, together with TnT-Lab–DITEN and CNIT-GE-Unit, designed, installed and configured the whole Genoa Node, which was hosted in the DITEN-UniGE equipment rooms. All the software measuring instruments, operating systems and programs used in this research are publicly available and free, and can be easily installed in a micro virtual machine instance that can also be rapidly deployed in public clouds.
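
    The virtual-performance-counter method itself is hypervisor-specific, but a simpler, related signal is available to any Linux guest: CPU steal time, the share of time the hypervisor ran something other than this VM. The sketch below is a simplified stand-in, not the profiling technique of the thesis, and assumes a Linux guest with the standard /proc/stat layout.

```python
# Simplified illustration, not the thesis method: sample aggregate CPU
# steal time from /proc/stat as a coarse hint of host overcommitment.

import time

def read_cpu_times():
    """Return (total_jiffies, steal_jiffies) from the aggregate cpu line."""
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]  # first line: 'cpu user nice ...'
    values = [int(v) for v in fields]
    steal = values[7] if len(values) > 7 else 0  # 8th field is 'steal'
    return sum(values), steal

t0, s0 = read_cpu_times()
time.sleep(5)                       # measurement interval
t1, s1 = read_cpu_times()

steal_ratio = (s1 - s0) / max(t1 - t0, 1)
print(f"steal time over interval: {steal_ratio:.1%}")
if steal_ratio > 0.10:              # threshold chosen for illustration only
    print("noticeable steal time: the host CPU may be overcommitted")
```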

    Architectural approaches to a science network software-defined exchange

    To interconnect research facilities across wide geographic areas, network operators deploy science networks, also referred to as Research and Education (R&E) networks. These networks allow experimenters to establish dedicated circuits between research facilities for transferring large amounts of data, using advance reservation systems. Intercontinental dedicated circuits typically require coordination between multiple administrative domains, which need to reach an agreement on a suitable advance reservation. To enhance the provisioning capabilities of multi-domain advance reservations, we propose an architecture for end-to-end service orchestration in multi-domain science networks that leverages software-defined networking (SDN) and software-defined exchanges (SDX) to provide multi-path, multi-domain advance reservations. Our simulations show that our orchestration architecture increases the reservation success rate. We evaluate our solution using GridFTP, one of the most popular tools for data transfers in the scientific community. Additionally, we propose an interface that domain scientists can use to request science network services from our orchestration framework. Furthermore, we propose a federated auditing framework (FAS) that allows an SDX to verify whether the configurations requested by a user are correctly enforced by participating SDN domains, whether the configurations are correctly removed after their expiration time, and whether configurations exist that perform non-requested actions. We also propose an architecture for advance reservation access control using SDN and tokens.
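
    As a minimal sketch of the core constraint behind multi-domain advance reservation (the orchestration framework itself also handles multi-path selection, interfaces and auditing), the following hypothetical Python fragment books a time window only if every domain along the path can grant it:

```python
# Toy model of multi-domain advance reservation: a booking succeeds only
# when all domains on the path are free for the same window. Invented data.

def free_in_window(busy, start, end):
    """True if [start, end) overlaps no existing booking in this domain."""
    return all(end <= b_start or start >= b_end for b_start, b_end in busy)

def reserve(path_domains, start, end):
    """Try to book [start, end) atomically across all domains on the path."""
    if all(free_in_window(d["busy"], start, end) for d in path_domains):
        for d in path_domains:
            d["busy"].append((start, end))  # commit in every domain
        return True
    return False  # some domain is busy; try another path or another window

# Hypothetical two-domain path with existing bookings (hours as integers).
path = [
    {"name": "domainA", "busy": [(0, 4)]},
    {"name": "domainB", "busy": [(2, 6)]},
]
print(reserve(path, 6, 10))  # -> True: both domains are free at 6-10
print(reserve(path, 3, 5))   # -> False: both domains are already busy
```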

    Adaptation-Aware Architecture Modeling and Analysis of Energy Efficiency for Software Systems

    This thesis presents an approach for the design-time analysis of energy efficiency for static and self-adaptive software systems. The quality characteristics of a software system, such as performance and operating costs, strongly depend upon its architecture. Software architecture is a high-level view on software artifacts that reflects essential quality characteristics of a system under design. Design decisions made on an architectural level have a decisive impact on the quality of a system, and revising them late in development requires significant effort. Architectural analyses allow software architects to reason about the impact of design decisions on quality, based on an architectural description of the system. An essential quality goal is the reduction of cost while maintaining other quality goals. Power consumption accounts for a significant part of the Total Cost of Ownership (TCO) of data centers; in 2010, data centers contributed 1.3% of the worldwide power consumption. However, reasoning on the energy efficiency of software systems is excluded from the systematic analysis of software architectures at design time: energy efficiency can only be evaluated once the system is deployed and operational.

    One approach to reduce power consumption or cost is the introduction of self-adaptivity to a software system. Self-adaptive software systems execute adaptations to provision costly resources dependent on user load. The execution of reconfigurations can increase energy efficiency and reduce cost. If performed improperly, however, the additional resources required to execute a reconfiguration may exceed their positive effect. Existing architecture-level energy analysis approaches offer limited accuracy or only consider a limited set of system features, e.g., the communication style used. Predictive approaches from the embedded systems and Cloud Computing domains operate on an abstraction that is not suited for architectural analysis. Moreover, the execution of adaptations can consume additional resources, which can reduce performance and energy efficiency; design-time quality analyses for self-adaptive software systems ignore this transient effect of adaptations.

    This thesis makes the following contributions to enable the systematic consideration of energy efficiency in the architectural design of self-adaptive software systems. First, it presents a modeling language that captures power consumption characteristics on an architectural abstraction level. Second, it introduces an energy efficiency analysis approach that uses instances of our power consumption modeling language in combination with existing performance analyses for architecture models; the developed analysis supports reasoning on energy efficiency for static and self-adaptive software systems. Third, to ease the specification of power consumption characteristics, we provide a method for extracting power models for server environments. The method encompasses an automated profiling of servers based on a set of restrictions defined by the user; a model training framework extracts a set of power models specified in our modeling language from the resulting profile and ranks the trained power models by their predicted accuracy. Lastly, this thesis introduces a systematic modeling and analysis approach for considering transient effects in design-time quality analyses. The approach explicitly models interdependencies between reconfigurations, performance and power consumption. We provide a formalization of the execution semantics of the model, and we discuss how our approach can be integrated with existing quality analyses of self-adaptive software systems.

    We validated the accuracy, applicability, and appropriateness of our approach in a variety of case studies. The first two case studies investigated the accuracy and appropriateness of our modeling and analysis approach. The first study evaluated the impact of design decisions on the energy efficiency of a media hosting application; the energy consumption predictions achieved an absolute error lower than 5.5% across different user loads, and our approach predicted the relative impact of the design decision on energy efficiency with an error of less than 18.94%. The second case study used two variants of the Spring-based community case study system PetClinic and complements the accuracy and appropriateness evaluation; we were able to predict the energy consumption of both variants with an absolute error of no more than 2.38%. In contrast to the first case study, we derived all models automatically, using our power model extraction framework as well as an extraction framework for performance models. The third case study applied our model-based prediction to evaluate the effect of different self-adaptation algorithms on energy efficiency; it involved scientific workloads executed in a virtualized environment, and our approach predicted the energy consumption with an error below 7.1%, even though we used coarse-grained measurement data of low accuracy to train the input models. The fourth case study evaluated the appropriateness and accuracy of the automated model extraction method using a set of Big Data and enterprise workloads; our method produced power models with prediction errors below 5.9%, and a secondary study evaluated the accuracy of extracted power models for different Virtual Machine (VM) migration scenarios. The results of the fifth case study showed that our approach for modeling transient effects improved the prediction accuracy for a horizontally scaling application; leveraging the improved accuracy, we were able to identify design deficiencies of the application that would otherwise have remained unnoticed.
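
    As an illustration of the kind of model such an extraction method can produce (the thesis supports richer model structures and ranks candidates by predicted accuracy), the following Python sketch fits a simple linear utilization-to-power model from hypothetical profiling samples and uses it to estimate energy for a predicted load profile:

```python
# Illustrative sketch: fit P(u) = p_idle + slope * u by least squares
# from invented (utilization, watts) profiling samples, then use the
# model to estimate energy for a predicted hourly load profile.

samples = [(0.0, 80.0), (0.25, 120.0), (0.5, 155.0), (0.75, 185.0), (1.0, 210.0)]

# Closed-form least-squares fit of a line through the samples.
n = len(samples)
su = sum(u for u, _ in samples)
sp = sum(p for _, p in samples)
suu = sum(u * u for u, _ in samples)
sup = sum(u * p for u, p in samples)
slope = (n * sup - su * sp) / (n * suu - su * su)
p_idle = (sp - slope * su) / n

def power(util: float) -> float:
    """Predicted power draw in watts at a given CPU utilization."""
    return p_idle + slope * util

# Energy for a predicted load profile: one utilization value per hour.
load_profile = [0.2, 0.6, 0.9, 0.4]
energy_wh = sum(power(u) * 1.0 for u in load_profile)  # watts * hours
print(f"P(u) = {p_idle:.1f} + {slope:.1f}*u, energy = {energy_wh:.0f} Wh")
```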

    Towards effective live cloud migration on public cloud IaaS.

    Cloud computing allows users to access shared, online computing resources. However, providers often offer their own proprietary applications, APIs and infrastructures, resulting in a heterogeneous cloud environment. This environment makes it difficult for users to change cloud service providers and to explore capabilities to support the automated migration from one provider to another. Many standards bodies (IEEE, NIST, DMTF and SNIA), as well as industry (middleware) and academia, have been pursuing standards and approaches to reduce the impact of vendor lock-in. Cloud providers offer their Infrastructure as a Service (IaaS) based on virtualization to enable multi-tenant and isolated environments for users. Because each provider has its own proprietary virtual machine (VM) manager, called the hypervisor, VMs are usually tightly coupled to the underlying hardware, hindering live migration of VMs to different providers. A number of user-centric approaches have been proposed by both academia and industry to solve this coupling issue. However, these approaches suffer limitations in terms of flexibility (decoupling VMs from the underlying hardware), performance (migration downtime) and security (secure live migration). These limitations are identified using our live cloud migration criteria: flexibility, performance and security. These criteria are not only used to point out the gaps in previous approaches, but also to design our live cloud migration approach, LivCloud. LivCloud aims at live migration of VMs across various cloud IaaS with minimal migration downtime, at no extra cost, and without user intervention or awareness. This aim has been achieved by addressing the gaps identified in the three criteria: the flexibility gap is addressed by considering a better virtualization platform that supports a wider hardware range, supporting various operating systems, and taking into account the migrated VMs' hardware specifications and layout; the performance gap is addressed by improving the network connectivity, providing extra resources required by the migrated VMs during the migration, and predicting any potential failure so the system can be rolled back to its initial state if required; finally, the security gap is tackled by protecting the migration channel using encryption and authentication. This thesis presents: (i) a clear identification of the key challenges and factors needed to successfully perform live migration of VMs across different cloud IaaS, resulting in a rigorous comparative analysis of the literature on live migration of VMs at the cloud IaaS level, based on our live cloud migration criteria; (ii) a rigorous analysis to distil the limitations of existing live cloud migration approaches and how to design efficient live cloud migration using up-to-date technologies, which has led to the design of a novel live cloud migration approach, LivCloud, that overcomes key limitations of currently available approaches and is designed in two stages, the basic design stage and the enhancement of the basic design; (iii) a systematic approach to assessing LivCloud on different public cloud IaaS, achieved by using a combination of up-to-date technologies to build LivCloud with the interoperability challenge in mind, implementing and discussing the results of the basic design stage on Amazon IaaS, and implementing both stages of the approach on the Packet bare-metal cloud.
    To sum up, the thesis introduces a live cloud migration approach that is systematically designed and evaluated on uncontrolled environments, Amazon and Packet bare metal. In contrast to other approaches, it clearly highlights how to perform and secure the migration between our local network and these environments.
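
    For context, most live VM migration builds on an iterative pre-copy scheme: memory is copied while the VM keeps running, re-dirtied pages are re-sent, and only a small remainder is transferred during a brief stop-and-copy pause. The Python sketch below models that loop with invented numbers; it is a conceptual illustration, not LivCloud itself, and the encryption and authentication of the channel that LivCloud adds are not shown.

```python
# Conceptual model of iterative pre-copy live migration. Page counts,
# dirty rate and threshold are invented for illustration.

def migrate(total_pages=10000, dirty_rate=0.05, threshold=50, max_rounds=30):
    """Copy memory while the VM runs; stop-and-copy the small remainder."""
    to_send = total_pages                  # round 0: send every page
    for rnd in range(max_rounds):
        if to_send <= threshold:           # remainder is small enough:
            break                          # pause the VM for the final copy
        # While `to_send` pages cross the wire, the running VM keeps
        # writing to memory and re-dirties a fraction of them.
        redirtied = int(to_send * dirty_rate)
        print(f"round {rnd}: sent {to_send} pages, {redirtied} re-dirtied")
        to_send = redirtied
    # Downtime is proportional to the pages left for the final copy.
    print(f"stop-and-copy of {to_send} pages -> short downtime")

migrate()
# round 0: sent 10000 pages, 500 re-dirtied
# round 1: sent 500 pages, 25 re-dirtied
# stop-and-copy of 25 pages -> short downtime
```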