
    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his or her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need has been recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable has been produced at the same time as the process of contacting users, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can configure as software routers or end nodes, onto which they can load the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
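
    To make the idea of provisioning a processing slice resource concrete, the following is a minimal, illustrative sketch only, assuming libvirt as one example of the kind of VM management API such a study might compare; the domain XML, connection URI and names are assumptions, not FEDERICA project artefacts.

```python
# Illustrative sketch: provisioning one virtual machine of a slice through
# libvirt, one of several possible VM management APIs. The XML template and
# connection URI are assumptions for demonstration, not project artefacts.
import libvirt

NODE_XML = """
<domain type='kvm'>
  <name>slice42-node1</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

def provision_slice_node(uri: str = "qemu:///system") -> None:
    conn = libvirt.open(uri)              # connect to the hypervisor
    try:
        dom = conn.defineXML(NODE_XML)    # register the virtual node
        dom.create()                      # boot it so a software router can be installed
        print("node state:", dom.state())
    finally:
        conn.close()

if __name__ == "__main__":
    provision_slice_node()
```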

    Orchestration of IT/Cloud and Networks: From Inter-DC Interconnection to SDN/NFV 5G Services

    The so-called 5G networks promise to be the foundation for the deployment of advanced services, conceived around the joint allocation and use of heterogeneous resources, including network, computing and storage. Resources are placed at remote locations constrained by the different service requirements, resulting in cloud infrastructures (as pools of resources) that need to be interconnected. The automation of the provisioning of such services relies on generalized orchestration, defined as the coherent coordination of heterogeneous systems, applied to common cases such as those involving heterogeneous network domains in terms of control- or data-plane technologies, or cloud and network resources. Although cloud-computing platforms do take into account the need to interconnect remote virtual machine instances, they mostly rely on managing L2 overlays over L3 (IP). The integration with transport networks is still not fully achieved, including leveraging the advances in software-defined networking and transmission. We start with an overview of network orchestration, considering different models; we extend them to take into account cloud management while mentioning relevant existing initiatives, and conclude with the NFV architecture.
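
    As a rough illustration of "generalized orchestration" in the sense used above, here is a hypothetical sketch of a coordinator that jointly allocates compute in two data centres and then requests the connectivity between them; all class and method names are assumptions, not the paper's interfaces.

```python
# Hypothetical sketch: coherently coordinating a cloud manager and a network
# controller when a service needs VMs in two data centres plus connectivity.
# All interfaces here are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    vcpus: int
    src_dc: str
    dst_dc: str
    bandwidth_mbps: int

class CloudManager:
    def allocate_vm(self, dc: str, vcpus: int) -> str:
        return f"vm-{dc}-{vcpus}"            # placeholder for a real cloud API call

class NetworkController:
    def connect(self, a: str, b: str, bw: int) -> str:
        return f"path-{a}-{b}-{bw}Mbps"      # placeholder for an SDN/transport request

class Orchestrator:
    """Coordinates heterogeneous systems so the joint allocation appears as one action."""
    def __init__(self, cloud: CloudManager, net: NetworkController):
        self.cloud, self.net = cloud, net

    def deploy(self, req: ServiceRequest) -> dict:
        vm_a = self.cloud.allocate_vm(req.src_dc, req.vcpus)
        vm_b = self.cloud.allocate_vm(req.dst_dc, req.vcpus)
        link = self.net.connect(req.src_dc, req.dst_dc, req.bandwidth_mbps)
        return {"vms": [vm_a, vm_b], "link": link}

print(Orchestrator(CloudManager(), NetworkController())
      .deploy(ServiceRequest(2, "dc-madrid", "dc-milan", 500)))
```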

    Introducing Development Features for Virtualized Network Services

    Network virtualization and the softwarization of network functions are trends aiming at higher network efficiency, cost reduction and agility. They are driven by the evolution of Software Defined Networking (SDN) and Network Function Virtualization (NFV). This shows that software will play an increasingly important role within telecommunication services, which were previously dominated by hardware appliances. Service providers can benefit from this, as it enables faster introduction of new telecom services, combined with an agile set of possibilities to optimize and fine-tune their operations. However, the provided telecom services can only evolve if adequate software tools are available. In this article, we explain how the development, deployment and maintenance of such an SDN/NFV-based telecom service puts specific requirements on the platform providing it. A Software Development Kit (SDK) is introduced, allowing service providers to adequately design, test and evaluate services before they are deployed in production, and also to update them during their lifetime. This continuous cycle between development and operations, a concept known as DevOps, is a well-known strategy in software development. To extend it to SDN/NFV-based services, the functionality provided by traditional cloud platforms is not yet sufficient. By giving an overview of the currently available tools and their limitations, the gaps in DevOps for SDN/NFV services are highlighted. The benefit of such an SDK is illustrated by a secure content delivery network service (enhanced with deep packet inspection and elastic routing capabilities). With this use case, the dynamics between developing and deploying a service are further illustrated.
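
    To give a feel for the kind of pre-deployment support such an SDK could offer, here is a minimal sketch of a service descriptor with an offline validation check a developer might run in the DevOps loop; the descriptor fields, function names and the 16 GiB budget are assumptions, not the SDK described in the article.

```python
# Illustrative only: a toy service descriptor and a cheap offline check an
# SDN/NFV SDK could offer before a service goes to production.
# Field names, limits and helpers are assumptions for demonstration.
from dataclasses import dataclass, field

@dataclass
class VNF:
    name: str
    image: str
    cpu: int
    memory_mb: int

@dataclass
class ServiceDescriptor:
    name: str
    vnfs: list[VNF] = field(default_factory=list)
    forwarding_graph: list[tuple[str, str]] = field(default_factory=list)

def validate(sd: ServiceDescriptor) -> list[str]:
    """Offline checks a developer can run before deploying to a test bed."""
    errors = []
    names = {v.name for v in sd.vnfs}
    for src, dst in sd.forwarding_graph:
        if src not in names or dst not in names:
            errors.append(f"link {src}->{dst} references an undefined VNF")
    if sum(v.memory_mb for v in sd.vnfs) > 16384:
        errors.append("service exceeds the assumed 16 GiB test-bed budget")
    return errors

cdn = ServiceDescriptor(
    name="secure-cdn",
    vnfs=[VNF("dpi", "dpi:latest", 2, 2048), VNF("cache", "cache:latest", 2, 4096)],
    forwarding_graph=[("dpi", "cache")],
)
print(validate(cdn) or "descriptor looks deployable")
```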

    A network QoS management architecture for virtualization environments

    Network quality of service (QoS) and its management are concerned with providing, guaranteeing and reporting properties of data flows within computer networks. For the past two decades, virtualization has been becoming a very popular tool in data centres, yet without network QoS management capabilities. With virtualization, the management focus shifts from physical components and topologies towards virtual infrastructures (VIs) and their purposes. VIs are designed and managed as independent, isolated entities. Without network QoS management capabilities, VIs cannot offer the same services and service levels as physical infrastructures can, leaving VIs at a disadvantage with respect to applicability and efficiency. This thesis closes this gap and develops a management architecture enabling network QoS management in virtualization environments. First, requirements are derived from real-world scenarios, yielding a validation reference for the proposed architecture. After that, a life cycle for VIs and a taxonomy for network links and virtual components are introduced, to situate the network QoS management task within the general management of virtualization environments and to enable the creation of technology-specific adaptors for integrating the technologies and sub-services used in virtualization environments. The core aspect shaping the proposed management architecture is a management loop and its corresponding strategy for identifying and ordering sub-tasks. Finally, a prototypical implementation showcases that the presented management approach is suited for network QoS management and enforcement in virtualization environments. The architecture fulfils its purpose and satisfies all identified requirements. Ultimately, network QoS management is one amongst many aspects of managing virtualization environments, and the architecture presented herein exposes interfaces to other management areas, whose integration is left as future work.
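
    The following is a minimal sketch of what such a management loop could look like for network QoS in a virtual infrastructure, in the spirit of a monitor/analyse/plan/execute cycle; the link names, thresholds and the measurement and enforcement hooks are assumptions, not the thesis prototype.

```python
# Minimal sketch of a QoS management loop for a virtual infrastructure.
# Thresholds, link names and the enforcement hook are assumptions.
import time

SLA_MBPS = {"vi1-link-a": 200, "vi1-link-b": 100}   # guaranteed rates per virtual link

def measure(link: str) -> float:
    """Placeholder for a technology-specific adaptor (e.g. switch counters)."""
    return 150.0

def enforce(link: str, rate_mbps: int) -> None:
    """Placeholder for pushing a rate limit or priority to the underlying technology."""
    print(f"re-shaping {link} to {rate_mbps} Mbit/s")

def management_loop(cycles: int = 3, period_s: float = 1.0) -> None:
    for _ in range(cycles):
        for link, guaranteed in SLA_MBPS.items():     # monitor
            load = measure(link)
            if load > 0.9 * guaranteed:               # analyse: SLA at risk
                enforce(link, guaranteed)             # plan + execute (collapsed here)
        time.sleep(period_s)

if __name__ == "__main__":
    management_loop()
```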

    Achieving Adaptation Through Live Virtual Machine Migration in Two-Tier Clouds

    This thesis presents a model-driven approach for application deployment and management in two-tier heterogeneous cloud environments. For application deployment, we introduce the architecture, the services and the domain-specific language that abstract common features of multi-cloud deployments. By leveraging the architecture and the language, application deployers author a deployment model that captures the high-level structure of the application. The deployment model is then translated into deployment workflows on specific clouds. As a use case, we introduce a live VM migration framework that maintains application quality of service through VM migrations across two-tier clouds. The proposed framework can monitor the performance of the applications and their underlying infrastructure, and plan and execute VM migrations to eliminate hotspots in a data center. We evaluate both the application deployment architecture and the live migration framework on public clouds.
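
    As a rough illustration of the hotspot-elimination idea, here is a hedged sketch of a planner that picks the busiest host above a CPU threshold and moves its heaviest VM to the least-loaded host, possibly in the other tier; the data, threshold and migrate() hook are assumptions, not the thesis framework.

```python
# Illustrative sketch of monitor-plan-migrate: detect a CPU hotspot and move its
# heaviest VM to the least-loaded host. Data, threshold and hooks are assumptions.
HOSTS = {
    "private-h1": {"cpu": 0.92, "vms": {"vm-a": 0.40, "vm-b": 0.35}},
    "private-h2": {"cpu": 0.55, "vms": {"vm-c": 0.30}},
    "public-h1":  {"cpu": 0.20, "vms": {}},
}
CPU_HOTSPOT = 0.85

def migrate(vm: str, src: str, dst: str) -> None:
    print(f"live-migrating {vm}: {src} -> {dst}")     # placeholder for a cloud API call

def plan_migrations() -> None:
    for src, info in HOSTS.items():
        if info["cpu"] < CPU_HOTSPOT or not info["vms"]:
            continue                                   # not a hotspot
        vm = max(info["vms"], key=info["vms"].get)     # heaviest VM on the hot host
        dst = min(HOSTS, key=lambda h: HOSTS[h]["cpu"])
        if dst != src:
            migrate(vm, src, dst)

plan_migrations()
```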

    Paving the path towards platform engineering using a comprehensive reference model

    Amidst the growing popularity of platform engineering, which promises improved productivity and an enhanced developer experience through an internal developer platform (IDP), this research addresses the prevalent challenge of a lack of shared understanding in the field and the complications in defining effective, customized strategies. It introduces a definitive Platform Engineering Reference Model (PE-RM), based on the Open Distributed Processing reference model (ODP-RM) framework, to provide a common understanding. This model offers a structured framework for software organizations to create tailored platform engineering strategies and realize the full potential of platform engineering. The reference model is validated by conducting a case study in which a contextual design and a technical implementation guided by the reference model are proposed. The case study offers guidance in designing platform engineering in the context of a software organization. Furthermore, it showcases how to construct a technical platform engineering implementation, including experiments exposing the productivity improvements and applicability of the implementation. By facilitating a shared vocabulary and providing a roadmap for implementation, this research aims to mitigate prevailing complexities and accelerate the adoption and effectiveness of platform engineering across organizations.

    CERN openlab Whitepaper on Future IT Challenges in Scientific Research

    This whitepaper describes the major IT challenges in scientific research at CERN and at several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.

    Next Generation Cloud Computing: New Trends and Research Directions

    The landscape of cloud computing has changed significantly over the last decade. Not only have more providers and service offerings crowded the space, but cloud infrastructure that was traditionally limited to single-provider data centers is now evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefit of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space and self-learning systems. Finally, we lay out a roadmap of challenges that will need to be addressed for realising the potential of next generation cloud systems.

    Towards Model-Driven Provisioning, Deployment, Monitoring, and Adaptation of Multi-cloud Systems


    Unified Management of Applications on Heterogeneous Clouds

    The diversity with which cloud providers offer their services, each defining its own interfaces and its own quality-of-service and usage agreements, hinders portability and interoperability between providers, leading to the well-known vendor lock-in problem. The heterogeneity that exists between the different cloud abstraction levels, such as IaaS and PaaS, makes developing agnostic applications that are independent of the providers and services on which they will be deployed still a challenge. It also limits the possibility of migrating the components of running cloud applications to new providers. This lack of homogeneity likewise hinders the development of application operation processes that are robust against the failures that can occur across different providers and abstraction levels. As a result, applications can remain tied to the providers for which they were designed, limiting developers' ability to react to changes in the providers or in the applications themselves. This thesis defines trans-cloud as a new dimension that unifies the management of different providers and service levels, IaaS and PaaS, under a single API and uses the TOSCA standard to describe agnostic, portable applications with automated processes, for example for deployment. In addition, building on TOSCA's structured topologies, trans-cloud proposes a generic algorithm for migrating components of running applications. Moreover, trans-cloud unifies error handling, allowing robust, agnostic processes for managing the life cycle of applications, independently of the providers and service levels on which they run. Finally, the use cases and the results of the experiments used to validate each of these proposals are presented.
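
    To illustrate the idea of a single API over heterogeneous providers and service levels, here is a hypothetical sketch of a provider-agnostic deployment facade with per-component adapters; the adapter interface, provider names and topology format are assumptions, not the actual trans-cloud API.

```python
# Hypothetical sketch of a "trans-cloud"-style facade: one deploy call, with
# IaaS- or PaaS-specific details pushed into adapters chosen per component.
# Interfaces and provider names are assumptions for illustration.
from abc import ABC, abstractmethod

class ProviderAdapter(ABC):
    @abstractmethod
    def deploy(self, component: dict) -> str: ...

class IaaSAdapter(ProviderAdapter):
    def deploy(self, component: dict) -> str:
        return f"VM for {component['name']} on {component['location']}"

class PaaSAdapter(ProviderAdapter):
    def deploy(self, component: dict) -> str:
        return f"app {component['name']} pushed to {component['location']}"

ADAPTERS = {"iaas": IaaSAdapter(), "paas": PaaSAdapter()}

def deploy_topology(topology: list[dict]) -> list[str]:
    """Deploy a TOSCA-like topology component by component, provider-agnostically."""
    return [ADAPTERS[c["level"]].deploy(c) for c in topology]

topology = [
    {"name": "db",  "level": "iaas", "location": "provider-a"},
    {"name": "web", "level": "paas", "location": "provider-b"},
]
print(deploy_topology(topology))
```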