
    Transition to SDN is HARMLESS: Hybrid ARchitecture for Migrating Legacy Ethernet Switches to SDN

    Get PDF
    Software-Defined Networking (SDN) offers a new way to operate, manage, and deploy communication networks and to overcome many long-standing problems of legacy networking. However, widespread SDN adoption has not occurred yet due to the lack of a viable incremental deployment path and the relatively immature present state of SDN-capable devices on the market. While continuously evolving software switches may alleviate the operational issues of commercial hardware-based SDN offerings, namely lagging standards compliance, performance regressions, and poor scaling, they fail to match the cost-efficiency and port density of hardware appliances. In this paper, we propose HARMLESS, a new SDN switch design that seamlessly adds SDN capability to legacy network gear by emulating the OpenFlow switch OS in a separate software switch component. In this way, HARMLESS enables a quick and easy leap into SDN, combining the rapid innovation and upgrade cycles of software switches with the port density and cost-efficiency of hardware-based appliances into a fully dataplane-transparent and vendor-neutral solution. HARMLESS incurs an order of magnitude smaller initial expenditure for an SDN deployment than existing turnkey vendor SDN solutions while, at the same time, yielding matching or even better data plane performance for smaller enterprises.
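    As a rough, hedged illustration of the kind of port mapping such a design implies (this is not the authors' implementation; the function and field names below are invented), each front-panel port of a legacy switch could be steered over a dedicated VLAN to a distinct logical port of the co-located software switch that runs the OpenFlow agent:

```python
# Illustrative sketch only: map each front-panel port of a legacy switch to a
# dedicated VLAN so that a co-located software switch (running the OpenFlow
# agent) can tell the ports apart. Port counts and the VLAN base are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class PortMapping:
    physical_port: int   # port on the legacy hardware switch
    vlan_id: int         # VLAN used to tunnel this port's traffic
    virtual_port: int    # logical port exposed to the OpenFlow pipeline


def build_port_mappings(num_ports: int, uplink_port: int, vlan_base: int = 100):
    """Assign one VLAN per front-panel port, excluding the uplink that
    carries the aggregated, tagged traffic to the software switch."""
    mappings = []
    virtual_port = 1
    for port in range(1, num_ports + 1):
        if port == uplink_port:
            continue  # the uplink trunks all per-port VLANs to the software switch
        mappings.append(PortMapping(port, vlan_base + port, virtual_port))
        virtual_port += 1
    return mappings


if __name__ == "__main__":
    for m in build_port_mappings(num_ports=48, uplink_port=48):
        print(f"phys port {m.physical_port:2d} -> VLAN {m.vlan_id} -> OF port {m.virtual_port}")
```

    The only point of the sketch is that the software switch can then apply OpenFlow processing per physical port while the legacy hardware continues to provide the port density.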

    Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art

    Full text link
    Software-Defined Networking (SDN) is an evolutionary networking paradigm that has been adopted by large network and cloud providers, among them the tech giants. However, embracing a new and futuristic paradigm as an alternative to the well-established and mature legacy networking paradigm requires considerable time, financial resources, and technical expertise. Consequently, many enterprises cannot afford it. A compromise solution is a hybrid networking environment (a.k.a. Hybrid SDN, or hSDN) in which SDN functionality is leveraged while existing traditional network infrastructure is accommodated. Recently, hSDN has been seen as a viable networking solution for a diverse range of businesses and organizations, and the body of literature on hSDN research has grown remarkably. On this account, we present this paper as a comprehensive state-of-the-art survey that examines hSDN from many different perspectives.

    Virtualized Network Infrastructures: Performance Analysis, Design and Implementation

    Get PDF
    In recent decades, there has been a tremendous evolution in the traffic on the Internet and on enterprise networks. From the beginning, networks have witnessed two phenomena: on the one hand, the birth of a multitude of applications, each posing different requirements; on the other hand, the explosion of personal mobile networking, with an ever-increasing number of devices requiring connectivity. These trends have resulted in increased network complexity, leading to difficult management and high costs. At the same time, evolution in the Information Technology (IT) field led to the birth of cloud computing and the growth of virtualization technologies, opening new opportunities not only for companies but also for individuals (be they PC or mobile users), as well as for Service and Infrastructure Providers. Emerging technologies such as Software Defined Networking (SDN) and Network Functions Virtualization (NFV) seem to be promising solutions to today's network problems, but standardized solutions have yet to emerge, and it is still unclear how to combine them properly to achieve flexible and proactive control and management. This Ph.D. thesis focuses on the exploration of the three planes of functionality into which software-defined (computer) networks can be divided: the data, control, and management planes. In this thesis, we present insights into several aspects of network virtualization, starting from the virtual network performance of cloud computing infrastructures, and introducing the Service Function Chaining (SFC) mechanism, discussing its analysis, design, and implementation. In particular, the original contributions of this dissertation concern (i) performance evaluation of the OpenStack cloud platform (the data plane); (ii) the design and implementation of a stateful SDN controller for dynamic SFC (the control plane); and (iii) the design, implementation, and performance analysis of a proposed Intent-based approach for dynamic SFC (the management plane).
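    To make the Intent-based SFC contribution more concrete, the following is a minimal, hypothetical sketch of how a high-level chain intent (e.g. firewall, then DPI, then NAT) might be declared and expanded into an ordered list of hops; the class, catalogue, and addresses are invented for illustration and are not taken from the thesis:

```python
# Illustrative sketch: a declarative service function chain (SFC) "intent"
# expanded into an ordered hop list. Function names and addresses are invented.

from dataclasses import dataclass
from typing import List


@dataclass
class Intent:
    name: str
    src: str          # traffic source (e.g. a tenant subnet)
    dst: str          # traffic destination
    chain: List[str]  # ordered service function types to traverse


# Hypothetical catalogue mapping function types to running instances.
CATALOGUE = {
    "firewall": "10.0.0.11",
    "dpi": "10.0.0.12",
    "nat": "10.0.0.13",
}


def compile_intent(intent: Intent) -> List[str]:
    """Turn a high-level intent into the ordered list of hops that a
    controller would have to stitch together with flow rules."""
    missing = [fn for fn in intent.chain if fn not in CATALOGUE]
    if missing:
        raise ValueError(f"no instance available for: {missing}")
    return [intent.src] + [CATALOGUE[fn] for fn in intent.chain] + [intent.dst]


if __name__ == "__main__":
    web_intent = Intent("web-traffic", "192.168.1.0/24", "203.0.113.10",
                        ["firewall", "dpi", "nat"])
    print(" -> ".join(compile_intent(web_intent)))
```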

    Building the Future Internet through FIRE

    Get PDF
    The Internet as we know it today is the result of continuous activity aimed at improving network communications, end-user services, computational processes, and information technology infrastructures. The Internet has become a critical infrastructure for humankind, offering complex networking services and end-user applications that together have transformed many aspects of our lives, especially economic ones. Recently, with the advent of new paradigms, progress in wireless technology, sensor networks, and information systems, and the inexorable shift towards an everything-connected paradigm, first known as the Internet of Things and more recently envisioned as the Internet of Everything, a data-driven society has been created. In a data-driven society, productivity, knowledge, and experience depend on increasingly open, dynamic, interdependent, and complex Internet services. The challenge for the design of the Future Internet is to build robust enabling technologies and to implement and deploy adaptive systems that create business opportunities in the face of increasing uncertainty and emergent systemic behaviors, where humans and machines seamlessly cooperate.

    Communication between nodes for autonomic and distributed management

    Get PDF
    Joint MAP-i doctoral programme in Informatics (Doutoramento conjunto MAPi em Informática). Over the last decade, the most widespread approaches to traditional network management were based on the Simple Network Management Protocol (SNMP) or the Common Management Information Protocol (CMIP). However, both have several scalability problems due to their centralized nature. Although distributed management approaches exhibit better scalability, they still underperform with regard to communication costs, autonomy, extensibility, flexibility, robustness, and cooperation between network nodes. This cooperation normally requires excessive overhead for the synchronization and dissemination of management information across the network. For the emerging dynamic and large-scale networking environments envisioned in Next Generation Networks (NGNs), exponential growth in the number of network devices, in mobile communications, and in application demands is expected. Thus, a high degree of management automation is an important requirement, along with new mechanisms that promote it optimally and efficiently, taking into account the need for close cooperation between nodes. Current approaches to self- and autonomic management allow the network administrator to manage large areas and to react quickly and efficiently to unexpected problems. Management functionality should be delegated to a self-organized plane operating within the network, decreasing network complexity and the flow of control information, rather than relying on centralized or external servers. This thesis proposes and develops a communication framework for distributed network management which integrates a set of mechanisms for initial communication, exchange of management information, network (re)organization, and data dissemination, attempting to meet the autonomic and distributed management requirements posed by NGNs. The mechanisms are lightweight and portable, can operate on different hardware architectures, and include all the requirements needed to maintain efficient communication between nodes and thereby ensure autonomic network management. Moreover, these mechanisms were explored under diverse network conditions and events, such as device and link failures and varying traffic and network loads and requirements. The results obtained through simulation and real experimentation show that, compared to the baseline mechanisms analyzed, the proposed mechanisms provide lower convergence times, a smaller overhead impact on the network, faster dissemination of management information, increased stability and quality of node associations, and support for efficient delivery of data. Finally, all the mechanisms for communication between nodes proposed in this thesis, which support and distribute the management information and network control functionality, were devised and developed to operate in completely decentralized scenarios.
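    The abstract lists data dissemination between nodes as one of the framework's mechanisms. Purely as a hedged illustration of what decentralized dissemination can look like, here is a generic gossip-style sketch (not the thesis's protocol; the topology, fanout, and names are invented):

```python
# Illustrative sketch (not the thesis's protocol): round-based gossip
# dissemination of a management record across a decentralized node set.

import random


def gossip_rounds(neighbours, seed_node, fanout=2, max_rounds=10):
    """Spread a management record starting at seed_node.
    `neighbours` maps node -> list of adjacent nodes. Returns the number of
    rounds executed before everyone is informed or the spread stops growing."""
    informed = {seed_node}
    for round_no in range(1, max_rounds + 1):
        newly = set()
        for node in informed:
            picks = random.sample(neighbours[node], min(fanout, len(neighbours[node])))
            newly.update(picks)
        before = len(informed)
        informed |= newly
        if len(informed) == len(neighbours) or len(informed) == before:
            return round_no
    return max_rounds


if __name__ == "__main__":
    # Small ring topology used purely for demonstration.
    ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
    print("rounds to full coverage:", gossip_rounds(ring, seed_node=0))
```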

    Squeezing the most benefit from network parallelism in datacenters

    Get PDF
    One big non-blocking switch is one of the most powerful and pervasive abstractions in datacenter networking. As Moore's law begins to wane, using parallelism to scale out processing units, rather than scaling them up, is becoming exceedingly popular. The one-big-switch abstraction, for example, is typically implemented by leveraging massive degrees of parallelism behind the scenes. In particular, in today's datacenters, which exhibit a high degree of multi-pathing, each logical path between a communicating pair in the one-big-switch abstraction is mapped to a set of paths that can carry traffic in parallel. Similarly, each one-big-switch function, such as a firewall, is mapped to a set of distributed hardware and software switches. Efficiently deploying this pool of network connectivity and preserving the functional correctness of network functions, in spite of the parallelism, are both challenging. Efficiently balancing the load among multiple paths is hard because microbursts, responsible for the majority of packet loss in datacenters today, usually last for only a few microseconds. Even the fastest traffic engineering schemes today have control loops that are several orders of magnitude slower (a few milliseconds to a few seconds) and are therefore ineffective in controlling microbursts. Correctly implementing network functions in the face of parallelism is hard because the distributed set of elements that implement a one-big-switch abstraction in parallel inevitably can have inconsistent states that may cause them to behave differently from a single physical switch. The first part of this thesis presents DRILL, a datacenter fabric for Clos networks which performs micro load balancing to distribute load as evenly as possible on microsecond timescales. To achieve this, DRILL employs packet-level decisions at each switch, based on local queue occupancies and randomized algorithms, to distribute load. Despite making per-packet forwarding decisions, by enforcing tight control on queue occupancies DRILL keeps the degree of packet reordering low. DRILL adapts to topological asymmetry (e.g. failures) in Clos networks by decomposing the network into symmetric components. Using a detailed switch hardware model, we simulate DRILL and show that it outperforms recent edge-based load balancers, particularly in tail latency under heavy load: for example, under 80% load it reduces the 99.99th percentile of flow completion times of Presto and CONGA by 32% and 35%, respectively. Finally, we analyze DRILL's stability and throughput-efficiency. In the second part, we focus on the correctness of the one-big-switch abstraction's implementation. We first show that naively using parallelism to scale networking elements can cause incorrect behavior. For example, we show that an IDS system which operates correctly as a single network element can erroneously and permanently block hosts when it is replicated. We then provide a system, COCONUT, for seamless scale-out of network forwarding elements; that is, an SDN application programmer can program what functionally appears to be a single forwarding element, which may nonetheless be replicated behind the scenes. To do this, we identify the key property for seamless scale-out, weak causality, and guarantee it through a practical and scalable implementation of vector clocks in the data plane. We build a prototype of COCONUT and experimentally demonstrate its correct behavior. We also show that its abstraction enables a more efficient implementation of seamless scale-out than a naive baseline. Finally, reasoning about network behavior requires a model that enables us to distinguish between observable and unobservable events. So, in the last part, we present the Input/Output Automaton (IOA) model and formalize networks' behaviors. Using this framework, we prove that COCONUT enables seamless scale-out of networking elements, i.e., the user-perceived behavior of any COCONUT element implemented with a distributed set of concurrent replicas is provably indistinguishable from its singleton implementation.
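    As a hedged illustration of the kind of per-packet decision described for DRILL (randomized choice driven by local queue occupancies), the sketch below samples a few candidate output ports at random, also considers the best port remembered from the previous decision, and forwards to the least-occupied one; the class name and parameter values are illustrative and not taken from the thesis:

```python
# Illustrative sketch of per-packet micro load balancing driven by local queue
# occupancies: sample a few candidate output ports at random, also consider the
# best port remembered from the previous decision, and pick the least loaded.
# Parameter values are illustrative, not taken from the paper.

import random


class MicroLoadBalancer:
    def __init__(self, queue_lengths, samples=2):
        self.queue_lengths = queue_lengths  # local per-port queue occupancies
        self.samples = samples
        self.last_best = 0                  # memory of the previous best port

    def pick_port(self):
        candidates = random.sample(range(len(self.queue_lengths)), self.samples)
        candidates.append(self.last_best)
        best = min(candidates, key=lambda p: self.queue_lengths[p])
        self.last_best = best
        return best


if __name__ == "__main__":
    lb = MicroLoadBalancer(queue_lengths=[5, 0, 7, 2])
    for _ in range(4):
        port = lb.pick_port()
        lb.queue_lengths[port] += 1         # model the packet we just enqueued
        print("forwarded on port", port, "queues:", lb.queue_lengths)
```

    Because every decision uses only local queue state, this style of choice can run at packet timescales, which is why it can react to microbursts that slower control loops miss.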

    On the Edge of Secure Connectivity via Software-Defined Networking

    Get PDF
    Securing communication in computer networks has been an essential requirement ever since the Internet as we know it today came into being. One of the best-known and most common methods of secure communication is a Virtual Private Network (VPN) solution, mainly operating with the IP Security (IPsec) protocol suite originally published in 1995 (RFC 1825). The Internet, and networks in general, have clearly changed dramatically since then. In particular, the onset of the Cloud and the Internet of Things (IoT) has placed new demands on secure networking. Even though the IPsec suite has been updated over the years, it is starting to reach the limits of its capabilities in its present form. Recent advances in networking have produced Software-Defined Networking (SDN), which decouples the control and data planes and thus centralizes network control. SDN provides arbitrary network topologies and elastic packet forwarding that have enabled useful innovations at the network level. This thesis studies SDN-powered VPN networking and explains the benefits of this combination. Even though the main context is the Cloud, the approaches described here are also valid for non-Cloud operation and are thus suitable for a variety of other use cases for both SMEs and large corporations. In addition to IPsec, open-source TLS-based VPN solutions (e.g. OpenVPN) are often used to establish secure tunnels. This research shows that a full-mesh VPN network between multiple sites can be provided using OpenVPN and utilized by SDN to create a seamless, resilient layer-2 overlay for multiple purposes, including the Cloud. However, such a VPN tunnel suffers from resiliency problems and cannot meet increasing availability requirements. The network setup proposed here is similar to Software-Defined WAN (SD-WAN) solutions and is extremely useful for applications with strict resiliency and security requirements, even when best-effort ISP connectivity is used. IPsec is still preferred over OpenVPN for some use cases, especially by smaller enterprises. Therefore, this research also examines the possibilities for high availability, load balancing, and faster operation for IPsec. We present a novel approach that separates the Internet Key Exchange (IKE) and the Encapsulating Security Payload (ESP) in SDN fashion so that they operate on separate devices. This allows central management of IKE while several separate ESP devices concentrate on the heavy processing. Initially, our research relied on software solutions for ESP processing. Despite the ingenuity of the architectural concept, and although it provided high availability and good load balancing, it offered no anti-replay protection. Since anti-replay protection is vital for secure communication, another approach was required. It thus became clear that the ideal solution for large-scale IPsec tunneling would be a pool of fast ESP devices with IKE operation confined to a single centralized device. This would obviate the need for load balancing but still allow high availability via the device pool. The focus of this research therefore turned to pure hardware solutions on an FPGA, and to their feasibility and production readiness in the Cloud context. Our research shows that an FPGA works fluently in an SDN network as a standalone IPsec accelerator for ESP packets. The proposed architecture achieves 10 Gbps throughput with latency below 10 µs, which makes it especially efficient for data center use, where it meets increased performance and latency requirements. The high demands of network packet processing can be met using several different approaches, not just those presented in this thesis. Global network traffic is growing all the time, so the development of more efficient methods and devices is inevitable. The increasing number of IoT devices will generate a great deal of network traffic utilising Cloud infrastructures in the near future. Based on the latest research, once SDN and hardware acceleration have become fully integrated into the Cloud, the future of secure networking looks promising. SDN technology will open up a wide range of new possibilities for data forwarding, while hardware acceleration will satisfy the increased performance requirements. Although it remains to be seen whether SDN can answer all the requirements for performance, high availability, and resiliency, this thesis shows that it is a very capable technology, even though we have explored only a small fraction of its capabilities.
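    As a hedged sketch of the traffic split that separating IKE from ESP implies, the snippet below classifies IKE control traffic (UDP ports 500/4500) toward a single central device and ESP traffic (IP protocol 50) toward a pool of accelerators, keeping each security association on one accelerator so per-SA state such as anti-replay counters stays local; the rule format and device names are invented and not tied to any particular controller API:

```python
# Illustrative sketch: steer IPsec control traffic (IKE) to a single central
# device and bulk ESP traffic to a pool of accelerators. Only the protocol and
# port numbers (UDP 500/4500 for IKE, IP protocol 50 for ESP) are standard;
# the packet representation and device names are hypothetical.

IKE_DEVICE = "ike-central"
ESP_POOL = ["esp-accel-1", "esp-accel-2", "esp-accel-3"]


def classify(packet):
    """Return the device that should handle this packet."""
    if packet.get("ip_proto") == 17 and packet.get("udp_dst") in (500, 4500):
        return IKE_DEVICE  # IKE (and IKE over NAT-T) goes to the central node
    if packet.get("ip_proto") == 50:
        # Keep each security association (identified by its SPI) on one
        # accelerator so per-SA state, e.g. sequence numbers, stays local.
        return ESP_POOL[packet.get("spi", 0) % len(ESP_POOL)]
    return "default-forwarding"


if __name__ == "__main__":
    for pkt in [{"ip_proto": 17, "udp_dst": 500},
                {"ip_proto": 50, "spi": 0x1A2B},
                {"ip_proto": 6, "tcp_dst": 443}]:
        print(pkt, "->", classify(pkt))
```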