8 research outputs found

    Scalable topological forwarding and routing policies in RINA-enabled programmable data centers

    This is the peer-reviewed version of the following article: Leon Gaixas S, Perelló J, Careglio D, Grasa E, López DR, Aranda PA. Scalable topological forwarding and routing policies in RINA-enabled programmable data centers. Trans Emerging Tel Tech. 2017;28:e3256, which has been published in final form at DOI: 10.1002/ett.3256. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.
    Given the current expansion of cloud computing, the expected advent of the Internet of Things, and the requirements of future fifth-generation network infrastructures, significantly larger pools of computational and storage resources will soon be required. This emphasizes the need for more scalable data centers capable of providing such an amount of resources in a cost-effective way. A quick look into today's commercial data centers shows that they tend to rely on variations of well-defined leaf-spine/Clos data center network (DCN) topologies, offering low latency, ultrahigh bisectional bandwidth, and enhanced reliability against concurrent failures. However, DCNs are typically restricted by the use of the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, thus suffering from limited routing scalability. In this work, we study the benefits that replacing TCP/IP with the Recursive InterNetwork Architecture (RINA) can bring to commercial DCNs, focusing on forwarding and routing scalability. We quantitatively evaluate the benefits that RINA solutions can yield against those based on TCP/IP and highlight how, by deploying RINA, topological routing solutions can further improve the efficiency of the network. To this goal, we propose a rule-and-exception forwarding policy tailored to the characteristics of several DCN variants, enabling fast forwarding decisions with only neighbors' information. Upon failures, only a few exceptions are necessary, and their computation can also exploit the known topology. Extensive numerical results show that the proposed policy's requirements depend mainly on the number of neighbors and concurrent failures in the DCN rather than on its size, dramatically reducing the amount of forwarding and routing information stored at DCN nodes.
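
    To make the rule-and-exception idea concrete, the following minimal Python sketch shows the shape such a forwarding decision could take on a leaf switch in a two-tier leaf-spine DCN: exceptions installed after failures are checked first, and otherwise the next hop follows directly from the destination's topological location and the switch's own neighbourhood. The (leaf_id, host_index) addressing, class names, and ECMP-style spine choice are illustrative assumptions, not the policy specified in the paper.

```python
from typing import Dict, List, Optional, Tuple

Addr = Tuple[int, int]   # topological address of a host: (leaf_id, host_index)


class LeafForwarder:
    """Forwarding decision at one leaf switch of a two-tier leaf-spine DCN (illustrative)."""

    def __init__(self, leaf_id: int, spine_ports: List[int], host_port_of: Dict[int, int]):
        self.leaf_id = leaf_id
        self.spine_ports = spine_ports          # uplinks, one per spine switch
        self.host_port_of = host_port_of        # host_index -> local access port
        self.exceptions: Dict[Addr, int] = {}   # installed only upon failures

    def next_port(self, dst: Addr, flow_hash: int) -> Optional[int]:
        # 1. Exceptions override the topological rule (e.g., avoid a spine that
        #    lost its link towards the destination's leaf).
        if dst in self.exceptions:
            return self.exceptions[dst]
        dst_leaf, dst_host = dst
        # 2. Topological rule: destinations attached to this leaf go straight down...
        if dst_leaf == self.leaf_id:
            return self.host_port_of.get(dst_host)
        # 3. ...and any other leaf is reachable through any spine, so pick one by
        #    hashing the flow (ECMP-like) -- no per-destination entries are needed.
        if self.spine_ports:
            return self.spine_ports[flow_hash % len(self.spine_ports)]
        return None                              # no uplinks left: disconnected

    def install_exception(self, dst: Addr, port: int) -> None:
        # Invoked by the routing policy when a failure invalidates the default rule.
        self.exceptions[dst] = port
```

    The point of the sketch is the scaling property claimed in the abstract: per-node state grows with the number of neighbours and active exceptions, not with the overall size of the DCN.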

    ARCFIRE: experimentation with the Recursive InterNetwork Architecture

    European-funded research into the Recursive InterNetwork Architecture (RINA) started with IRATI, which developed an initial prototype implementation for OS/Linux. IRATI was quickly succeeded by the PRISTINE project, which developed different policies, each tailored to specific use cases. Both projects were development-driven, and most experimentation was limited to unit testing and smaller-scale integration testing. In order to assess the viability of RINA as an alternative to current network technologies, larger-scale experimental deployments are needed. The opportunity arose for a project that shifted the focus from development towards experimentation, leveraging Europe's investment in Future Internet Research and Experimentation (FIRE+) infrastructures. The ARCFIRE project took this next step, developing a user-friendly framework for automating RINA experiments. This paper reports and discusses the implications of the experimental results achieved by the ARCFIRE project, using open-source RINA implementations deployed on FIRE+ testbeds. Experiments analyze the properties of RINA relevant to fast network recovery, network renumbering, Quality of Service, distributed mobility management, and network management. Results highlight RINA properties that can greatly simplify the deployment and management of real-world networks; hence, the next steps should focus on addressing specific use cases with complete RINA-based networking solutions that can be transferred to the market.

    Contributions to routing scalability and QoS assurance in cloud data transport networks based on the recursive internetwork architecture

    With an increasing number of devices and heterogeneous distributed applications, it is becoming evident that the best-effort service delivered by the current Internet falls short of the actual Quality of Service (QoS) requirements of applications. In addition, the global scope of the IP layer causes large scalability problems in the network. Multiple solutions aim to overcome the limitations of the model (BGP, NAT, etc.), but all of them end up constrained by the same networking model that they try to improve, simply breaking and patching the TCP/IP stack itself. In contrast, RINA proposes a clean-slate Internet architecture based on a recursive networking stack focused on inter-process communication, where each layer, or DIF, performs the same set of tasks. DIFs are fully configurable by means of programmable policies and provide complete support for QoS services. RINA provides a standardized way to express the capabilities of each layer, the QoS Cubes. With those, RINA allows applications and upper processes to express their requirements in terms of latency, losses, etc. The contributions in this thesis take advantage of the recursive stack of RINA and the use of policies to propose and analyse old and new solutions that would not be compatible with the current TCP/IP Internet. To improve QoS services, this work exploits the flow requirement information provided by the applications themselves to improve the QoS assurance offered by the network. With the use of △Q-based scheduling policies, improved QoS assurances are provided; unlike simpler QoS differentiation solutions, where service is provided in strict priority order, these policies aim to provide a “good enough” service for all flows in the network, resulting in a more appropriate sharing of resources. These policies have been tested in backbone-like networks, showing interesting improvements with respect to commonly used solutions such as MPLS-based VPNs. The provisioning of QoS services to end-users is also considered. To allow that, it is necessary to impose some limits on what end-users can send to the network, limiting the amount of priority traffic that potentially greedy users can send. In that regard, while enforcing strict rate limits per QoS class would be trivial in RINA, a new △Q-based rate-limiting policy that aims to limit the amount of priority traffic in a more user-friendly way is also explored. In terms of scalability, this work also considers different measures to improve forwarding and routing within large-scale networks. Regarding policies that can profit from specific network topologies, a new forwarding policy that mixes topological rules and exceptions is proposed. With this policy, lookups in large forwarding tables are replaced with fast and simple forwarding rules based on the location of nodes and their neighbourhood. Given the common topologies used in large data centres, the proposed policy is found to be a perfect match for those scenarios. Tests on different data centre topologies showed clear improvements, requiring only a small fraction of all forwarding information despite the large size of such networks, with requirements depending on the number of concurrent failures in the network rather than on its size. In addition, this work also considers the use of topological routing policies to populate exceptions upon failures. The use of topological routing solutions resulted in reduced complexity for computing paths and fewer routing messages. Beyond topological solutions, the use of other routing solutions not well suited to the IP environment is also investigated.
    Specifically, it is shown how a Landmark routing solution, a solution from the compact routing family, could be implemented within RINA. Finally, efforts are also devoted to analysing the importance of path selection for ensuring QoS requirements, showing that extreme solutions, such as the use of connections, are not required to provide the required services.
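
    As one concrete illustration of the kind of rate-limiting behaviour discussed above, the sketch below meters a user's priority traffic and demotes the excess to a lower QoS cube instead of dropping it. This is only a plausible reading under stated assumptions; it is not the thesis's △Q policy, and the class name, token-bucket metering, and cube labels are invented for illustration.

```python
import time


class PriorityShareLimiter:
    """Illustrative per-user limiter: excess priority traffic is demoted, not dropped."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def classify(self, pkt_len: int, requested_cube: str) -> str:
        """Return the QoS cube the packet is actually admitted into."""
        now = time.monotonic()
        # Refill the budget proportionally to the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if requested_cube != "priority":
            return requested_cube          # only priority traffic is metered
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return "priority"              # within the user's priority budget
        return "best-effort"               # excess is demoted instead of dropped
```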

    Contributions to energy-aware demand-response systems using SDN and NFV for fog computing

    Ever-increasing energy consumption, the depletion of non-renewable resources, the climate impact associated with energy generation, and finite energy-production capacity are important concerns worldwide that drive the urgent creation of new energy management and consumption schemes. In this regard, by leveraging the massive connectivity provided by emerging communications such as 5G systems, this thesis proposes a long-term sustainable Demand-Response solution for the adaptive and efficient management of available energy consumption in Internet of Things (IoT) infrastructures, in which energy utilization is optimized based on the available supply. In the proposed approach, energy management focuses on consumer devices (e.g., appliances such as a light bulb or a screen). By proposing that each consumer device be part of an IoT infrastructure, it becomes feasible to control its respective consumption. The proposal includes an architecture that uses Network Functions Virtualization (NFV) and Software Defined Networking technologies as enablers to promote the primary use of energy from renewable sources. Associated with the architecture, this thesis presents a novel consumption model conditioned on availability in which consumers are part of the management process. To efficiently use the energy from renewable and non-renewable sources, several management strategies are proposed, such as the prioritization of the energy supply, workload scheduling using time-shifting capabilities, and quality degradation to decrease the power demanded by consumers if needed. The adaptive energy management solution is modeled as an Integer Linear Program, and its complexity has been identified as NP-hard. To verify the improvements in energy utilization, an optimal algorithmic solution based on a brute-force search has been implemented and evaluated. Because of the hardness of the adaptive energy management problem and the non-polynomial growth of its optimal solution, which limits it to a small number of energy demands (e.g., 10) and small values of the management mechanisms, several faster suboptimal algorithmic strategies have been proposed and implemented. In this context, at the first stage, we implemented three heuristic strategies: a greedy strategy (GreedyTs), a genetic-algorithm-based solution (GATs), and a dynamic programming approach (DPTs). Then, we incorporated into both the optimal and heuristic strategies a prepartitioning method in which the total set of analyzed services is divided into subsets of smaller size and complexity that are solved iteratively. As a result of the adaptive energy management work in this thesis, we present eight strategies, one optimal and seven heuristic, that, when deployed in communications infrastructures such as the NFV domain, seek the best possible scheduling of demands, leading to efficient energy utilization. The performance of the algorithmic strategies has been validated through extensive simulations in several scenarios, demonstrating improvements in energy consumption and the processing of energy demands. Additionally, the simulation results revealed that the heuristic approaches produce high-quality solutions close to the optimal while executing between two and seven orders of magnitude faster, with applicability to scenarios with thousands and hundreds of thousands of energy demands.
    This thesis also explores possible application scenarios of both the proposed architecture for adaptive energy management and the algorithmic strategies. In this regard, we present some examples, including adaptive energy management for in-home systems and 5G network slicing, energy-aware management solutions for unmanned aerial vehicles, also known as drones, and applicability to the efficient allocation of spectrum in flex-grid optical networks. Finally, this thesis presents open research problems and discusses other application scenarios and future work.
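
    As a rough illustration of the greedy, time-shifting style of heuristic described above, the sketch below places each demand in an admissible time slot with the most renewable headroom and falls back to non-renewable capacity only when necessary. The data model, function name, and tie-breaking choices are assumptions for illustration; the thesis's GreedyTs, GATs, and DPTs strategies are not reproduced here.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Demand:
    power: float            # kW requested
    slots: List[int]        # time slots in which the demand may be served (time-shifting)


def greedy_schedule(demands: List[Demand],
                    renewable: List[float],
                    nonrenewable: List[float]) -> Dict[int, int]:
    """Return a mapping demand-index -> chosen slot; demands that fit nowhere are omitted."""
    green_left = list(renewable)       # remaining renewable capacity per slot
    brown_left = list(nonrenewable)    # remaining non-renewable capacity per slot
    plan: Dict[int, int] = {}
    # Serve the largest demands first so big loads get first pick of green energy.
    for i in sorted(range(len(demands)), key=lambda k: -demands[k].power):
        d = demands[i]
        # Prefer the admissible slot with the most renewable headroom.
        green_ok = [s for s in d.slots if green_left[s] >= d.power]
        if green_ok:
            s = max(green_ok, key=lambda slot: green_left[slot])
            green_left[s] -= d.power
            plan[i] = s
            continue
        # Otherwise fall back to non-renewable capacity, if any admissible slot can host it.
        brown_ok = [s for s in d.slots if green_left[s] + brown_left[s] >= d.power]
        if brown_ok:
            s = max(brown_ok, key=lambda slot: green_left[slot] + brown_left[slot])
            spill = d.power - green_left[s]
            green_left[s] = 0.0
            brown_left[s] -= spill
            plan[i] = s
        # Quality degradation (serving a demand at reduced power) is not modelled here.
    return plan
```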

    Resilient and Scalable Forwarding for Software-Defined Networks with P4-Programmable Switches

    Traditional networking devices support only fixed features and limited configurability. Network softwarization leverages programmable software and hardware platforms to remove those limitations. In this context, the concept of programmable data planes allows the packet processing pipeline of networking devices to be programmed directly and custom control plane algorithms to be created. This flexibility enables the design of novel networking mechanisms where the status quo struggles to meet the high demands of next-generation networks like 5G, the Internet of Things, cloud computing, and Industry 4.0. P4 is the most popular technology for implementing programmable data planes. However, programmable data planes, and in particular the P4 technology, emerged only recently. Thus, P4 support for some well-established networking concepts is still lacking, and several issues remain unsolved due to the different characteristics of programmable data planes in comparison to traditional networking. The research in this thesis focuses on two open issues of programmable data planes. First, it develops resilient and efficient forwarding mechanisms for the P4 data plane, as there are no satisfying state-of-the-art best practices yet. Second, it enables BIER in high-performance P4 data planes. BIER is a novel, scalable, and efficient transport mechanism for IP multicast traffic that so far has only very limited support on high-performance forwarding platforms. The main results of this thesis are published as eight peer-reviewed publications and one post-publication peer-reviewed publication. The results cover the development of suitable resilience mechanisms for P4 data planes, the development and implementation of resilient BIER forwarding in P4, and extensive evaluations of all developed and implemented mechanisms. Furthermore, the results contain a comprehensive P4 literature study. Two more peer-reviewed papers contain additional content that is not directly related to the main results: they implement congestion avoidance mechanisms in P4 and develop a scheduling concept to find cost-optimized load schedules based on day-ahead forecasts.
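
    For readers unfamiliar with BIER, the following simplified Python sketch shows the core replication loop of BIER forwarding (conceptually as described in RFC 8279): the packet carries a bitstring of egress routers, and one copy is sent per next hop, carrying only the bits that next hop can reach. The table layout and integer bitstring encoding are illustrative assumptions; this is not the P4 pipeline developed in the thesis.

```python
from typing import Callable, Dict, NamedTuple


class BiftEntry(NamedTuple):
    fbm: int        # forwarding bitmask: all egress routers reachable via this neighbor
    neighbor: str   # next hop towards those egress routers


def bier_forward(bitstring: int, bift: Dict[int, BiftEntry],
                 send: Callable[[str, int], None]) -> None:
    """Replicate a packet towards every egress router whose bit is set in the bitstring."""
    remaining = bitstring
    while remaining:
        low = remaining & -remaining          # lowest set bit = one pending egress router
        pos = low.bit_length()                # its 1-based position (the BFR-id)
        entry = bift[pos]
        # Send one copy towards that neighbor, carrying only the bits it can serve.
        send(entry.neighbor, remaining & entry.fbm)
        # Clear every bit already covered by that copy before continuing.
        remaining &= ~entry.fbm
```

    Clearing the bits covered by each copy before continuing is what keeps replication loop-free and bounds the work per packet by the number of set bits.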

    Benefits of programmable topological routing policies in RINA-enabled large-scale datacenters

    With the proliferation of cloud computing and the expected requirements of future Internet of Things (IoT) and 5G network scenarios, more efficient and scalable Data Centers (DCs) will be required, offering very large pools of computational resources and storage capacity cost-effectively. Today's commercial DCs tend to rely on well-defined leaf-spine Data Center Network (DCN) topologies that offer not only low latency and high bisectional bandwidth, but also enhanced reliability against multiple failures. However, routing and forwarding solutions in such DCNs are typically based on IP, thus suffering from its limited routing scalability. In this work, we quantitatively evaluate the benefits that the Recursive InterNetwork Architecture (RINA) can bring to commercial DCNs. To this goal, we propose rule-based topological routing and forwarding policies tailored to the characteristics of publicly available Google's and Facebook's DCNs. These policies can be programmed in a RINA-enabled environment, enabling fast forwarding decisions in most scenarios with only neighboring node information. Upon DCN failures, invalid forwarding rules are overwritten by exceptions. Numerical results show that the scalability of our proposal depends on the number of concurrent failures in the DCN rather than on its size (e.g., number of nodes/links), dramatically reducing the total amount of routing and forwarding information to be stored at nodes. Furthermore, as routing information is only disseminated across the DCN upon failures, the associated communication cost of our proposals largely outperforms that of traditional IP-based solutions.
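
    Complementing the forwarding sketch given for the first entry above, the sketch below illustrates why only a handful of exceptions are needed per failure in a leaf-spine fabric: when a spine loses its link to one leaf, the remaining leaves only need exceptions for destinations behind that leaf. The topology helpers and naming are assumptions for illustration, not the exception-computation policy of the paper.

```python
from typing import Dict, List


def exceptions_for_link_failure(failed_spine: int,
                                failed_leaf: int,
                                leaves: List[int],
                                spines: List[int],
                                hosts_behind: Dict[int, List[str]]) -> Dict[int, Dict[str, int]]:
    """For each surviving leaf, the exception entries {destination host: alternate spine}."""
    alternates = [s for s in spines if s != failed_spine]
    if not alternates:
        return {}                         # no surviving spine: nothing can be detoured
    updates: Dict[int, Dict[str, int]] = {}
    for leaf in leaves:
        if leaf == failed_leaf:
            continue                      # the leaf that lost the link simply stops using that uplink
        entries = {}
        for i, dst in enumerate(hosts_behind[failed_leaf]):
            # Spread the detoured destinations over the surviving spines.
            entries[dst] = alternates[i % len(alternates)]
        updates[leaf] = entries
    return updates
```

    Per switch, the exception state is bounded by the number of hosts behind the failed leaf, regardless of how large the rest of the DCN is, which matches the scalability claim in the abstract.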

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume