95 research outputs found

    HIDRA: Hierarchical Inter-Domain Routing Architecture

    As the Internet continues to expand, the global default-free zone (DFZ) forwarding table has begun to grow faster than hardware can economically keep pace with. Various policies are in place to mitigate this growth rate, but current projections indicate that policy alone is inadequate. As such, a number of technical solutions have been proposed. This work builds on many of these proposals and furthers the debate surrounding the resolution of this problem. It discusses several design decisions necessary to any proposed solution and, based on these tradeoffs, proposes the Hierarchical Inter-Domain Routing Architecture (HIDRA), a comprehensive architecture with a plausible deployment scenario. The architecture uses a locator/identifier split encapsulation scheme to attenuate both the immediate size of the DFZ forwarding table and its projected growth rate. The solution builds on an existing number allocation policy, Autonomous System Numbers (ASNs). HIDRA has been deployed to a sandbox network in a proof-of-concept test, yielding promising results.
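    The core idea of an ASN-based map-and-encap scheme can be sketched in a few lines. This is a hypothetical illustration, not code from the thesis: the table contents, link names and function are invented to show how a DFZ router could forward on one entry per ASN instead of one entry per IP prefix.

```python
# Illustrative sketch of ASN-based map-and-encap forwarding.
# All prefixes, ASNs and link names below are invented examples.

# Identifier-to-locator mapping: which ASN originates each edge prefix.
PREFIX_TO_ASN = {
    "198.51.100.0/24": 64501,
    "203.0.113.0/24": 64502,
}

# DFZ forwarding table keyed by ASN locator only (far smaller than a
# per-prefix table, since there are far fewer ASNs than prefixes).
ASN_NEXT_HOP = {
    64501: "core-link-A",
    64502: "core-link-B",
}

def encapsulate(dst_prefix: str) -> tuple[int, str]:
    """Look up the destination's ASN locator, then forward on the
    ASN-level table; the per-prefix state stays out of the DFZ."""
    asn = PREFIX_TO_ASN[dst_prefix]      # mapping lookup (identifier -> locator)
    return asn, ASN_NEXT_HOP[asn]        # DFZ forwarding uses only the ASN

print(encapsulate("198.51.100.0/24"))  # -> (64501, 'core-link-A')
```

    The point of the sketch is the table-size argument: the DFZ table scales with the number of ASNs rather than the number of advertised prefixes.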

    Use of locator/identifier separation to improve the future internet routing system

    The Internet evolved from its early days of being a small research network to become a critical infrastructure many organizations and individuals rely on. One dimension of this evolution is the continuous growth of the number of participants in the network, far beyond what the initial designers had in mind. While it does work today, it is widely believed that the current design of the global routing system cannot scale to accommodate future challenges. In 2006 an Internet Architecture Board (IAB) workshop was held to develop a shared understanding of the Internet routing system scalability issues faced by the large backbone operators. The participants documented in RFC 4984 their belief that "routing scalability is the most important problem facing the Internet today and must be solved." A potential solution to the routing scalability problem is ending the semantic overloading of Internet addresses, by separating node location from identity. Several proposals exist to apply this idea to current Internet addressing, among which the Locator/Identifier Separation Protocol (LISP) is the only one already being shipped in production routers. Separating locators from identifiers results in another level of indirection, and introduces a new problem: how to determine location, when the identity is known. The first part of our work analyzes existing proposals for systems that map identifiers to locators and proposes an alternative system, within the LISP ecosystem. We created a large-scale Internet topology simulator and used it to compare the performance of three mapping systems: LISP-DHT, LISP+ALT and the proposed LISP-TREE. We analyzed and contrasted their architectural properties as well. The monitoring projects that supplied Internet routing table growth data over a large timespan inspired us to create LISPmon, a monitoring platform aimed at collecting, storing and presenting data gathered from the LISP pilot network, early in the deployment of the LISP protocol. 
The project web site and collected data are publicly available and will assist researchers in studying the evolution of the LISP mapping system. We also document how the newly introduced LISP network elements fit into the current Internet, the advantages and disadvantages of different deployment options, and how the proposed transition mechanism scenarios could affect the evolution of the global routing system. This work is currently available as an active Internet Engineering Task Force (IETF) Internet Draft. The second part looks at the problem of efficient one-to-many communications, assuming a routing system that implements the above-mentioned locator/identifier split paradigm. We propose a network layer protocol for efficient live streaming. It is incrementally deployable, with changes required only in the same border routers that should be upgraded to support locator/identifier separation. Our proof-of-concept Linux kernel implementation shows the feasibility of the protocol, and our comparison to popular peer-to-peer live streaming systems indicates important savings in inter-domain traffic. We believe LISP has considerable potential of being adopted, and an important aspect of this work is how it might contribute towards a better mapping system design, by showing the weaknesses of current favorites and proposing alternatives. The presented results are an important step forward in addressing the routing scalability problem described in RFC 4984 and in improving the delivery of live streaming video over the Internet.
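    The indirection that a mapping system resolves can be sketched as a small delegation walk, in the spirit of the DNS-style hierarchy that LISP-TREE proposes. The tree contents, prefixes and locator names below are invented for illustration; a real resolver follows referrals between map servers rather than a local dictionary.

```python
# Hypothetical sketch of a hierarchical EID-to-locator lookup (LISP-TREE
# style): each level delegates a more-specific EID prefix until a leaf
# returns the locator set. All data here is invented.
import ipaddress

# Delegation table: parent prefix -> child prefixes; leaves carry locators.
CHILDREN = {
    "0.0.0.0/0": ["10.0.0.0/8"],
    "10.0.0.0/8": ["10.1.0.0/16"],
}
LOCATORS = {"10.1.0.0/16": ["rloc-203.0.113.1"]}

def resolve(eid: str, root: str = "0.0.0.0/0") -> list[str]:
    """Walk the delegation tree from the root, descending into the child
    prefix covering the queried EID, until a leaf returns locators."""
    node = root
    addr = ipaddress.ip_address(eid)
    while node not in LOCATORS:
        # each iteration is one referral, as a DNS-style resolver follows
        node = next(c for c in CHILDREN[node]
                    if addr in ipaddress.ip_network(c))
    return LOCATORS[node]

print(resolve("10.1.2.3"))  # -> ['rloc-203.0.113.1']
```

    The number of referrals grows with the delegation depth, not with the total number of EID prefixes, which is what makes this style of mapping system attractive for scalability comparisons such as the one in the thesis.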

    Novel architectures and strategies for security offloading

    The Internet has become an indispensable and powerful tool in our modern society. Its ubiquitousness, pervasiveness and applicability have fostered paradigm changes around many aspects of our lives. This phenomenon has positioned the network and its services as fundamental assets on which we rely and trust. However, the Internet is far from being perfect. It has considerable security issues and vulnerabilities that jeopardize its main core functionalities, with negative impact on its players. Furthermore, these vulnerabilities' complexities have been amplified along with the evolution of Internet user mobility. In general, Internet security includes both security for the correct network operation and security for the network users and endpoint devices. The former involves the challenges around the Internet core control and management vulnerabilities, while the latter encompasses security vulnerabilities of end users and endpoint devices. Similarly, Internet mobility poses major security challenges ranging from routing complications to connectivity disruptions and a lack of global authentication and authorization. The purpose of this thesis is to present the design of novel architectures and strategies for improving Internet security in a non-disruptive manner. Our novel security proposals follow a protection offloading approach. The motives behind this paradigm target the further enhancement of security protection while minimizing the intrusiveness and disturbance to the Internet routing protocols, their players and users. To accomplish such a level of transparency, the envisioned solutions leverage well-known technologies, namely Software Defined Networking, Network Function Virtualization and Fog Computing. 
From the Internet core building blocks, we focus on the vulnerabilities of two key routing protocols that play a fundamental role in the present and the future of the Internet, i.e., the Border Gateway Protocol (BGP) and the Locator/Identifier Separation Protocol (LISP). To this purpose, we first investigate current BGP vulnerabilities and countermeasures with emphasis on an unresolved security issue known as route leaks. Therein, we discuss the reasons why different BGP security proposals have failed to be adopted, and the necessity of proposing innovative solutions that minimize the impact on the already deployed routing solution. To this end, we propose pragmatic security methodologies to offload the protection, with the following advantages: no changes to the BGP protocol, no dependency on third-party information or third-party security infrastructure, and self-beneficial deployment. Similarly, we research current LISP vulnerabilities with emphasis on its control plane and mobility support. We leverage its by-design separation of control and data planes to propose an enhanced locator-identifier registration process for endpoint identifiers. This proposal improves the mobility of end users with regard to securing dynamic traffic steering over the Internet. On the other hand, from the end-user and device perspective, we research new paradigms and architectures with the aim of enhancing their protection in a more controllable and consolidated manner. To this end, we propose a new paradigm which shifts device-centric protection toward user-centric protection. Our proposal focuses on decoupling or extending the security protection from the end devices toward the network edge. It seeks the homogenization of the enforced protection per user, independently of the device utilized. We further investigate this paradigm in a user mobility scenario. Similarly, we extend this proposed paradigm to the IoT realm and its intrinsic security challenges. 
Therein, we propose an alternative to protect both the things and the services that leverage them, by consolidating the security at the network edge. We validate our proposal by providing experimental results from proof-of-concept implementations.


    MPTCP Robustness Against Large-Scale Man-in-the-Middle Attacks

    Multipath communications at the Internet scale have been a myth for a long time, with no actual protocol being deployed at large scale. Recently, the Multipath Transmission Control Protocol (MPTCP) extension was standardized and is undergoing rapid adoption in many different use-cases, from mobile to fixed access networks, from data-centers to core networks. Among its major benefits, i.e., reliability thanks to backup-path rerouting, throughput increase thanks to link aggregation, and confidentiality, since a full connection becomes more difficult to intercept, the latter has attracted the least attention. We try to answer the question of how effective MPTCP, or an equivalent multipath transport-layer protocol, would be at exploiting multiple Internet-scale paths to decrease the probability of Man-in-the-Middle (MITM) attacks. By analyzing the Autonomous System (AS) level graph, we identify which countries and regions show a higher level of robustness against AS-level MITM attacks, for example due to core cable tapping or route hijacking practices.
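    The underlying observation can be made concrete with a toy predicate: an on-path adversary sees the whole connection only if every subflow's AS-level path crosses an AS it controls. The AS numbers and paths below are invented for illustration and do not come from the paper's dataset.

```python
# Toy sketch of MPTCP's confidentiality argument: a connection split over
# AS-disjoint paths is fully observed only if the adversary sits on every
# path. AS numbers below are invented examples.

def fully_intercepted(paths: list[list[int]], compromised: set[int]) -> bool:
    """True only if every subflow path crosses at least one compromised AS."""
    return all(any(asn in compromised for asn in path) for path in paths)

paths = [
    [64501, 64510, 64520],   # subflow 1: AS-level path via one transit AS
    [64501, 64530, 64520],   # subflow 2: disjoint transit AS
]
print(fully_intercepted(paths, {64510}))          # -> False (one path evades)
print(fully_intercepted(paths, {64510, 64530}))   # -> True
```

    Note that shared ASes at the edges (here 64501 and 64520) remain single points of interception regardless of how many disjoint transit paths exist, which is why the AS-level graph structure of each region matters.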

    Improving Pan-African research and education networks through traffic engineering: A LISP/SDN approach

    The UbuntuNet Alliance, a consortium of National Research and Education Networks (NRENs), runs an exclusive data network for education and research in east and southern Africa. Despite a high degree of route redundancy in the Alliance's topology, a large portion of Internet traffic between the NRENs is circuitously routed through Europe. This thesis proposes a performance-based strategy for dynamic ranking of inter-NREN paths to reduce latencies. The thesis makes two contributions: firstly, mapping Africa's inter-NREN topology and quantifying the extent and impact of circuitous routing; and, secondly, a dynamic traffic engineering scheme based on Software Defined Networking (SDN), the Locator/Identifier Separation Protocol (LISP) and Reinforcement Learning. To quantify the extent and impact of circuitous routing among Africa's NRENs, active topology discovery was conducted. Traceroute results showed that up to 75% of traffic from African sources to African NRENs went through inter-continental routes and experienced much higher latencies than traffic routed within Africa. An efficient mechanism for topology discovery was implemented by incorporating prior knowledge of overlapping paths to minimize redundancy during measurements. Evaluation of the network probing mechanism showed a 47% reduction in the packets required to complete measurements. An interactive geospatial topology visualization tool was designed to evaluate how NREN stakeholders could identify routes between NRENs. Usability evaluation showed that users were able to identify routes with an accuracy level of 68%. NRENs are faced with at least three problems to optimize traffic engineering, namely: how to discover alternate end-to-end paths; how to measure and monitor the performance of different paths; and how to reconfigure alternate end-to-end paths. 
This work designed and evaluated a traffic engineering mechanism for dynamic discovery and configuration of alternate inter-NREN paths using SDN, LISP and Reinforcement Learning. A LISP/SDN-based traffic engineering mechanism was designed to enable NRENs to dynamically rank alternate gateways. Emulation-based evaluation of the mechanism showed that dynamic path ranking was able to achieve 20% lower latencies compared to the default static path selection. SDN and Reinforcement Learning were used to enable dynamic packet forwarding in a multipath environment, through hop-by-hop ranking of alternate links based on latency and available bandwidth. The solution achieved minimum latencies with significant increases in aggregate throughput compared to static single-path packet forwarding. Overall, this thesis provides evidence that the integration of LISP, SDN and Reinforcement Learning, together with ranking and dynamic configuration of paths, could help Africa's NRENs to minimise latencies and achieve better throughputs.
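    A dynamic gateway-ranking loop of the kind described can be sketched as a simple bandit strategy: keep a smoothed latency estimate per gateway, usually pick the best, and occasionally probe the others. This is a generic epsilon-greedy sketch under assumed parameters; the class, gateway names and numbers are invented and do not reproduce the thesis' actual controller.

```python
import random

# Hypothetical epsilon-greedy ranking of alternate inter-NREN gateways by
# observed latency. Names and latencies are invented for illustration.
class GatewayRanker:
    def __init__(self, gateways, epsilon=0.1, alpha=0.3):
        self.epsilon = epsilon                      # exploration probability
        self.alpha = alpha                          # EWMA smoothing factor
        self.latency = {g: float("inf") for g in gateways}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.latency))    # explore a path
        return min(self.latency, key=self.latency.get)  # exploit lowest RTT

    def observe(self, gw, rtt_ms):
        """Fold a new RTT measurement into the gateway's latency estimate."""
        old = self.latency[gw]
        self.latency[gw] = rtt_ms if old == float("inf") else (
            (1 - self.alpha) * old + self.alpha * rtt_ms)

ranker = GatewayRanker(["via-London", "via-Cape-Town"])
ranker.observe("via-London", 180.0)      # circuitous route through Europe
ranker.observe("via-Cape-Town", 40.0)    # direct intra-Africa route
```

    With estimates like these, `choose()` would steer most traffic onto the direct path while the occasional exploration keeps the ranking fresh if path conditions change.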

    Multi-region routing

    Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering. This thesis proposes a new inter-domain routing protocol. The Internet's inter-domain routing protocol, the Border Gateway Protocol (BGP), provides a reachability solution for all domains; however, it is also used for purposes outside of routing. In terms of routing, BGP suffers from serious problems, such as slow routing convergence and limited scalability. The proposed architecture takes into consideration the current Internet business model and structure. It benefits from a massively multi-homed Internet to perform multipath routing. The main foundation of this thesis is the Dynamic Topological Information Architecture (DTIA). We propose a division of the Internet into regions to contain the network scale at which DTIA's routing algorithm is applied. An inter-region routing solution was devised to connect regions; formal proofs were made in order to demonstrate the routing convergence of the protocol. An implementation of the proposed solution was made in the network simulator 2 (ns-2). Results showed that the proposed architecture achieves faster convergence than BGP. Moreover, this thesis' solution improves the algorithm's scalability at the inter-region level, compared to the single-region case.

    Improving internet inter-domain routing scalability

    For years, inter-domain routing scalability has been seen as a problem that increases ISPs' costs and may even decelerate the growth of the Internet. A few improvements have been made over the years, but they have only delayed the issue. Router memories (i.e., FIBs) are the most critical near-term concern, as they have to be fast and ever larger to direct great amounts of packets toward possibly hundreds of thousands of networks. 
This thesis introduces the problem set by identifying the main issues and their root causes, as well as presenting an analysis of their criticality. Improvement mechanisms are also considered by introducing and comparing a few of the most relevant proposals. Deeper study and analysis in this thesis focuses on Virtual Aggregation, which allows networks to individually lower their routers' memory load via the use of virtual IP address prefixes. Also, a new solution for allocating virtual prefixes and aggregation points for them is introduced and compared against other FIB-shrinking mechanisms using extensive simulations on the Sprint topology. As a result, the new solution is found to save FIB space considerably while avoiding some drawbacks found in Virtual Aggregation. Further improvements to the mechanism are also considered, although not tested.
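    The FIB saving behind Virtual Aggregation can be illustrated with a small counting sketch: a router installs full routes only for the specifics inside the virtual prefixes it hosts, plus one covering route per virtual prefix it does not host. The prefixes and the `fib_size` helper are invented examples, not the thesis' simulation setup.

```python
import ipaddress

# Rough sketch of the Virtual Aggregation idea: FIB entries a router needs
# are the specifics inside its hosted virtual prefixes, plus one covering
# entry per non-hosted virtual prefix (pointing at an aggregation point).
# All prefix data below is invented for illustration.

def fib_size(all_prefixes, virtual_prefixes, hosted):
    specifics = [p for p in all_prefixes
                 if any(ipaddress.ip_network(p).subnet_of(
                        ipaddress.ip_network(v)) for v in hosted)]
    covers = [v for v in virtual_prefixes if v not in hosted]
    return len(specifics) + len(covers)

all_p = ["10.1.0.0/24", "10.1.1.0/24", "10.2.0.0/24", "10.3.0.0/24"]
virt = ["10.1.0.0/16", "10.2.0.0/15"]

# Without aggregation the router would carry all 4 specifics; hosting only
# 10.1.0.0/16 it carries 2 specifics + 1 cover.
print(fib_size(all_p, virt, hosted={"10.1.0.0/16"}))  # -> 3
```

    The trade-off the thesis analyses follows directly: the fewer virtual prefixes a router hosts, the smaller its FIB, at the cost of indirecting traffic for non-hosted prefixes through aggregation points.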

    Abstracting network policies

    Almost every human activity in recent years relies either directly or indirectly on the smooth and efficient operation of the Internet. The Internet is an interconnection of multiple autonomous networks that work based on policies agreed upon between various institutions across the world. The network policies guiding an institution's computer infrastructure both internally (such as firewall relationships) and externally (such as routing relationships) are developed by a diverse group of lawyers, accountants, network administrators and managers, among others. Network policies developed by this group of individuals are usually drawn up on a whiteboard in a graph-like format. It is, however, the responsibility of network administrators to translate and configure the various network policies that have been agreed upon. The configuration of these network policies is generally done on physical devices such as routers, domain name servers, firewalls and other middleboxes. The manual configuration process of such network policies is known to be tedious, time consuming and prone to human error, which can lead to various anomalies in the configuration commands. In recent years, many research projects and corporate organisations have to some level abstracted the network management process, with emphasis on network devices (such as Cisco VIRL) or individual network policies (such as Propane). [Continues.]

    On the scalability of LISP and advanced overlaid services

    In just four decades the Internet has gone from a lab experiment to a worldwide, business-critical infrastructure that caters to the communication needs of almost half of the Earth's population. With these figures on its side, arguing against the Internet's scalability would seem rather unwise. However, the Internet's organic growth is far from finished and, as billions of new devices are expected to join it in the not so distant future, scalability, or the lack thereof, is commonly believed to be the Internet's biggest problem. While consensus on the exact form of the solution is yet to be found, the need for a semantic decoupling of a node's location and identity, often called a locator/identity separation, is generally accepted as a promising way forward. Typically, this requires the introduction of new network elements that provide the binding of the two namespaces, and of caches that avoid hampering router packet-forwarding speeds. But due to this increased complexity, the solution's scalability is itself questioned. This dissertation evaluates the suitability of using the Locator/ID Separation Protocol (LISP), one of the most successful proposals to follow the locator/identity separation guideline, as a solution to the Internet's scalability problem. However, because the deployment of any new architecture depends not only on solving the incumbent's technical problems but also on the added value that it brings, our approach follows two lines. In the first part of the thesis, we develop the analytical tools to evaluate LISP's control plane scalability, while in the second we show that the required control/data plane separation provides important benefits that could drive LISP's adoption. As a first step to evaluating LISP's scalability, we propose a methodology for an analytical analysis of cache performance that relies on working-set theory to estimate the traffic's locality of reference. 
One of our main contributions is that we identify the conditions network traffic must comply with for the theory to be applicable, and we then use the result to develop a model that predicts average cache miss rates. Furthermore, we study the model's suitability for long-term cache provisioning and assess the cache's vulnerability to malicious users through an extension that accounts for cache-polluting traffic. As a last step, we investigate the main sources of locality and their impact on the asymptotic scalability of the LISP cache. An important finding here is that the destination popularity distribution can accurately describe cache performance, independently of the much harder to model short-term correlations. Under a small set of assumptions, this result finally enables us to characterize asymptotic scalability with respect to the number of prefixes (Internet growth) and users (growth of the LISP site). We validate the models and discuss the accuracy of our assumptions using several one-day-long packet traces collected at the egress points of a campus and an academic network. To show the added benefits that could drive LISP's adoption, in the second part of the thesis we investigate the possibilities of performing inter-domain multicast and improving intra-domain routing. Although the idea of using overlaid services to improve underlay performance is not new, this dissertation argues that LISP offers the right tools to reliably and easily implement such services, due to its reliance on network-layer instead of application-layer support. In particular, we present and extensively evaluate Lcast, a network-layer single-source multicast framework designed to merge the robustness and efficiency of IP multicast with the configurability and low deployment cost of application-layer overlays.
Additionally, we describe and evaluate LISP-MPS, an architecture capable of exploiting LISP to minimize intra-domain routing tables and ensure, among others, support for multi-protocol switching and virtual networks.
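The map-cache analysis above hinges on working-set theory: a destination prefix hits the cache if it was referenced recently enough, so the miss rate can be estimated from the reference stream's locality alone. The following minimal sketch (an illustration, not the dissertation's actual model) contrasts a simulated LRU map-cache with a Denning-style working-set miss estimate on a synthetic Zipf-distributed destination stream; the function names, parameters, and the Zipf popularity assumption are our own.

```python
import random
from collections import OrderedDict

def zipf_stream(n_prefixes, length, alpha=1.0, seed=7):
    # Synthetic reference stream: destination i drawn with weight 1/(i+1)^alpha,
    # a common (assumed) model for destination-prefix popularity.
    weights = [1.0 / (i + 1) ** alpha for i in range(n_prefixes)]
    rng = random.Random(seed)
    return rng.choices(range(n_prefixes), weights=weights, k=length)

def lru_miss_rate(stream, cache_size):
    # Simulate an LRU map-cache: a miss stands in for a mapping-system lookup.
    cache = OrderedDict()
    misses = 0
    for dst in stream:
        if dst in cache:
            cache.move_to_end(dst)          # refresh recency on a hit
        else:
            misses += 1
            cache[dst] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict the least recently used entry
    return misses / len(stream)

def working_set_miss_rate(stream, window):
    # Working-set estimate: a reference misses if the destination was not
    # seen within the last `window` references.
    last_seen = {}
    misses = 0
    for t, dst in enumerate(stream):
        if dst not in last_seen or t - last_seen[dst] > window:
            misses += 1
        last_seen[dst] = t
    return misses / len(stream)

if __name__ == "__main__":
    stream = zipf_stream(n_prefixes=1000, length=20000)
    print("LRU miss rate (size 100):", lru_miss_rate(stream, 100))
    print("Working-set miss rate (window 500):", working_set_miss_rate(stream, 500))
```

Sweeping `cache_size` and `window` shows the expected behavior: both miss rates fall monotonically as capacity or window grows, and under a skewed popularity distribution a modest cache already absorbs most references, which is the intuition behind characterizing cache scalability through the popularity distribution.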