
    A framework for the dynamic management of Peer-to-Peer overlays

    Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation. At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, the AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an "insider's" view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively. To account for the multi-faceted nature of P2P applications and to allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed. Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, increase overlay stability and load-balancing and offer improved peer transfer performance.
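    To make the transit-traffic-minimisation idea concrete, here is a minimal sketch of an ISP-operated cache in the spirit of an AVP, serving P2P chunk requests locally before falling back to remote peers. The class name, the LRU policy and the fetch_remote callable are illustrative assumptions, not the thesis's actual design:

```python
# Hypothetical sketch: an AVP-style cache that answers P2P chunk requests
# from within the ISP's network when possible. Names are illustrative.
from collections import OrderedDict

class AVPCache:
    """LRU cache for P2P content chunks held by an ISP-operated peer."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, chunk_id: str, fetch_remote) -> bytes:
        if chunk_id in self.store:
            # Local hit: no inter-AS traffic is generated for this chunk.
            self.store.move_to_end(chunk_id)
            return self.store[chunk_id]
        # Miss: fetch from a remote peer, then cache for future requests.
        data = fetch_remote(chunk_id)
        self.store[chunk_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return data
```

    Each local hit replaces a remote, potentially inter-AS, transfer, which is the kind of saving the abstract's simulation results quantify.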

    Use of locator/identifier separation to improve the future internet routing system

    The Internet evolved from its early days of being a small research network to become a critical infrastructure many organizations and individuals rely on. One dimension of this evolution is the continuous growth of the number of participants in the network, far beyond what the initial designers had in mind. While it does work today, it is widely believed that the current design of the global routing system cannot scale to accommodate future challenges. In 2006 an Internet Architecture Board (IAB) workshop was held to develop a shared understanding of the Internet routing system scalability issues faced by the large backbone operators. In RFC 4984, the participants documented their belief that "routing scalability is the most important problem facing the Internet today and must be solved." A potential solution to the routing scalability problem is ending the semantic overloading of Internet addresses by separating node location from identity. Several proposals exist to apply this idea to current Internet addressing, among which the Locator/Identifier Separation Protocol (LISP) is the only one already being shipped in production routers. Separating locators from identifiers results in another level of indirection and introduces a new problem: how to determine location when the identity is known. The first part of our work analyzes existing proposals for systems that map identifiers to locators and proposes an alternative system within the LISP ecosystem. We created a large-scale Internet topology simulator and used it to compare the performance of three mapping systems: LISP-DHT, LISP+ALT and the proposed LISP-TREE. We analyzed and contrasted their architectural properties as well. The monitoring projects that supplied Internet routing table growth data over a large timespan inspired us to create LISPmon, a monitoring platform aimed at collecting, storing and presenting data gathered from the LISP pilot network, early in the deployment of the LISP protocol. The project web site and collected data are publicly available and will assist researchers in studying the evolution of the LISP mapping system. We also document how the newly introduced LISP network elements fit into the current Internet, the advantages and disadvantages of different deployment options, and how the proposed transition mechanism scenarios could affect the evolution of the global routing system. This work is currently available as an active Internet Engineering Task Force (IETF) Internet Draft. The second part looks at the problem of efficient one-to-many communications, assuming a routing system that implements the above-mentioned locator/identifier split paradigm. We propose a network layer protocol for efficient live streaming. It is incrementally deployable, with changes required only in the same border routers that should be upgraded to support locator/identifier separation. Our proof-of-concept Linux kernel implementation shows the feasibility of the protocol, and our comparison to popular peer-to-peer live streaming systems indicates important savings in inter-domain traffic. We believe LISP has considerable potential for adoption, and an important aspect of this work is how it might contribute towards a better mapping system design, by showing the weaknesses of current favorites and proposing alternatives. The presented results are an important step forward in addressing the routing scalability problem described in RFC 4984 and in improving the delivery of live streaming video over the Internet.
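    As a concrete illustration of the locator/identifier split, the sketch below resolves an endpoint identifier (EID) to a set of routing locators (RLOCs) through an in-memory table standing in for a mapping system such as LISP-TREE, LISP+ALT or LISP-DHT. The addresses and the flat lookup table are made-up assumptions for the example:

```python
# Illustrative sketch of the locator/identifier split: endpoint identifiers
# (EIDs) are resolved to routing locators (RLOCs) via a mapping system.
# The in-memory dict stands in for a real mapping system; values are made up.
from ipaddress import ip_address, ip_network

MAPPING_SYSTEM = {
    ip_network("192.0.2.0/24"): ["203.0.113.1", "203.0.113.2"],  # RLOC set
}

def resolve(eid: str) -> list[str]:
    """Return the locators of the site that holds this identifier."""
    addr = ip_address(eid)
    for prefix, rlocs in MAPPING_SYSTEM.items():
        if addr in prefix:
            # An ingress tunnel router would pick one RLOC to encapsulate to.
            return rlocs
    raise LookupError(f"no mapping for EID {eid}")

print(resolve("192.0.2.42"))  # -> ['203.0.113.1', '203.0.113.2']
```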

    Machine Learning and Big Data Methodologies for Network Traffic Monitoring

    Over the past 20 years, the Internet has seen exponential growth in traffic, users, services and applications. Currently, it is estimated that the Internet is used every day by more than 3.6 billion users, who generate 20 TB of traffic per second. Such a huge amount of data challenges network managers and analysts to understand how the network is performing, how users are accessing resources, how to properly control and manage the infrastructure, and how to detect possible threats. Along with mathematical, statistical, and set theory methodologies, machine learning and big data approaches have emerged to build systems that aim at automatically extracting information from the raw data that network monitoring infrastructures offer. In this thesis I will address different network monitoring solutions, evaluating several methodologies and scenarios. I will show how, following a common workflow, it is possible to exploit mathematical, statistical, set theory, and machine learning methodologies to extract meaningful information from the raw data. Particular attention will be given to machine learning and big data methodologies such as the DBSCAN clustering algorithm and the Apache Spark big data framework. The results show that, while mathematical, statistical, and set theory tools help characterize a problem, machine learning methodologies are very useful to discover hidden information in the raw data. Using the DBSCAN clustering algorithm, I will show how YouLighter, an unsupervised methodology, groups caches serving YouTube traffic into edge-nodes and, later, using the notion of Pattern Dissimilarity, how to identify changes in their usage over time. By applying YouLighter to 10-month-long traces, I will pinpoint sudden changes in YouTube edge-node usage, changes that also impair the end users' Quality of Experience. I will also apply DBSCAN in the deployment of SeLINA, a self-tuning tool implemented in the Apache Spark big data framework to autonomously extract knowledge from network traffic measurements. Using SeLINA, I will show how to automatically detect the changes in the YouTube CDN previously highlighted by YouLighter. Along with these machine learning studies, I will show how to use mathematical and set theory methodologies to investigate the browsing habits of Internauts. Using a two-week dataset, I will show how, over this period, Internauts keep discovering new websites, and that using only DNS information makes it hard to build a reliable profiler. By exploiting mathematical and statistical tools, I will then characterize Anycast-enabled CDNs (A-CDNs). I will show that A-CDNs are widely used for both stateless and stateful services; that A-CDNs are quite popular, as more than 50% of web users contact an A-CDN every day; and that stateful services can benefit from A-CDNs, since their paths are very stable over time, as demonstrated by the presence of only a few anomalies in their Round Trip Times. Finally, I will conclude by showing how I used BGPStream, an open-source software framework for the analysis of both historical and real-time Border Gateway Protocol (BGP) measurement data. Using BGPStream in real-time mode, I will show how I detected a Multiple Origin AS (MOAS) event and how I studied black-holing community propagation, showing the effect of this community in the network. Then, using BGPStream in historical mode and the Apache Spark big data framework over 16 years of data, I will show results such as the continuous growth of IPv4 prefixes and the growth of MOAS events over time. All these studies aim to show that monitoring is a fundamental task in different scenarios, and in particular to highlight the importance of machine learning and big data methodologies.
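    As a hedged sketch of the clustering step described above, the following groups cache servers into edge-nodes with DBSCAN. The two-dimensional feature matrix and the eps/min_samples values are placeholders, not the features or parameters YouLighter actually uses:

```python
# Sketch: grouping CDN cache servers into edge-nodes with DBSCAN.
# Feature matrix and parameters are illustrative placeholders.
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: features of one cache server (e.g., address-space position, RTT).
X = np.array([
    [1.0, 12.0], [1.1, 12.5], [1.05, 11.8],   # likely one edge-node
    [9.0, 45.0], [9.2, 44.1],                 # another edge-node
    [30.0, 80.0],                             # isolated server -> noise
])

labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)
print(labels)  # -> [ 0  0  0  1  1 -1]; -1 marks noise points
```

    Comparing the clusters found in successive time windows, as YouLighter does with its Pattern Dissimilarity, is what turns this snapshot view into a change detector.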

    Design and evaluation of content distribution networks for multimedia streaming services.

    Traditional Internet-based services for distributing files, such as Web browsing and sending e-mail, are offered via a single central server. More recent network services such as interactive digital television or video-on-demand, however, require strong quality-of-service (QoS) guarantees, such as a low and constant network delay, and consume a considerable amount of network bandwidth. Architectures with a single central server can hardly offer these guarantees and therefore no longer meet the high demands of the next generation of multimedia applications. This research therefore studies new network architectures that can support such service quality. Both peer-to-peer mechanisms, as used for exchanging music files between end users, and server-based solutions, such as distributed caches and content distribution networks (CDNs), are considered. Depending on the studied service and the network technologies and architecture used, centralized algorithms for network design are proposed. These algorithms optimize the placement of the servers or network caches and determine the required capacity of the servers and network links. The dynamic placement of the offered files across the different network elements is adapted to the current state of the network and to the varying request patterns of the end users. Server selection, rerouting of requests and spreading the load over the whole network are also addressed.
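    To illustrate the kind of placement problem these design algorithms solve, here is a minimal greedy facility-location sketch that picks k cache locations minimising the total client-to-cache distance. It is a generic heuristic under assumed inputs (node and client lists, a distance callable), not the dissertation's actual algorithm:

```python
# Generic greedy sketch of cache/server placement: at each step, add the
# candidate node that most reduces the summed distance from every client
# to its nearest cache. Inputs are assumptions for illustration.
def greedy_placement(nodes, clients, distance, k):
    """Choose k cache locations from `nodes` for the given `clients`."""
    placed = []
    for _ in range(k):
        best, best_cost = None, float("inf")
        for candidate in nodes:
            if candidate in placed:
                continue
            trial = placed + [candidate]
            # Cost: every client is served by its nearest placed cache.
            cost = sum(min(distance(c, p) for p in trial) for c in clients)
            if cost < best_cost:
                best, best_cost = candidate, cost
        placed.append(best)
    return placed
```

    A real design algorithm, as in the dissertation, would additionally size server and link capacities and re-place content as demand patterns shift.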

    Content Distribution by Multiple Multicast Trees and Intersession Cooperation: Optimal Algorithms and Approximations

    In traditional massive content distribution with multiple sessions, the sessions form separate overlay networks and operate independently, where some sessions may suffer from insufficient resources even though other sessions have excessive resources. To cope with this problem, we consider the universal swarming approach, which allows multiple sessions to cooperate with each other. We formulate the problem of finding the optimal resource allocation to maximize the sum of the session utilities and present a subgradient algorithm which converges to the optimal solution in the time-average sense. The solution involves an NP-hard subproblem of finding a minimum-cost Steiner tree. We cope with this difficulty by using a column generation method, which reduces the number of Steiner-tree computations. Furthermore, we allow the use of approximate solutions to the Steiner-tree subproblem. We show that the approximation ratio to the overall problem turns out to be no less than the reciprocal of the approximation ratio to the Steiner-tree subproblem. Simulation results demonstrate that universal swarming improves the performance of resource-poor sessions with negligible impact on resource-rich sessions. The proposed approach and algorithm are expected to be useful for infrastructure-based content distribution networks with long-lasting sessions and a relatively stable network environment.
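    As an illustration of the Steiner-tree subroutine such an algorithm relies on, the sketch below runs the standard metric-closure 2-approximation shipped with networkx on a made-up session graph (a source s and receivers r1, r2). Per the abstract, using an approximate tree with ratio ρ still guarantees at least 1/ρ of the optimal overall utility:

```python
# Approximate minimum-cost Steiner tree over one session's terminals,
# using networkx's metric-closure 2-approximation. Graph is made up.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from([
    ("s", "a", 1), ("a", "r1", 1), ("a", "r2", 2),
    ("s", "b", 2), ("b", "r2", 1), ("r1", "r2", 4),
])

# The source plus the receivers of one session form the terminal set.
terminals = {"s", "r1", "r2"}
T = steiner_tree(G, terminals, weight="weight")
print(sorted(T.edges(data="weight")))  # tree spanning the terminals
```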

    Development of a system compliant with the Application-Layer Traffic Optimization Protocol

    Integrated Master's dissertation in Informatics Engineering. With the ever-increasing Internet usage that accompanies the start of the new decade, optimizing this world-scale network of computers becomes a high priority in a technological sphere where the number of users keeps rising, as do the Quality of Service (QoS) demands of applications in domains such as media streaming or virtual reality. In the face of rising traffic and stricter application demands, a better understanding of how Internet Service Providers (ISPs) should manage their assets is needed. An important concern is how applications utilize the underlying network infrastructure over which they reside. Most of these applications act with little regard for ISP preferences, as exemplified by their lack of care in achieving traffic locality during their operation, a feature that network administrators would prefer and that could also improve application performance. However, even a best-effort attempt by applications to cooperate will hardly succeed if ISP policies are not clearly communicated to them. Therefore, a system that bridges layer interests has much potential to help achieve a mutually beneficial scenario. The main focus of this thesis is the Application-Layer Traffic Optimization (ALTO) working group, which was formed by the Internet Engineering Task Force (IETF) to explore standardizations for network information retrieval. This group specified a request-response protocol in which authoritative entities provide resources containing network status information and administrative preferences. This sharing of infrastructural insight is intended to enable a cooperative environment between the network overlay and underlay during application operations, obtaining better use of infrastructural resources and the consequent minimization of the associated operational costs. This work gives an overview of the historical network tussle between applications and service providers, presents the ALTO working group's project as a solution, implements an extended system built upon its ideas, and finally verifies the developed system's efficiency, in a simulation, compared to classical alternatives.
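    As a hedged sketch of how an application could consume the protocol described above, the following queries an ALTO server for a cost map (media type per RFC 7285) and ranks peer groups (PIDs) by the provider's advertised routing cost. The server URL, resource path and PID names are hypothetical; a real client would discover resource URIs from the server's Information Resource Directory:

```python
# Sketch of an ALTO (RFC 7285) client fetching a cost map so an application
# can prefer "cheap" peers. Server URL and resource path are placeholders.
import requests

ALTO_SERVER = "https://alto.example.net"  # hypothetical address

resp = requests.get(
    f"{ALTO_SERVER}/costmap/routingcost",
    headers={"Accept": "application/alto-costmap+json"},
)
resp.raise_for_status()
cost_map = resp.json()["cost-map"]

# Rank candidate peer PIDs by the provider's advertised routing cost
# from our own PID (name assumed for the example).
my_pid = "PID1"
ranked = sorted(cost_map[my_pid].items(), key=lambda kv: kv[1])
print(ranked)  # e.g., [('PID1', 1), ('PID2', 5), ('PID3', 10)]
```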