23 research outputs found

    Efficient IP-level network topology capture

    Large-scale distributed traceroute-based measurement systems are used to obtain the topology of the Internet at the IP level, and can be used to monitor and understand the behavior of the network. However, existing approaches to measuring the public IPv4 network space often require several days to obtain a full graph, which is too slow to capture much of the network's dynamics. This paper presents a new network topology capture algorithm, NTC, which aims to better capture network dynamics through accelerated probing, reducing the probing load while maintaining good coverage. There are two novel aspects to our approach: it focuses on obtaining the network graph rather than a full set of individual traces, and it uses past probing results in a new, adaptive way to guide future probing. We study the performance of our algorithm on real traces and demonstrate markedly improved performance compared to existing work.
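The second novel aspect, using past probing results adaptively to guide future probing, can be sketched as follows. This is a minimal illustration, not NTC's actual data structures: the `AdaptiveProber` class and its stability counter are hypothetical, assuming only that destinations whose recent traces were stable can safely be probed less often.

```python
class AdaptiveProber:
    """Sketch of history-guided probing: destinations whose recent traces
    changed are re-probed first; stable ones are deferred."""

    def __init__(self, destinations):
        self.history = {d: None for d in destinations}       # last trace seen
        self.stable_rounds = {d: 0 for d in destinations}    # rounds unchanged

    def schedule(self, budget):
        """Pick up to `budget` destinations, preferring unstable ones."""
        ranked = sorted(self.history, key=lambda d: self.stable_rounds[d])
        return ranked[:budget]

    def record(self, dest, trace):
        """Update history with a new trace (a tuple of IP-level hops)."""
        if trace == self.history[dest]:
            self.stable_rounds[dest] += 1
        else:
            self.stable_rounds[dest] = 0
        self.history[dest] = trace
```

Under this scheme, a fixed probing budget per round is spent where the graph is most likely to have changed, which is the intuition behind reducing load while keeping coverage.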

    Measuring And Improving Internet Video Quality Of Experience

    Streaming multimedia content over the IP network is poised to be the dominant Internet traffic for the coming decade, predicted to account for more than 91% of all consumer traffic in the coming years. Streaming multimedia content includes Internet television (IPTV), video on demand (VoD), peer-to-peer streaming, and 3D television over IP, to name a few. Widespread acceptance, growth, and subscriber retention are contingent upon network providers assuring superior Quality of Experience (QoE) on top of today's Internet. This work presents the first empirical understanding of the Internet's video-QoE capabilities, along with tools and protocols to efficiently infer and improve them. To infer video-QoE at arbitrary nodes in the Internet, we design and implement MintMOS: a lightweight, real-time, no-reference framework for capturing perceptual quality. We demonstrate that MintMOS's projections closely match subjective surveys in assessing perceptual quality. We use MintMOS to characterize Internet video-QoE both at the link level and at the end-to-end path level. As input to our study, we use extensive measurements from a large number of Internet paths obtained from various measurement overlays deployed using PlanetLab. Link-level degradations of intra-ISP and inter-ISP Internet links are studied to create an empirical understanding of their shortcomings and ways to overcome them. Our studies show that intra-ISP links are often poorly engineered compared to peering links, and that degradations are induced by transient network load imbalance within an ISP. Initial results also indicate that overlay networks could be a promising way to avoid such ISPs in times of degradation. A large number of end-to-end Internet paths are probed, and we measure delay, jitter, and loss rates.
The measurement data is analyzed offline to identify ways to enable a source to select alternate paths in an overlay network to improve video-QoE, without the need for background monitoring or a priori knowledge of path characteristics. We establish that for any unstructured overlay of N nodes, it is sufficient to reroute key frames using a random subset of k nodes in the overlay, where k is bounded by O(ln N). We analyze various properties of such random subsets to derive a simple, scalable, and efficient path selection strategy that results in a k-fold increase in path options for any source-destination pair; options that consistently outperform Internet path selection. Finally, we design a prototype called source-initiated frame restoration (SIFR) that employs random subsets to derive alternate paths, and demonstrate its effectiveness in improving Internet video-QoE.
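The random-subset result above can be illustrated with a short sketch. The function names and the constant `c` are assumptions for illustration, not the SIFR implementation: the sketch only shows picking k = ceil(c · ln N) random relays and preferring a relay path over the direct path when its measured loss is lower.

```python
import math
import random

def relay_candidates(overlay_nodes, c=2, seed=None):
    """Pick k = ceil(c * ln N) random relays from an overlay of N nodes,
    following the O(ln N) bound; c is a tunable constant."""
    n = len(overlay_nodes)
    k = min(n, math.ceil(c * math.log(n)))
    rng = random.Random(seed)
    return rng.sample(overlay_nodes, k)

def best_path(direct_loss, relay_loss):
    """Return the relay with the lowest measured loss rate, or None if
    the direct Internet path is already best."""
    relay, loss = min(relay_loss.items(), key=lambda kv: kv[1])
    return relay if loss < direct_loss else None
```

For a 100-node overlay with c = 2 this yields only 10 candidate relays to probe per source-destination pair, which is what makes the strategy scalable.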

    Interdomain Route Leak Mitigation: A Pragmatic Approach

    The Internet has grown to support many vital functions, but it is not administered by any central authority. Rather, the many smaller networks that make up the Internet, called Autonomous Systems (ASes), independently manage their own distinct host address space and routing policy. Routers at the borders between ASes exchange information about how to reach remote IP prefixes with neighboring networks over the control plane using the Border Gateway Protocol (BGP). This inter-AS communication connects hosts across AS boundaries to build the illusion of one large, unified global network: the Internet. Unfortunately, BGP is a dated protocol that allows ASes to inject virtually any routing information into the control plane. The Internet's decentralized administrative structure means that ASes lack visibility into the relationships and policies of other networks, and have little means of vetting the information they receive. Routes are global, connecting hosts around the world, but AS operators can only see routes exchanged between their own network and directly connected neighbor networks. This mismatch between global route scope and local operator visibility gives rise to adverse routing events like route leaks, which occur when an AS mistakenly advertises a route that should have been kept within its own network. In this work, we explore our thesis: that malicious and unintentional route leaks threaten Internet availability, but pragmatic solutions can mitigate their impact. Leaks effectively reroute traffic meant for the leak destination along the leak path. This diversion of flows onto unexpected paths can broadly disrupt hosts attempting to reach the leak destination, as well as obstruct normal traffic on the leak path.
These events are usually due to misconfiguration rather than malicious activity, but we show in our initial work that routing-capable adversaries can weaponize route leaks and fraudulent path advertisements to enhance data plane attacks on Internet infrastructure and services. Existing solutions like Internet Routing Registry (IRR) filtering have not succeeded in solving the route leak problem, as globally disruptive route leaks still periodically interrupt the normal functioning of the Internet. We examine one relatively new solution, Peerlock, or defensive AS PATH filtering, in which ASes exchange topological information to secure their networks. Our measurements reveal that Peerlock is already deployed in defense of the largest ASes, but has found little purchase elsewhere. We conclude by introducing a novel leak defense system, Corelock, designed to provide Peerlock-like protection without the scalability concerns that have limited Peerlock's scope. Corelock builds meaningful route leak filters from globally distributed route collectors and can be deployed without cooperation from other networks.
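The defensive AS PATH filtering idea can be sketched in a few lines. This is a simplified illustration of a Peerlock-style rule, not the thesis's filter construction: the example ASNs and the `peerlock_accept` function are hypothetical, assuming a configured set of protected large networks whose ASNs should never appear in paths learned from ordinary neighbors.

```python
# Illustrative set of protected "big network" ASNs (example values only).
PROTECTED_ASNS = {3356, 1299, 174}

def peerlock_accept(as_path, learned_from_asn, protected=PROTECTED_ASNS):
    """Peerlock-style check: reject a route whose AS_PATH carries a
    protected ASN unless the route was learned directly from one of the
    protected networks; a protected ASN showing up behind an ordinary
    neighbor is a common route-leak signature."""
    if learned_from_asn in protected:
        return True
    return not any(asn in protected for asn in as_path)
```

The scalability limit the text mentions is visible here: the filter only works if the protected set is coordinated out of band with the networks it names, which is hard to extend beyond the largest ASes.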

    An Overview of Internet Measurements: Fundamentals, Techniques, and Trends

    The Internet presents great challenges to the characterization of its structure and behavior. Several factors contribute to this situation, including a huge user community, a wide range of applications, equipment heterogeneity, distributed administration, vast geographic coverage, and the dynamism that is typical of the current Internet. To deal with these challenges, several measurement-based approaches have recently been proposed to estimate and better understand the behavior, dynamics, and properties of the Internet. Together, these measurement-based techniques compose the Internet Measurements area of research. This overview paper covers the Internet Measurements area by presenting measurement-based tools and methods that directly influence other conventional areas, such as network design and planning, traffic engineering, quality of service, and network management.

    Resilient communications in smart grids

    Master's thesis, Segurança Informática, Universidade de Lisboa, Faculdade de Ciências, 2018. Power grids, some of them over a century old, were designed for a reality quite different from today's. Having been built to transport and distribute energy in one direction only, the infrastructure is rigid, which causes scalability problems and hinders its evolution. Well-known environmental concerns have led to fossil-fuel-based generation being replaced by generation from renewable energy sources. This motivated incentives for investment in renewables, leading ever more consumers to adopt microgeneration. These changes have transformed how electricity is produced and distributed, with growing interconnection among the many sources along the infrastructure, making the management of these grids an extremely complex task. With the significant growth in consumers who can also be producers, careful coordination of energy injection into the grid becomes essential. This, combined with rising electricity consumption, makes maintaining grid stability a major challenge. Smart grids propose to solve many of the problems that arose with this change in the electricity consumption/production paradigm. Grid components communicate with one another, making the power grid bidirectional and thus easier to maintain and manage. The constant exchange of information among all the components of the smart grid allows an immediate reaction to the actions of electricity producers and consumers. However, this paradigm shift also brought many challenges.
In particular, the need for communication among smart grid equipment means that the communication networks must cover large areas. Complexity grows further when management must be done per device rather than globally. This is because, in traditional communication networks, the control plane and the data plane reside in the same device, which makes control difficult and error-prone. Such decentralized control also hinders reorganizing the network when faults occur, since no single device has complete knowledge of the network. Rapid adaptation to faults, to make communication resilient, is especially important in latency-sensitive networks such as the smart grid, so efficient fault-tolerance mechanisms must be implemented. Software Defined Networks (SDN) emerge as a potential solution to these problems. By separating the control plane from the data plane, SDN allows the logical centralization of network control in a controller. This requires adding a communication layer between the controller and the network devices, through a protocol such as OpenFlow. This separation reduces the complexity of network management, and the logical centralization makes it possible to program the network globally and apply the intended policies. These factors make SDN an attractive solution for smart grids. This thesis investigates ways of making the communication network used in a smart grid resilient to faults. Given the advantages mentioned above, an SDN-based solution is used, and two essential modules are proposed. The first aims at secure network monitoring, obtaining in real time metrics such as bandwidth, latency, and error rate.
The second module handles routing and traffic engineering, using the information provided by the monitoring module so that the smart grid components can communicate with one another while guaranteeing that application requirements are met. Given the criticality of the power grid and the importance of communications in the smart grid, the mechanisms developed tolerate both malicious and accidental faults. The evolution of how electricity is produced and consumed has made the management of power grids an extremely complex task. Today's centenary power grids were not designed to fit a new reality where consumers can also be producers, or the impressive increase in consumption caused by more sophisticated and powerful appliances. Smart grids have been proposed as a solution to cope with this problem, supporting more sophisticated communications among all the components and allowing the grid to react quickly to changes in both consumption and production of energy. On the other hand, resorting to information and communication technologies (ICT) brings some challenges; namely, managing network devices at this scale and assuring that the strict communication requirements are fulfilled is a daunting task. Software Defined Networks (SDN) can address some of these problems by separating the control and data planes, and logically centralizing network control in a controller. The centralized control has the ability to observe the current state of the network from a vantage point, and programmatically react based on that view, making the management task substantially easier. In this thesis we provide a solution for a resilient communications network for smart grids based on SDN. As smart grids are very sensitive to network issues, such as latency and packet loss, it is important to detect and react to any fault in a timely manner. To achieve this we propose and develop two core modules: a network monitor, and a routing and traffic engineering module.
The first is a monitoring solution whose goal is to obtain a global view of the current state of the network. The solution is secure, allowing malicious attempts to subvert this module to be detected in a timely manner. This information is then used by the second module to make routing decisions. The routing and traffic engineering module ensures that communications among the smart grid components are possible and fulfil the strict requirements of the smart grid.
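The interplay between the two modules can be sketched as a latency-aware path computation: the routing module runs a shortest-path search over link latencies reported by the monitoring module, rejecting paths that violate an application's latency requirement. This is an illustrative sketch (a plain Dijkstra search; the function and graph encoding are assumptions, not the thesis implementation).

```python
import heapq

def route(graph, src, dst, max_latency=None):
    """Shortest-latency path over links annotated by a monitoring module.
    graph: {node: {neighbor: latency_ms}}. Returns (path, total_latency),
    or None if no path exists or the latency requirement is violated."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    if max_latency is not None and dist[dst] > max_latency:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]
```

Because the controller holds the global view, recomputing routes after a reported fault is just a matter of re-running this search on the updated graph.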

    Seamless connectivity: investigating implementation challenges of multibroker MQTT platform for smart environmental monitoring

    Abstract. This thesis explores the performance and efficiency of MQTT-based Internet of Things (IoT) sensor network infrastructure for smart environments. The study focuses on the impact of network latency and broker switching in distributed multi-broker MQTT platforms. The research involves three case studies: a cloud-based multi-broker deployment, a Local Area Network (LAN)-based multi-broker deployment, and a multi-layer LAN-based multi-broker deployment. The research is guided by three objectives: quantifying and analyzing the latency of multi-broker MQTT platforms; investigating the benefits of distributed brokers for edge users; and assessing the impact of switching latency on applications. This thesis ultimately seeks to answer three key questions related to network and switching latency, the merits of distributed brokers, and the influence of switching latency on the reliability of end-user applications.
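The two latency quantities studied here can be computed from application-side timestamps. The sketch below is illustrative (the function names and the choice of summary statistics are assumptions, not the thesis's measurement code): switching latency is the gap between losing the old broker and receiving the first message via the new one, and per-message latency samples are summarized for comparison across deployments.

```python
import statistics

def switching_latency(disconnect_ts, first_msg_ts):
    """Broker-switch latency as seen by the application: time between
    losing the old broker and the first message from the new one."""
    return first_msg_ts - disconnect_ts

def latency_stats(samples_ms):
    """Summarize per-message latency samples (milliseconds) so that
    cloud, LAN, and multi-layer LAN deployments can be compared."""
    return {
        "mean": statistics.mean(samples_ms),
        "p95": statistics.quantiles(samples_ms, n=20)[-1],
        "max": max(samples_ms),
    }
```

Collecting these samples at the subscriber side keeps the measurement independent of the broker implementation, which matters when brokers are heterogeneous.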

    Monitoring Internet censorship: the case of UBICA

    As a consequence of the recent debate about restrictions on access to content on the Internet, a strong motivation has arisen for censorship monitoring: an independent, publicly available, global watch on Internet censorship activities is a necessary goal to pursue in order to guard citizens' right of access to information. Several techniques for enforcing censorship on the Internet are known in the literature, differing in their transparency towards the user, their selectivity in blocking specific resources or whole groups of services, and their collateral effects outside the administrative borders of their intended application. Monitoring censorship is further complicated by the dynamic nature of multiple aspects of this phenomenon, the number and diversity of resources targeted by censorship, and its global scale. In this thesis, the literature on Internet censorship and the available solutions for censorship detection are analyzed, characterizing censorship enforcement techniques as well as censorship detection techniques and tools. The available platforms and tools for censorship detection were found to fall short of providing a comprehensive monitoring platform able to manage a diverse set of measurement vantage points and a reporting interface continuously updated with the results of automated censorship analysis. The candidate proposes the design of such a platform, UBICA, along with a prototype implementation whose effectiveness has been experimentally validated in global monitoring campaigns. The results of the validation are discussed, confirming the effectiveness of the proposed design and suggesting future enhancements and research.
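One common detection technique such platforms run from vantage points is DNS comparison: resolve a name inside the censored network and compare the answers against an uncensored control resolver. The sketch below is illustrative only (the function and its heuristics are assumptions, not UBICA's actual detection logic); note that merely differing answer sets are a weak signal, since CDNs legitimately return different IPs per location.

```python
def dns_tampering_suspected(test_answers, control_answers,
                            known_block_ips=frozenset()):
    """Compare DNS answers from a vantage point inside the measured
    network with answers from an uncensored control resolver."""
    test, control = set(test_answers), set(control_answers)
    if not test and control:
        return True   # name resolves only outside: NXDOMAIN/empty injection
    if test & known_block_ips:
        return True   # answer points at a known blockpage address
    return False      # differing CDN answers alone are not flagged
```

A real platform would corroborate this signal with HTTP fetches and TCP reachability tests before reporting a site as censored.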

    Network monitoring in public clouds: issues, methodologies, and applications

    Cloud computing adoption is growing rapidly thanks to the large technical and economic advantages it brings. Its effects can also be observed in the fast increase of cloud traffic: according to recent forecasts, more than 75% of all datacenter traffic will be cloud traffic by 2018. Accordingly, providers have made huge investments in network infrastructure. Networks of geographically distributed datacenters have been built, which require efficient and accurate monitoring to operate. However, providers rarely expose information about the state of cloud networks or their design, and seldom make promises about their performance. In this scenario, cloud customers have to cope with performance unpredictability in spite of the primary role played by the network. Indeed, depending on the deployment practices adopted and the functional separation of application layers often implemented, the network heavily influences the performance of cloud services, also impacting costs and revenues. In this thesis, cloud networks are investigated through non-cooperative approaches, i.e., approaches that do not require access to any information restricted to the entities involved in providing the cloud service. A platform to monitor cloud networks from the customer's point of view is presented. It enables general customers, even those with limited expertise in configuring and managing cloud resources, to obtain valuable information about the state of the cloud network, according to a set of factors under their control. A detailed characterization of the cloud network and of its performance is provided, thanks to extensive experiments performed over recent years on the infrastructures of the two leading cloud providers (Amazon Web Services and Microsoft Azure).
The information base gathered through the proposed approaches allows customers to better understand the characteristics of these complex network infrastructures. Moreover, the experimental results are also useful to providers for understanding the quality of service perceived by customers. By properly interpreting the results, usage guidelines can be devised that enhance achievable performance and reduce costs. As a particular case study, the thesis also shows how monitoring information can be leveraged by the customer to implement convenient mechanisms to scale cloud resources without any a priori knowledge. More generally, we believe this thesis provides a better-defined picture of the characteristics of complex cloud network infrastructures, and gives the scientific community useful tools for characterizing them in the future.
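The scaling case study can be illustrated with a minimal customer-side rule that uses only non-cooperatively measured latency, with no a priori knowledge of the provider's network. The thresholds, window, and function name below are hypothetical, chosen only to show the shape of such a mechanism.

```python
def scale_decision(latency_samples_ms, threshold_ms=50.0, window=5):
    """Scaling hint from customer-side measurements alone: scale out when
    the moving average of recent latency samples exceeds a threshold,
    scale in when it drops well below it, otherwise hold."""
    recent = latency_samples_ms[-window:]
    avg = sum(recent) / len(recent)
    if avg > threshold_ms:
        return "scale_out"
    if avg < threshold_ms / 2:
        return "scale_in"
    return "hold"
```

The hysteresis gap between the scale-out and scale-in thresholds is there to avoid oscillating when latency hovers near the limit.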