40 research outputs found

    A Macroscopic Study of Network Security Threats at the Organizational Level.

    Defenders of today's networks are confronted with a large number of malicious activities such as spam, malware, and denial-of-service attacks. Although many studies have been performed on how to mitigate security threats, the interaction between attackers and defenders is like a game of Whac-a-Mole, in which the security community is chasing after attackers rather than helping defenders build systematic defensive solutions. As a complement to studies that focus on attackers or end hosts, this thesis studies security threats from the perspective of the organization, the central authority that manages and defends a group of end hosts. This perspective provides a balanced position from which to understand security problems and to deploy and evaluate defensive solutions. This thesis explores how a macroscopic view of network security from an organization's perspective can be formed to help measure, understand, and mitigate security threats. To realize this goal, we bring together a broad collection of reputation blacklists. We first measure the properties of the malicious sources identified by these blacklists and their impact on an organization. We then aggregate the malicious sources to Internet organizations and characterize the maliciousness of organizations and their evolution over a period of two and a half years. Next, we aim to understand the cause of different maliciousness levels in different organizations. By examining the relationship between eight security mismanagement symptoms and the maliciousness of organizations, we find a strong positive correlation between mismanagement and maliciousness. Lastly, motivated by the observation that some organizations have a significant fraction of their IP addresses involved in malicious activities, we evaluate the tradeoff of one type of mitigation solution at the organization level: network takedowns.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116714/1/jingzj_1.pd
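    The correlation analysis described above can be illustrated with a short sketch. Everything below is hypothetical (the thesis does not publish code here): per-organization counts of blacklisted IPs and mismanagement symptoms are assumed, a maliciousness score is derived as the blacklisted fraction of the announced address space, and a rank correlation relates the two.

    # Hypothetical sketch: correlate mismanagement symptoms with maliciousness
    # per organization. Data, column names, and scoring are illustrative only.
    import pandas as pd
    from scipy.stats import spearmanr

    # One row per organization: announced address space, IPs seen on
    # reputation blacklists, and how many of the eight mismanagement
    # symptoms the organization exhibits.
    orgs = pd.DataFrame({
        "asn":             [64496, 64497, 64498, 64499],
        "announced_ips":   [65536, 16384, 262144, 4096],
        "blacklisted_ips": [120,   900,   300,    5],
        "symptom_count":   [1,     6,     3,      0],   # out of 8 symptoms
    })

    # Maliciousness: fraction of the organization's address space on blacklists.
    orgs["maliciousness"] = orgs["blacklisted_ips"] / orgs["announced_ips"]

    # Rank correlation is robust to the heavy-tailed distribution of both metrics.
    rho, p_value = spearmanr(orgs["symptom_count"], orgs["maliciousness"])
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")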

    Experience a Faster and More Private Internet in Library and Information Centres with 1.1.1.1 DNS Resolver

    As a densely populated country with a large user base, library and information centres (LICs) in India often suffer from a lack of internet speed. Indian LICs frequently employ open DNS resolvers to achieve high internet speeds. In the new normal, cyberattacks on Indian educational institutions have skyrocketed, necessitating a more secure and quicker DNS resolver. To meet this demand, the paper assesses Cloudflare's 1.1.1.1 DNS resolver and demonstrates its suitability for Indian LICs. To examine how well Cloudflare's 1.1.1.1 DNS resolver provides a secure and fast private network, the researchers used six Windows PCs connected to ten different networks with the same ISP and measured internet speed with Measurement Lab and the nPerf online free internet speed checker in the Chrome browser (v. 96.0). The internet speed was collected manually before and after the configuration and further analysed to evaluate the percentage increase or decrease. The findings reveal that after 1.1.1.1 was configured, the internet speed of the Chrome browser on the targeted PCs increased by 36.33% for download and 73.88% for upload. From these results, we conclude that Indian LICs can rely on the 1.1.1.1 DNS resolver to experience fast and secure private network connections. Several studies compare two or more DNS resolvers in terms of speed and security, but none specifically addresses their usability and benefits in LICs. Although the use of DNS resolvers at the institutional level is not a new concept, there is still a lack of understanding among library professionals in India about the deployment criteria and benefits. As a result of this research, library professionals may be inspired to use DNS resolvers comprehensively to provide lightning-fast services to their users.
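    The before/after comparison described above boils down to a percentage change of the measured mean speeds. The following sketch illustrates that calculation; the speed values are made-up placeholders, not the study's data.

    # Illustrative sketch of the before/after speed comparison.
    # Speeds (Mbps) below are placeholders, not the study's measurements.
    before = {"download": [42.1, 38.7, 40.3], "upload": [11.2, 10.8, 11.5]}
    after  = {"download": [55.0, 57.3, 54.1], "upload": [19.4, 18.7, 19.9]}

    def percent_change(old, new):
        """Percentage increase (positive) or decrease (negative) of the mean."""
        old_mean = sum(old) / len(old)
        new_mean = sum(new) / len(new)
        return 100.0 * (new_mean - old_mean) / old_mean

    for direction in ("download", "upload"):
        change = percent_change(before[direction], after[direction])
        print(f"{direction}: {change:+.2f}% after switching to 1.1.1.1")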

    Methods for revealing and reshaping the African Internet Ecosystem as a case study for developing regions: from isolated networks to a connected continent

    Mención Internacional en el título de doctor.
    While connecting end-users worldwide, the Internet increasingly promotes local development by making challenges much simpler to overcome, regardless of the field in which it is used: governance, economy, education, health, etc. However, the African Network Information Centre (AfriNIC), the Regional Internet Registry (RIR) of Africa, is characterized by the lowest Internet penetration: 28.6% as of March 2017, compared to a worldwide average of 49.7% according to International Telecommunication Union (ITU) estimates [139]. Moreover, end-users experience a poor Quality of Service (QoS) provided at high costs. It is thus of interest to enlarge the Internet footprint in such under-connected regions and determine where the situation can be improved. Along these lines, this doctoral thesis thoroughly inspects, using both active and passive data analysis, the critical aspects of the African Internet ecosystem and outlines the milestones of a methodology that could be adopted for similar purposes in other developing regions. The thesis first presents our efforts to help build measurement infrastructures that alleviate the shortage of a diversified range of Vantage Points (VPs) in the region, as we cannot improve what we cannot measure. It then unveils our timely and longitudinal inspection of African interdomain routing using the enhanced RIPE Atlas measurement infrastructure, filling the gap in knowledge of both the IPv4 and IPv6 topologies interconnecting local Internet Service Providers (ISPs). It notably proposes reproducible data analysis techniques suitable for the treatment of any set of similar measurements to infer the behavior of ISPs in the region. The results show a large variety of transit habits, which depend on socio-economic factors such as the language, the currency area, or the geographic location of the country in which the ISP operates. They indicate the prevailing dominance of ISPs based outside Africa for the provision of intracontinental paths, but also shed light on the efforts of stakeholders towards traffic localization. Next, the thesis investigates the causes and impacts of congestion in the African IXP substrate, as the prevalence of this endemic phenomenon in local Internet markets may hinder their growth. Towards this end, Ark monitors were deployed at six strategically selected local Internet eXchange Points (IXPs) and used to collect Time-Sequence Latency Probes (TSLP) measurements during a whole year. The analysis of these datasets reveals no evidence of widespread congestion: only 2.2% of the monitored links experienced a noticeable indication of congestion, a finding that encourages peering. The causes of these events were identified during interviews with IXP operators, showing how essential collaboration with stakeholders is to understanding the causes of performance degradation. As part of the Internet Society (ISOC) strategy to allow the Internet community to profile the IXPs of a particular region and monitor their evolution, a route-collector data analyzer was then developed, deployed, and tested in the AfriNIC region. This open-source web platform, titled the "African" Route-collectors Data Analyzer (ARDA), provides metrics that picture in real time the status of interconnection at different levels, using public routing information available at local route-collectors with a peering viewpoint of the Internet.
The results highlight that only a small proportion of the Autonomous System Numbers (ASNs) assigned by AfriNIC (17%) are peering in the region, a fraction that remained static from April to September 2017 despite the significant growth of IXPs in some countries. They show how ARDA can help detect the impact of a policy on the IXP substrate and help ISPs worldwide identify new interconnection opportunities in Africa, the targeted region. Since broadening the underlying network is not useful without appropriately provisioned services to exploit it, the thesis then delves into the availability and utilization of the web infrastructure serving the continent. Towards this end, a comprehensive measurement methodology is applied to collect data from various sources. A focus on Google reveals that its content infrastructure in Africa is indeed expanding; nevertheless, much of its web content is still served from the United States (US) and Europe, even though Google is the most popular content source in many African countries. Further, the same analysis is repeated across top global and regional websites, showing that even top African websites prefer to host their content abroad. Following that, the primary bottlenecks faced by Content Providers (CPs) in the region, such as the lack of peering between the networks hosting our probes and poorly configured DNS resolvers, are explored to outline proposals for further ISP and CP deployments. Considering the above, an option to enrich connectivity and incentivize CPs to establish a presence in the region is to interconnect ISPs present at isolated IXPs by creating a distributed IXP layout spanning the continent. In this respect, the thesis finally provides a four-step interconnection scheme, which parameterizes socio-economic, geographical, and political factors using public datasets. It demonstrates that this constrained solution doubles the percentage of continental intra-African paths, reduces their length, and drastically decreases the median of their Round Trip Times (RTTs) as well as the RTTs to ASes hosting the top 10 global and top 10 regional Alexa websites. We hope that quantitatively demonstrating the benefits of this framework will incentivize ISPs to intensify peering and CPs to increase their presence, enabling fast, affordable, and available access at the Internet frontier.
Programa Oficial de Doctorado en Ingeniería Telemática. Presidente: David Fernández Cambronero.- Secretario: Alberto García Martínez.- Vocal: Cristel Pelsse
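    One ARDA-style interconnection metric mentioned above, the share of AfriNIC-assigned ASNs visible at local route-collectors, reduces to a set intersection once the two ASN sets are extracted. The sketch below is purely illustrative: the ASN values are placeholders, and in practice the observed set would be parsed from the route-collectors' BGP RIB dumps rather than hard-coded.

    # Hypothetical sketch of a peering-visibility metric: which fraction of
    # AfriNIC-assigned ASNs appears in AS paths seen at local route-collectors.
    afrinic_asns = {36866, 36905, 37100, 37282, 37611, 328112}   # placeholder ASNs

    # ASNs extracted from routes observed at regional route-collectors
    # (in reality parsed from MRT/BGP RIB dumps, not listed by hand).
    observed_at_collectors = {36905, 37100, 328112, 15169}

    peering_locally = afrinic_asns & observed_at_collectors
    fraction = len(peering_locally) / len(afrinic_asns)
    print(f"{fraction:.0%} of AfriNIC ASNs are visible at local route-collectors")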

    Optimizing the delivery of multimedia over mobile networks

    Mención Internacional en el título de doctor.
    The consumption of multimedia content is moving from a residential environment to mobile phones. Mobile data traffic, driven mostly by video demand, is increasing rapidly, and wireless spectrum is becoming a more and more scarce resource. This makes it highly important to operate mobile networks efficiently. To tackle this, recent developments in anticipatory networking schemes make it possible to predict the future capacity of mobile devices and optimize the allocation of the limited wireless resources. Further, optimizing Quality of Experience (smooth, quick, and high-quality playback) is more difficult in the mobile setting, due to the highly dynamic nature of wireless links. A key requirement for both anticipatory networking schemes and QoE optimization is estimating the available bandwidth of mobile devices. Ideally, this should be done quickly and with low overhead. In summary, we propose a series of improvements to the delivery of multimedia over mobile networks. We do so by identifying inefficiencies in the interconnection of mobile operators with the servers hosting content, by proposing an algorithm to opportunistically create frequent capacity estimations suitable for use in resource optimization solutions, and finally by proposing another algorithm able to estimate the bandwidth class of a device from minimal traffic, in order to identify the ideal streaming quality its connection may support before commencing playback. The main body of this thesis proposes two lightweight algorithms designed to provide bandwidth estimations under the tight constraints of the mobile environment, most notably the usually very limited traffic quota. To do so, we begin with a thorough overview of the communication path between a content server and a mobile device. We continue by analysing how accurate smartphone measurements can be, and identify in depth the various artifacts adding noise to the fidelity of on-device measurements. Then, we propose a novel lightweight measurement technique that can be used as a basis for advanced resource optimization algorithms to be run on mobile phones. Our main idea leverages an original packet-dispersion-based technique to estimate per-user capacity. This allows passive measurements by just sampling the existing mobile traffic. Our technique is able to efficiently filter outliers introduced by mobile network schedulers and phone hardware. In order to assess and verify our measurement technique, we apply it to a diverse dataset generated by both extensive simulations and a week-long measurement campaign spanning two cities in two countries and different radio technologies, and covering all times of the day. The results demonstrate that our technique is effective even if it is provided with only a small fraction of the exchanged packets of a flow. The only requirement for the input data is that it should consist of a few consecutive packets gathered periodically. This makes the measurement algorithm a good candidate for inclusion in OS libraries to allow for advanced resource optimization and application-level traffic scheduling, based on current and predicted future user capacity. We proceed with another algorithm that takes advantage of the traffic generated by short-lived TCP connections, which form the majority of mobile connections, to passively estimate the currently available bandwidth class.
Our algorithm is able to extract useful information even if the TCP connection never exits the slow-start phase. To the best of our knowledge, no other solution can operate with such constrained input. Our estimation method achieves good precision despite artifacts introduced by the slow-start behavior of TCP, the mobile scheduler, and phone hardware. We evaluate our solution against traces collected in four European countries. Furthermore, the small footprint of our algorithm allows its deployment on resource-limited devices. Finally, in an attempt to face the rapid traffic increase, mobile application developers outsource their cloud infrastructure deployment and content delivery to cloud computing services and content delivery networks. Studying how these services, which we collectively denote Cloud Service Providers (CSPs), perform over Mobile Network Operators (MNOs) is crucial to understanding some of the performance limitations of today's mobile apps. To that end, we perform the first empirical study of the complex dynamics between applications, MNOs, and CSPs. First, we use real mobile app traffic traces that we gathered through a global crowdsourcing campaign to identify the most prevalent CSPs supporting today's mobile Internet. Then, we investigate how well these services interconnect with major European MNOs at a topological level, and measure their performance over European MNO networks through a month-long measurement campaign on the MONROE mobile broadband testbed. We discover that the top 6 most prevalent CSPs are used by 85% of apps, and observe significant differences in their performance across different MNOs due to the nature of their services, peering relationships with MNOs, and deployment strategies. We also find that CSP performance in MNOs is affected by inflated path length, roaming, and the presence of middleboxes, but not by the choice of DNS resolver. We also observe that the choice of the operator's Point of Presence (PoP) may inflate the delay towards popular websites by at least 20%.
This work has been supported by IMDEA Networks Institute.
Programa Oficial de Doctorado en Ingeniería Telemática. Presidente: Ahmed Elmokashfi.- Secretario: Rubén Cuevas Rumín.- Vocal: Paolo Din
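    The packet-dispersion idea underlying the first algorithm can be sketched in a few lines: the instantaneous rate seen by a device is a packet's size divided by its gap to the previous packet of the same burst, and a robust statistic over many such pairs filters scheduler and hardware artifacts. The snippet below is a minimal illustration under assumed thresholds; the thesis's actual filtering is considerably more elaborate.

    # Minimal sketch of a packet-dispersion capacity estimate from
    # (arrival_time_s, size_bytes) samples of consecutive downlink packets.
    # Thresholds are illustrative, not the thesis's parameters.
    from statistics import median

    def capacity_estimate_mbps(packets, min_gap_s=1e-6, max_gap_s=0.05):
        """Estimate per-user capacity from the dispersion of consecutive packets."""
        rates = []
        for (t_prev, _), (t_cur, size) in zip(packets, packets[1:]):
            gap = t_cur - t_prev
            # Discard implausible gaps: timestamping noise (too small) or
            # idle periods between bursts (too large) rather than dispersion.
            if not (min_gap_s <= gap <= max_gap_s):
                continue
            rates.append(size * 8 / gap / 1e6)   # Mbps for this packet pair
        # The median filters outliers caused by batching and scheduling jitter.
        return median(rates) if rates else None

    sample = [(0.0000, 1400), (0.0011, 1400), (0.0022, 1400), (0.1000, 60)]
    print(capacity_estimate_mbps(sample))   # roughly 10 Mbps for this toy burst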

    How to Measure TLS, X.509 Certificates, and Web PKI: A Tutorial and Brief Survey

    Transport Layer Security (TLS) is the basis for many Internet applications and services to achieve end-to-end security. In this paper, we provide guidance on how to measure TLS deployments, including X.509 certificates and the Web PKI. We introduce common data sources and tools, and systematically describe the steps necessary to conduct sound measurements and data analysis. By surveying prior TLS measurement studies, we find that diverging results are rooted in different measurement setups rather than in different deployments. To improve the situation, we identify common pitfalls and introduce a framework to describe TLS and Web PKI measurements. Where necessary, our insights are bolstered by a data-driven approach, in which we complement arguments with additional measurements.
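    As a flavour of the active-measurement steps such a tutorial covers, the sketch below performs a single TLS handshake and inspects the negotiated parameters and the leaf certificate using Python's standard ssl module. It is only an illustration of data collection against one placeholder host; real studies scan at scale and record full chains, TLS versions, and cipher suites.

    # Minimal active TLS measurement: handshake with one host and inspect
    # the session parameters and the peer's leaf certificate.
    import socket
    import ssl

    host = "example.org"   # placeholder target
    context = ssl.create_default_context()

    with socket.create_connection((host, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("TLS version:", tls.version())
            print("Cipher suite:", tls.cipher())
            cert = tls.getpeercert()   # parsed fields of the validated leaf cert
            print("Subject:  ", cert.get("subject"))
            print("Issuer:   ", cert.get("issuer"))
            print("Not after:", cert.get("notAfter"))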

    ORLease: Optimistically Replicated Lease Using Lease Version Vector For Higher Replica Consistency in Optimistic Replication Systems

    There is a tradeoff between the availability and consistency properties of any distributed replication system. Optimistic replication favors high availability over strong consistency, so that the replication system can support disconnected replicas as well as high network latency between replicas. Optimistic replication improves the availability of these systems by allowing data updates to be committed at their originating replicas first, before they are asynchronously replicated out and committed later at the rest of the replicas. As a result, the whole system operates under relaxed data consistency, because there is no locking mechanism to synchronize access to the replicated data resources. When consistency is relaxed, there is the potential of reading stale data, as well as of introducing data conflicts due to concurrent updates made at different replicas. These issues could be ameliorated if the optimistic replication system aggressively propagated data updates at times of good network connectivity between replicas. However, aggressive propagation of data updates does not scale well in write-intensive environments and incurs communication overhead to keep all replicas in sync. To mitigate the relaxed-consistency drawback, a new technique has been developed that improves the consistency of optimistic replication systems without sacrificing their availability and with minimal communication overhead. This methodology is based on applying the concurrency-control technique of leasing in an optimistic way. The optimistic lease technique is built on top of a replication framework that prioritizes metadata replication over data replication. The framework treats lease requests as replication metadata updates and replicates them aggressively in order to optimistically acquire leases on replicated data resources. The technique demonstrates best-effort, semi-locking semantics that improve overall system consistency while avoiding the locking issues that could arise in optimistic replication systems.
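    A building block behind the lease version vector is the ordinary version vector, which detects whether two replicas' states are ordered or concurrent. The sketch below shows only that standard structure under assumed semantics; the lease-specific extensions of ORLease are not reproduced here.

    # Plain version vector: one update counter per replica. ORLease extends
    # this idea into a "lease version vector"; that extension is not shown.
    class VersionVector:
        def __init__(self, counters=None):
            self.counters = dict(counters or {})   # replica_id -> update count

        def increment(self, replica_id):
            """Record a local update at the given replica."""
            self.counters[replica_id] = self.counters.get(replica_id, 0) + 1

        def dominates(self, other):
            """True if this vector reflects every update the other has seen."""
            return all(self.counters.get(r, 0) >= c
                       for r, c in other.counters.items())

        def concurrent_with(self, other):
            """Neither vector dominates the other: a potential conflict."""
            return not self.dominates(other) and not other.dominates(self)

        def merge(self, other):
            """Element-wise maximum, used when reconciling replicas."""
            for r, c in other.counters.items():
                self.counters[r] = max(self.counters.get(r, 0), c)

    a, b = VersionVector(), VersionVector()
    a.increment("replica-1"); b.increment("replica-2")
    print(a.concurrent_with(b))   # True: concurrent updates, a potential conflict
    a.merge(b)
    print(a.dominates(b))         # True after reconciliation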

    Calibration and Analysis of Enterprise and Edge Network Measurements

    With the growth of the Internet over the past several decades, the field of Internet and network measurement has attracted the attention of many researchers. Such measurements have allowed a better understanding of the inner workings of both the global Internet and its individual parts. But undertaking a measurement study in a sound fashion is no easy task. Given the complexity of modern networks, one has to take great care in anticipating, detecting, and eliminating measurement errors and biases. In this thesis we pave the way for a more systematic calibration of network traces. Such calibration ensures the soundness and robustness of the analysis results by revealing and fixing flaws in the data. We collect our measurement data in two environments: in a medium-sized enterprise and at the Internet edge. For the former we perform two rounds of data collection from the enterprise switches. We use the differences in the way we recorded the network traces during the first and second rounds to develop and assess a methodology for five calibration aspects: measurement gain, measurement loss, measurement reordering, timing, and topology. For the dataset gathered at the Internet edge, we perform calibration in the form of extensive checks of data consistency and sanity. After calibrating the data, we analyse its various aspects. For the enterprise dataset we look at TCP dynamics in the enterprise environment. Here we first give a high-level overview of TCP connection characteristics such as termination status, size, duration, and rate. Then we assess the parameters important for TCP performance, such as retransmissions, out-of-order deliveries, and channel utilization. Finally, using the Internet edge dataset, we gauge the performance characteristics of edge connectivity.
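    One of the TCP-dynamics checks mentioned above, spotting retransmitted or out-of-order segments, can be illustrated with a simple sequence-number heuristic. The sketch below only counts segments that revisit already-seen byte ranges; it is an assumed simplification, since a real analysis (as in the thesis) must separate retransmissions from reordering using timestamps and ACK state.

    # Count TCP segments whose data begins below the highest byte seen so far,
    # a rough indicator of retransmission or out-of-order delivery.
    def count_sequence_regressions(segments):
        """segments: iterable of (seq, payload_len) in capture order."""
        highest_end = 0
        regressions = 0
        for seq, length in segments:
            if seq < highest_end:      # revisits data we have (at least partly) seen
                regressions += 1
            highest_end = max(highest_end, seq + length)
        return regressions

    # Hypothetical capture: the third segment repeats earlier data.
    trace = [(1, 1448), (1449, 1448), (1, 1448), (2897, 1448)]
    print(count_sequence_regressions(trace))   # -> 1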

    Detection of distributed denial-of-service attacks at the source

    Year after year, new records are set for the amount of traffic in a single attack, demonstrating not only the constant presence of distributed denial-of-service attacks, but also their evolution, which sets them apart from other network threats. The importance of resource availability keeps growing, and the debate on the security of network devices and infrastructures is continuous, given their preponderant role in both the home and corporate domains. In the face of this constant threat, the latest network security systems have been applying pattern-recognition techniques to infer, detect, and react more quickly and assertively. This dissertation proposes methodologies to infer patterns of network activity from its traffic: whether it follows behaviour previously defined as normal, or whether there are deviations that raise suspicion about the normality of an action on the network. Everything indicates that the future of network defence systems will continue in this direction, exploiting not only the growing amount of traffic, but also the diversity of actions, services, and entities that reflect distinct patterns, thus contributing to the detection of anomalous activities on the network. The methodologies propose the collection of metadata, up to the transport layer of the OSI model, which is then processed by machine learning algorithms in order to classify the underlying action. Intending to contribute beyond denial-of-service attacks and the network domain, the methodologies are described in a generic way, so that they can be applied in other scenarios of greater or lesser complexity. The third chapter presents a proof of concept with attack vectors that marked history, together with a few evaluation metrics that allow the different classifiers to be compared in terms of their success rate, given the various activities on the network and their inherent dynamics. The various tests show the flexibility, speed, and accuracy of the classification algorithms, with success rates between 90 and 99 percent.
Mestrado em Engenharia de Computadores e Telemátic
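    The classification step described above, labelling a network action from transport-layer flow metadata, can be sketched with a generic supervised pipeline. Everything in the snippet is hypothetical: the features, the toy data, and the choice of a random forest are illustrative stand-ins, since the dissertation compares several classifiers rather than prescribing one.

    # Hypothetical flow-classification sketch: metadata up to the transport
    # layer is used to label an activity as benign or DDoS-like.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Each row: [packets/s, bytes/packet, SYN ratio, unique src ports, duration s]
    X = [
        [120,    800, 0.02,   5, 30.0],   # normal web browsing
        [90000,   60, 0.98, 950,  5.0],   # SYN-flood-like pattern
        [200,   1200, 0.01,   8, 60.0],   # normal bulk transfer
        [70000,   64, 0.95, 870,  8.0],   # SYN-flood-like pattern
        [150,    900, 0.03,   6, 45.0],   # normal web browsing
        [80000,   60, 0.97, 900,  6.0],   # SYN-flood-like pattern
    ]
    y = ["benign", "ddos", "benign", "ddos", "benign", "ddos"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=0, stratify=y)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))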