11 research outputs found

    DDoS Mitigation: A Measurement-Based Approach

    Get PDF
    Society relies heavily upon the Internet for global communications. Simultaneously, Internet stability and reliability are continuously subject to deliberate threats. These threats include (Distributed) Denial-of-Service (DDoS) attacks, which can be devastating. As a result of DDoS, businesses lose hundreds of millions of dollars annually. Moreover, when it comes to vital infrastructure, national safety and even lives could be at stake. Effective defenses are therefore an absolute necessity. Prospective users of readily available mitigation solutions have many shapes and sizes to choose from, and the right fit may not always be apparent. In addition, the deployment and operation of mitigation solutions may come with hidden hazards that need to be better understood. Policy makers and governments also face questions about what needs to be done to promote cybersafety on a national level. Developing an optimal course of action to deal with DDoS therefore also brings about societal challenges. Even though the DDoS problem is by no means new, the scale of the problem is still unclear. We do not know exactly what it is we are defending against, and a better understanding of attacks is essential to addressing the problem head-on. To advance situational awareness, many technical and societal challenges still need to be tackled. Given the central importance of better understanding the DDoS problem to improve overall Internet security, the thesis that we summarize in this paper has three main contributions. First, we rigorously characterize attacks and attacked targets at scale. Second, we advance knowledge about the Internet-wide adoption, deployment, and operational use of various mitigation solutions. Finally, we investigate hidden hazards that can render mitigation solutions altogether ineffective.

    DDoS Never Dies? An IXP Perspective on DDoS Amplification Attacks

    Full text link
    DDoS attacks remain a major security threat to the continuous operation of Internet edge infrastructures, web services, and cloud platforms. While a large body of research focuses on DDoS detection and protection, to date we have ultimately failed to eradicate DDoS altogether. Worse, the landscape of DDoS attack mechanisms is still evolving, demanding an updated perspective on DDoS attacks in the wild. In this paper, we identify up to 2608 DDoS amplification attacks on a single day by analyzing multiple Tbps of traffic flows at a major IXP with a rich ecosystem of different networks. We observe the prevalence of well-known amplification attack protocols (e.g., NTP, CLDAP), which should no longer be exploitable given the established mitigation strategies. Nevertheless, they make up the largest fraction of DDoS amplification attacks within our observation period, and we witness the emergence of DDoS attacks using recently discovered amplification protocols (e.g., OpenVPN, ARMS, Ubiquiti Discovery Protocol). By analyzing the impact of DDoS on core Internet infrastructure, we show that DDoS can overload backbone capacity and that filtering approaches in prior work omit 97% of the attack traffic. (Comment: To appear at PAM 202)
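    As a hedged illustration of the setting above (not the paper's actual detection pipeline), the sketch below flags amplification-attack candidates in UDP flow records by looking for traffic from well-known reflector service ports converging on a single destination. The record fields, port list, and thresholds are invented assumptions for this example.

```python
# Illustrative amplification-attack heuristic over flow records.
# Field names, ports, and thresholds are assumptions, not the paper's method.
from collections import defaultdict

AMP_PORTS = {123: "NTP", 389: "CLDAP", 53: "DNS", 1194: "OpenVPN"}

def find_amplification_victims(flows, min_bytes=10_000_000, min_sources=10):
    """flows: iterable of dicts with keys proto, src_port, src_ip, dst_ip, bytes.
    Returns candidate victims: destinations receiving large UDP volumes
    from many distinct reflectors on known amplification service ports."""
    per_victim = defaultdict(lambda: {"bytes": 0, "sources": set(), "protos": set()})
    for f in flows:
        svc = AMP_PORTS.get(f["src_port"])
        if f["proto"] != "UDP" or svc is None:
            continue  # only UDP responses from known amplification services
        v = per_victim[f["dst_ip"]]
        v["bytes"] += f["bytes"]
        v["sources"].add(f["src_ip"])
        v["protos"].add(svc)
    return {ip: v for ip, v in per_victim.items()
            if v["bytes"] >= min_bytes and len(v["sources"]) >= min_sources}
```

    A real classifier at an IXP would of course work on sampled flow data and per-time-window aggregates; this sketch only shows the many-reflectors-to-one-victim pattern that defines reflection/amplification.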

    From the edge to the core: towards informed vantage point selection for internet measurement studies

    Get PDF
    Since the early days of the Internet, measurement scientists have been trying to keep up with its fast-paced development. As the Internet grew organically over time and without built-in measurability, this process requires many workarounds and due diligence. As a result, every measurement study is only as good as the data it relies on. Moreover, data quality is relative to the research question: a data set suitable for analyzing one problem may be insufficient for another. This is entirely expected, as the Internet is decentralized, i.e., there is no single observation point from which we can assess the complete state of the Internet. Because of that, every measurement study needs specifically selected vantage points that fit the research question. In this thesis, we present three different vantage points across the Internet topology, from the edge to the Internet core. We discuss their specific features, their suitability for different kinds of research questions, and how to work with the corresponding data. The data sets obtained at the presented vantage points allow us to conduct three different measurement studies and shed light on the following aspects: (a) the prevalence of IP source address spoofing at a large European Internet Exchange Point (IXP), (b) the propagation distance of BGP communities, an optional transitive BGP attribute used for traffic engineering, and (c) the impact of the global COVID-19 pandemic on Internet usage behavior at a large Internet Service Provider (ISP) and three IXPs.

    Securing the Internet at the Exchange Points

    Get PDF
    Master's thesis, Engenharia Informática (Arquitectura, Sistemas e Redes de Computadores), 2022, Universidade de Lisboa, Faculdade de Ciências. BGP, the Border Gateway Protocol, is the inter-domain routing protocol that glues the Internet together. Despite its importance, it has well-known security problems. Frequently, the BGP infrastructure is the target of prefix hijacking and path manipulation attacks. These attacks disrupt the normal functioning of the Internet by either redirecting traffic, potentially allowing eavesdropping, or even preventing it from reaching its destination altogether, affecting availability. These problems result from the lack of a fundamental security mechanism: the ability to validate the information in routing announcements. Specifically, BGP authenticates neither the prefix origin nor the validity of the announced routes. This means that an intermediate network that intercepts a BGP announcement can maliciously announce an IP prefix that it does not own as its own, or insert a bogus path to a prefix with the goal of intercepting traffic. Several solutions have been proposed in the past, but they all have limitations, of which the most severe is arguably the requirement to perform drastic changes to the existing BGP infrastructure (i.e., requiring the replacement of existing equipment). In addition, most solutions require widespread adoption to be effective. Finally, they typically require secure communication channels between the participating routers, which entails computationally intensive cryptographic verification capabilities that are normally unavailable in this type of equipment. With these challenges in mind, this thesis investigates the possibility of improving BGP security by leveraging the software-defined networking (SDN) technology that is increasingly common at Internet Exchange Points (IXPs).
    These interconnection facilities are single locations that typically connect hundreds to thousands of networks, working as Internet "middlemen" ideally placed to implement inter-network mechanisms, such as security, without requiring changes to the network operators' infrastructure. Our key idea is to include a secure channel between IXPs that, by running in the SDN server that controls these modern infrastructures, avoids the cryptographic requirements in the routers. In our solution, the secure communication channel implements a distributed ledger (a blockchain) for decentralized trust and its other inherent guarantees. The rationale is that by increasing trust and avoiding expensive infrastructure updates, we hope to create incentives for operators to adhere to these new IXP-enhanced security services.
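    Since the missing primitive named above is the ability to validate routing announcements, here is a minimal sketch of an RPKI-style route-origin check of the kind such an IXP-hosted service could apply. This is an illustration only, not the thesis's implementation; the ROA table and AS numbers are made up.

```python
# Illustrative route-origin validation against a toy ROA table.
# ROA entries and AS numbers are invented for this example.
import ipaddress

ROAS = [  # (authorized prefix, max accepted prefix length, authorized origin AS)
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
]

def validate_origin(prefix, origin_as):
    """Classify an announcement as valid, invalid, or unknown (no covering ROA)."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, asn in ROAS:
        if net.subnet_of(roa_net):
            covered = True  # a ROA covers this address space
            if net.prefixlen <= max_len and origin_as == asn:
                return "valid"
    return "invalid" if covered else "unknown"
```

    An announcement of covered space from the wrong origin AS, or as an overly specific prefix, comes back "invalid"; space with no ROA at all stays "unknown", which mirrors why partial deployment limits these mechanisms.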

    Internet-Wide Evaluations of Security and Vulnerabilities

    Get PDF
    The Internet significantly impacts world culture. Since its beginning, it has been a multilayered system, and it gains more protocols year by year. At its core remains the Internet protocol suite, where fundamental protocols such as IPv4, TCP/UDP, and DNS were initially introduced. Recently, more and more cross-layer attacks involving features in multiple layers have been reported. To better understand these attacks, e.g., how practical they are and how many users are vulnerable, Internet-wide evaluations are essential. In this cumulative thesis, we summarise our findings from various Internet-wide evaluations in recent years, with a main focus on DNS. Our evaluations showed that IP fragmentation poses a practical threat to DNS security, regardless of the transport protocol (TCP or UDP). Defence mechanisms such as DNS Response Rate Limiting can facilitate attacks on DNS, even though they are designed to protect it. We also extended the evaluations to a fundamental system which heavily relies on DNS: the web PKI. We found that Certificate Authorities suffered considerably from previous DNS vulnerabilities. We demonstrated that off-path attackers could hijack accounts at major CAs and manipulate resources there with various DNS cache poisoning attacks. The Domain Validation procedure faces similar vulnerabilities: even the latest Multiple-Validation-Agent DV could be downgraded and poisoned. On the other hand, we also performed Internet-wide evaluations of two important defence mechanisms. One is the cryptographic protocol for DNS security, DNSSEC. We found that less than 2% of popular domains were signed, among which about 20% were misconfigured. This is another example of how poorly deployed defence mechanisms worsen security. The other is ingress filtering, which stops spoofed traffic from entering a network. We presented the most complete Internet-wide evaluation of ingress filtering to date, covering over 90% of all Autonomous Systems. We found that over 80% of them were vulnerable to inbound spoofing.
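    As a toy illustration of the ingress-filtering idea evaluated above (not the thesis's measurement method), the sketch below shows the BCP 38-style check a border router can apply: accept a packet arriving on a customer port only if its source address lies inside a prefix allocated to that customer. The port names and prefixes are invented.

```python
# Toy BCP 38-style ingress filter; customer names and prefixes are invented.
import ipaddress

CUSTOMER_PREFIXES = {
    "cust-a": [ipaddress.ip_network("198.51.100.0/24")],
}

def accept_ingress(port, src_ip):
    """Accept a packet only if its source address belongs to the
    address space allocated to the customer behind this port."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES.get(port, []))
```

    A network that skips this check lets customers emit packets with arbitrary (spoofed) source addresses, which is exactly what the Internet-wide evaluation above measures at scale.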

    Machine Learning and Big Data Methodologies for Network Traffic Monitoring

    Get PDF
    Over the past 20 years, the Internet has seen an exponential growth of traffic, users, services, and applications. Currently, it is estimated that the Internet is used every day by more than 3.6 billion users, who generate 20 TB of traffic per second. Such a huge amount of data challenges network managers and analysts to understand how the network is performing, how users are accessing resources, how to properly control and manage the infrastructure, and how to detect possible threats. Along with mathematical, statistical, and set theory methodologies, machine learning and big data approaches have emerged to build systems that aim at automatically extracting information from the raw data that network monitoring infrastructures offer. In this thesis I will address different network monitoring solutions, evaluating several methodologies and scenarios. I will show how, following a common workflow, it is possible to exploit mathematical, statistical, set theory, and machine learning methodologies to extract meaningful information from the raw data. Particular attention will be given to machine learning and big data methodologies such as DBSCAN and the Apache Spark big data framework. The results show that, despite being able to take advantage of mathematical, statistical, and set theory tools to characterize a problem, machine learning methodologies are very useful to discover hidden information in the raw data. Using the DBSCAN clustering algorithm, I will show how to use YouLighter, an unsupervised methodology to group caches serving YouTube traffic into edge-nodes, and later, using the notion of Pattern Dissimilarity, how to identify changes in their usage over time. By applying YouLighter to 10-month long traces, I will pinpoint sudden changes in YouTube edge-node usage, changes that also impair the end users' Quality of Experience.
    I will also apply DBSCAN in the deployment of SeLINA, a self-tuning tool implemented in the Apache Spark big data framework to autonomously extract knowledge from network traffic measurements. Using SeLINA, I will show how to automatically detect the changes in the YouTube CDN previously highlighted by YouLighter. Along with these machine learning studies, I will show how to use mathematical and set theory methodologies to investigate the browsing habits of Internauts. Using a two-week dataset, I will show how, over this period, Internauts continue discovering new websites. Moreover, I will show that it is hard to build a reliable profiler using only DNS information. Next, by exploiting mathematical and statistical tools, I will show how to characterize Anycast-enabled CDNs (A-CDNs). I will show that A-CDNs are widely used for both stateless and stateful services; that A-CDNs are quite popular, as more than 50% of web users contact an A-CDN every day; and that stateful services can benefit from A-CDNs, since their paths are very stable over time, as demonstrated by the presence of only a few anomalies in their Round Trip Time. Finally, I will conclude by showing how I used BGPStream, an open-source software framework for the analysis of both historical and real-time Border Gateway Protocol (BGP) measurement data. Using BGPStream in real-time mode, I will show how I detected a Multiple Origin AS (MOAS) event and how I studied black-holing community propagation, showing the effect of this community in the network. Then, using BGPStream in historical mode and the Apache Spark big data framework over 16 years of data, I will present results such as the continuous growth of IPv4 prefixes and the growth of MOAS events over time. All these studies aim to show that monitoring is a fundamental task in different scenarios, and in particular to highlight the importance of machine learning and big data methodologies.
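    To make the clustering step concrete, the following is a compact pure-Python DBSCAN of the kind YouLighter builds on. It is an illustrative reimplementation, not the thesis's code, and it works on one-dimensional feature values with invented parameters; the real system clusters cache measurements with richer features.

```python
# Compact DBSCAN for 1-D points: density-reachable points share a cluster id,
# isolated points are labelled -1 (noise). Illustrative, not YouLighter's code.

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j in range(len(points)) if abs(points[i] - points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # noise (may be reclaimed later as a border point)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:  # expand the cluster over density-reachable points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: joins but is not expanded
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:
                seeds.extend(jn)  # core point: keep expanding from it
    return labels
```

    DBSCAN needs no preset number of clusters, which is what makes it suitable for grouping an unknown and changing number of cache edge-nodes.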

    A First Joint Look at DoS Attacks and BGP Blackholing in the Wild

    Get PDF
    BGP blackholing is an operational countermeasure that builds upon the capabilities of BGP to achieve DoS mitigation. Although empirical evidence of blackholing activities is documented in the literature, a clear understanding of how blackholing is used in practice when attacks occur is still missing. This paper presents a first joint look at DoS attacks and BGP blackholing in the wild. We do this on the basis of two complementary data sets of DoS attacks, inferred from a large network telescope and from DoS honeypots, and on a data set of blackholing events. All data sets span a period of three years, thus providing a longitudinal overview of the operational deployment of blackholing during DoS attacks.
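    As background on the mechanism, a blackholing request is typically visible in a BGP update as the well-known BLACKHOLE community 65535:666 (RFC 7999), or a provider-specific ASN:666 community, attached to a very specific prefix. The sketch below illustrates that signature only; the update-record format is invented and this is not the paper's inference method.

```python
# Illustrative check for the blackholing signature in a BGP update record.
# The dict format is invented; 65535:666 is the RFC 7999 BLACKHOLE community.
import ipaddress

BLACKHOLE = (65535, 666)

def is_blackholing(update):
    """Flag updates tagged with a blackhole community on a very specific
    prefix (more specific than /24, often a /32 covering just the victim)."""
    comms = update.get("communities", [])
    tagged = BLACKHOLE in comms or any(c[1] == 666 for c in comms)
    very_specific = ipaddress.ip_network(update["prefix"]).prefixlen > 24
    return tagged and very_specific
```

    The very-specific prefix is the operationally interesting part: it lets an operator drop traffic to a single attacked host while the covering prefix keeps serving everyone else.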

    Resilience to DDoS attacks

    Get PDF
    Master's thesis, Segurança Informática, 2022, Universidade de Lisboa, Faculdade de Ciências. Distributed Denial-of-Service (DDoS) is one of the most common cyberattacks used by malicious actors. It has been evolving over the years, using more complex techniques to increase its attack power and surpass current defense mechanisms. Due to the number of different DDoS attacks and their constant evolution, companies need to be constantly aware of developments in DDoS solutions. Additionally, the existence of multiple solutions also makes it hard for companies to decide which solution best suits their needs and should be implemented. In order to help these companies, our work focuses on analyzing existing DDoS solutions, so that companies can implement solutions that lead to the prevention, detection, mitigation, and tolerance of DDoS attacks, with the objective of improving the robustness and resilience of companies against DDoS attacks. In our work, we present and describe different DDoS solutions; some need to be purchased and others are open-source or freeware, although the latter require more technical expertise from cybersecurity agents. To understand how cybersecurity agents protect their companies against DDoS attacks nowadays, we built a questionnaire and sent it to multiple cybersecurity agents from different countries and industries. As a result of the study of the different DDoS solutions and the information gathered from the questionnaire, it was possible to create a DDoS framework to guide companies in the decision-making process of choosing the DDoS solutions that best suit their resources and needs, ensuring that companies can develop their robustness and resilience to fight DDoS attacks. The proposed framework is divided into three phases: the first and second phases aim to understand the company context and the assets that need to be protected, and the last phase is where the DDoS solution is chosen based on the information gathered in the previous phases. For each DDoS solution, we analyzed and presented which DDoS attack types it can prevent, detect, and/or mitigate.

    Online learning on the programmable dataplane

    Get PDF
    This thesis makes the case for managing computer networks with data-driven methods (automated statistical inference and control based on measurement data and runtime observations) and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, which are currently dominated by the use of hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, and their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network. To justify this argument, I advance the state of the art in data-driven defence of networks, in novel dataplane-friendly online reinforcement learning algorithms, and in in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network, and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state.
    I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation to histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits to state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as individual algorithms.
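    As a hedged illustration of the fixed-point approach mentioned above (not the thesis's actual implementation), the sketch below performs a one-step Q-learning update entirely in Q16.16 integer arithmetic: the style of computation a dataplane pipeline without floating-point support can execute with multiplies and shifts.

```python
# One Q-learning step in Q16.16 fixed-point integer arithmetic.
# Illustrative only; format and hyperparameters are invented for this example.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in Q16.16

def to_fixed(x):
    return int(round(x * ONE))

def from_fixed(x):
    return x / ONE

def fmul(a, b):
    # Fixed-point multiply: full-width product, then shift back down.
    return (a * b) >> FRAC_BITS

def q_update(q, reward, q_next, alpha, gamma):
    """Q <- Q + alpha * (reward + gamma * Q_next - Q), all in Q16.16."""
    target = reward + fmul(gamma, q_next)
    return q + fmul(alpha, target - q)
```

    On real hardware the shift widths and saturation behaviour are the delicate parts; the point here is only that the update rule needs nothing beyond integer multiply, add, and shift.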