160 research outputs found

    Stellar: Network Attack Mitigation using Advanced Blackholing

    Get PDF
    © ACM 2018. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the 14th International Conference on Emerging Networking EXperiments and Technologies - CoNEXT '18, http://dx.doi.org/10.1145/3281411.3281413.

    Network attacks, including Distributed Denial-of-Service (DDoS), continuously increase in terms of bandwidth and damage (recent attacks exceed 1.7 Tbps) and have a devastating impact on the targeted companies and governments. Over the years, mitigation techniques, ranging from blackholing to policy-based filtering at routers, and on to traffic scrubbing, have been added to the network operator's toolbox. Even though these mitigation techniques provide some protection, they either yield severe collateral damage, e.g., dropping legitimate traffic (blackholing), are cost-intensive or do not scale well for Tbps-level attacks (ACL filtering, traffic scrubbing), or require cooperation and sharing of resources (Flowspec). In this paper, we propose Advanced Blackholing and its system realization Stellar. Advanced blackholing builds upon the scalability of blackholing while limiting collateral damage by increasing its granularity. Moreover, Stellar reduces the required level of cooperation to enhance mitigation effectiveness. We show that fine-grained blackholing can be realized, e.g., at a major IXP, by combining available hardware filters with novel signaling mechanisms. We evaluate the scalability and performance of Stellar at a large IXP that interconnects more than 800 networks, exchanges more than 6 Tbps of traffic, and witnesses many network attacks every day. Our results show that network attacks, e.g., DDoS amplification attacks, can be successfully mitigated while the networks and services under attack continue to operate untroubled.

    Funding: EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNet; DFG, FE 570/4-1; Gottfried Wilhelm Leibniz-Preis 201
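
    The abstract mentions combining hardware filters with novel signaling. As background, the standard way operators signal blackholing today is via BGP: the victim's /32 is announced tagged with the well-known BLACKHOLE community 65535:666 (RFC 7999). The sketch below is illustrative background, not Stellar's actual mechanism; it builds such announcements in the text format accepted by ExaBGP's process API, with placeholder prefix and next hop.

```python
# Minimal sketch (not Stellar's signaling): blackhole a victim /32 by
# tagging it with the RFC 7999 BLACKHOLE community, expressed as ExaBGP
# process-API text commands. Addresses below are documentation prefixes.
import sys

BLACKHOLE = "65535:666"  # well-known community from RFC 7999

def announce_blackhole(victim_ip: str, next_hop: str) -> str:
    """Build an ExaBGP API command that blackholes a single host route."""
    return (f"announce route {victim_ip}/32 next-hop {next_hop} "
            f"community [{BLACKHOLE}]")

def withdraw_blackhole(victim_ip: str) -> str:
    """Build the matching withdrawal once the attack subsides."""
    return f"withdraw route {victim_ip}/32"

if __name__ == "__main__":
    # ExaBGP reads commands like these on stdin when run as a process API.
    sys.stdout.write(announce_blackhole("203.0.113.7", "192.0.2.1") + "\n")
```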

    Minimal deployable endpoint-driven network forwarding: principle, designs and applications

    Get PDF
    Networked systems now have a significant impact on human lives: the Internet, connecting the world globally, is the foundation of our information age; data centers, running hundreds of thousands of servers, drive the era of cloud computing; and even the Tor project, a networked system providing online anonymity, now serves millions of daily users. Guided by the end-to-end principle, many computer networks have been designed with a simple and flexible core offering a general data transfer service, whereas the bulk of the application-level functionality has been implemented on endpoints attached to the edge of the network. Although the end-to-end design principle has brought these networked systems tremendous success, a number of new requirements have emerged for computer networks and their applications, including the untrustworthiness of endpoints, the privacy requirements of endpoints, more demanding applications, the rise of third-party intermediaries, and the asymmetric capabilities of endpoints. These emerging requirements have created various challenges in different networked systems. To address these challenges, there are no obvious solutions that do not add in-network functions to the network core. However, no design principle has ever been proposed to guide the implementation of in-network functions. In this thesis, we propose the first such principle and apply it in four designs across three different networked systems to address four separate challenges. We demonstrate through detailed implementations and extensive evaluations that the proposed principle can live in harmony with the end-to-end principle, and that a combination of the two offers more complete, effective, and accurate guidance for innovating modern computer networks and their applications.

    Leveraging Conventional Internet Routing Protocol Behavior to Defeat DDoS and Adverse Networking Conditions

    Get PDF
    The Internet is a cornerstone of modern society. Yet increasingly devastating attacks against the Internet threaten to undermine the Internet's success at connecting the unconnected. Of all the adversarial campaigns waged against the Internet and the organizations that rely on it, distributed denial of service, or DDoS, tops the list of the most volatile attacks. In recent years, DDoS attacks have been responsible for large swaths of the Internet blacking out, while other attacks have completely overwhelmed key Internet services and websites. Core to the Internet's functionality is the way in which traffic on the Internet gets from one destination to another. The set of rules, or protocol, that defines the way traffic travels the Internet is known as the Border Gateway Protocol, or BGP, the de facto routing protocol on the Internet. Advanced adversaries often target the most used portions of the Internet by flooding the routes benign traffic takes with malicious traffic designed to cause widespread traffic loss to targeted end users and regions. This dissertation examines the following thesis statement: rather than seek to redefine the way the Internet works to combat advanced DDoS attacks, we can leverage conventional Internet routing behavior to mitigate modern distributed denial of service attacks. The research in this work breaks down into a single arc with three independent but connected thrusts, which demonstrate that the aforementioned thesis is possible, practical, and useful. The first thrust demonstrates that this thesis is possible by building and evaluating Nyx, a system that can protect Internet networks from DDoS using BGP, without an Internet redesign and without cooperation from other networks. This work reveals that Nyx is effective in simulation for protecting Internet networks and end users from the impact of devastating DDoS. The second thrust examines the real-world practicality of Nyx, as well as other systems which rely on real-world BGP behavior. Through a comprehensive set of real-world Internet routing experiments, this second thrust confirms that Nyx works effectively in practice beyond simulation, as well as revealing novel insights about the effectiveness of other Internet security defensive and offensive systems. We then follow these experiments by re-evaluating Nyx under the real-world routing constraints we discovered. The third thrust explores the usefulness of Nyx for mitigating DDoS against a crucial industry sector, power generation, by exposing the latent vulnerability of the U.S. power grid to DDoS and showing how a system such as Nyx can protect electric power utilities. This final thrust finds that the currently exposed set of U.S. power facilities is widely vulnerable to DDoS attacks that could induce blackouts, and that Nyx can be leveraged to reduce the impact of these targeted DDoS attacks.
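
    Nyx steers traffic using standard BGP behavior. One relevant primitive, offered here as an illustration rather than a description of Nyx's internals, is AS-path poisoning, which exploits ordinary BGP loop detection: a router discards any route whose AS_PATH already contains its own ASN, so deliberately inserting another network's ASN removes that network from the advertised path. The sketch below demonstrates the rule with made-up ASNs.

```python
# Illustrative sketch of the BGP loop-detection rule that path-poisoning
# systems exploit: a router rejects announcements whose AS_PATH already
# contains its own ASN. ASNs below are examples, not real networks.
from typing import List

def accepts_route(local_asn: int, as_path: List[int]) -> bool:
    """RFC 4271 loop detection: reject paths containing our own ASN."""
    return local_asn not in as_path

# An origin (AS 65001) that wants AS 65002 off this route can announce a
# path with 65002 deliberately inserted between copies of its own ASN:
poisoned_path = [65001, 65002, 65001]

assert accepts_route(65003, poisoned_path)       # uninvolved AS still accepts
assert not accepts_route(65002, poisoned_path)   # poisoned AS drops the route
```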

    From the edge to the core: towards informed vantage point selection for internet measurement studies

    Get PDF
    Since the early days of the Internet, measurement scientists have been trying to keep up with its fast-paced development. As the Internet grew organically over time and without built-in measurability, this process requires many workarounds and much due diligence. As a result, every measurement study is only as good as the data it relies on. Moreover, data quality is relative to the research question: a data set suitable for analyzing one problem may be insufficient for another. This is entirely expected, as the Internet is decentralized, i.e., there is no single observation point from which we can assess the complete state of the Internet. Because of that, every measurement study needs specifically selected vantage points which fit the research question. In this thesis, we present three different vantage points across the Internet topology, from the edge to the Internet core. We discuss their specific features, their suitability for different kinds of research questions, and how to work with the corresponding data. The data sets obtained at the presented vantage points allow us to conduct three different measurement studies and shed light on the following aspects: (a) the prevalence of IP source address spoofing at a large European Internet Exchange Point (IXP), (b) the propagation distance of BGP communities, an optional transitive BGP attribute used for traffic engineering, and (c) the impact of the global COVID-19 pandemic on Internet usage behavior at a large Internet Service Provider (ISP) and three IXPs.
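
    To make the spoofing study concrete: a common heuristic at an IXP is to flag traffic whose source address falls outside every prefix attributable to the sending member. The sketch below is a simplified illustration of that idea, not the thesis' exact methodology; the member name and prefixes are invented.

```python
# Simplified spoofing check: a packet seen on a member's IXP port is
# suspicious if its source address lies outside every prefix associated
# with that member's customer cone. All names and prefixes are examples.
import ipaddress

member_cone = {
    "memberA": [ipaddress.ip_network("198.51.100.0/24"),
                ipaddress.ip_network("203.0.113.0/25")],
}

def looks_spoofed(member: str, src_ip: str) -> bool:
    """True if src_ip matches none of the member's known prefixes."""
    addr = ipaddress.ip_address(src_ip)
    return not any(addr in net for net in member_cone[member])

print(looks_spoofed("memberA", "198.51.100.42"))  # False: inside the cone
print(looks_spoofed("memberA", "192.0.2.99"))     # True: candidate spoof
```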

    Achieving network resiliency using sound theoretical and practical methods

    Get PDF
    Computer networks have revolutionized the life of every citizen in our modern interconnected society. The impact of networked systems spans every aspect of our lives, from financial transactions to healthcare and critical services, making these systems an attractive target for malicious entities that aim to make financial or political profit. Specifically, the past decade has witnessed an astounding increase in the number and complexity of sophisticated and targeted attacks, known as advanced persistent threats (APT). Those attacks led to a paradigm shift in the security and reliability communities' perspective on system design; researchers and government agencies accepted the inevitability of incidents and malicious attacks, and marshaled their efforts into the design of resilient systems. Rather than focusing solely on preventing failures and attacks, resilient systems are able to maintain an acceptable level of operation in the presence of such incidents and then recover gracefully into normal operation. Alongside prevention, resilient system design focuses on incident detection as well as timely response. Unfortunately, the resiliency efforts of research and industry experts have been hindered by an apparent schism between theory and practice, which allows attackers to maintain the upper hand. This lack of compatibility between the theory and practice of system design is attributed to the following challenges. First, theoreticians often make impractical and unjustifiable assumptions that allow for mathematical tractability while sacrificing accuracy. Second, the security and reliability communities often lack clear definitions of success criteria when comparing different system models and designs. Third, system designers often make implicit or unstated assumptions to favor practicality and ease of design. Finally, resilient systems are tested in private and isolated environments where validation and reproducibility of the results are not publicly accessible. In this thesis, we set about showing that the proper synergy between theoretical analysis and practical design can enhance the resiliency of networked systems. We illustrate the benefits of this synergy by presenting resiliency approaches that target the inter- and intra-networking levels. At the inter-networking level, we present CPuzzle as a means to protect the Transmission Control Protocol (TCP) connection establishment channel from state-exhaustion distributed denial of service (DDoS) attacks. CPuzzle leverages client puzzles to limit the rate at which misbehaving users can establish TCP connections. We model the problem of determining the puzzle difficulty as a Stackelberg game and solve for the equilibrium strategy that balances the users' utilities against CPuzzle's resilience capabilities. Furthermore, to handle volumetric DDoS attacks, we extend CPuzzle and implement Midgard, a cooperative approach that involves end users in the process of tolerating and neutralizing DDoS attacks. Midgard is a middlebox that resides at the edge of an Internet service provider's network and uses client puzzles at the IP level to allocate bandwidth to its users. At the intra-networking level, we present sShield, a game-theoretic network response engine that manipulates a network's connectivity in response to an attacker who is moving laterally to compromise a high-value asset. To implement such decision-making algorithms, we leverage recent advances in software-defined networking (SDN) to collect logs and security alerts about the network and to implement response actions. However, the programmability offered by SDN comes with an increased chance of design-time bugs that can have drastic consequences for the reliability and security of a networked system. We therefore introduce BiFrost, an open-source tool that aims to verify safety and security properties of data-plane programs. BiFrost translates data-plane programs into functionally equivalent sequential circuits and then uses well-established hardware reduction, abstraction, and verification techniques to establish correctness proofs about data-plane programs. By focusing on those four key efforts, CPuzzle, Midgard, sShield, and BiFrost, we believe that this work illustrates the benefits that the synergy between theory and practice can bring to the world of resilient system design. This thesis is an attempt to pave the way for further cooperation and coordination between theoreticians and practitioners, in the hope of designing resilient networked systems.
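
    As background on the client-puzzle primitive that CPuzzle builds on (the sketch below is a generic hash puzzle, not the thesis' construction or its Stackelberg difficulty selection): the server issues a random challenge and a difficulty k, and the client must find a nonce whose hash together with the challenge starts with k zero bits, so connection attempts are expensive for clients while verification stays cheap for the server.

```python
# Generic hash-based client puzzle: the client brute-forces a nonce such
# that SHA-256(challenge || nonce) has k leading zero bits; the server
# verifies with a single hash. Constants here are arbitrary examples.
import hashlib
import os

def solve(challenge: bytes, k: int) -> int:
    """Brute-force a nonce; expected work is about 2**k hashes."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - k) == 0:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, k: int) -> bool:
    """Verification costs one hash, so the server's work stays tiny."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - k) == 0

challenge = os.urandom(16)      # fresh per client, preventing precomputation
nonce = solve(challenge, 16)    # raising k throttles clients harder
assert verify(challenge, nonce, 16)
```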

    SoK: An Analysis of Protocol Design: Avoiding Traps for Implementation and Deployment

    No full text
    Today's Internet utilizes a multitude of different protocols. While some of these protocols were first implemented and used and only later documented, others were first specified and then implemented. Regardless of how protocols came to be, their definitions can contain traps that lead to insecure implementations or deployments. A classical example is insufficiently strict authentication requirements in a protocol specification. The resulting misconfigurations, i.e., not enabling strong authentication, are common root causes of Internet security incidents. Indeed, Internet protocols have commonly been designed without security in mind, which leads to a multitude of misconfiguration traps. While this is slowly changing, too strict security considerations can have a similarly bad effect. Due to complex implementations and insufficient documentation, security features may remain unused, leaving deployments vulnerable. In this paper, we provide a systematization of the security traps found in common Internet protocols. By separating protocols into four classes, we identify major factors that lead to common security traps. These insights, together with observations about end-user-centric usability and security by default, are then used to derive recommendations for improving existing protocols and designing new ones, without such security-sensitive traps for operators, implementors, and users.
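
    One recommendation the summary points toward is security by default. A toy sketch of that pattern, entirely illustrative and with invented field names, makes the idea concrete: the safe setting is the default, and only a loud, self-describing option can weaken it.

```python
# Toy "secure by default" pattern: authentication stays on unless the
# operator sets an unmistakable opt-out flag, so a forgotten setting can
# never silently disable security. All names here are made up.
from dataclasses import dataclass

@dataclass
class ServerConfig:
    require_auth: bool = True  # the safe value is the default

def load_config(options: dict) -> ServerConfig:
    cfg = ServerConfig()
    # Only an explicit, self-describing key can weaken the default.
    if options.get("i_understand_disabling_auth_is_dangerous") is True:
        cfg.require_auth = False
    return cfg

assert load_config({}).require_auth                  # omission stays secure
assert not load_config(
    {"i_understand_disabling_auth_is_dangerous": True}).require_auth
```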
    A survey of defense mechanisms against distributed denial of service (DDoS) flooding attacks

    Get PDF
    Distributed Denial of Service (DDoS) flooding attacks are one of the biggest concerns for security professionals. DDoS flooding attacks are typically explicit attempts to disrupt legitimate users' access to services. Attackers usually gain access to a large number of computers by exploiting their vulnerabilities to set up attack armies (i.e., Botnets). Once an attack army has been set up, an attacker can invoke a coordinated, large-scale attack against one or more targets. Developing a comprehensive defense mechanism against identified and anticipated DDoS flooding attacks is a desired goal of the intrusion detection and prevention research community. However, the development of such a mechanism requires a comprehensive understanding of the problem and the techniques that have been used thus far in preventing, detecting, and responding to various DDoS flooding attacks. In this paper, we explore the scope of the DDoS flooding attack problem and attempts to combat it. We categorize the DDoS flooding attacks and classify existing countermeasures based on where and when they prevent, detect, and respond to the DDoS flooding attacks. Moreover, we highlight the need for a comprehensive distributed and collaborative defense approach. Our primary intention for this work is to stimulate the research community into developing creative, effective, efficient, and comprehensive prevention, detection, and response mechanisms that address the DDoS flooding problem before, during and after an actual attack. © 1998-2012 IEEE
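
    As a minimal illustration of one countermeasure class the survey organizes (detection near the victim), the sketch below counts packets per source within a time window and flags sources that exceed a threshold; real defenses are far more elaborate, and all constants here are arbitrary.

```python
# Toy per-source rate detector: count packets per source IP in a fixed
# window and flag sources above a threshold. Purely illustrative of the
# "detect near the victim" class; thresholds are arbitrary examples.
from collections import defaultdict

WINDOW_SECONDS = 1.0
THRESHOLD = 1000  # packets per source per window

class RateDetector:
    def __init__(self) -> None:
        self.counts = defaultdict(int)
        self.window_start = 0.0

    def observe(self, now: float, src_ip: str) -> bool:
        """Record one packet; return True if src_ip exceeds the threshold."""
        if now - self.window_start >= WINDOW_SECONDS:
            self.counts.clear()       # start a fresh counting window
            self.window_start = now
        self.counts[src_ip] += 1
        return self.counts[src_ip] > THRESHOLD
```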

    Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art

    Full text link
    Software-Defined Networking (SDN) is an evolutionary networking paradigm that has been adopted by large network and cloud providers, among them the tech giants. However, embracing a new and futuristic paradigm as an alternative to the well-established and mature legacy networking paradigm requires a lot of time, along with considerable financial resources and technical expertise. Consequently, many enterprises cannot afford it. A compromise solution is a hybrid networking environment (a.k.a. hybrid SDN (hSDN)) in which SDN functionality is leveraged while existing traditional network infrastructure is acknowledged. Recently, hSDN has been seen as a viable networking solution for a diverse range of businesses and organizations. Accordingly, the body of literature on hSDN research has grown remarkably. On this account, we present this paper as a comprehensive state-of-the-art survey which expands upon hSDN from many different perspectives.

    High-performance software packet processing

    Full text link
    In today's Internet, it is highly desirable to have fast and scalable software packet processing solutions for network applications that run on commodity hardware. The advent of cloud computing drives the continued rapid growth of Internet traffic. Moreover, the development of emerging networking techniques, such as Network Function Virtualization, significantly shapes the need for implementing network functions in software. Finally, with the advancement of modern platforms as well as software frameworks for packet processing, network applications have the potential to process 100+ Gbps of network traffic on a single commodity server. Representative frameworks include the Click modular router, the RouteBricks scalable routing architecture, and BUFFALO, the software-based Ethernet switch. Beneath this general-purpose routing and switching functionality lies a broad set of network applications, many of which are handled with custom methods to provide cost-effectiveness and flexibility. This thesis considers two long-standing networking applications, IP lookup and distributed denial-of-service (DDoS) mitigation, and proposes efficient software-based methods drawing from this new perspective. We first introduce several optimization techniques to accelerate network applications by taking advantage of modern CPU features. Then, we explore the IP lookup problem of finding the longest matching prefix of an IP address in a set of prefixes. An ideal IP lookup algorithm should achieve both small constant IP lookup time and small on-chip memory usage; however, no prior IP lookup algorithm achieves both requirements at the same time. We propose SAIL, a splitting approach to IP lookup, and a suite of IP lookup algorithms based on the SAIL framework. We conducted extensive experiments to evaluate our algorithms, and the experimental results show that our SAIL algorithms are much faster than well-known IP lookup algorithms. Next, we switch our focus to DDoS, an attempt to disrupt the legitimate traffic of a victim by flooding it with Internet traffic from many different sources. Our solution is Gatekeeper, the first open-source and deployable DDoS mitigation system. We present a series of optimization techniques, including the use of modern platforms, group prefetching, coroutines, and hashing, to accelerate Gatekeeper. Experimental results show that these optimization techniques significantly improve its performance over alternative baseline solutions.
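
    To make the IP lookup problem concrete, the sketch below implements plain longest-prefix matching over a toy forwarding table. It shows only the semantics that SAIL optimizes; SAIL's actual contribution, splitting the lookup across prefix lengths with compact array levels, is not reproduced here, and the prefixes are illustrative.

```python
# Plain longest-prefix match over a toy FIB: among all prefixes covering
# the destination, the most specific one wins. This shows the semantics
# only; SAIL's array-based splitting across prefix lengths is not shown.
import ipaddress
from typing import Optional

fib = {
    ipaddress.ip_network("10.0.0.0/8"): "nh-A",
    ipaddress.ip_network("10.1.0.0/16"): "nh-B",
    ipaddress.ip_network("10.1.2.0/24"): "nh-C",
}

def lookup(dst: str) -> Optional[str]:
    """Return the next hop of the longest prefix covering dst, if any."""
    addr = ipaddress.ip_address(dst)
    best, best_len = None, -1
    for net, next_hop in fib.items():
        if addr in net and net.prefixlen > best_len:
            best, best_len = next_hop, net.prefixlen
    return best

assert lookup("10.1.2.3") == "nh-C"   # matches /8, /16, /24; /24 wins
assert lookup("10.9.9.9") == "nh-A"   # only the /8 covers this address
```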