
    Mitigating Stealthy Link Flooding DDoS Attacks Using SDN-Based Moving Target Defense

    With the increasing diversity and complexity of Distributed Denial-of-Service (DDoS) attacks, it has become extremely challenging to design a fully protected network. For instance, a recently identified type of attack called the Stealthy Link Flooding Attack (SLFA) has been shown to cause critical network disconnection problems, where the attacker targets the communication links in the surrounding area of a server. The existing defense mechanisms for this type of attack are based on the detection of unusual traffic patterns; however, this might be too late, as severe damage might already be done. These mechanisms also do not consider countermeasures during the reconnaissance phase of these attacks. Over the last few years, moving target defense (MTD) has received increasing attention from the research community. The idea is to frequently change the network configuration to make it much more difficult for attackers to attack the network. In this dissertation, we investigate several novel frameworks based on MTD to defend against contemporary DDoS attacks. Specifically, we first introduce MTD against the data phase of SLFA, where the bots send data packets to target links. In this framework, we mitigate the traffic if the bandwidth of communication links exceeds a given threshold, and experimentally show that our method significantly alleviates the congestion. As a second work, we propose a framework that addresses the reconnaissance phase of SLFA, where the attacker strives to discover critical communication links. We create virtual networks to deceive the attacker and provide forensic features. In our third work, we accommodate legitimate network reconnaissance requests while keeping the attacker confused. To this end, we integrate cloud technologies as overlay networks into our system. We demonstrate that the developed mechanism preserves the security of the network information with negligible delays. Finally, we address the problem of identifying and potentially engaging with the attacker. We model the interaction between attackers and defenders as a game and derive a defense mechanism based on the equilibria of the game. We show that game-based mechanisms can provide protection against SLFAs comparable to that of the extensive periodic MTD solution, with significantly reduced overhead. The frameworks in this dissertation were verified with extensive experiments as well as theoretical analysis. The research in this dissertation has yielded several novel defense mechanisms that provide comprehensive protection against SLFA. Moreover, we have shown that they can be integrated conveniently and efficiently into current network infrastructure.
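    To make the first framework's trigger concrete, here is a minimal sketch of threshold-based mitigation in an SDN setting: a controller polls per-link byte counters and invokes an MTD action (rerouting flows) once utilization crosses a threshold. All names (poll_link_bytes, reroute_flows) and the specific numbers are hypothetical placeholders, not the dissertation's actual implementation.

```python
# Minimal sketch of the data-phase mitigation idea: poll link utilization
# and trigger an MTD action (rerouting) when a threshold is exceeded.
# poll_link_bytes and reroute_flows are hypothetical placeholders; a real
# deployment would use an SDN controller's API (e.g., OpenFlow port stats).
import time

LINK_CAPACITY_BPS = 1_000_000_000   # assumed 1 Gbps links
THRESHOLD = 0.8                     # mitigate above 80% utilization (assumed)
POLL_INTERVAL = 5                   # seconds between counter polls

def poll_link_bytes(link):
    """Placeholder: return the link's cumulative byte counter."""
    raise NotImplementedError

def reroute_flows(link):
    """Placeholder: shift flows off the congested link (the MTD move)."""
    raise NotImplementedError

def monitor(links):
    last = {l: poll_link_bytes(l) for l in links}
    while True:
        time.sleep(POLL_INTERVAL)
        for link in links:
            now = poll_link_bytes(link)
            bps = (now - last[link]) * 8 / POLL_INTERVAL
            last[link] = now
            if bps / LINK_CAPACITY_BPS > THRESHOLD:
                reroute_flows(link)   # mitigate suspected SLFA congestion
```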

    Scalable schemes against Distributed Denial of Service attacks

    Defense against Distributed Denial of Service (DDoS) attacks is one of the primary concerns on the Internet today. DDoS attacks are difficult to prevent because of the open, interconnected nature of the Internet and its underlying protocols, which can be used in several ways to deny service. Attackers hide their identity by using third parties such as private chat channels on IRC (Internet Relay Chat). They also insert a false return IP address into a packet (spoofing), which makes it difficult for the victim to determine the packet's origin. We propose three novel and realistic traceback mechanisms which offer many advantages over existing schemes. All three schemes take advantage of the Autonomous System (AS) topology and consider the fact that the attacker's packets may traverse a number of domains under different administrative control. Most existing traceback mechanisms wrongly assume that the network details of a company under one administrative control are disclosed to the public; for security reasons, this is usually not the case. The proposed schemes overcome this drawback by performing reconstruction at the inter- and intra-AS levels. Hierarchical Internet Traceback (HIT) and Simple Traceback Mechanism (STM) trace back to an attacker in two phases: in the first phase the attack-originating Autonomous System is identified, while in the second phase the attacker within that AS is identified. Both schemes, HIT and STM, allow the victim to trace back to the attackers in a few seconds. Their computational overhead is very low, and they scale to large distributed attacks with thousands of attackers. Fast Autonomous System Traceback allows complete attack path reconstruction with few packets. We use traceroute maps of real Internet topologies (CAIDA's skitter) to simulate DDoS attacks and validate our design.
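    As an illustration of the two-phase idea, here is a hedged sketch of AS-level probabilistic packet marking and victim-side path reconstruction. The encoding, marking probability, and Packet fields are illustrative only; they are not the actual HIT or STM designs.

```python
# Hedged sketch of AS-level probabilistic packet marking: border routers
# occasionally stamp packets with their AS number, and the victim orders
# surviving marks by distance to rebuild the AS-level attack path.
import random
from collections import Counter

MARK_PROB = 0.04  # assumed probability that a border router overwrites the mark

class Packet:
    def __init__(self):
        self.mark_asn = None   # AS number stamped by a border router
        self.mark_dist = 0     # AS hops travelled since the mark was written

def border_router_mark(pkt, asn):
    """Each AS border router marks with probability MARK_PROB,
    otherwise increments the distance of any existing mark."""
    if random.random() < MARK_PROB:
        pkt.mark_asn, pkt.mark_dist = asn, 0
    elif pkt.mark_asn is not None:
        pkt.mark_dist += 1

def reconstruct_as_path(packets):
    """Victim-side phase 1: tally (distance -> ASN) marks; increasing
    distance walks from the victim back toward the attack-originating AS."""
    votes = {}
    for p in packets:
        if p.mark_asn is not None:
            votes.setdefault(p.mark_dist, Counter())[p.mark_asn] += 1
    return [votes[d].most_common(1)[0][0] for d in sorted(votes)]

# Example: attack traffic traversing AS path 64512 -> 64513 -> 64514 -> victim
pkts = []
for _ in range(10_000):
    p = Packet()
    for asn in (64512, 64513, 64514):
        border_router_mark(p, asn)
    pkts.append(p)
print(reconstruct_as_path(pkts))  # [64514, 64513, 64512]: victim outward
```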

    Accountable infrastructure and its impact on internet security and privacy

    The Internet infrastructure relies on the correct functioning of the basic underlying protocols, which were designed for functionality. Security and privacy have been added post hoc, mostly by applying cryptographic means to different layers of communication. In the absence of accountability as a fundamental property, the Internet infrastructure has no built-in ability to associate an action with the responsible entity, nor to detect or prevent misbehavior. In this thesis, we study accountability from a few different perspectives. First, we study the need for accountability in anonymous communication networks as a mechanism that provides repudiation for the proxy nodes by tracing back selected outbound traffic in a provable manner. Second, we design a framework that provides a foundation to support the enforcement of the right-to-be-forgotten law in a scalable and automated manner. The framework provides a technical means for users to prove their eligibility for content removal from search results. Third, we analyze the Internet infrastructure, determining potential security risks and threats imposed by dependencies among the entities on the Internet. Finally, we evaluate the feasibility of using hop-count filtering as a mechanism for mitigating Distributed Reflective Denial-of-Service (DRDoS) attacks, and conceptually show that it cannot work to prevent these attacks.

    An architectural approach for mitigating next-generation denial of service attacks

    It is well known that distributed denial of service (DDoS) attacks are a major threat to the Internet today. Surveys of network operators repeatedly show that the Internet's stakeholders are concerned, and the reasons for this are clear: the frequency, magnitude, and complexity of attacks are growing, and show no signs of slowing down. With the emergence of the Internet of Things, fifth-generation mobile networks, and IPv6, the Internet may soon be exposed to a new generation of sophisticated and powerful DDoS attacks. But how did we get here? In one view, the potency of DDoS attacks is owed to a set of underlying architectural issues at the heart of the Internet. Guiding principles such as simplicity, openness, and autonomy have driven the Internet to be tremendously successful, but have the side effects of making it difficult to verify source addresses, classify unwanted packets, and forge cooperation between networks to stop traffic. These architectural issues make mitigating DDoS attacks a costly, uphill battle for victims, who have been left without an adequate defense. Such a circumstance requires a solution that is aware of, and addresses, the architectural issues at play. Fueled by over 20 years' worth of lessons learned from industry and the academic literature, Gatekeeper is a mitigation system that neutralizes the issues that make DDoS attacks so powerful. It does so by enforcing a connection-oriented network layer and by leveraging a global distribution of upstream vantage points. Gatekeeper further distinguishes itself from previous solutions because it circumvents the necessity of mutual deployment between networks, allowing deployers to reap the full benefits alone and on day one. Gatekeeper is an open-source, production-quality DDoS mitigation system. It is modular, scalable, and built using the latest advances in packet-processing techniques. It implements the operational features required by today's network administrators, including support for bonded network devices, VLAN tagging, and control-plane tools, and has been chosen for deployment by multiple networks. However, an effective Gatekeeper deployment can only be achieved by writing and enforcing fine-grained and accurate network policies. While the basic function of such policies is simply to govern the sending ability of clients, Gatekeeper is capable of much more: multiple bandwidth limits, punishing flows for misbehavior, attack detection via machine learning, and the flexibility to support new protocols. Therefore, we provide a view into the richness and power of Gatekeeper policies in the form of a policy toolkit for network operators. Finally, we must look to the future, and prepare for a potential next generation of powerful and costly DDoS attacks on our infrastructure. In particular, link flooding attacks such as Crossfire use massive, distributed sets of bots with low-rate, legitimate-looking traffic to attack upstream links outside of the victim's control. A new generation of these attacks could soon be realized as IoT devices, 5G networks, and IPv6 simultaneously enter the network landscape. Gatekeeper is able to hinder the architectural advantages that fuel link flooding attacks, bounding their effectiveness.
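    To give a flavor of what a fine-grained policy can express, here is a hedged sketch of the decision logic the abstract describes (multiple bandwidth limits and punishment of misbehaving flows). Real Gatekeeper policies are written in Lua; this Python version only illustrates the logic, and all names and limits in it are hypothetical.

```python
# Hedged sketch of a per-flow sending policy: known clients get a higher
# bandwidth limit, first offenders are throttled, and persistently
# misbehaving flows are declined. Illustrative only, not Gatekeeper's API.
from dataclasses import dataclass

@dataclass
class FlowState:
    rate_bps: int        # measured sending rate of this flow
    violations: int = 0  # times the flow exceeded its limit

PRIORITY_LIMIT_BPS = 10_000_000  # assumed limit for known-good clients
DEFAULT_LIMIT_BPS = 1_000_000    # assumed limit for everyone else
MAX_VIOLATIONS = 3

def decide(flow: FlowState, is_known_client: bool) -> str:
    """Return the action for a flow: GRANT at some limit, or DECLINE."""
    limit = PRIORITY_LIMIT_BPS if is_known_client else DEFAULT_LIMIT_BPS
    if flow.rate_bps > limit:
        flow.violations += 1
        if flow.violations >= MAX_VIOLATIONS:
            return "DECLINE"           # punish persistent misbehavior
        return f"GRANT@{limit // 2}"   # throttle first offenders
    return f"GRANT@{limit}"

# Example: an unknown client sending 2 Mbps is throttled, then declined
flow = FlowState(rate_bps=2_000_000)
for _ in range(3):
    print(decide(flow, is_known_client=False))
```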

    Defense Against Spoofed IP Traffic Using Hop-Count Filtering

    IP spoofing has often been exploited by Distributed Denial of Service (DDoS) attacks to: (1) conceal flooding sources and dilute localities in flooding traffic, and (2) coax legitimate hosts into becoming reflectors, redirecting and amplifying flooding traffic. Thus, the ability to filter spoofed IP packets near victim servers is essential both to their own protection and to preventing them from becoming involuntary DoS reflectors. Although an attacker can forge any field in the IP header, he cannot falsify the number of hops an IP packet takes to reach its destination. More importantly, since hop-count values are diverse, an attacker cannot randomly spoof IP addresses while maintaining consistent hop-counts. On the other hand, an Internet server can easily infer the hop-count information from the Time-to-Live (TTL) field of the IP header. Using a mapping between IP addresses and their hop-counts, the server can distinguish spoofed IP packets from legitimate ones. Based on this observation, we present a novel filtering technique, called Hop-Count Filtering (HCF), which builds an accurate IP-to-hop-count (IP2HC) mapping table to detect and discard spoofed IP packets. HCF is easy to deploy, as it does not require any support from the underlying network. Through analysis using network measurement data, we show that HCF can identify close to 90% of spoofed IP packets and discard them with little collateral damage. We implement and evaluate HCF in the Linux kernel, demonstrating its effectiveness with experimental measurements.
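    The core inference behind HCF can be sketched in a few lines: since senders use one of a handful of common initial TTL values, a receiver can recover the hop count from the observed TTL and compare it against a previously built IP2HC table. This is a minimal sketch of the idea, not the kernel implementation; the table contents and tolerance value are assumptions.

```python
# Minimal sketch of HCF's hop-count inference: recover the hop count as
# (inferred initial TTL - observed TTL) and compare it with the stored
# IP2HC entry for the packet's claimed source address.
INITIAL_TTLS = (30, 32, 60, 64, 128, 255)  # common OS default TTLs

def infer_hop_count(observed_ttl: int) -> int:
    """Smallest plausible initial TTL >= observed TTL gives the hop count."""
    initial = min(t for t in INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def is_spoofed(src_ip: str, observed_ttl: int, ip2hc: dict,
               tolerance: int = 1) -> bool:
    """Flag a packet when its inferred hop count disagrees with the
    stored value for its claimed source (allowing minor route changes)."""
    expected = ip2hc.get(src_ip)
    if expected is None:
        return False  # unknown source: cannot judge, let it pass
    return abs(infer_hop_count(observed_ttl) - expected) > tolerance

# Example: table says 10.0.0.1 is 15 hops away; a packet arrives with TTL 120
ip2hc = {"10.0.0.1": 15}
print(is_spoofed("10.0.0.1", 120, ip2hc))  # 128 - 120 = 8 hops -> True
```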

    An approach in identifying and tracing back spoofed IP packets to their sources

    With the internet expanding into every aspect of business infrastructure, it becomes more and more important to make these infrastructures safe and secure against the numerous attacks perpetrated on them, particularly denial-of-service (DoS) attacks. A DoS attack can be summarized as an effort carried out by a person or a group of individuals to suppress a particular online service. This is typically achieved by manipulating packets sent using the IP protocol, which carries the IP address of the sending party. However, a major drawback is that the IP protocol cannot verify the accuracy of this address and has no method to validate the authenticity of the sender's packet. Knowing this, an attacker can fabricate any source address to gain unauthorized access to critical information. Since attackers can exploit this weakness for numerous targeted attacks, it is important to determine whether network traffic contains spoofed packets and how to trace them back. IP traceback has been an active research area, especially in connection with DoS attacks. This paper therefore focuses on the different types of attacks involving spoofed packets, as well as numerous methods that can help identify whether packets have spoofed source addresses, based on both active and passive host-based methods and on router-based methods.
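    As one concrete example of an active host-based method, a monitor can probe the claimed source directly and compare the probe reply's hop count with that of the suspect packet; a large mismatch suggests the source address was spoofed. This is a hedged sketch under that assumption, with send_probe() as a hypothetical placeholder.

```python
# Hedged sketch of an active host-based spoofing check: a genuine host's
# probe reply should travel roughly the same path length as its real
# packets, so a large hop-count mismatch is suspicious.
def hop_count_from_ttl(ttl: int) -> int:
    """Infer hop count from the nearest common initial TTL value."""
    initial = min(t for t in (30, 32, 60, 64, 128, 255) if t >= ttl)
    return initial - ttl

def send_probe(dst_ip: str) -> int:
    """Placeholder: send an ICMP echo to dst_ip and return the reply's TTL.
    A real tool would use raw sockets or a library such as scapy."""
    raise NotImplementedError

def looks_spoofed(suspect_src: str, suspect_ttl: int,
                  tolerance: int = 2) -> bool:
    probe_ttl = send_probe(suspect_src)
    return abs(hop_count_from_ttl(suspect_ttl)
               - hop_count_from_ttl(probe_ttl)) > tolerance
```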

    Improving end-to-end availability using overlay networks

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2005. Includes bibliographical references (p. 139-150).
    The end-to-end availability of Internet services is between two and three orders of magnitude worse than other important engineered systems, including the US airline system, the 911 emergency response system, and the US public telephone system. This dissertation explores three systems designed to mask Internet failures, and, through a study of three years of data collected on a 31-site testbed, why these failures happen and how effectively they can be masked. A core aspect of many of the failures that interrupt end-to-end communication is that they fall outside the expected domain of well-behaved network failures. Many traditional techniques cope with link and router failures; as a result, the remaining failures are those caused by software and hardware bugs, misconfiguration, malice, or the inability of current routing systems to cope with persistent congestion. The effects of these failures are exacerbated because Internet services depend upon the proper functioning of many components (wide-area routing, access links, the domain name system, and the servers themselves), and a failure in any of them can prove disastrous to the proper functioning of the service. This dissertation describes three complementary systems to increase Internet availability in the face of such failures. Each system builds upon the idea of an overlay network, a network created dynamically between a group of cooperating Internet hosts. The first two systems, Resilient Overlay Networks (RON) and Multi-homed Overlay Networks (MONET), determine whether the Internet path between two hosts is working on an end-to-end basis. Both systems exploit the considerable redundancy available in the underlying Internet to find failure-disjoint paths between nodes, and forward traffic along a working path. RON is able to avoid 50% of the Internet outages that interrupt communication between a small group of communicating nodes. MONET is more aggressive, combining an overlay network of Web proxies with explicitly engineered redundant links to the Internet to also mask client access link failures. Eighteen months of measurements from a six-site deployment of MONET show that it increases a client's ability to access working Web sites by nearly an order of magnitude. Where RON and MONET combat accidental failures, the Mayday system guards against denial-of-service attacks by surrounding a vulnerable Internet server with a ring of filtering routers. Mayday then uses a set of overlay nodes to act as mediators between the service and its clients, permitting only properly authenticated traffic to reach the server.
    by David Godbe Andersen. Ph.D.
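    The path-selection idea common to RON and MONET can be sketched as follows: probe the direct path and one-hop detours through cooperating overlay nodes, then forward along a working (or lowest-latency) path. This is a minimal sketch assuming a hypothetical probe_latency() primitive; the real systems' probing and routing protocols are considerably more sophisticated.

```python
# Hedged sketch of overlay path selection: exploit the redundancy of
# one-hop detours through overlay nodes to route around a failed or
# degraded direct Internet path.
import math

def probe_latency(src: str, dst: str) -> float:
    """Placeholder: return measured latency in ms, or math.inf on failure."""
    raise NotImplementedError

def choose_path(src: str, dst: str, overlay_nodes: list[str]):
    """Return the best path as a list of hops, falling back to detours
    through overlay nodes when the direct path is broken or slow."""
    best_path = [src, dst]
    best_latency = probe_latency(src, dst)  # inf if the direct path failed
    for node in overlay_nodes:
        detour = probe_latency(src, node) + probe_latency(node, dst)
        if detour < best_latency:
            best_path, best_latency = [src, node, dst], detour
    return best_path if best_latency < math.inf else None  # None: unreachable
```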

    Effective Wide-Area Network Performance Monitoring and Diagnosis from End Systems.

    The quality of all network application services running on today's Internet heavily depends on the performance assurance offered by Internet Service Providers (ISPs). Large network providers inside the core of the Internet are instrumental in determining the network properties of their transit services due to their wide-area coverage, especially in the presence of increasingly deployed real-time-sensitive network applications. The end-to-end performance of distributed applications and network services is susceptible to network disruptions in ISP networks. Given the scale and complexity of the Internet, failures and performance problems can occur in different ISP networks. It is important to efficiently identify and proactively respond to potential problems to prevent large damage. Existing work on monitoring and diagnosing network disruptions is ISP-centric, relying on each ISP to set up monitors and perform diagnosis within its own network. This approach is limited, as ISPs are unwilling to reveal such data to the public. My dissertation research developed a lightweight active monitoring system to monitor, diagnose, and react to network disruptions purely using end hosts, which can help customers assess the compliance of their service-level agreements (SLAs). This thesis studies research problems from three indispensable aspects: efficient monitoring, accurate diagnosis, and effective mitigation. This is an essential step towards accountability and fairness on the Internet. To fully understand the limitations of relying on ISP data, this thesis first studies and demonstrates the great impact of monitor selection on monitoring quality and the interpretation of the results. Motivated by the limitations of the ISP-centric approach, this thesis demonstrates two techniques to diagnose two types of fine-grained causes accurately and scalably by exploring information across the routing and data planes, as well as by sharing information among multiple locations collaboratively. Finally, we demonstrate the usefulness of the monitoring and diagnosis results with two mitigation applications. The first application is short-term prevention: avoiding problematic routes by exploiting their predictability from history. The second application scalably compares multiple ISPs across four important performance metrics, namely reachability, loss rate, latency, and path diversity, entirely from end systems without any ISP cooperation.
    Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64770/1/wingying_1.pd
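    A minimal sketch of the end-system measurement primitive underlying such ISP comparison: issue probes from end hosts and derive reachability, loss rate, and latency without any ISP cooperation. ping_once() is a hypothetical placeholder for a real ICMP/TCP prober.

```python
# Hedged sketch of end-host probing for three of the metrics named above
# (reachability, loss rate, latency), using no ISP-provided data.
import statistics

def ping_once(dst: str, timeout_s: float = 1.0):
    """Placeholder: return round-trip time in ms, or None on loss/timeout."""
    raise NotImplementedError

def measure(dst: str, n_probes: int = 100) -> dict:
    rtts = [r for r in (ping_once(dst) for _ in range(n_probes))
            if r is not None]
    return {
        "reachable": bool(rtts),                    # any probe answered
        "loss_rate": 1 - len(rtts) / n_probes,      # fraction of lost probes
        "latency_ms": statistics.median(rtts) if rtts else None,
    }

# Comparing ISPs then reduces to running measure() against vantage points
# behind each provider and ranking the resulting metrics.
```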