16 research outputs found

    Scalable schemes against Distributed Denial of Service attacks

    Defense against Distributed Denial of Service (DDoS) attacks is one of the primary concerns on the Internet today. DDoS attacks are difficult to prevent because of the open, interconnected nature of the Internet and its underlying protocols, which can be exploited in several ways to deny service. Attackers hide their identity by using third parties such as private chat channels on IRC (Internet Relay Chat). They also insert a false return IP address into each packet (spoofing), which makes it difficult for the victim to determine a packet's origin. We propose three novel and realistic traceback mechanisms that offer many advantages over existing schemes. All three schemes take advantage of the Autonomous System (AS) topology and account for the fact that an attacker's packets may traverse a number of domains under different administrative control. Most existing traceback mechanisms wrongly assume that the network details of an organization under a single administrative control are disclosed to the public; for security reasons, this is usually not the case. The proposed schemes overcome this drawback by performing reconstruction at both the inter-AS and intra-AS levels. Hierarchical Internet Traceback (HIT) and Simple Traceback Mechanism (STM) trace back to an attacker in two phases: in the first phase the attack-originating Autonomous System is identified, while in the second phase the attacker within that AS is identified. Both HIT and STM allow the victim to trace back to the attackers in a few seconds; their computational overhead is very low, and they scale to large distributed attacks with thousands of attackers. Fast Autonomous System Traceback allows complete attack path reconstruction with few packets. We use traceroute maps of real Internet topologies (CAIDA's Skitter) to simulate DDoS attacks and validate our design.
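
The two-phase idea described above (first identify the attack-originating AS, then the attacker within it) can be sketched as a voting scheme over marked packets. The packet fields and names below are illustrative assumptions, not the actual HIT/STM packet formats:

```python
# Hypothetical sketch of two-phase AS-level traceback. The "origin_as"
# and "router_mark" fields are assumed marking data, not the schemes'
# real encodings.
from collections import Counter

def identify_origin_as(marked_packets):
    """Phase 1: vote on the attack-originating AS from AS-level marks."""
    votes = Counter(pkt["origin_as"] for pkt in marked_packets)
    origin_as, _ = votes.most_common(1)[0]
    return origin_as

def identify_attacker_router(marked_packets, origin_as):
    """Phase 2: within the origin AS, vote on the ingress router mark."""
    votes = Counter(pkt["router_mark"] for pkt in marked_packets
                    if pkt["origin_as"] == origin_as)
    router, _ = votes.most_common(1)[0]
    return router

packets = [
    {"origin_as": 64512, "router_mark": "r3"},
    {"origin_as": 64512, "router_mark": "r3"},
    {"origin_as": 64512, "router_mark": "r7"},  # a noisy/spoofed mark
    {"origin_as": 65001, "router_mark": "r1"},  # background traffic
]
asn = identify_origin_as(packets)                # → 64512
router = identify_attacker_router(packets, asn)  # → "r3"
```

Voting over many attack packets is what lets such schemes tolerate a minority of spoofed or corrupted marks.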

    Stateful Anycast for DDoS Mitigation

    MEng thesis. Distributed denial-of-service (DDoS) attacks can easily cripple victim hosts or networks, yet effective defenses remain elusive. Normal anycast can be used to force the diffusion of attack traffic over a group of several hosts, increasing the difficulty of saturating resources at or near any one of them. However, because a packet sent to the anycast group may be delivered to any member, anycast does not support protocols that require a group member to maintain state (such as TCP). This makes anycast impractical for most applications of interest. This document describes the design of Stateful Anycast, a conceptual anycast-like network service based on IP anycast. Stateful Anycast is designed to support stateful sessions without losing anycast's ability to defend against DDoS attacks. It employs a set of anycasted proxies to direct packets to the proper stateholder. These proxies provide DDoS protection by dropping a session's packets upon group member request. Stateful Anycast is incrementally deployable and can scale to support many groups.
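
The proxy role described above can be sketched as follows. This is a minimal illustration of the concept, assuming a hash-based session-to-stateholder mapping; the actual Stateful Anycast design is more involved:

```python
# Illustrative sketch of an anycasted proxy that pins each session to one
# stateholder and drops blocked sessions. All names are assumptions.
import hashlib

class AnycastProxy:
    def __init__(self, group_members):
        self.members = group_members   # stateholder addresses
        self.blocked = set()           # sessions dropped on member request

    def stateholder_for(self, session_id):
        """Map a session to one consistent group member, so stateful
        protocols like TCP keep talking to the same host."""
        h = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
        return self.members[h % len(self.members)]

    def forward(self, session_id, packet):
        if session_id in self.blocked:
            return None                # DDoS defense: drop on request
        return (self.stateholder_for(session_id), packet)

    def block(self, session_id):
        """Called by a group member to shed an abusive session."""
        self.blocked.add(session_id)

proxy = AnycastProxy(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
dest1 = proxy.forward("client-42", b"SYN")
dest2 = proxy.forward("client-42", b"ACK")
assert dest1[0] == dest2[0]   # same session reaches the same stateholder
proxy.block("client-42")
assert proxy.forward("client-42", b"data") is None
```

Because the proxies themselves are anycasted, attack traffic aimed at them still diffuses across the group, while legitimate sessions retain a stable stateholder.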

    Proactive techniques for correct and predictable Internet routing

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 185-193).
    The Internet is composed of thousands of autonomous, competing networks that exchange reachability information using an interdomain routing protocol. Network operators must continually reconfigure the routing protocols to realize various economic and performance goals. Unfortunately, there is no systematic way to predict how a configuration will affect the behavior of the routing protocol, or to determine whether the routing protocol will operate correctly at all. This dissertation develops techniques to reason about the dynamic behavior of Internet routing, based on static analysis of router configurations, before the protocol ever runs on a live network. Interdomain routing offers each independent network tremendous flexibility in configuring the routing protocols to accomplish various economic and performance tasks. Routing configurations are complex, and writing them is similar to writing a distributed program; the (unavoidable) consequence of configuration complexity is the potential for incorrect and unpredictable behavior. These mistakes and unintended interactions lead to routing faults, which disrupt end-to-end connectivity. Network operators writing configurations make mistakes; they may also specify policies that interact in unexpected ways with policies in other networks. To avoid disrupting network connectivity and degrading performance, operators would benefit from being able to determine the effects of configuration changes before deploying them on a live network; unfortunately, the status quo provides them no opportunity to do so.
    This dissertation develops techniques to achieve this goal of proactively ensuring correct and predictable Internet routing. The first challenge in guaranteeing correct and predictable behavior from a routing protocol is defining a specification for correct behavior. We identify three important aspects of correctness (path visibility, route validity, and safety) and develop proactive techniques for guaranteeing that these properties hold. Path visibility states that the protocol disseminates information about paths in the topology; route validity says that this information actually corresponds to those paths; safety says that the protocol ultimately converges to a stable outcome, implying that routing updates actually correspond to topological changes. Armed with this correctness specification, we tackle the second challenge: analyzing routing protocol configurations that may be distributed across hundreds of routers. We develop techniques to check whether a routing protocol satisfies the correctness specification within a single independently operated network, and find that much of the specification can be checked with static configuration analysis alone. We present examples of real-world routing faults and propose a systematic framework to classify, detect, correct, and prevent them. We describe the design and implementation of rcc ("router configuration checker"), a tool that uses static configuration analysis to enable network operators to debug configurations before deploying them in an operational network. We have used rcc to detect faults in 17 different networks, including several nationwide Internet service providers (ISPs). To date, rcc has been downloaded by over seventy network operators. A critical aspect of guaranteeing correct and predictable Internet routing is ensuring that the interactions of configurations across multiple networks do not violate the correctness specification. Guaranteeing safety is challenging because each network sets its policies independently, and these policies may conflict. Using a formal model of today's Internet routing protocol, we derive conditions to guarantee that unintended policy interactions will never cause the routing protocol to oscillate.
    This dissertation also takes steps to make Internet routing more predictable. We present algorithms that help network operators predict how a set of distributed router configurations within a single network will affect the flow of traffic through that network. We describe a tool based on these algorithms that exploits the unique characteristics of routing data to reduce computational overhead. Using data from a large ISP, we show that this tool correctly computes BGP routing decisions and has a running time that is acceptable for many tasks, such as traffic engineering and capacity planning.
    by Nicholas Greer Feamster. Ph.D.
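
The flavor of static configuration analysis that rcc performs can be illustrated with a toy example. One real path-visibility requirement is that, absent route reflectors, iBGP routers must form a full mesh (each of n routers needs a session to the other n-1); the config representation below is an assumption for illustration, since the real tool parses vendor router configurations:

```python
# Toy static check in the spirit of rcc: flag routers missing an iBGP
# session in a full-mesh topology. Input format is an assumed abstraction.

def check_ibgp_full_mesh(sessions, routers):
    """Return (router, missing_peers) pairs violating the full mesh.

    `sessions` is a set of frozensets {a, b}, one per configured iBGP
    session; `routers` is the set of all routers in the network.
    """
    faults = []
    for r in sorted(routers):
        peers = {next(iter(s - {r})) for s in sessions if r in s}
        missing = routers - {r} - peers
        if missing:
            faults.append((r, sorted(missing)))
    return faults

routers = {"a", "b", "c"}
sessions = {frozenset({"a", "b"}), frozenset({"b", "c"})}  # a-c missing
print(check_ibgp_full_mesh(sessions, routers))
# → [('a', ['c']), ('c', ['a'])]
```

The appeal of this style of check is that it runs entirely offline, against configurations, before any faulty state can reach a live network.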

    Automating the Discovery of Censorship Evasion Strategies

    Censoring nation-states deploy complex network infrastructure to regulate what content citizens can access, and such restrictions on the open sharing of information threaten the freedoms of billions of users worldwide, especially marginalized groups. Researchers and censoring regimes have long engaged in a cat-and-mouse game, leading to increasingly sophisticated Internet-scale censorship techniques and methods to evade them. In this dissertation, I study the technology that underpins this Internet censorship: middleboxes (e.g., firewalls). I argue the following thesis: it is possible to automatically discover packet sequence modifications that render deployed censorship middleboxes ineffective across multiple application-layer protocols. To evaluate this thesis, I develop Geneva, a novel genetic algorithm that automatically discovers packet-manipulation-based censorship evasion strategies against nation-state-level censors. Training directly against a live adversary, Geneva composes, mutates, and evolves sophisticated strategies out of four basic packet manipulation primitives (drop, tamper, duplicate, and fragment). I show that Geneva can be effective across different application-layer protocols (HTTP, HTTPS+SNI, HTTPS+ESNI, DNS, SMTP, FTP), censoring regimes (China, Iran, India, and Kazakhstan), and deployment contexts (client-side, server-side), even in cases where multiple middleboxes work in parallel to perform censorship. In total, I present 112 client-side strategies (85 of which work by modifying application-layer data), and the first ever server-side strategies (11 in total). Finally, I use Geneva to discover two novel attacks that show censoring middleboxes can be weaponized to launch attacks against innocent hosts anywhere on the Internet. Collectively, my work shows that censorship evasion can be automated and that censorship infrastructures pose a greater threat to Internet availability than previously understood.
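
The genetic search over the four primitives named above can be sketched in miniature. The fitness function below simulates a naive censor; Geneva itself evaluates fitness by training against live network middleboxes, and everything here is a toy assumption:

```python
# Toy genetic search over Geneva's four packet-manipulation primitives.
# The censor model and fitness function are stand-ins for a live adversary.
import random

PRIMITIVES = ["drop", "tamper", "duplicate", "fragment"]

def censored(strategy):
    """Toy censor: blocks the flow unless the strategy tampers with the
    field the censor keys on."""
    return "tamper" not in strategy

def fitness(strategy):
    # Reward evasion; lightly penalize length so the search prefers
    # minimal manipulation sequences.
    return (10 if not censored(strategy) else 0) - len(strategy)

def mutate(strategy):
    s = list(strategy)
    op = random.choice(["add", "remove", "swap"])
    if op == "add" or not s:
        s.insert(random.randrange(len(s) + 1), random.choice(PRIMITIVES))
    elif op == "remove":
        s.pop(random.randrange(len(s)))
    else:
        s[random.randrange(len(s))] = random.choice(PRIMITIVES)
    return s

def evolve(generations=50, pop_size=20, seed=0):
    random.seed(seed)
    pop = [[random.choice(PRIMITIVES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # elitist selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
# With this toy fitness, the search typically converges to a short
# strategy containing "tamper".
```

The key design point mirrored here is that the search needs no model of the censor's internals: it only observes whether a candidate strategy evades censorship, which is what lets Geneva train directly against a live adversary.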

    Who Governs the Internet? The Emerging Policies, Institutions, and Governance of Cyberspace

    There remains a widespread perception among both the public and elements of academia that the Internet is ungovernable. However, this idea, as well as the notion that the Internet has become some type of cyber-libertarian utopia, is wholly inaccurate. Governments may certainly encounter tremendous difficulty in attempting to regulate the Internet, but numerous architectures of control have nevertheless become pervasive. So who, then, governs the Internet? Our contentions are that the Internet is, in fact, being governed; that it is being governed by specific and identifiable networks of policy actors; and that an argument can be made as to how it is being governed. This project will develop a new conceptual framework for analysis that deconstructs the Internet into four policy layers, with the aim of formulating a new political architecture that accurately maps out and depicts authority on the Internet by identifying who has demonstrable policymaking authority that constrains or enables behavior with intentional effects. We will then assess this four-layer model and its resulting map of political architecture by performing a detailed case study of U.S. national cybersecurity policy post-9/11. Ultimately, we will seek to determine the consequences of these political arrangements and governance policies.

    Governance of Online Intermediaries: Observations from a Series of National Case Studies

    Online intermediaries in various forms, including search engines, social media, and app platforms, play a constitutive role in today's digital environment. They have become a new type of powerful institution in the 21st century that shapes the public networked sphere, and they are subject to intense and often controversial policy debates. This paper focuses on one particular force shaping the emergence and future evolution of online intermediaries: the rapidly changing landscape of intermediary governance at the intersection of law, technology, norms, and markets. Building upon eight in-depth case studies and use cases, this paper seeks to distill key observations and provide a high-level analysis of some of the structural elements that characterize varying governance regimes, with a focus on intermediary liability regimes and their evolution. Analyzing online intermediary governance issues from multiple perspectives, and in the context of different cultures and regulatory frameworks, immediately creates basic problems of semantic interoperability. Lacking a universally agreed-upon definition, the synthesis paper and its underlying case studies are based on a broad and phenomenon-oriented notion of online intermediaries, as further described below. In methodological terms, the observations shared in the synthesis paper offer a selective reading and interpretation by the authors of the broader takeaways of a diverse set of case studies examining online intermediary governance frameworks and issues in Brazil, the European Union, India, South Korea, the United States, Thailand, Turkey, and Vietnam. These case studies, in turn, have emerged in the context of an international research pilot by the Global Network of Internet & Society Research Centers (NoC), through a process of in-person consultations and remote collaborations among the researchers, and are based on a set of broader questions regarding the role of online intermediaries in the digital age.

    Cybersecurity

    The Internet has given rise to new opportunities for the public sector to improve efficiency and better serve constituents in the form of e-government. But with a rapidly growing user base globally and an increasing reliance on the Internet, digital tools are also exposing the public sector to new risks. An accessible primer, Cybersecurity: Public Sector Threats and Responses focuses on the convergence of globalization, connectivity, and the migration of public sector functions online. It identifies the challenges you need to be aware of and examines emerging trends and strategies from around the world. Offering practical guidance for addressing contemporary risks, the book is organized into three sections:
    Global Trends: considers international e-government trends, includes case studies of common cyber threats, and presents the efforts of the premier global institution in the field.
    National and Local Policy Approaches: examines the current policy environment in the United States and Europe and illustrates challenges at all levels of government.
    Practical Considerations: explains how to prepare for cyber attacks, including an overview of relevant U.S. federal cyber incident response policies, an organizational framework for assessing risk, and emerging trends.
    Also suitable for classroom use, this book will help you understand the threats facing your organization and the issues to consider when thinking about cybersecurity from a policy perspective.

    European Information Technology Observatory 1998
