79 research outputs found

    A survey of defense mechanisms against distributed denial of service (DDoS) flooding attacks

    Distributed Denial of Service (DDoS) flooding attacks are one of the biggest concerns for security professionals. DDoS flooding attacks are typically explicit attempts to disrupt legitimate users' access to services. Attackers usually gain access to a large number of computers by exploiting their vulnerabilities to set up attack armies (i.e., botnets). Once an attack army has been set up, an attacker can invoke a coordinated, large-scale attack against one or more targets. Developing a comprehensive defense mechanism against identified and anticipated DDoS flooding attacks is a desired goal of the intrusion detection and prevention research community. However, the development of such a mechanism requires a comprehensive understanding of the problem and the techniques that have been used thus far in preventing, detecting, and responding to various DDoS flooding attacks. In this paper, we explore the scope of the DDoS flooding attack problem and attempts to combat it. We categorize the DDoS flooding attacks and classify existing countermeasures based on where and when they prevent, detect, and respond to the DDoS flooding attacks. Moreover, we highlight the need for a comprehensive distributed and collaborative defense approach. Our primary intention for this work is to stimulate the research community into developing creative, effective, efficient, and comprehensive prevention, detection, and response mechanisms that address the DDoS flooding problem before, during, and after an actual attack.
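    As a rough illustration of the class of detection mechanisms this survey categorizes (it does not appear in the paper), the sketch below flags a source as a likely flooding participant when its request rate over a sliding window exceeds a threshold; the window length and request budget are assumed values.

```python
# Minimal, hypothetical sketch of a rate-threshold flood detector.
# WINDOW_SECONDS and MAX_REQUESTS are assumptions, not values from the survey.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10    # assumed observation window
MAX_REQUESTS = 200     # assumed per-source request budget within the window

request_log = defaultdict(deque)   # source IP -> timestamps of recent requests

def looks_like_flooding(src_ip, now=None):
    """Record one request from src_ip; return True if its rate exceeds the budget."""
    now = time.time() if now is None else now
    q = request_log[src_ip]
    q.append(now)
    # Evict timestamps that have fallen outside the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS
```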

    Increasing the robustness of networked systems

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009, by Srikanth Kandula. Includes bibliographical references (p. 133-143).
    What popular news do you recall about networked systems? You've probably heard about the failure at Amazon's computing utility that knocked many startups offline for several hours, or the attacks that made Estonian government web-sites inaccessible for several days, or you may have observed inexplicably slow responses or errors from your favorite web site. Needless to say, keeping networked systems robust to attacks and failures is an increasingly significant problem. Why is it hard to keep networked systems robust? We believe that uncontrollable inputs and complex dependencies are the two main reasons. The owner of a web-site has little control over when users arrive; the operator of an ISP has little say in when a fiber gets cut; and the administrator of a campus network is unlikely to know exactly which switches or file-servers may be causing a user's sluggish performance. Despite unpredictable or malicious inputs and complex dependencies, we would like a network to manage itself, i.e., diagnose its own faults and continue to maintain good performance. This dissertation presents a generic approach to harden networked systems by distinguishing between two scenarios. For systems that need to respond rapidly to unpredictable inputs, we design online solutions that re-optimize resource allocation as inputs change. For systems that need to diagnose the root cause of a problem in the presence of complex subsystem dependencies, we devise techniques to infer these dependencies from packet traces and build functional representations that facilitate reasoning about the most likely causes of faults. We present a few solutions, as examples of this approach, that tackle an important class of network failures. Specifically, we address (1) re-routing traffic around congestion when traffic spikes or links fail in internet service provider networks, (2) protecting websites from denial of service attacks that mimic legitimate users, and (3) diagnosing causes of performance problems in enterprise and campus-wide networks. Through a combination of implementations, simulations and deployments, we show that our solutions advance the state of the art.
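    To make the dependency-inference idea concrete, here is a simplified sketch under stated assumptions (it is not the dissertation's algorithm): flows started by the same client within a short interval are treated as evidence of a dependency between the services they contact, and service pairs are ranked by how often they co-occur in the trace.

```python
# Hypothetical co-occurrence heuristic for inferring service dependencies
# from a packet trace; the window threshold is an assumed value.
from collections import Counter, defaultdict
from itertools import combinations

CO_OCCURRENCE_WINDOW = 0.5   # seconds; assumed "happens together" threshold

def rank_candidate_dependencies(flows):
    """flows: iterable of (client, service, start_time) tuples from a trace."""
    by_client = defaultdict(list)
    for client, service, start in flows:
        by_client[client].append((start, service))

    pair_counts = Counter()
    for events in by_client.values():
        events.sort()
        for (t1, s1), (t2, s2) in combinations(events, 2):
            if s1 != s2 and t2 - t1 <= CO_OCCURRENCE_WINDOW:
                pair_counts[(s1, s2)] += 1
    # Frequently co-occurring pairs (e.g. web server -> auth server) are candidates.
    return pair_counts.most_common()
```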

    Prepare for VoIP Spam


    Protecting web servers from distributed denial of service attack

    This thesis developed a novel architecture and adaptive methods to detect and block Distributed Denial of Service attacks with minimal punishment to legitimate users. A real-time scoring algorithm differentiated attackers from legitimate users. The architecture also reduces the power consumption of a web server farm, thereby reducing its carbon footprint.
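    The thesis's actual scoring algorithm is not reproduced here; the following is only a hedged sketch of the general idea of per-client real-time scoring, in which a suspicion score decays over time, grows with attack-like signals, and triggers blocking above a threshold. The half-life and cutoff are assumptions.

```python
# Hypothetical per-client suspicion score with exponential decay.
import time

DECAY_HALF_LIFE = 30.0    # seconds; assumed
BLOCK_THRESHOLD = 10.0    # assumed cutoff for blocking a client

class ClientScore:
    def __init__(self):
        self.score = 0.0
        self.last_update = time.time()

    def add_signal(self, weight):
        """Decay the existing score, then add the weight of the new signal
        (e.g. a malformed request or a burst of identical requests)."""
        now = time.time()
        self.score *= 0.5 ** ((now - self.last_update) / DECAY_HALF_LIFE)
        self.score += weight
        self.last_update = now

    def should_block(self):
        return self.score >= BLOCK_THRESHOLD
```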

    Addressing Automated Adversaries of Network Applications

    The Internet supports a perpetually evolving patchwork of network services and applications. Popular applications include the World Wide Web, online commerce, online banking, email, instant messaging, multimedia streaming, and online video games. Practically all networked applications have a common objective: to directly or indirectly process requests generated by humans. Some users employ automation to establish an unfair advantage over non-automated users. The perceived and substantive damages that automated, adversarial users inflict on an application degrade its enjoyment and usability by legitimate users, and result in reputation and revenue loss for the application's service provider. This dissertation examines three challenges critical to addressing the undesirable automation of networked applications. The first challenge explores individual methods that detect various automated behaviors. Detection methods range from observing unusual network-level request traffic to sensing anomalous client operation at the application-level. Since many detection methods are not individually conclusive, the second challenge investigates how to combine detection methods to accurately identify automated adversaries. The third challenge considers how to leverage the available knowledge to disincentivize adversary automation by nullifying their advantage over legitimate users. The thesis of this dissertation is that: there exist methods to detect automated behaviors with which an application's service provider can identify and then systematically disincentivize automated adversaries. This dissertation evaluates this thesis using research performed on two network applications that have different access to the client software: Web-based services and multiplayer online games.
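    To illustrate the idea of combining individually inconclusive detectors, here is a small sketch; the detector names, weights, and threshold are assumptions rather than values from the dissertation.

```python
# Hypothetical weighted combination of detector confidences into one decision.
DETECTOR_WEIGHTS = {
    "request_rate_anomaly": 0.4,   # unusual network-level request timing
    "input_entropy_low":    0.3,   # client input looks machine-generated
    "challenge_failures":   0.3,   # failed challenge-response tests
}
DECISION_THRESHOLD = 0.6           # assumed cutoff

def is_automated(detector_outputs):
    """detector_outputs maps detector name -> confidence in [0, 1]."""
    score = sum(weight * detector_outputs.get(name, 0.0)
                for name, weight in DETECTOR_WEIGHTS.items())
    return score >= DECISION_THRESHOLD
```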

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, and their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting the reuse of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.