
    Discriminating DDoS flows from flash crowds using information distance

    Discriminating DDoS flooding attacks from flash crowds poses a tough challenge for the network security community. Because of vulnerabilities in the original design of the Internet, attackers can easily mimic the patterns of legitimate network traffic and fly under the radar. Existing fingerprint- or feature-based algorithms are incapable of detecting new attack strategies. In this paper, we aim to differentiate DDoS attack flows from flash crowds. We are motivated by the following fact: attack flows are generated by the same prebuilt program (attack tool), whereas flash crowds come from randomly distributed users all over the Internet. Therefore, the similarity among DDoS attack flows is much stronger than that among flash-crowd flows. To measure the similarity among flows, we employ three abstract distance metrics: the Jeffrey distance, the Sibson distance, and the Hellinger distance. Comparing the three metrics, we found the Sibson distance to be the most suitable for our purpose. We applied our algorithm to real datasets, and the results indicate that it can differentiate attack flows from flash crowds with an accuracy of around 65%.
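    The core computation above is a pairwise similarity test over flow distributions. The sketch below, a minimal illustration rather than the paper's implementation, computes the three named metrics on two flows represented as normalised per-interval packet-count histograms; the histogram construction and toy data are assumptions.

```python
# Minimal sketch of the three information-distance metrics named in the
# abstract, applied to two flows represented as empirical probability
# distributions (normalised per-interval packet-count histograms).
# The histogram construction and toy data are illustrative assumptions.
import numpy as np

EPS = 1e-12  # avoid log(0) / division by zero


def _kl(p: np.ndarray, q: np.ndarray) -> float:
    """Kullback-Leibler divergence D(p || q) in nats."""
    p, q = p + EPS, q + EPS
    return float(np.sum(p * np.log(p / q)))


def jeffrey(p: np.ndarray, q: np.ndarray) -> float:
    """Jeffrey distance: symmetrised KL divergence."""
    return _kl(p, q) + _kl(q, p)


def sibson(p: np.ndarray, q: np.ndarray) -> float:
    """Sibson distance (information radius): mean KL to the midpoint."""
    m = (p + q) / 2.0
    return 0.5 * (_kl(p, m) + _kl(q, m))


def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    """Hellinger distance between two discrete distributions."""
    return float(np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0))


# Two flows as per-interval packet counts (toy data), normalised to
# probability distributions before computing distances.
flow_a = np.array([30, 25, 28, 27], dtype=float)
flow_b = np.array([31, 24, 29, 26], dtype=float)
p, q = flow_a / flow_a.sum(), flow_b / flow_b.sum()

print(jeffrey(p, q), sibson(p, q), hellinger(p, q))
# Bot-generated attack flows yield near-zero distances, while flash-crowd
# flows from independent users spread out, so thresholding the distance
# can separate the two classes.
```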

    Denial-of-service attack modelling and detection for HTTP/2 services

    Businesses and society alike have become heavily dependent on Internet-based services, albeit with constant and disruptive interference from the adversary class. A malicious attack that prevents the establishment of Internet connections to web servers, initiated from legitimate client machines, is termed a Denial of Service (DoS) attack; its volume and intensity are growing rapidly thanks to readily available attack tools and ever-increasing network bandwidth. The majority of contemporary web servers are built on the HTTP/1.1 communication protocol, and as a consequence the existing literature on DoS attack modelling and the corresponding detection techniques addresses only HTTP/1.x network traffic. This thesis presents a model of DoS attack traffic against servers employing the new communication protocol, HTTP/2. The HTTP/2 protocol differs significantly from its predecessor and introduces new messaging formats and data-exchange mechanisms, creating an urgent need to understand how malicious attacks, including Denial of Service, can be launched against HTTP/2 services. Moreover, the ability of attackers to vary their traffic patterns to stealthily affect web services requires extensive research and modelling. This research work not only provides a novel model of DoS attacks against HTTP/2 services, but also models stealthy variants of such attacks that can disrupt routine web services. Specifically, HTTP/2 traffic patterns that consume computing resources of a server, such as CPU utilisation and memory consumption, were thoroughly explored and examined. The study presents four HTTP/2 attack models: the first is a flooding-based attack model, the second a distributed model, and the third and fourth are variant DoS attack models. The attack traffic analysis conducted in this study employed four machine learning techniques, namely Naïve Bayes, Decision Tree, JRip and Support Vector Machines. The HTTP/2 normal traffic model portrays the online activities of human users. The model thus formulated was also employed to generate flash-crowd traffic, i.e. a large volume of normal traffic that incapacitates a web server in a fashion similar to a DoS attack, albeit with non-malicious intent. Flash-crowd traffic generated from the defined model was used to populate the dataset of legitimate network traffic, to fuzz the machine learning-based attack detection process. The two variants of DoS attack traffic differed in the traffic intensities and inter-packet arrival delays introduced, allowing better analysis of the type and quality of DoS attacks that can be launched against HTTP/2 services. A detailed analysis of HTTP/2 features is also presented, ranking the relevant network traffic features for all four traffic models; the features were ranked based on both legitimate and attack traffic observations conducted in this study. The study shows that machine learning-based analysis yields better classification performance, i.e. a lower percentage of incorrectly classified instances, when the proposed HTTP/2 features are employed than when HTTP/1.1 features alone are used. The study shows how HTTP/2 DoS attacks can be modelled, and how future work can extend the proposed model to create variant attack traffic models that can bypass intrusion-detection systems.

    Likewise, as Internet traffic and the heterogeneity of Internet-connected devices are projected to increase significantly, legitimate traffic will yield increasingly varied patterns, demanding further analysis. The significance of having current legitimate traffic datasets, together with the scope to extend the DoS attack models presented here, suggests that research in the DoS attack analysis and detection area will benefit from the work presented in this thesis.
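    As one illustration of the detection step described above, the sketch below trains three of the four named classifiers on per-connection feature vectors and reports the percentage of incorrectly classified instances. The HTTP/2-level feature names and synthetic data are hypothetical stand-ins, and JRip (a Weka rule learner) has no direct scikit-learn equivalent, so it is omitted.

```python
# Minimal sketch of the abstract's detection step: train the named
# classifiers on per-connection features and compare the rate of
# incorrectly classified instances. Feature names and data are
# hypothetical; JRip (Weka-only) is omitted.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical HTTP/2-level features per connection:
# [HEADERS frames/s, WINDOW_UPDATE frames/s, mean inter-packet delay (ms)]
legit = rng.normal([5, 2, 80], [2, 1, 30], size=(500, 3))   # incl. flash crowd
attack = rng.normal([40, 25, 5], [5, 4, 2], size=(500, 3))  # flooding model
X = np.vstack([legit, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = legitimate, 1 = DoS

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision Tree", DecisionTreeClassifier()),
                  ("SVM", SVC())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {100 * (1 - acc):.1f}% incorrectly classified")
```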

    Bankrupting DoS Attackers

    To defend against denial-of-service (DoS) attacks, we employ a technique called resource burning (RB): the verifiable expenditure of a resource, such as computational power, required of clients before they receive service from the server. To the best of our knowledge, we present the first DoS defense algorithms in which the algorithmic cost, that is, the cost to both the server and the honest clients, is bounded as a function of the attacker's cost. We model an omniscient, Byzantine attacker, and a server with access to an estimator that estimates the number of jobs from honest clients in any time interval. We examine two communication models: an idealized zero-latency model and a partially synchronous model. Notably, our algorithms for both models have asymptotically lower costs than the attacker's as the attacker's costs grow large. Both algorithms use a simple rule to set the required RB fee per job. We assume no prior knowledge of the number of jobs, the adversary's costs, or even the estimator's accuracy; however, these quantities parameterize the algorithms' costs. We also prove a lower bound on the cost of any randomized algorithm, which shows that our algorithms achieve asymptotically tight costs as the number of jobs grows unbounded, whenever the estimator's output is accurate to within a constant factor.
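    A minimal proof-of-work sketch of resource burning, assuming a hashcash-style puzzle as the burned resource: the server charges each job an RB fee (puzzle difficulty in bits) and adjusts it with a simple load-based rule. The specific fee rule and parameters below are illustrative assumptions, not the paper's algorithm.

```python
# Proof-of-work sketch of resource burning: clients burn CPU to solve a
# hash puzzle before service; the server verifies cheaply and adapts the
# fee. The fee rule here is an illustrative assumption, not the paper's.
import hashlib
import os


def solve(job: bytes, difficulty: int) -> bytes:
    """Burn CPU until the hash has `difficulty` leading zero bits."""
    target = 1 << (256 - difficulty)
    while True:
        nonce = os.urandom(8)
        if int.from_bytes(hashlib.sha256(job + nonce).digest(), "big") < target:
            return nonce


def verify(job: bytes, nonce: bytes, difficulty: int) -> bool:
    """Cheap server-side check that the RB fee was actually paid."""
    target = 1 << (256 - difficulty)
    return int.from_bytes(hashlib.sha256(job + nonce).digest(), "big") < target


def next_fee(observed_jobs: int, estimated_honest: int, fee: int) -> int:
    # Toy rule: add one difficulty bit (doubling expected work) while the
    # observed load looks adversarial, decay otherwise, so honest clients
    # pay little once an attack burst subsides.
    return min(fee + 1, 24) if observed_jobs > 2 * estimated_honest else max(fee - 1, 1)


fee = 1
job = b"GET /resource"
nonce = solve(job, fee)          # client burns work before service
assert verify(job, nonce, fee)   # server verifies with one hash
fee = next_fee(observed_jobs=1000, estimated_honest=100, fee=fee)
```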