
    Denial-of-service attack modelling and detection for HTTP/2 services

    Businesses and society alike have become heavily dependent on Internet-based services, while experiencing constant and disruptive interference caused by malicious actors. A malicious attack that prevents legitimate client machines from establishing Internet connections to web servers is termed a Denial of Service (DoS) attack; the volume and intensity of such attacks are growing rapidly, owing to readily available attack tools and ever-increasing network bandwidths. A majority of contemporary web servers are built on the HTTP/1.1 communication protocol; as a consequence, the existing literature on DoS attack modelling and the corresponding detection techniques addresses only HTTP/1.x network traffic. This thesis presents a model of DoS attack traffic against servers employing the new communication protocol, HTTP/2. The HTTP/2 protocol differs significantly from its predecessor and introduces new messaging formats and data exchange mechanisms. This creates an urgent need to understand how malicious attacks, including Denial of Service, can be launched against HTTP/2 services. Moreover, attackers can vary their traffic patterns to affect web services stealthily, which requires extensive research and modelling. This research work not only provides a novel model for DoS attacks against HTTP/2 services, but also models stealthy variants of such attacks that can disrupt routine web services. Specifically, HTTP/2 traffic patterns that consume server computing resources, such as CPU utilisation and memory, were thoroughly explored and examined. The study presents four HTTP/2 attack models: a flooding-based attack, a distributed attack, and two variant DoS attacks. The attack traffic analysis conducted in this study employed four machine learning techniques, namely Naïve Bayes, Decision Tree, JRip and Support Vector Machines. The HTTP/2 normal traffic model portrays the online activities of human users. The model thus formulated was also employed to generate flash-crowd traffic, i.e. a large volume of normal traffic that incapacitates a web server in a fashion similar to a DoS attack, albeit with non-malicious intent. Flash-crowd traffic generated from the defined model was used to populate the dataset of legitimate network traffic and to fuzz the machine learning-based attack detection process. The two variants of DoS attack traffic differed in their traffic intensities and the inter-packet arrival delays introduced, to better analyse the type and quality of DoS attacks that can be launched against HTTP/2 services. A detailed analysis of HTTP/2 features is also presented, ranking the relevant network traffic features for all four traffic models; the features were ranked based on both the legitimate and the attack traffic observations conducted in this study. The study shows that machine learning-based analysis yields better classification performance, i.e. a lower percentage of incorrectly classified instances, when the proposed HTTP/2 features are employed than when HTTP/1.1 features alone are used. Finally, the study shows how HTTP/2 DoS attacks can be modelled, and how future work can extend the proposed model to create variant attack traffic models that can bypass intrusion-detection systems.
Likewise, as Internet traffic and the heterogeneity of Internet-connected devices are projected to increase significantly, legitimate traffic can exhibit varying patterns, demanding further analysis. The importance of maintaining current legitimate traffic datasets, together with the scope for extending the DoS attack models presented here, suggests that research in DoS attack analysis and detection will benefit from the work presented in this thesis.
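    To make the classification step concrete, the following is a minimal sketch of training one of the named classifiers (a decision tree, via scikit-learn) to separate attack traffic from legitimate traffic using synthetic per-connection HTTP/2 features. The feature names and the generated data are illustrative assumptions, not the thesis's actual feature set or dataset.

```python
# Minimal sketch: classifying HTTP/2 traffic records as attack vs. legitimate.
# Features (frames per second, inter-arrival delay, concurrent streams) and the
# synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000

# Hypothetical legitimate / flash-crowd connections (label 0)
legit = np.column_stack([
    rng.normal(20, 5, n),      # HTTP/2 frames per second
    rng.normal(0.5, 0.1, n),   # mean inter-arrival delay (s)
    rng.normal(3, 1, n),       # concurrent streams
])
# Hypothetical flooding-style attack connections (label 1): higher rate, shorter gaps
attack = np.column_stack([
    rng.normal(200, 30, n),
    rng.normal(0.01, 0.005, n),
    rng.normal(50, 10, n),
])

X = np.vstack([legit, attack])
y = np.concatenate([np.zeros(n), np.ones(n)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```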

    Analysis of attacks on Web based applications

    As the technology used to power Web-based applications continues to evolve, new security threats are emerging. Web 2.0 technology provides attackers with a whole new array of vulnerabilities to exploit. In this thesis, we present an analysis of attacker activity aimed at a typical Web server, based on data collected on two high-interaction honeypots over a one-month period. The configuration of the honeypots resembles the typical three-tier architecture of many real-world Web servers. Our honeypots ran on the Windows XP operating system and featured attractive attack targets such as the Microsoft IIS Web server, a MySQL database, and two Web 2.0-based applications (Wordpress and MediaWiki). This configuration allows for attacks on a component directly or through the other components. Our analysis includes detailed inspection of the network traffic and IIS logs, as well as investigation of the System logs where appropriate. We also develop a pattern recognition approach to classify TCP connections as port scans or vulnerability scans/attacks. Some of the conclusions of our analysis are: (1) the vast majority of malicious traffic was over the TCP protocol, (2) the majority of malicious traffic was targeted at Windows file sharing, HTTP, and SSH ports, (3) most attackers found our Web server through search-based strategies rather than IP-based strategies, and (4) most of the malicious traffic was generated by a few unique attackers.
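    As an illustration of the connection classification idea, the following is a minimal sketch of a heuristic that labels traffic sources as port scans or vulnerability scans/attacks from a simple connection log. The record format and thresholds are assumptions for illustration, not the thesis's actual pattern recognition approach.

```python
# Minimal sketch of separating port scans from vulnerability scans/attacks in a
# TCP connection log. The thresholds and record format (src_ip, dst_port,
# bytes_sent) are illustrative assumptions.
from collections import defaultdict

def classify_sources(connections, port_threshold=20, payload_threshold=0):
    """connections: iterable of (src_ip, dst_port, bytes_sent) tuples."""
    ports = defaultdict(set)
    payload = defaultdict(int)
    for src, dport, nbytes in connections:
        ports[src].add(dport)
        payload[src] += nbytes

    labels = {}
    for src in ports:
        if len(ports[src]) >= port_threshold and payload[src] <= payload_threshold:
            labels[src] = "port scan"                  # many ports touched, no payload sent
        elif payload[src] > payload_threshold:
            labels[src] = "vulnerability scan/attack"  # payload aimed at a service
        else:
            labels[src] = "benign/unknown"
    return labels

sample = [("10.0.0.5", p, 0) for p in range(1, 40)] + [("10.0.0.9", 80, 412)]
print(classify_sources(sample))
```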

    A survey of anomaly detection using data mining methods for hypertext transfer protocol web services

    In contrast to traditional Intrusion Detection Systems (IDSs), data mining-based anomaly detection methods have been widely used on network traffic data for intrusion and cyber-threat detection. Data mining is widely recognized as a set of popular, important, intelligent and automated tools that assist humans in big-data security analysis and anomaly detection for IDSs. In this study, we present our review of data mining anomaly detection methods for HTTP web services. Today, many online activities, including online shopping and banking, run through web services. Consequently, the Hypertext Transfer Protocol (HTTP) plays a crucial role in web services as their standard communication protocol. Hence, HTTP is considered a prime target by attackers. In recent years, anomaly detection based on data mining methods has attracted considerable attention from researchers as an effective approach. We provide an overview of four general data mining techniques: classification, clustering, semi-supervised learning and association rule mining. These data mining anomaly detection methods can be applied to HTTP request data, which is necessary for describing user behavior. To address the challenges of data mining techniques, we also provide a section on challenges and open issues for intrusion detection systems in HTTP web services.
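    As one small illustration of the clustering family of methods the survey covers, the sketch below clusters synthetic HTTP request features and flags requests far from their cluster centre as anomalies. The chosen features and thresholds are assumptions, not taken from any surveyed system.

```python
# Minimal sketch of clustering-based anomaly detection over HTTP request
# features (URI length, parameter count, response size are assumed features).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
normal = rng.normal([40, 3, 2048], [5, 1, 300], size=(500, 3))   # typical requests
odd = rng.normal([400, 30, 50], [50, 5, 10], size=(5, 3))        # unusual requests
X = np.vstack([normal, odd])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
threshold = np.percentile(dist, 99)          # flag the farthest ~1% as anomalous
print("anomalous request indices:", np.where(dist > threshold)[0])
```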

    Energy Measurement of SPDY Protocol on Mobile Platform

    The past few years have witnessed an explosive growth in mobile Internet data traffic, with web browsing being one of the key activities on mobile devices, and there is tremendous interest in optimizing the mobile web. In this regard, a new protocol called SPDY was introduced by Google to augment web browsing; however, its impact on device energy consumption is not clearly understood. In this work we evaluate the energy characteristics of SPDY-based web browsing on mobile devices. In order to measure the energy consumption of web activities, we use AT&T’s ARO [1]. This tool is widely accepted and used by industry as well as academia for radio (LTE) energy measurement. However, the tool was initially designed as a GUI application, and hence its efficiency in handling large-scale data was limited. The first part of the project involves optimizing ARO so that the program runs automatically on a fairly large amount of data in minimal time; in this process we reduced the running time of ARO by a factor of 13 on average compared to the baseline GUI version. In the second part of the project, we set up a SPDY proxy to be used in our evaluations, since only a limited number of websites support SPDY at this moment. Further, we conduct an initial evaluation comparing SPDY proxy-based download to traditional HTTP-based download. Our initial analysis shows that for 18 out of 20 pages that we evaluated, SPDY performs better than existing HTTP in terms of energy: for example, for 50% of pages SPDY reduces the LTE energy consumption by at least 3 J (>20%), while for a few pages the benefits are small and SPDY sometimes performs worse than HTTP. As part of future work, we will extend our evaluation to gain a comprehensive understanding of the energy characteristics of SPDY and compare it with other approaches.
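    The per-page comparison described above boils down to simple arithmetic over paired energy measurements; the sketch below shows one way such a summary could be computed. The energy values are made-up placeholders, not the project's measurements.

```python
# Minimal sketch: compare per-page LTE energy (joules) for HTTP vs. the SPDY
# proxy and report how many pages improve and by how much. Placeholder data.
def summarize(http_energy, spdy_energy, min_saving_j=3.0):
    savings = [(h - s, (h - s) / h * 100) for h, s in zip(http_energy, spdy_energy)]
    improved = sum(1 for joules, _ in savings if joules > 0)
    big_wins = sum(1 for joules, _ in savings if joules >= min_saving_j)
    print(f"{improved}/{len(savings)} pages improved; "
          f"{big_wins} saved at least {min_saving_j} J")
    for i, (joules, pct) in enumerate(savings):
        print(f"page {i}: {joules:+.1f} J ({pct:+.0f}%)")

summarize(http_energy=[15.2, 12.8, 9.5], spdy_energy=[11.9, 9.1, 10.2])
```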

    Extending the Functionality of the Realm Gateway

    With the promise of 5G and the Internet of Things (IoT), the coming years are expected to witness substantial growth in the number of connected devices. This increase in the number of connected devices further aggravates the IPv4 address exhaustion problem. Network Address Translation (NAT) is a widely known solution to the issue of IPv4 address depletion, but it introduces a reachability problem. Since the Hypertext Transfer Protocol (HTTP) and Hypertext Transfer Protocol Secure (HTTPS) application layer protocols play a vital role in the communication of mobile and IoT devices, the NAT reachability problem needs to be addressed particularly for these protocols. Realm Gateway (RGW) is a solution proposed to overcome the NAT traversal issue. It acts as a Destination NAT (DNAT) for inbound connections initiated towards the private hosts, while acting as a Source NAT (SNAT) for connections in the outbound direction. The DNAT functionality of RGW is based on a circular pool algorithm that relies on the Domain Name System (DNS) queries sent by the client to maintain the correct connection state. However, an additional reverse proxy is needed with RGW for dealing with HTTP and HTTPS connections. In this thesis, a custom Application Layer Gateway (ALG) is designed to enable end-to-end communication between public clients and private web servers over HTTP and HTTPS. The ALG replaces the reverse proxy used in the original RGW software. Our solution uses a custom parser-lexer for hostname detection and routing of the traffic to the correct back-end web server. Furthermore, we integrated the RGW with a policy management system called Security Policy Management (SPM) for storing and retrieving the policies of RGW. We analyzed the impact of the new extensions on the performance of RGW in terms of scalability and computational overhead. Our analysis shows that the ALG's performance is directly dependent on the hardware specification of the system. The ALG has an advantage over the reverse proxy in that it does not require the private keys of the back-end servers for forwarding encrypted HTTPS traffic. Therefore, using a system with powerful processing capabilities improves the performance of RGW, and the ALG outperforms the NGINX reverse proxy used in the original RGW solution.
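    To illustrate the hostname-based routing that the ALG performs, the sketch below extracts the Host header from the first bytes of a plaintext HTTP request and maps it to a back-end address; for HTTPS, the analogous decision would be made from the SNI field of the TLS ClientHello, which is why no back-end private keys are required. The host names, back-end addresses and parsing details are illustrative assumptions, not the actual RGW/ALG implementation.

```python
# Minimal sketch of hostname-based routing for plaintext HTTP. Hostnames and
# back-end addresses are illustrative assumptions.
BACKENDS = {
    "www.example.com": ("192.168.1.10", 80),
    "blog.example.com": ("192.168.1.11", 80),
}

def host_from_http_request(data):
    """Return the Host header value from a raw HTTP/1.x request, or None."""
    for line in data.split(b"\r\n")[1:]:     # skip the request line
        if not line:                         # blank line ends the header block
            break
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"host":
            return value.strip().decode("latin-1").split(":")[0]  # drop any port
    return None

request = b"GET / HTTP/1.1\r\nHost: blog.example.com\r\nUser-Agent: x\r\n\r\n"
host = host_from_http_request(request)
print(host, "->", BACKENDS.get(host, "no matching back-end"))
```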

    Traffic measurement and analysis

    Measurement and analysis of real traffic is important for gaining knowledge about the characteristics of the traffic; without measurement, it is impossible to build realistic traffic models. Data traffic has recently been found to have self-similar properties. In this thesis work, traffic captured on the network at SICS and on the Supernet is shown to have this fractal-like behaviour. The traffic is also examined with respect to which protocols and packet sizes are present and in what proportions. In the SICS trace most packets are small, TCP is shown to be the predominant transport protocol, and NNTP the most common application. In contrast, large UDP packets sent between ports outside the well-known range dominate the Supernet traffic. Finally, the characteristics of the client side of the WWW traffic are examined more closely. In order to extract useful information from the packet trace, web browsers' use of TCP and HTTP is investigated, including new features in HTTP/1.1 such as persistent connections and pipelining. Empirical probability distributions are derived describing session lengths, the time between user clicks, and the amount of data transferred due to a single user click. These probability distributions make up a simple model of WWW sessions.
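    As a small illustration of deriving such empirical distributions, the sketch below computes an empirical CDF from a list of observed inter-click times. The sample values are placeholders, not data from the SICS or Supernet traces.

```python
# Minimal sketch: empirical CDF of time between user clicks (placeholder data).
import numpy as np

def empirical_cdf(samples):
    """Return (x, F) where F[i] is the fraction of samples <= x[i]."""
    x = np.sort(np.asarray(samples, dtype=float))
    F = np.arange(1, len(x) + 1) / len(x)
    return x, F

inter_click_seconds = [2.1, 5.4, 0.8, 12.3, 3.3, 7.9, 1.5, 30.0]
x, F = empirical_cdf(inter_click_seconds)
print("P(inter-click time <= 5 s) ≈", F[np.searchsorted(x, 5.0, side="right") - 1])
```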

    Web Conferencing Traffic - An Analysis using DimDim as Example

    In this paper, we present an evaluation of the Ethernet traffic for the host and attendees of the popular open-source web conferencing system DimDim. While traditional Internet-centric approaches such as the MBONE have been used over the past decades, current web-based conference systems make exclusive use of application-layer multicast. To allow for network dimensioning and QoS provisioning, an understanding of the underlying traffic characteristics is required. We find in our exemplary evaluations that the host of a web conference session produces a large amount of Ethernet traffic, largely due to the control required for the conference session; this traffic follows a heavy-tailed distribution and additionally exhibits long-range dependence. For different groups of activities within a web conference session, we find distinctive characteristics of the generated traffic.
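    One common way to check for heavy-tailed behaviour of the kind reported above is to fit the upper tail of the empirical complementary CDF on a log-log scale; the sketch below does this on a synthetic Pareto sample standing in for the per-interval host traffic. The data and the fitting window are assumptions for illustration, not the paper's method or measurements.

```python
# Minimal sketch: estimate a tail exponent from the log-log CCDF of synthetic
# per-interval traffic volumes (Pareto-distributed placeholder data).
import numpy as np

rng = np.random.default_rng(2)
traffic = (rng.pareto(a=1.5, size=5000) + 1) * 1000    # synthetic bytes/interval

x = np.sort(traffic)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)
tail = slice(int(0.5 * len(x)), -1)                    # fit only the upper tail
slope, _ = np.polyfit(np.log(x[tail]), np.log(ccdf[tail]), 1)
print(f"estimated tail exponent ≈ {-slope:.2f} (infinite variance if < 2)")
```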

    Systemization of Pluggable Transports for Censorship Resistance

    An increasing number of countries implement Internet censorship at different scales and for a variety of reasons. In particular, the link between the censored client and the entry point to the uncensored network is a frequent target of censorship due to the ease with which a nation-state censor can control it. A number of censorship resistance systems have been developed thus far to help circumvent blocking on this link, which we refer to as link circumvention systems (LCs). The variety and profusion of attack vectors available to a censor have led to an arms race and a dramatic pace of evolution of LCs. Despite their inherent complexity and the breadth of work in this area, there is no systematic way to evaluate link circumvention systems and compare them against each other. In this paper, we (i) sketch an attack model to comprehensively explore a censor's capabilities, (ii) present an abstract model of an LC, a system that helps a censored client communicate with a server over the Internet while resisting censorship, (iii) describe an evaluation stack that underscores a layered approach to evaluate LCs, and (iv) systemize and evaluate existing censorship resistance systems that provide link circumvention. We highlight open challenges in the evaluation and development of LCs and discuss possible mitigations. Comment: Content from this paper was published in Proceedings on Privacy Enhancing Technologies (PoPETS), Volume 2016, Issue 4 (July 2016) as "SoK: Making Sense of Censorship Resistance Systems" by Sheharbano Khattak, Tariq Elahi, Laurent Simon, Colleen M. Swanson, Steven J. Murdoch and Ian Goldberg (DOI 10.1515/popets-2016-0028).

    A Covert Channel Using Named Resources

    A network covert channel is created that uses resource names such as addresses to convey information, and that approximates typical user behavior in order to blend in with its environment. The channel correlates available resource names with a user-defined code-space, and transmits its covert message by selectively accessing resources associated with the message codes. In this paper we focus on an implementation of the channel using the Hypertext Transfer Protocol (HTTP) with Uniform Resource Locators (URLs) as the resource names, though the system can be used in conjunction with a variety of protocols. The covert channel does not modify expected protocol structure as might be detected by simple inspection, and our HTTP implementation emulates transaction-level web user behavior in order to avoid detection by statistical or behavioral analysis. Comment: 9 pages
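    A minimal sketch of the encoding idea follows: sender and receiver agree on a mapping from code values to ordinary-looking resource names, the sender transmits a message by accessing the resources for its symbols, and the receiver decodes the message from the sequence of requested names. The URL list and the two-bits-per-request encoding are illustrative assumptions; the paper's actual channel additionally emulates web user behavior to avoid detection.

```python
# Minimal sketch of a named-resource covert channel: URLs below are illustrative.
CODE_SPACE = [
    "http://example.com/news/sports",
    "http://example.com/news/weather",
    "http://example.com/images/logo.png",
    "http://example.com/about",
]  # position in the list is the code value (2 bits per request here)

def encode(message: bytes) -> list[str]:
    """Map each 2-bit group of the message to a resource name to be accessed."""
    urls = []
    for byte in message:
        for shift in (6, 4, 2, 0):
            urls.append(CODE_SPACE[(byte >> shift) & 0b11])
    return urls

def decode(urls: list[str]) -> bytes:
    """Recover the message from the observed sequence of requested names."""
    codes = [CODE_SPACE.index(u) for u in urls]
    out = bytearray()
    for i in range(0, len(codes), 4):
        byte = 0
        for c in codes[i:i + 4]:
            byte = (byte << 2) | c
        out.append(byte)
    return bytes(out)

urls = encode(b"hi")
print(urls)
print(decode(urls))   # -> b'hi'
```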