
    Distributed Denial-of-Service Characterization

    Towards Coordinated, Network-Wide Traffic Monitoring for Early Detection of DDoS Flooding Attacks

    DDoS flooding attacks are one of the biggest concerns for security professionals and are typically explicit attempts to disrupt legitimate users' access to services. Developing a comprehensive defense mechanism against such attacks requires a comprehensive understanding of the problem and of the techniques that have been used thus far in preventing, detecting, and responding to them. In this thesis, we approach the problem of DDoS flooding attacks from four directions: (1) We study the origin of these attacks, their variations, and various existing defense mechanisms against them. Our literature review yields a list of key required features for the next generation of DDoS flooding defense mechanisms; the most important requirement on this list is to see more distributed DDoS flooding defense mechanisms in the near future. (2) In such systems, success in detecting DDoS flooding attacks early and in a distributed fashion depends heavily on the quality and quantity of the traffic flows covered by the employed traffic monitoring mechanisms. This motivates us to study and understand the challenges of existing traffic monitoring mechanisms. (3) We propose a novel distributed, coordinated, network-wide traffic monitoring (DiCoTraM) approach that addresses the key challenges of current traffic monitoring mechanisms. DiCoTraM enhances flow coverage to enable effective, early detection of DDoS flooding attacks. We compare and evaluate the performance of DiCoTraM against various other traffic monitoring mechanisms in terms of total flow coverage and DDoS flooding attack flow coverage. (4) We evaluate the effectiveness of DiCoTraM against cSamp, an existing traffic monitoring mechanism that outperforms most other traffic monitoring mechanisms, with regard to supporting early detection of DDoS flooding attacks (i.e., at the intermediate network) by employing two existing DDoS flooding detection mechanisms over them. We then compare the effectiveness of DiCoTraM with that of cSamp by comparing the detection rates and false positive rates achieved when the selected detection mechanisms are employed over DiCoTraM and cSamp. The results show that DiCoTraM outperforms other traffic monitoring mechanisms in terms of DDoS flooding attack flow coverage.
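
    To make the comparison above concrete, the sketch below shows one way the detection rate and false positive rate of a detection mechanism running over a monitoring scheme such as DiCoTraM or cSamp could be computed; the flow sets and identifiers are hypothetical, not the thesis's actual data.

```python
# Hypothetical sketch: evaluating a detector over the flows a monitoring
# mechanism (e.g., DiCoTraM or cSamp) managed to cover. Flow identifiers
# and sets are illustrative only.

def detection_metrics(alerted_flows, attack_flows, all_flows):
    """Return (detection_rate, false_positive_rate) for one monitoring setup."""
    benign_flows = all_flows - attack_flows
    true_positives = alerted_flows & attack_flows
    false_positives = alerted_flows & benign_flows
    detection_rate = len(true_positives) / len(attack_flows) if attack_flows else 0.0
    false_positive_rate = len(false_positives) / len(benign_flows) if benign_flows else 0.0
    return detection_rate, false_positive_rate

# Example: 3 alerts, 2 of which are real attack flows, out of 10 covered flows.
print(detection_metrics({"f1", "f2", "f9"}, {"f1", "f2", "f3"},
                        {f"f{i}" for i in range(1, 11)}))
```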

    Achieving network resiliency using sound theoretical and practical methods

    Computer networks have revolutionized the life of every citizen in our modern interconnected society. The impact of networked systems spans every aspect of our lives, from financial transactions to healthcare and critical services, making these systems an attractive target for malicious entities that aim to make financial or political profit. Specifically, the past decade has witnessed an astounding increase in the number and complexity of sophisticated and targeted attacks, known as advanced persistent threats (APT). Those attacks led to a paradigm shift in the security and reliability communities’ perspective on system design; researchers and government agencies accepted the inevitability of incidents and malicious attacks, and marshaled their efforts into the design of resilient systems. Rather than focusing solely on preventing failures and attacks, resilient systems are able to maintain an acceptable level of operation in the presence of such incidents, and then recover gracefully into normal operation. Alongside prevention, resilient system design focuses on incident detection as well as timely response. Unfortunately, the resiliency efforts of research and industry experts have been hindered by an apparent schism between theory and practice, which allows attackers to maintain the upper hand. This lack of compatibility between the theory and practice of system design is attributed to the following challenges. First, theoreticians often make impractical and unjustifiable assumptions that allow for mathematical tractability while sacrificing accuracy. Second, the security and reliability communities often lack clear definitions of success criteria when comparing different system models and designs. Third, system designers often make implicit or unstated assumptions to favor practicality and ease of design. Finally, resilient systems are tested in private and isolated environments where validation and reproducibility of the results are not publicly accessible. In this thesis, we set about showing that the proper synergy between theoretical analysis and practical design can enhance the resiliency of networked systems. We illustrate the benefits of this synergy by presenting resiliency approaches that target the inter- and intra-networking levels. At the inter-networking level, we present CPuzzle as a means to protect the Transmission Control Protocol (TCP) connection establishment channel from state-exhaustion distributed denial-of-service (DDoS) attacks. CPuzzle leverages client puzzles to limit the rate at which misbehaving users can establish TCP connections. We modeled the problem of determining the puzzle difficulty as a Stackelberg game and solved for the equilibrium strategy that balances the users’ utilities against CPuzzle’s resilience capabilities. Furthermore, to handle volumetric DDoS attacks, we extend CPuzzle and implement Midgard, a cooperative approach that involves end-users in the process of tolerating and neutralizing DDoS attacks. Midgard is a middlebox that resides at the edge of an Internet service provider’s network and uses client puzzles at the IP level to allocate bandwidth to its users. At the intra-networking level, we present sShield, a game-theoretic network response engine that manipulates a network’s connectivity in response to an attacker who is moving laterally to compromise a high-value asset.
To implement such decision-making algorithms, we leverage the recent advances in software-defined networking (SDN) to collect logs and security alerts about the network and implement response actions. However, the programmability offered by SDN comes with an increased chance for design-time bugs that can have drastic consequences on the reliability and security of a networked system. We therefore introduce BiFrost, an open-source tool that aims to verify safety and security properties of data-plane programs. BiFrost translates data-plane programs into functionally equivalent sequential circuits, and then uses well-established hardware reduction, abstraction, and verification techniques to establish correctness proofs about data-plane programs. By focusing on those four key efforts, CPuzzle, Midgard, sShield, and BiFrost, we believe that this work illustrates the benefits that the synergy between theory and practice can bring into the world of resilient system design. This thesis is an attempt to pave the way for further cooperation and coordination between theoreticians and practitioners, in the hope of designing resilient networked systems.
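
    As a concrete illustration of the client-puzzle idea behind CPuzzle, the sketch below shows a generic hash-based puzzle in which the server issues a nonce and a difficulty (a number of leading zero bits) and the client must search for a matching answer before its connection attempt is accepted. The SHA-256 construction and the fixed difficulty are illustrative assumptions; the thesis derives the difficulty from a Stackelberg equilibrium rather than the constant used here.

```python
# Minimal sketch of a hash-based client puzzle in the spirit of CPuzzle's
# rate limiting of TCP connection establishment. The construction and the
# fixed difficulty are illustrative assumptions, not the thesis's design.
import hashlib
import itertools
import os

def issue_puzzle(difficulty: int) -> tuple[bytes, int]:
    """Server side: hand the client a fresh nonce and a difficulty level."""
    return os.urandom(16), difficulty

def solve_puzzle(nonce: bytes, difficulty: int) -> int:
    """Client side: brute-force an answer whose hash has `difficulty` leading zero bits."""
    for answer in itertools.count():
        digest = hashlib.sha256(nonce + answer.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return answer          # expected client work grows roughly as 2**difficulty

def verify_puzzle(nonce: bytes, difficulty: int, answer: int) -> bool:
    """Server side: a single hash suffices to check the client's work."""
    digest = hashlib.sha256(nonce + answer.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

nonce, d = issue_puzzle(difficulty=12)     # a server could raise difficulty under attack
assert verify_puzzle(nonce, d, solve_puzzle(nonce, d))
```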

    Multi-agent-based DDoS detection on big data systems

    The Hadoop framework has become the most deployed platform for processing Big Data. Despite its advantages, Hadoop's infrastructure is still deployed within the secured network perimeter because the framework lacks adequate inherent security mechanisms against various security threats. However, this approach is not sufficient for providing an adequate security layer against attacks such as Distributed Denial of Service. Furthermore, current work to secure Hadoop's infrastructure against DDoS attacks is unable to provide a distributed node-level detection mechanism. This thesis presents a software agent-based framework that allows distributed, real-time intelligent monitoring and detection of DDoS attacks at Hadoop's node level. The agent's cognitive system is ingrained with a cumulative sum (CUSUM) statistical technique to analyse network utilisation and average server load and detect attacks from these measurements. The framework is a multi-agent architecture with transducer agents that interface with each Hadoop node to provide a real-time detection mechanism. Moreover, the agents contextualise their beliefs by training themselves with the contextual information of each node and monitor the activities of the node to differentiate between normal and anomalous behaviours. In the experiments, the framework was exposed to TCP SYN and UDP flooding attacks during a legitimate MapReduce job on the Hadoop testbed. The experimental results were evaluated with regard to performance metrics such as false-positive ratio, false-negative ratio and response time to attack. The results show that UDP and TCP SYN flooding attacks can be detected and confirmed on multiple nodes within nineteen seconds, with a 5.56% false-positive ratio, 7.70% false-negative ratio and 91.5% detection success rate. The results represent an improvement compared to the state of the art.
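
    The sketch below illustrates the cumulative sum (CUSUM) change-detection idea that the agents apply to per-node measurements such as network utilisation and average server load. The baseline, drift and threshold values are placeholder assumptions, not the thesis's tuned parameters.

```python
# Illustrative CUSUM detector over a stream of per-node measurements
# (e.g., network utilisation). Parameters are placeholders, not tuned values.

def cusum_detect(samples, baseline, drift=0.5, threshold=10.0):
    """Return the index where the positive CUSUM statistic crosses the threshold, else None."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - baseline - drift))   # accumulate sustained positive deviations
        if s > threshold:
            return i                               # flag a possible SYN/UDP flood here
    return None

# Example: a sustained jump in utilisation after four normal samples raises an alert.
normal_load = [5.0, 5.2, 4.8, 5.1]
flood_load = [25.0] * 5
print(cusum_detect(normal_load + flood_load, baseline=5.0))   # -> 4
```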

    Online learning on the programmable dataplane

    This thesis makes the case for managing computer networks with data-driven methods (automated statistical inference and control based on measurement data and runtime observations) and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, which are currently dominated by the use of hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, while their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network. To justify this argument, I advance the state of the art in data-driven defence of networks, novel dataplane-friendly online reinforcement learning algorithms, and in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network, and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to correctly design, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation to histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volume. Moving reinforcement learning to the dataplane is shown to offer substantial benefits to state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as individual algorithms.
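
    The sketch below illustrates the kind of fixed-point arithmetic that makes such online updates feasible on SmartNIC and switch pipelines without floating-point units. The Q16.16 format and the exponentially weighted running-mean update are illustrative choices, not the thesis's exact algorithms.

```python
# Hedged sketch: Q16.16 fixed-point arithmetic for an online update rule,
# the style of integer-only computation a SmartNIC/dataplane target can run.
# The format and the update rule are illustrative assumptions.

FRAC_BITS = 16                          # Q16.16: 16 integer bits, 16 fractional bits

def to_fixed(x: float) -> int:
    return int(round(x * (1 << FRAC_BITS)))

def to_float(x: int) -> float:
    return x / (1 << FRAC_BITS)

def fx_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS         # multiply, then rescale back to Q16.16

def ewma_update(estimate: int, sample: int, alpha: int) -> int:
    """estimate <- estimate + alpha * (sample - estimate), entirely in integers."""
    return estimate + fx_mul(alpha, sample - estimate)

est = to_fixed(0.0)
for pkt_rate in (10.0, 12.0, 50.0, 55.0):       # e.g., packets per microsecond
    est = ewma_update(est, to_fixed(pkt_rate), to_fixed(0.25))
print(to_float(est))                            # smoothed estimate of the rate
```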

    Reducing short flows' latency in the internet

    Short flows are highly valuable in the modern Internet and are widely used by applications in the form of web requests or user interactions. These kinds of applications are extremely sensitive to latency. A small additional delay, like one or two round trip times (RTTs), may easily cause user frustration and degrade the usability of services. In the most desirable scenario, we want to finish these kinds of flows in one network RTT. Furthermore, we would like the network's RTT to be as close as possible to the speed-of-light latency. Unfortunately, in the current Internet, there are many unnecessary delays caused by different kinds of policies--in particular, transmission protocol and routing policies--driving us far away from this goal. This thesis aims at answering the following two questions: How can we optimize the transmission protocol to reduce short flows' latency as close as possible to one RTT, and why are network RTTs still significantly larger than the speed-of-light latency? To reduce the transmission latency, we focused on the two main components of short flows: connection establishment and data transmission. ASAP, a new naming and transport protocol, is introduced to reduce the time spent on initial TCP connections. It merges DNS resolution and TCP connection establishment by piggybacking the connection establishment procedure atop the DNS lookup process. With the help of ASAP, the host is able to save up to two-thirds of the time spent on the initial connection without exposing significant DoS vulnerabilities. For data transmission, we designed a new rate control mechanism, Halfback, which achieves low latency with limited bandwidth overhead and only requires sender-side changes. Halfback has an aggressive startup phase, finishing transmission for most short flows in one RTT, together with a Reverse-Ordering Proactive Retransmission phase which helps the host recover quickly from packet loss caused by the aggressive startup phase. Halfback is able to achieve 56% smaller flow completion times on average and three times smaller at the 99th percentile. The RTT between two hosts can be more than 6 times the speed-of-light latency over direct optical fiber. To understand the composition of this RTT inflation, we break down the inflation on the end-to-end path into its contributing factors. Based on our results, 7.2% is caused by network topology, 18.8% is contributed by inter-domain routing policies, 54.9% is caused by peering policies, and 25.6% is caused by intra-domain routing policies. This result shows that the main component of the path inflation is caused by peering policies, which may require more attention in future research. Besides this, we also analyze how the inflation caused by each contributing factor has changed across five years. According to our analysis, the total inflation has been reduced by around 6% each year since 2010.
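
    The toy simulation below sketches the sender-side behaviour described above: the whole short flow is sent out in the first RTT, and packets are then proactively retransmitted in reverse order until everything is delivered, so the later packets most likely to have been dropped by the aggressive start are repaired first. The loss model, loop structure and function names are assumptions for illustration, not Halfback's actual implementation.

```python
# Hedged toy model of Halfback's two phases: aggressive startup followed by
# reverse-order proactive retransmission. The random loss model and the
# halfback_transfer() helper are illustrative assumptions only.
import random

def halfback_transfer(num_pkts: int, loss_prob: float = 0.2, seed: int = 1) -> int:
    """Return how many transmissions it took to deliver every packet of a short flow."""
    rng = random.Random(seed)
    delivered = [False] * num_pkts
    sends = 0

    def send(i: int) -> None:
        nonlocal sends
        sends += 1
        if rng.random() > loss_prob:            # packet survives the bottleneck queue
            delivered[i] = True

    for i in range(num_pkts):                    # aggressive startup: whole flow in one RTT
        send(i)
    while not all(delivered):                    # proactive retransmission, reverse order
        for i in reversed(range(num_pkts)):
            if not delivered[i]:
                send(i)
    return sends

print(halfback_transfer(10))
```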

    Wide spectrum attribution: Using deception for attribution intelligence in cyber attacks

    Modern cyber attacks have evolved considerably. The skill level required to conduct a cyber attack is low, computing power is cheap, and targets are diverse and plentiful. Point-and-click crimeware kits are widely circulated in the underground economy, while source code for sophisticated malware such as Stuxnet is available for all to download and repurpose. Despite decades of research into defensive techniques such as firewalls, intrusion detection systems, anti-virus, and code auditing, the quantity of successful cyber attacks continues to increase, as does the number of vulnerabilities identified. Measures to identify perpetrators, known as attribution, have existed for as long as there have been cyber attacks. The most actively researched technical attribution techniques involve the marking and logging of network packets. These techniques are performed by network devices along the packet journey, which most often requires modification of existing router hardware and/or software, or the inclusion of additional devices. These modifications require wide-scale infrastructure changes that are not only complex and costly, but also invoke legal, ethical and governance issues. The usefulness of these techniques is also often questioned, as attack actors use multiple stepping stones, often innocent systems that have been compromised, to mask the true source. As such, this thesis identifies that no publicly known previous work has been deployed on a wide-scale basis in the Internet infrastructure. This research investigates the use of an often overlooked tool for attribution: cyber deception. The main contribution of this work is a significant advancement in the field of deception and honeypots as technical attribution techniques. Specifically, the design and implementation of two novel honeypot approaches: i) the Deception Inside Credential Engine (DICE), which uses policy and honeytokens to identify adversaries returning from different origins, and ii) the Adaptive Honeynet Framework (AHFW), an introspection-based, adaptive honeynet framework that uses actor-dependent triggers to modify the honeynet environment and engage the adversary, increasing the quantity and diversity of interactions. The two approaches are based on a systematic review of the technical attribution literature that was used to derive a set of requirements for honeypots as technical attribution techniques. Both approaches lead the way for further research in this field.
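
    As a rough illustration of the honeytoken idea behind DICE, the sketch below issues a unique decoy credential to each probing origin and links any later session that replays that credential back to the origin it was first served to, even if the adversary returns from a different address. The storage, function names and addresses are hypothetical and for illustration only.

```python
# Speculative sketch of honeytoken-based attribution in the spirit of DICE:
# each origin receives a unique decoy credential; reuse of that credential
# from elsewhere links the two origins. All names here are hypothetical.
import secrets

issued_tokens: dict[str, str] = {}      # honeytoken -> origin it was first served to

def issue_honeytoken(origin: str) -> str:
    token = secrets.token_hex(8)        # unique decoy credential for this origin
    issued_tokens[token] = origin
    return token

def attribute_reuse(token: str, new_origin: str) -> str | None:
    first_origin = issued_tokens.get(token)
    if first_origin and first_origin != new_origin:
        return f"credential first served to {first_origin} replayed from {new_origin}"
    return None

token = issue_honeytoken("198.51.100.7")        # first contact with a probing origin
print(attribute_reuse(token, "203.0.113.42"))   # the same actor returns from elsewhere
```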