
    Command & Control: Understanding, Denying and Detecting - A review of malware C2 techniques, detection and defences

    In this survey, we first briefly review the current state of cyber attacks, highlighting significant recent changes in how and why such attacks are performed. We then investigate the mechanics of malware command and control (C2) establishment: we provide a comprehensive review of the techniques used by attackers to set up such a channel and to hide its presence from the attacked parties and the security tools they use. We then switch to the defensive side of the problem and review approaches that have been proposed for the detection and disruption of C2 channels. We also map such techniques to widely adopted security controls, emphasizing gaps or limitations (and success stories) in current best practices. Comment: Work commissioned by CPNI, available at c2report.org. 38 pages. Listing abstract compressed from version appearing in report.

    Spectral Graph-Based Cyber Detection and Classification System with Phantom Components

    With cyber attacks on the rise, cyber defenders require new, innovative solutions to provide network protection. We propose a spectral graph-based cyber detection and classification (SGCDC) system using phantom components, the strong node concept, and the dual-degree matrix to detect, classify, and respond to worm and distributed denial-of-service (DDoS) attacks. The system is analyzed using absorbing Markov chains and a novel Levy-impulse model that characterizes network SYN traffic to determine the theoretical false-alarm rates of the system. The detection mechanism is analyzed in the face of network noise and congestion using Weyl’s theorem, the Davis-Kahan theorem, and a novel application of the n-dimensional Euclidean metric. The SGCDC system is validated using real-world and synthetic datasets, including the WannaCry and Blaster worms and a SYN flood attack. The system accurately detected and classified the attacks in all but one case studied. The known attacking nodes were identified in less than 0.27 sec for the DDoS attack, and the worm-infected nodes were identified in less than one second after the second infected node began the target search and discovery process for the WannaCry and Blaster worm attacks. The system also produced a false-alarm rate of less than 0.005 in one of the scenarios studied. These results improve upon other non-spectral graph systems, which have reported detection times of up to 0.97 sec and false-alarm rates as high as 0.095 for worm and DDoS attacks. Lieutenant Commander, United States Navy. Approved for public release; distribution is unlimited.
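The spectral approach summarized above rests on comparing graph spectra before and after a perturbation, with Weyl's theorem bounding how far eigenvalues can move. As a minimal sketch (not the SGCDC system itself; the phantom components and dual-degree matrix are specific to the thesis), the following compares the Laplacian spectra of a traffic graph before and after a sudden fan-out burst:

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues (ascending) of the graph Laplacian L = D - A."""
    deg = np.diag(adj.sum(axis=1))
    return np.linalg.eigvalsh(deg - adj)

def spectral_shift(adj_before, adj_after):
    """Largest eigenvalue displacement between the two spectra.
    By Weyl's theorem this is bounded by the spectral norm of the
    perturbation E = A_after - A_before."""
    return np.max(np.abs(laplacian_spectrum(adj_after) -
                         laplacian_spectrum(adj_before)))

rng = np.random.default_rng(1)
n = 20
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                        # baseline sparse traffic graph

B = A.copy()
B[0, 1:] = 1.0
B[1:, 0] = 1.0                     # node 0 suddenly contacts every host

shift = spectral_shift(A, B)       # a large jump flags the fan-out burst
```

A worm's target-discovery or a DDoS fan-in changes node degrees sharply, so the extreme Laplacian eigenvalues move far more than baseline noise would allow.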

    Scalable schemes against Distributed Denial of Service attacks

    Defense against Distributed Denial of Service (DDoS) attacks is one of the primary concerns on the Internet today. DDoS attacks are difficult to prevent because of the open, interconnected nature of the Internet and its underlying protocols, which can be used in several ways to deny service. Attackers hide their identity by using third parties such as private chat channels on IRC (Internet Relay Chat). They also insert a false return IP address into packets (spoofing), which makes it difficult for the victim to determine a packet's origin. We propose three novel and realistic traceback mechanisms which offer many advantages over existing schemes. All three schemes take advantage of the Autonomous System (AS) topology and consider the fact that an attacker's packets may traverse a number of domains under different administrative control. Most traceback mechanisms make the incorrect assumption that the network details of a company under a single administrative control are disclosed to the public; for security reasons, this is usually not the case. The proposed schemes overcome this drawback by considering reconstruction at both the inter-AS and intra-AS levels. Hierarchical Internet Traceback (HIT) and the Simple Traceback Mechanism (STM) trace back to an attacker in two phases: in the first phase the attack-originating Autonomous System is identified, while in the second phase the attacker within that AS is identified. Both schemes, HIT and STM, allow the victim to trace back to the attackers in a few seconds. Their computational overhead is very low and they scale to large distributed attacks with thousands of attackers. Fast Autonomous System Traceback allows complete attack-path reconstruction with few packets. We use traceroute maps of real Internet topologies (CAIDA's Skitter) to simulate DDoS attacks and validate our design.
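The two-phase idea behind HIT and STM can be illustrated with a toy probabilistic packet-marking simulation. The AS numbers, marking probability, and packet format below are all hypothetical; the sketch only shows how a victim can reconstruct an AS-level attack path from per-packet marks:

```python
import random

random.seed(7)

AS_PATH = [64512, 64513, 64514, 64515]   # attacker-side AS first, victim-side last
MARK_PROB = 0.2

def forward(packet):
    """Each border AS on the path overwrites the mark with probability
    MARK_PROB, recording its AS number and its distance to the victim."""
    n = len(AS_PATH)
    for i, asn in enumerate(AS_PATH):
        if random.random() < MARK_PROB:
            packet["mark"] = (asn, n - 1 - i)
    return packet

def reconstruct(packets):
    """The victim sorts the collected (ASN, distance) marks into a path."""
    marks = {p["mark"] for p in packets if p["mark"] is not None}
    return [asn for asn, _ in sorted(marks, key=lambda m: m[1], reverse=True)]

packets = [forward({"mark": None}) for _ in range(2000)]
path = reconstruct(packets)        # with enough packets, this equals AS_PATH
```

Because AS paths are short (a handful of hops) compared with router-level paths, far fewer packets are needed for full reconstruction, which is why AS-level traceback converges in seconds.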

    Adaptive Response System for Distributed Denial-of-Service Attacks

    The continued prevalence and severe damaging effects of Distributed Denial of Service (DDoS) attacks in today’s Internet raise growing security concerns and call for better solutions to tackle such attacks. Current DDoS prevention mechanisms are usually inflexible, and determined attackers with knowledge of these mechanisms can work around them. Most existing detection and response mechanisms are standalone systems which do not rely on adaptive updates to mitigate attacks. As different responses vary in their “leniency” in treating detected attack traffic, there is a need for an adaptive response system. We designed and implemented our DDoS Adaptive ResponsE (DARE) System, a distributed DDoS mitigation system capable of executing appropriate detection and mitigation responses automatically and adaptively according to the attacks. It supports easy integration of both signature-based and anomaly-based detection modules. Additionally, the design of DARE’s individual components takes into consideration the strengths and weaknesses of existing defence mechanisms, and the characteristics and possible future mutations of DDoS attacks. These components consist of an Enhanced TCP SYN Attack Detector and Bloom-based Filter, a DDoS Flooding Attack Detector and Flow Identifier, and a Non-Intrusive IP Traceback mechanism. The components work together interactively to adapt the detections and responses in accordance with the attack types. Experiments conducted on DARE show that attack detection and mitigation are successfully completed within seconds, with about 60% to 86% of the attack traffic being dropped, while availability for legitimate and new legitimate requests is maintained. DARE is able to detect attacks and trigger appropriate responses with high accuracy, effectiveness and efficiency.
We also designed and implemented the Traffic Redirection Attack Protection System (TRAPS), a stand-alone DDoS attack detection and mitigation system for IPv6 networks. In TRAPS, the victim under attack verifies the authenticity of the source by performing virtual relocations to differentiate legitimate traffic from attack traffic. TRAPS requires minimal deployment effort and does not require modifications to the Internet infrastructure, since it builds on the Mobile IPv6 protocol. Experiments to test the feasibility of TRAPS were carried out in a testbed environment to verify that it would work with the existing Mobile IPv6 implementation. The operations of each module functioned correctly, and TRAPS was able to successfully mitigate an attack launched with spoofed source IP addresses.

    Scalable architecture for online prioritization of cyber threats

    This paper proposes an innovative framework for the early detection of several cyber attacks. Its main component is an analytics core that gathers streams of raw data generated by network probes, builds several layer models representing different activities of internal hosts, and analyzes intra-layer and inter-layer information. The online analysis of internal network activities at different levels distinguishes our approach from most detection tools and algorithms, which focus on separate network levels or on interactions between internal and external hosts. Moreover, the integrated multi-layer analysis carried out through parallel processing reduces false positives and guarantees scalability with respect to the size of the network and the number of layers. As a further contribution, the proposed framework performs autonomous triage by assigning a risk score to each internal host. This key feature allows security experts to focus their attention on the few hosts with the highest scores rather than wasting time on thousands of daily alerts and false alarms.
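The autonomous-triage step can be pictured as a weighted aggregation of per-layer anomaly scores into a single per-host risk score. The layer names, weights, and scores below are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical per-layer anomaly scores in [0, 1] and illustrative weights.
LAYER_WEIGHTS = {"dns": 0.4, "flows": 0.35, "lateral": 0.25}

def risk_score(layer_scores):
    """Weighted combination of per-layer anomaly scores for one host."""
    return sum(LAYER_WEIGHTS[layer] * s for layer, s in layer_scores.items())

def triage(hosts, top_k=3):
    """Rank internal hosts so analysts inspect only the top_k riskiest."""
    ranked = sorted(hosts.items(), key=lambda kv: risk_score(kv[1]), reverse=True)
    return ranked[:top_k]

hosts = {
    "10.0.0.5":  {"dns": 0.9, "flows": 0.7, "lateral": 0.8},
    "10.0.0.17": {"dns": 0.1, "flows": 0.2, "lateral": 0.0},
    "10.0.0.23": {"dns": 0.3, "flows": 0.9, "lateral": 0.6},
}
top = triage(hosts, top_k=2)   # the two hosts worth an analyst's attention
```

Each layer model can score hosts in parallel, which is where the scalability claim comes from; only the final ranking needs to be computed centrally.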

    Botnet Detection Using Graph Based Feature Clustering

    Detecting botnets in a network is crucial because bot activities impact numerous areas such as security, finance, health care, and law enforcement. Most existing rule- and flow-based detection methods may not be capable of detecting bot activities efficiently, so designing a robust botnet-detection method is of high significance. In this study, we propose a botnet-detection methodology based on graph-based features. A Self-Organizing Map is applied to establish clusters of nodes in the network based on these features. Our method is capable of isolating bots in small clusters while containing most normal nodes in the large clusters. A filtering procedure is also developed to further enhance efficiency by excluding inactive nodes from bot detection. The methodology is verified using the real-world CTU-13 and ISCX botnet datasets and benchmarked against classification-based detection methods. The results show that our proposed method can efficiently detect the bots despite their varying behaviors.
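A minimal sketch of the clustering step, assuming per-node graph features such as out-degree and fan-out entropy (the features, data, and SOM parameters below are illustrative, not those of the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, n_units=4, epochs=200, lr0=0.5, sigma0=1.5):
    """Train a 1-D self-organizing map over per-node feature vectors."""
    W = rng.uniform(X.min(axis=0), X.max(axis=0), size=(n_units, X.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in rng.permutation(X):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            d = np.arange(n_units) - bmu                  # grid distance to BMU
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))      # neighborhood kernel
            W += lr * h[:, None] * (x - W)
    return W

def assign(X, W):
    """Map every node to its nearest SOM unit (its cluster)."""
    return np.argmin(((X[:, None, :] - W[None]) ** 2).sum(axis=-1), axis=1)

# Hypothetical features per node, e.g. (out-degree, fan-out entropy):
normal = rng.normal([5, 0.3], 0.5, size=(40, 2))    # many ordinary hosts
bots = rng.normal([50, 2.5], 0.5, size=(4, 2))      # a few high-fan-out bots
X = np.vstack([normal, bots])
W = train_som(X)
labels = assign(X, W)   # bots land together in their own small cluster
```

The key property exploited is asymmetry of cluster sizes: bots share unusual graph features, so they gravitate to a small, dense cluster separate from the bulk of normal hosts.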

    Scalable Techniques for Anomaly Detection

    Computer networks are constantly being attacked by malicious entities for various reasons. Network-based attacks include, but are not limited to, Distributed Denial of Service (DDoS), DNS-based attacks, and Cross-site Scripting (XSS). Such attacks exploit either network protocols or end-host software vulnerabilities. Current network traffic analysis techniques employed for the detection and/or prevention of these anomalies suffer from significant delay or limited scalability because of their huge resource requirements. This dissertation proposes more scalable techniques for network anomaly detection. We propose using DNS analysis for detecting a wide variety of network anomalies. The use of DNS is motivated by the fact that DNS traffic comprises only 2-3% of total network traffic, reducing the burden on anomaly detection resources, and by the observation that almost any Internet activity (legitimate or otherwise) is marked by the use of DNS. We propose several techniques for DNS traffic analysis to distinguish anomalous DNS traffic patterns, which in turn identify different categories of network attacks. First, we present MiND, a system to detect misdirected DNS packets arising from poisoned name-server records or from local infections such as those caused by worms like DNSChanger. MiND validates misdirected DNS packets using an externally collected database of authoritative name servers for second- or third-level domains. We deploy this tool at the edge of a university campus network for evaluation. Second, we focus on domain-fluxing botnet detection by exploiting the high entropy inherent in the set of domains used for locating the Command and Control (C&C) server. We apply three metrics, namely the Kullback-Leibler divergence, the Jaccard index, and the edit distance, to different groups of domain names present in Tier-1 ISP DNS traces obtained from South Asia and South America. Our evaluation successfully detects existing domain-fluxing botnets such as Conficker and also recognizes new botnets. We extend this approach by utilizing DNS failures to improve the latency of detection. Alternatively, we propose a system which uses temporal and entropy-based correlation between successful and failed DNS queries for fluxing-botnet detection. We also present an approach which computes the reputation of domains in a bipartite graph of hosts within a network and the domains accessed by them. The inference technique utilizes belief propagation, an approximation algorithm for marginal probability estimation. The computation of reputation scores is seeded through a small fraction of domains found in blacklists and whitelists. An application of this technique on an HTTP-proxy dataset from a large enterprise shows a high detection rate with low false-positive rates.
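The three metrics named above are standard and easy to sketch over character distributions and n-gram sets of domain-name groups. The domain lists below are made up for illustration:

```python
import math
from collections import Counter

def char_dist(domains):
    """Unigram character distribution over a group of domain labels."""
    counts = Counter("".join(domains))
    total = sum(counts.values())
    return {c: k / total for c, k in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """D(p || q), smoothing characters absent from q."""
    return sum(pv * math.log(pv / q.get(c, eps)) for c, pv in p.items())

def jaccard_bigrams(group_a, group_b):
    """Jaccard index between the bigram sets of two domain groups."""
    ba = {d[i:i + 2] for d in group_a for i in range(len(d) - 1)}
    bb = {d[i:i + 2] for d in group_b for i in range(len(d) - 1)}
    return len(ba & bb) / len(ba | bb) if ba | bb else 0.0

def edit_distance(s, t):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

legit = ["google", "amazon", "wikipedia"]       # ordinary domain labels
fluxed = ["xk2q9zv", "q8fj3nnw", "zzq1k9x"]     # algorithmically generated
kl = kl_divergence(char_dist(fluxed), char_dist(legit))  # large for fluxed group
jac = jaccard_bigrams(legit, fluxed)                     # near-zero overlap
```

High KL divergence from a benign character distribution and near-zero bigram overlap with known-good domains are exactly the kind of entropy signals the abstract describes for flagging domain-fluxing behaviour.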

    An Analysis of Internet Background Radiation within an African IPv4 netblock

    The use of passive network sensors has in the past proven quite effective in monitoring and analysing the current state of traffic on a network. Internet traffic destined to a routable yet unused address block is often referred to as Internet Background Radiation (IBR) and characterised as unsolicited. This unsolicited traffic is nonetheless quite valuable to researchers, in that it allows them to study traffic patterns in a covert manner. IBR is largely composed of network and port scanning traffic, backscatter packets from virus and malware activity and, to a lesser extent, misconfiguration of network devices. This research answers two questions: (1) what is the current state of IBR within the context of a South African IP address space, and (2) can any anomalies be detected in the traffic, with specific reference to current global malware attacks such as Mirai and similar botnets? Rhodes University operates five IPv4 passive network sensors, commonly known as network telescopes, each monitoring its own /24 IP address block. The oldest of these network telescopes has been collecting traffic for over a decade, with the newest established in 2011. This research focuses on an in-depth analysis of the traffic captured by one telescope in the 155/8 range over a 12-month period, from January to December 2017. The traffic was analysed and classified according to protocol, TCP flags, source IP address, destination port, packet count and payload size. Apart from the usual network traffic graphs and tables, a geographic heatmap of source traffic was also created, based on the source IP addresses. Spikes and noticeable variances in traffic patterns were further investigated, and evidence of Mirai-like malware activity was observed. Network and port scanning were found to comprise the largest portion of traffic, accounting for over 90% of the total IBR. Various scanning techniques were identified, including low-level passive scanning and much higher-level active scanning.
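The classification described above can be sketched as a simple rule over protocol and TCP flags; the record format and the exact taxonomy below are simplifying assumptions, not the telescope's actual pipeline:

```python
from collections import Counter

def classify(pkt):
    """Coarse IBR taxonomy: scanning, backscatter, or other."""
    if pkt["proto"] == "TCP":
        if pkt["flags"] == "S":
            return "scan"          # unsolicited SYN: network/port scanning
        if pkt["flags"] in ("SA", "R", "RA"):
            return "backscatter"   # replies to spoofed-source attack traffic
    if pkt["proto"] == "UDP":
        return "scan"              # treat telescope UDP as payload scanning
    return "other"                 # ICMP, misconfiguration, etc.

packets = [
    {"proto": "TCP", "flags": "S", "dport": 23},     # Mirai-style telnet probe
    {"proto": "TCP", "flags": "SA", "dport": 51234}, # SYN-flood backscatter
    {"proto": "ICMP", "flags": "", "dport": 0},
]
breakdown = Counter(classify(p) for p in packets)
```

Since a telescope address block hosts no real services, every arriving packet is by definition unsolicited, which is what makes such coarse flag-based rules effective for the protocol/flag breakdown reported here.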