
    D-FENS: DNS Filtering & Extraction Network System for Malicious Domain Names

    While the DNS (Domain Name System) has become a cornerstone of the Internet's operation, it has also fostered creative forms of maliciousness, including phishing, typosquatting, and botnet communication, among others. To address this problem, this dissertation focuses on identifying and mitigating such malicious domain names through prior knowledge and machine learning. In the first part of this dissertation, we explore the practice of registering domain names with deliberate typographical mistakes (i.e., typosquatting) to masquerade as popular and well-established domain names. To understand the effectiveness of typosquatting, we conducted a user study that shed light on which techniques were more successful in deceiving users. While certain techniques fared better than others, none took the user's context into account. Therefore, in the second part of this dissertation we examine the possibility of an advanced attack that takes context into account when generating domain names. The main idea is to determine whether an adversary can improve their success rate of deceiving users with specifically targeted malicious domain names. While these malicious domains typically target users, other types of domain names are generated by botnets for command-and-control (C2) communication. Therefore, in the third part of this dissertation we investigate the domain generation algorithms (DGAs) used by botnets and propose a method to identify DGA-based domain names. By analyzing DNS traffic for certain patterns of NXDomain (non-existent domain) query responses, we can accurately predict DGA-based domain names before they are registered. Given all of these approaches to malicious domain names, we ultimately propose a system called D-FENS (DNS Filtering & Extraction Network System). D-FENS uses machine learning and prior knowledge to accurately predict unreported malicious domain names in real time, thereby preventing Internet devices from unknowingly connecting to a potentially malicious domain name.
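    The NXDomain observation in the abstract can be illustrated with a minimal sketch: a bot running a DGA queries many candidate domains, most of which are unregistered, so it triggers a burst of NXDomain responses. The function name and the threshold below are illustrative assumptions, not details from the dissertation.

    ```python
    from collections import defaultdict

    NXDOMAIN_THRESHOLD = 50  # assumed cutoff per observation window

    def flag_dga_clients(dns_responses, threshold=NXDOMAIN_THRESHOLD):
        """dns_responses: iterable of (client_ip, qname, rcode) tuples,
        where rcode == 3 denotes an NXDomain answer. Returns the set of
        clients whose NXDomain count exceeds the threshold."""
        nx_counts = defaultdict(int)
        for client_ip, qname, rcode in dns_responses:
            if rcode == 3:  # NXDomain response code
                nx_counts[client_ip] += 1
        return {ip for ip, n in nx_counts.items() if n > threshold}
    ```

    A real system would window these counts over time and combine them with other signals, but the core pattern, many failed lookups from one host, is what makes DGA traffic conspicuous.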

    DNS traffic based classifiers for the automatic classification of botnet domains

    Networks of maliciously compromised computers, known as botnets, consisting of thousands of hosts have emerged as a serious threat to Internet security in recent years. These compromised systems, under the control of an operator, are used to steal data, distribute malware and spam, launch phishing attacks and conduct Distributed Denial-of-Service (DDoS) attacks. The operators of these botnets use Command and Control (C2) servers to communicate with the members of the botnet and send commands. The communication channels between the C2 nodes and endpoints employ numerous detection avoidance mechanisms to prevent the shutdown of the C2 servers. Two prevalent detection avoidance techniques used by current botnets are algorithmically generated domain names and DNS Fast-Flux. The use of these mechanisms can, however, be observed and used to create distinct signatures that in turn can be used to detect DNS domains being used for C2 operation. This report details research conducted into the implementation of three classes of classification techniques that exploit these signatures in order to accurately detect botnet traffic. The techniques described make use of the traffic from DNS query responses created when members of a botnet try to contact the C2 servers. Traffic observation and categorisation is passive from the perspective of the communicating nodes. The first set of classifiers explored employ frequency analysis to detect the algorithmically generated domain names used by botnets. These were found to have a high degree of accuracy with a low false positive rate. The characteristics of Fast-Flux domains are used in the second set of classifiers. It is shown that, using these characteristics, Fast-Flux domains can be accurately identified and differentiated from legitimate domains (such as Content Distribution Networks, which exhibit similar behaviour). The final set of classifiers use spatial autocorrelation to detect Fast-Flux domains based on the geographic distribution of the botnet C2 servers to which the detected domains resolve. It is shown that botnet C2 servers can be detected solely based on their geographic location. This technique is shown to clearly distinguish between malicious and legitimate domains. The implemented classifiers are lightweight and use existing network traffic to detect botnets, and thus do not require major architectural changes to the network. The performance impact of implementing classification of DNS traffic is examined and shown to be at an acceptable level.
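    The frequency-analysis idea behind the first set of classifiers can be sketched simply: algorithmically generated labels tend toward near-uniform character distributions, so their Shannon entropy is higher than that of human-chosen names. This is an illustrative stand-in, not the report's actual classifier, and the 3.5 bits/char threshold is an assumed tuning parameter.

    ```python
    import math
    from collections import Counter

    def label_entropy(domain):
        """Shannon entropy (bits per character) of the leftmost label."""
        label = domain.lower().split(".")[0]
        counts = Counter(label)
        total = len(label)
        return -sum((c / total) * math.log2(c / total)
                    for c in counts.values())

    def looks_generated(domain, threshold=3.5):
        # threshold is an assumed parameter, not taken from the report
        return label_entropy(domain) > threshold
    ```

    In practice such a score would be one feature among several (n-gram frequencies, label length, pronounceability), since short legitimate names and dictionary-word DGAs both defeat entropy alone.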

    Detecting Web-Based Botnets Using Bot Communication Traffic Features

    Web-based botnets are popular nowadays. A Web-based botnet is a botnet whose C&C server and bots use the HTTP protocol, the most universal and widely supported network protocol, to communicate with each other. Because attackers can easily hide botnet communication within the relatively massive volume of HTTP traffic, administrators of network equipment, such as routers and switches, cannot simply block such suspicious traffic outright, regardless of cost. Based on the client composition of a Web server and the characteristics of the HTTP responses the server sends to clients, this paper proposes a traffic inspection solution, called Web-based Botnet Detector (WBD). WBD is able to detect suspicious C&C (Command-and-Control) servers of HTTP botnets regardless of whether the botnet commands are encrypted or hidden in normal Web pages. More than 500 GB of real network traces collected from 11 backbone routers are used to evaluate our method. Experimental results show that the false positive rate of WBD is 0.42%.
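    The intuition the abstract describes, that bots polling a C&C server all receive near-identical responses while a normal Web server returns varied content, can be sketched as a grouping heuristic. All names, and the 0.9 dominance cutoff, are illustrative assumptions rather than WBD's actual parameters.

    ```python
    from collections import defaultdict

    def suspicious_servers(http_records, min_clients=3, min_ratio=0.9):
        """http_records: iterable of (server, client, response_body)
        tuples. Flags servers where a single identical response body
        dominates across many distinct clients."""
        by_server = defaultdict(list)
        for server, client, body in http_records:
            by_server[server].append((client, body))
        flagged = set()
        for server, pairs in by_server.items():
            clients = {c for c, _ in pairs}
            if len(clients) < min_clients:
                continue  # too few clients to judge
            body_clients = defaultdict(set)
            for client, body in pairs:
                body_clients[body].add(client)
            dominant = max(len(cs) for cs in body_clients.values())
            if dominant / len(clients) >= min_ratio:
                flagged.add(server)
        return flagged
    ```

    A deployed detector would compare response similarity rather than exact equality (commands may be embedded in otherwise normal pages), but the grouping structure is the same.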

    An efficient approach to online bot detection based on a reinforcement learning technique

    In recent years, Botnets have been adopted as a popular method to carry and spread many malicious codes on the Internet. These codes pave the way for many fraudulent activities, including spam mail, distributed denial of service (DDoS) attacks and click fraud. While many Botnets are set up using a centralized communication architecture such as Internet Relay Chat (IRC) or Hypertext Transfer Protocol (HTTP), peer-to-peer (P2P) Botnets can adopt a decentralized architecture using an overlay network for exchanging command and control (C&C) messages, which is a more resilient and robust communication channel infrastructure. Without a centralized point for C&C servers, P2P Botnets are more flexible in defeating countermeasures and detection procedures than traditional centralized Botnets. Several Botnet detection techniques have been proposed, but Botnet detection is still a very challenging task for the Internet security community because Botnets execute attacks stealthily in the dramatically growing volumes of network traffic. Moreover, current Botnet detection schemes face significant problems of efficiency and adaptability. The present study combined a traffic reduction approach with a reinforcement learning (RL) method in order to create an online Bot detection system. The proposed framework adopts the idea of RL to improve the system dynamically over time. In addition, the traffic reduction method is used to set up a lightweight and fast online detection method. Moreover, a host feature based on traffic at the connection level was designed, which can identify Bot host behaviour. The proposed technique can therefore potentially be applied to any encrypted network traffic, since it depends only on information obtained from packet headers; it does not require Deep Packet Inspection (DPI) and is not defeated by payload encryption techniques. The network traffic reduction technique reduces the packets input to the detection system, yet the proposed solution achieves a good detection rate of 98.3% as well as a low false positive rate (FPR) of 0.012% in the online evaluation. Comparison with other techniques on the same dataset shows that our strategy outperforms existing methods. The proposed solution was evaluated and tested using real network traffic datasets to increase the validity of the solution.
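    The header-only, connection-level feature idea can be sketched as follows: every feature is computed from packet sizes and directions alone, so it survives payload encryption. The field names and the specific features are illustrative assumptions, not the study's published feature set.

    ```python
    def connection_features(packets):
        """packets: list of dicts with 'size' (bytes) and 'direction'
        ('out' or 'in') for one connection. Returns header-only
        features usable even when payloads are encrypted."""
        out = [p["size"] for p in packets if p["direction"] == "out"]
        inc = [p["size"] for p in packets if p["direction"] == "in"]
        return {
            "pkt_count": len(packets),
            "out_in_ratio": len(out) / max(len(inc), 1),
            "avg_out_size": sum(out) / max(len(out), 1),
            "avg_in_size": sum(inc) / max(len(inc), 1),
        }
    ```

    Vectors like these would then feed the online classifier; the RL component would adjust its policy as labelled feedback arrives, none of which requires inspecting payload bytes.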

    Cyber Security and Critical Infrastructures

    This book contains the manuscripts that were accepted for publication in the MDPI Special Topic "Cyber Security and Critical Infrastructure" after a rigorous peer-review process. Authors from academia, government and industry contributed their innovative solutions, consistent with the interdisciplinary nature of cybersecurity. The book contains 16 articles: an editorial explaining current challenges, innovative solutions, and real-world experiences involving critical infrastructure; 15 original papers presenting state-of-the-art innovative solutions to attacks on critical systems; and a review of security and privacy issues in cloud, edge, and fog computing.