    Scalable Techniques for Anomaly Detection

    Computer networks are constantly attacked by malicious entities for various reasons. Network-based attacks include, but are not limited to, Distributed Denial of Service (DDoS), DNS-based attacks, and Cross-Site Scripting (XSS). Such attacks exploit either network protocol or end-host software vulnerabilities. Current network traffic analysis techniques employed to detect or prevent these anomalies suffer from significant delay or offer only limited scalability because of their large resource requirements. This dissertation proposes more scalable techniques for network anomaly detection. We propose using DNS analysis to detect a wide variety of network anomalies. The use of DNS is motivated by the fact that DNS traffic comprises only 2-3% of total network traffic, reducing the burden on anomaly detection resources. Our motivation additionally follows from the observation that almost any Internet activity, legitimate or otherwise, is marked by the use of DNS. We propose several techniques for DNS traffic analysis that distinguish anomalous DNS traffic patterns, which in turn identify different categories of network attacks. First, we present MiND, a system to detect misdirected DNS packets arising from poisoned name-server records or from local infections such as those caused by worms like DNSChanger. MiND validates misdirected DNS packets against an externally collected database of authoritative name servers for second- and third-level domains. We deploy this tool at the edge of a university campus network for evaluation. Second, we focus on domain-fluxing botnet detection by exploiting the high entropy inherent in the set of domains used for locating the Command and Control (C&C) server. We apply three metrics, namely the Kullback-Leibler divergence, the Jaccard index, and the edit distance, to different groups of domain names present in Tier-1 ISP DNS traces obtained from South Asia and South America. Our evaluation successfully detects existing domain-fluxing botnets such as Conficker and also recognizes new botnets. We extend this approach by utilizing DNS failures to improve detection latency. Alternatively, we propose a system that uses temporal and entropy-based correlation between successful and failed DNS queries for fluxing-botnet detection. We also present an approach that computes the reputation of domains in a bipartite graph of the hosts within a network and the domains they access. The inference technique uses belief propagation, an approximation algorithm for marginal probability estimation. The computation of reputation scores is seeded with a small fraction of domains found in blacklists and whitelists. An application of this technique to an HTTP-proxy dataset from a large enterprise shows a high detection rate with low false positive rates.
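    The entropy metrics named in this abstract can be sketched in a few lines: compare the character distribution of a suspect group of domain names against a benign baseline with Kullback-Leibler divergence, and measure pairwise similarity with edit distance. The domain lists and the smoothing constant below are illustrative, not taken from the dissertation's ISP traces.

```python
# Sketch of the entropy-based metrics over domain-name groups.
# Domain lists and smoothing are illustrative assumptions.
import math
from collections import Counter

def char_distribution(domains):
    """Unigram character distribution over the concatenated domain labels."""
    counts = Counter(c for d in domains for c in d)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def kl_divergence(p, q, smoothing=1e-6):
    """D_KL(p || q); small smoothing keeps unseen characters in q finite."""
    return sum(pv * math.log(pv / (q.get(c, 0) + smoothing)) for c, pv in p.items())

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

benign = ["google", "facebook", "wikipedia", "amazon", "twitter"]
fluxed = ["xk2qpz", "q9vzjw", "zzkx0q", "wq7xkz", "kqz9xw"]  # DGA-like gibberish

baseline = char_distribution(benign)
print(kl_divergence(char_distribution(fluxed), baseline))  # large for the fluxed group
print(kl_divergence(char_distribution(benign), baseline))  # near zero for benign
```

    A domain group whose divergence from the benign baseline exceeds a threshold becomes a domain-fluxing candidate; the Jaccard index over character bigrams plays an analogous role in the dissertation.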

    Using Botnet Technologies to Counteract Network Traffic Analysis

    Botnets have been problematic for over a decade. They are used to launch malicious activities including DDoS (Distributed Denial of Service) attacks, spamming, identity theft, unauthorized bitcoin mining, and malware distribution. A nation-wide DDoS attack caused by the Mirai botnet on 10/21/2016, involving tens of millions of IP addresses, took down Twitter, Spotify, Reddit, The New York Times, Pinterest, PayPal, and other major websites. In response to take-down campaigns by security personnel, botmasters have developed technologies to evade detection. The most widely used evasion technique is DNS fast-flux, where the botmaster frequently changes the mapping between domain names and IP addresses of the C&C server so that it becomes too late or too costly to trace the C&C server locations. Domain names generated with Domain Generation Algorithms (DGAs) are used as the 'rendezvous' points between botmasters and bots. This work focuses on applying botnet technologies (fast-flux and DGAs) to counteract network traffic analysis, thereby protecting user privacy. A better understanding of botnet technologies also helps us be proactive in defending against botnets. First, we propose two new DGAs, using hidden Markov models (HMMs) and Probabilistic Context-Free Grammars (PCFGs), which can evade current detection methods and systems. We also developed two HMM-based DGA detection methods that can detect botnet DGA-generated domain names with or without training sets. This helps security personnel understand the botnet phenomenon and develop proactive tools to detect botnets. Second, we developed a distributed proxy system using fast-flux to evade national censorship and surveillance. The goal is to help journalists, human rights advocates, and NGOs in West Africa have a secure and free Internet. We then developed a covert data transport protocol that transforms arbitrary messages into real DNS traffic. We encode the message into benign-looking domain names generated by an HMM, which represents the statistical features of legitimate domain names. This can be used to evade Deep Packet Inspection (DPI) and protect user privacy in two-way communication. Both applications serve as examples of applying botnet technologies to legitimate uses. Finally, we propose a new protocol obfuscation technique that transforms an arbitrary network protocol into another (the Network Time Protocol and the video game protocol of Minecraft serve as examples) in terms of packet syntax and side-channel features (inter-packet delay and packet size). This research uses botnet technologies to help ordinary users have secure and private communications over the Internet. From our botnet research, we conclude that network traffic is a malleable and artificial construct. Although existing patterns are easy to detect and characterize, they are also subject to modification and mimicry. This means that we can construct transducers to make any communication pattern look like any other communication pattern. This is neither bad nor good for security; it is a fact that we need to accept and use as best we can.
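    The core idea of a DGA that mimics legitimate names can be illustrated with a character-level Markov chain trained on benign domain labels, a simplified stand-in for the HMM described above. The training list, start/end symbols, and length cap below are illustrative assumptions, not the dissertation's actual model.

```python
# Sketch: a character-level Markov chain over benign domain labels, used to
# emit new benign-looking names. A simplified stand-in for the HMM-based DGA;
# the training list and parameters are illustrative.
import random
from collections import defaultdict

def train_bigrams(names):
    """Bigram transition table: char -> list of observed next chars ('$' ends)."""
    table = defaultdict(list)
    for name in names:
        padded = "^" + name + "$"
        for a, b in zip(padded, padded[1:]):
            table[a].append(b)
    return table

def generate(table, rng, max_len=15):
    """Walk the chain from the start symbol until the end symbol or max_len."""
    out, cur = [], "^"
    while len(out) < max_len:
        nxt = rng.choice(table[cur])
        if nxt == "$":
            break
        out.append(nxt)
        cur = nxt
    return "".join(out)

benign = ["google", "amazon", "wikipedia", "netflix", "github", "reddit"]
table = train_bigrams(benign)
rng = random.Random(42)
domains = [generate(table, rng) + ".com" for _ in range(5)]
print(domains)  # pronounceable labels that echo the training set's statistics
```

    Because every transition was observed in legitimate names, the generated labels share the character statistics that entropy-based detectors measure, which is exactly why such a DGA evades them.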

    Pattern Programmable Kernel Filter for Bot Detection

    Bots earn their name because they perform a wide variety of automated tasks, including stealing sensitive user information. Bot-detection solutions such as behavioral correlation of flow records, group activity in DNS traffic, and observation of periodic repeatability in communication all monitor network traffic and then classify it as bot or normal traffic. Other solutions include kernel-level keystroke verification, system call initialization, and IP blacklisting. The first two offer no assurance that a packet carrying user information is prevented from being sent to the attacker, and the latter suffers from the problem of IP spoofing. This motivated us to devise a solution that filters out malicious packets before they are put onto the network. To develop such a solution, a real-time bot attack was generated with the SpyEye exploit kit and its traffic characteristics were analyzed. The analysis revealed a unique repeated communication between the zombie machine and the botmaster. This motivated us to propose a Pattern Programmable Kernel Filter (PPKF) for filtering out the malicious packets generated by bots. PPKF was developed using the Windows Filtering Platform (WFP) filter engine and was programmed to filter out packets carrying the unique pattern observed in the bot attack experiments. PPKF was found to completely suppress the flow of packets containing the programmed pattern, thus preventing bots from sending user information to the botmaster.
    Defence Science Journal, 2012, 62(1), pp. 174-179, DOI: http://dx.doi.org/10.14429/dsj.62.142
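    The filtering idea reduces to signature matching on outbound payloads. The real PPKF runs as a WFP callout in the Windows kernel; the sketch below is a user-space analogue, and the byte signature is a hypothetical beacon marker invented for illustration, not the pattern reported in the paper.

```python
# Sketch: user-space analogue of the pattern filter. The signature is a
# hypothetical bot beacon marker, not the paper's observed pattern.
SIGNATURES = [b"bn=SpyEye&id="]

def filter_outbound(payload: bytes) -> bool:
    """Return True if the packet may be sent, False if it must be dropped."""
    return not any(sig in payload for sig in SIGNATURES)

packets = [
    b"GET /index.html HTTP/1.1",                     # normal traffic, passes
    b"POST /gate.php bn=SpyEye&id=42&data=c2VjcmV0", # exfiltration, dropped
]
allowed = [p for p in packets if filter_outbound(p)]
print(len(allowed))  # -> 1: the beaconing packet is suppressed
```

    Doing this at the kernel layer, as PPKF does, is what guarantees the packet never reaches the wire, unlike user-space monitoring that only observes traffic after the fact.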

    Unsupervised detection of botnet activities using frequent pattern tree mining

    A botnet is a network of remotely controlled infected computers that can send spam, spread viruses, or stage denial-of-service attacks without the consent of the computer owners. Since the beginning of the 21st century, botnet activities have steadily increased, becoming one of the major concerns for Internet security. In fact, botnet activities are becoming more and more difficult to detect, because they make use of Peer-to-Peer protocols (eMule, Torrent, Frostwire, Vuze, Skype, and many others). To improve the detectability of botnet activities, this paper introduces the idea of association analysis from the field of data mining and proposes a system to detect botnets based on the FP-growth (Frequent Pattern Tree) frequent item mining algorithm. The detection system is composed of three parts: packet collection and processing, rule mining, and statistical analysis of rules. Its characteristic feature is the rule-based classification of different botnet behaviors in a fast and unsupervised fashion. The effectiveness of the approach is validated in a scenario with 11 Peer-to-Peer host PCs, 42,063 non-Peer-to-Peer host PCs, and 17 host PCs running three different botnet activities (Storm, Waledac, and Zeus). The recognition accuracy of the proposed architecture is shown to be above 94%. The proposed method improves on the results reported in the literature.
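    The rule-mining step can be illustrated compactly: treat the traffic features seen for each host in a time window as a transaction, and mine feature sets that recur across hosts as candidate botnet rules. The paper uses FP-growth; the brute-force Apriori-style pass below shows the same idea on invented per-host transactions, with hypothetical feature names.

```python
# Sketch: frequent-itemset mining over per-host traffic-feature transactions.
# Simplified Apriori-style pass standing in for FP-growth; data is illustrative.
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=3):
    """Return itemsets (frozensets) appearing in >= min_support transactions."""
    counts = Counter()
    for t in transactions:
        for size in range(1, max_size + 1):
            for combo in combinations(sorted(t), size):
                counts[frozenset(combo)] += 1
    return {s: c for s, c in counts.items() if c >= min_support}

# Hypothetical per-host feature sets, one per time window.
transactions = [
    {"udp_flood", "dns_mx_burst", "p2p_overlay"},  # Storm-like host
    {"udp_flood", "dns_mx_burst"},                 # Storm-like host
    {"http_post_beacon", "config_pull"},           # Zeus-like host
    {"browsing", "streaming"},                     # benign host
]
rules = frequent_itemsets(transactions, min_support=2)
print(frozenset({"udp_flood", "dns_mx_burst"}) in rules)  # -> True
```

    FP-growth reaches the same frequent itemsets without enumerating every candidate combination, by compressing the transactions into a prefix tree first, which is what makes it practical at the paper's scale of tens of thousands of hosts.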