
    An Analysis of Pre-Infection Detection Techniques for Botnets and other Malware

    Traditional techniques for detecting malware, such as viruses, worms and rootkits, rely on identifying virus-specific signature definitions within network traffic, applications or memory. Because a sample of malware is required to define an attack signature, signature detection has drawbacks when accounting for malware code mutation, has limited use in zero-day protection and is a post-infection technique requiring malware to be present on a device in order to be detected. A malicious bot is a malware variant that interconnects with other bots to form a botnet. Amongst their many malicious uses, botnets are ideal for launching mass Distributed Denial of Service attacks against the ever-increasing number of networked devices that are starting to form the Internet of Things and Smart Cities. Regardless of topology, whether centralised Command & Control or distributed Peer-to-Peer, bots must communicate with their commanding botmaster. This communication traffic can be used to detect malware activity in the cloud before it can evade network perimeter defences, and to trace a route back to the source to take down the threat. This paper identifies the inefficiencies exhibited by signature-based detection when dealing with botnets. Total botnet eradication relies on traffic-based detection methods such as DNS record analysis, against which malware authors have multiple evasion techniques. Signature-based detection displays further inefficiencies when located within the virtual environments that form the backbone of data centre infrastructures, providing malware with a new attack vector. This paper highlights the lack of techniques for detecting malicious bot activity within such environments and proposes an architecture based upon flow sampling protocols to detect botnets within virtualised environments.
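
    The paper does not give code for the proposed flow-sampling architecture, but a minimal sketch of the underlying idea, 1-in-N packet sampling on a virtual switch aggregated into flow records in the style of sFlow, might look as follows; the class and field names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of 1-in-N flow sampling as used by sFlow-style collectors.
# Names (SampledFlowTable, FlowKey, sample_rate) are illustrative only.
import random
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src_ip: str
    dst_ip: str
    dst_port: int
    proto: str

class SampledFlowTable:
    def __init__(self, sample_rate: int = 128):
        self.sample_rate = sample_rate          # keep roughly 1 in N packets
        self.flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def observe(self, key: FlowKey, size: int) -> None:
        # Random 1-in-N sampling keeps the per-packet cost low on a busy vSwitch.
        if random.randrange(self.sample_rate) == 0:
            entry = self.flows[key]
            entry["packets"] += 1
            entry["bytes"] += size

    def export(self) -> dict:
        # Exported records can then be scored for C&C-like behaviour, e.g.
        # periodic low-volume flows towards the same external endpoint.
        return dict(self.flows)
```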

    Analyzing the influence of the sampling rate in the detection of malicious traffic on flow data

    Cyberattacks are a growing concern for companies and public administrations. The literature shows that analysing network-layer traffic can detect intrusion attempts. However, such detection usually implies studying every datagram in a computer network. Routers that carry a significant volume of network traffic therefore do not perform an in-depth analysis of every packet; instead, they analyse traffic patterns based on network flows. Even gathering and analysing flow data has a high computational cost, so routers usually apply a sampling rate when generating flow data. Adjusting the sampling rate is a tricky problem: if the sampling rate is low, much information is lost and some cyberattacks may be missed, but if the sampling rate is high, routers cannot cope with the load. This paper characterises the influence of this parameter on different detection methods based on machine learning. To do so, we trained and tested malicious-traffic detection models using synthetic flow data gathered at several sampling rates. We then validated these models with flow data from the public BoT-IoT dataset and with real flow data collected on RedCAYLE, the Castilla y León regional academic network.
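
    A minimal sketch of the sampling-rate experiment is shown below, assuming a generic flow table with a binary "label" column and using a random forest as a stand-in for the paper's detection models; the column names and the way sampling is simulated are assumptions, not the paper's exact setup.

```python
# Hedged sketch: measure how the flow sampling rate affects a malicious-flow
# classifier. The feature schema and 'label' column are placeholders, not the
# exact schema used with the BoT-IoT or RedCAYLE data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def evaluate_at_rate(flows: pd.DataFrame, sampling_rate: int, seed: int = 0) -> float:
    # Approximate 1-in-N sampling by keeping a random fraction of flow records.
    sampled = flows.sample(frac=1.0 / sampling_rate, random_state=seed)
    X = sampled.drop(columns=["label"])
    y = sampled["label"]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    model = RandomForestClassifier(n_estimators=100, random_state=seed)
    model.fit(X_tr, y_tr)
    return f1_score(y_te, model.predict(X_te))

# Typical experiment loop: detection quality usually degrades as sampling coarsens.
# for rate in (1, 10, 100, 1000):
#     print(rate, evaluate_at_rate(flow_df, rate))
```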

    Botnet detection from drive-by downloads

    Advances in information technology have been accompanied by advances in the development and deployment of malware. Bot malware has caused immense compromise of computer security. Attackers have devised various ways of deploying such bots, which are becoming stealthier and more evasive by the day. Detecting these bots has proven difficult even though various detection techniques exist. In this work, a packet capture and analysis technique for detecting host-based bots based on their characteristics and behaviour is proposed. The system first captures network traffic to establish a baseline of normal traffic; previously captured botnet traffic was then used to test the system. The system filters out HTTP packets and analyses them to separate botnet traffic from normal Internet traffic. The system was able to detect malicious packets with a false positive rate of 0.2 and an accuracy of 99.91%.
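
    The capture-and-filter step can be sketched with scapy as below; the threshold-based scoring is a simplified stand-in for the behavioural analysis described in the abstract, and the file paths, function names and threshold are illustrative.

```python
# Hedged sketch of the capture-and-filter step: pull HTTP packets out of a
# trace and count requests per source host, then compare against a baseline.
from collections import Counter
from scapy.all import rdpcap, TCP, IP

def http_request_counts(pcap_path: str) -> Counter:
    """Count packets sent to TCP port 80, per source IP, in a capture file."""
    counts = Counter()
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].dport == 80:
            counts[pkt[IP].src] += 1
    return counts

def flag_suspects(baseline: Counter, observed: Counter, ratio: float = 5.0):
    # A host is flagged when its HTTP request volume far exceeds its baseline,
    # a crude stand-in for the behavioural checks described in the abstract.
    return [h for h, n in observed.items()
            if n > ratio * max(baseline.get(h, 1), 1)]

# Example usage (paths are placeholders):
# normal = http_request_counts("normal_traffic.pcap")
# test   = http_request_counts("botnet_traffic.pcap")
# print(flag_suspects(normal, test))
```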

    Implementation and evaluation of a botnet analysis and detection method in a virtual environment

    Botnets are one of the biggest cyber threats. They build on concepts used for the development of malware and viruses before the rise of the Internet in the 1990s. A botnet is a form of malware controlled by a botmaster using Command and Control (C&C). Since PrettyPark, one of the first botnets, emerged in 1999, botnet development techniques have advanced significantly. Modern botnets feature P2P architectures, encrypted traffic, the use of different protocols, stealth techniques and propagation through social networking websites such as Facebook and Bebo. With these enhancements, the objectives of cyber criminals have also shifted towards financial gain. ZeuS is one of the best-known modern botnets and its main goal is financial gain. It uses advanced techniques such as encrypted traffic, the HTTP protocol and stealth mechanisms to hide itself from the operating system. The overall objective of this thesis is to apply botnet analysis and detection techniques to the ZeuS bot and to demonstrate how these techniques apply to other modern botnets such as KoobFace, Torpig and Kelihos. The ZeuS code was leaked in May 2011, opening the door for hackers to reuse ZeuS techniques in new bots and for researchers to study the internal workings of a modern botnet.

    In this thesis, the "ZeuS toolkit with Control Panel (CP)" is used. It contains tools to create a ZeuS bot executable with a user-defined configuration, together with the ZeuS Control Panel (CP), developed in PHP and MySQL, which is installed on a machine acting as the ZeuS "C&C server". In accordance with the "CSSR: British Computer Society Code of Conduct", the ZeuS botnet analysis is performed in a virtual environment with two machines, a "Bot victim" running a Host-Based Intrusion Detection System (HIDS) and a "C&C server", both isolated from the host machine running VMware and from the Internet. The bot executable infects the "Bot victim" machine, turning it into a "zombie" controlled by the "C&C server" machine running the ZeuS Control Panel (CP).

    ZeuS bot analysis is performed at three layers: binary, application and communication. At the binary layer, reverse engineering tools were used to explore the internals of the ZeuS executable. The C++ code recovered by REC was not in a meaningful form, indicating that the ZeuS binary is obfuscated; only basic information, namely version and header details, could be extracted with the PE Explorer tool. At the application layer, all activity related to threads and processes, the file system (.dll files accessed and files created) and registry changes during ZeuS execution was captured using Procmon. The key findings are the creation of a copy of the bot executable (sdra64.exe) and a data file (user.ds) in the Windows subfolder /system32, and the modification of the "Userinit" registry key so that ZeuS runs before the Windows GUI (Explorer.exe) starts. At the communication layer, packets exchanged during bot synchronisation with the botmaster, together with the commands sent by the "C&C server" to the "Bot victim", were captured and used to create HIDS rules for signature-based detection on the "Bot victim". These rules were implemented and raised alarms as expected. Anomaly-based detection requires learning or profiling, which in turn requires the machine to interact with the Internet; ethically, this is not possible in an isolated virtual environment. DNS-based detection, and the process of revealing a rootkit that modifies the master boot record (MBR) of the hard disk, are not applicable to the ZeuS analysis. The literature review of this thesis nevertheless covers all aspects of botnet analysis and detection techniques, including those that are not applicable in this project for ethical reasons or because the ZeuS bot does not support them, in order to give an overview of the analysis and detection techniques applicable to modern botnets.
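
    The Userinit persistence artefact described above lends itself to a simple host-based check; the sketch below assumes the documented ZeuS defaults (sdra64.exe appended to the Winlogon "Userinit" value) and is not code taken from the thesis.

```python
# Hedged sketch of a host check for the ZeuS persistence artefact noted above:
# the Winlogon "Userinit" value being extended to launch sdra64.exe before
# Explorer starts. Intended for the Windows "Bot victim" VM; the paths are the
# commonly documented ZeuS defaults, not values taken from this thesis.
import winreg

WINLOGON = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

def userinit_looks_tampered() -> bool:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WINLOGON) as key:
        value, _ = winreg.QueryValueEx(key, "Userinit")
    entries = [e.strip().lower() for e in value.split(",") if e.strip()]
    # A clean system normally lists only ...\system32\userinit.exe here;
    # ZeuS appends its own copy (e.g. ...\system32\sdra64.exe).
    extra = [e for e in entries if not e.endswith("userinit.exe")]
    return bool(extra)

if __name__ == "__main__":
    print("Userinit tampered:", userinit_looks_tampered())
```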

    CHID: conditional hybrid intrusion detection system for reducing false positives and resource consumption on malicious datasets

    Inspecting packets to detect intrusions faces challenges when coping with a high volume of network traffic. Packet-based detection processes every payload on the wire, which degrades the performance of a network intrusion detection system (NIDS). This issue motivates flow-based NIDS, which reduce the amount of data to be processed by examining aggregated information about related packets. However, flow-based detection still suffers from false positive alerts due to incomplete input data. This study proposes Conditional Hybrid Intrusion Detection (CHID), which combines flow-based with packet-based detection and also aims to reduce the resource consumption of the packet-based approach. CHID applies attribute wrapper feature evaluation algorithms that mark malicious flows for further analysis by packet-based detection, and an Input Framework approach is employed to pass packet flows between the packet-based and flow-based detectors. A controlled testbed experiment was conducted to evaluate the performance of the CHID detection mechanism using datasets captured at different traffic rates. The evaluation shows that CHID gains a significant performance improvement in terms of resource consumption and packet drop rate compared to the default packet-based detection implementation. At 200 Mbps, in an IRC-bot scenario, CHID reduces memory usage by 50.6% and CPU utilisation by 18.1% without dropping packets. The CHID approach can mitigate the false positive rate of flow-based detection and reduce the resource consumption of packet-based detection while preserving detection accuracy, and can be considered a generic system for intrusion detection monitoring.
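
    The conditional hand-off between flow-based triage and packet-based inspection could be structured roughly as in the sketch below; the flow-scoring and payload-check functions are placeholders, not the detectors used in the paper.

```python
# Hedged sketch of the conditional hand-off CHID describes: a cheap flow-level
# check marks suspicious flows, and only packets belonging to those flows are
# passed to the expensive payload inspection. The scoring functions are
# placeholders rather than the paper's actual detectors.
from typing import Callable, Iterable, List, Tuple

FlowKey = Tuple[str, str, int, int, str]   # (src, dst, sport, dport, proto)

def conditional_inspect(flow_records: dict,
                        packets: Iterable[Tuple[FlowKey, bytes]],
                        flow_score: Callable[[dict], float],
                        payload_check: Callable[[bytes], bool],
                        threshold: float = 0.8) -> List[FlowKey]:
    # Stage 1: flow-based triage over aggregated records (low cost per flow).
    suspicious = {k for k, rec in flow_records.items()
                  if flow_score(rec) >= threshold}
    # Stage 2: packet-based inspection only for packets of flagged flows,
    # which is what reduces CPU and memory use relative to full DPI.
    alerts = []
    for key, payload in packets:
        if key in suspicious and payload_check(payload):
            alerts.append(key)
    return alerts
```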

    Bot recognition in a Web store: An approach based on unsupervised learning

    Web traffic on e-business sites is increasingly dominated by artificial agents (Web bots), which pose a threat to website security, privacy, and performance. To develop efficient bot detection methods and discover reliable e-customer behavioural patterns, accurate separation of traffic generated by legitimate users and Web bots is necessary. This paper proposes a machine learning solution to the problem of bot and human session classification, with a specific application to e-commerce. The approach studied in this work explores the use of unsupervised learning (k-means and Graded Possibilistic c-Means), followed by supervised labelling of clusters, a generative learning strategy that decouples modelling the data from labelling them. Its efficiency is evaluated through experiments on real e-commerce data, in realistic conditions, and compared to that of supervised learning classifiers (a multi-layer perceptron neural network and a support vector machine). Results demonstrate that classification based on unsupervised learning is very efficient, achieving a performance level similar to fully supervised classification. This is an experimental indication that the bot recognition problem can be successfully dealt with using methods that are less sensitive to mislabelled data or missing labels. A very small fraction of sessions remained misclassified in both cases, so an in-depth analysis of the misclassified samples was also performed. This analysis showed the advantage of the proposed approach, which correctly recognised more bots and identified more camouflaged agents that had been erroneously labelled as humans.
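
    The cluster-then-label strategy can be sketched with scikit-learn's k-means (the Graded Possibilistic c-Means variant is not reproduced here); the session features, the labelled subset and the 0/1 label encoding are placeholder assumptions for the paper's e-commerce data.

```python
# Hedged sketch of cluster-then-label bot recognition: cluster sessions without
# labels, then give each cluster the majority label of its labelled members.
import numpy as np
from sklearn.cluster import KMeans

def cluster_then_label(X_train: np.ndarray, y_train: np.ndarray,
                       X_new: np.ndarray, n_clusters: int = 20,
                       seed: int = 0) -> np.ndarray:
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_train)
    # Label each cluster by majority vote of the labelled sessions it contains;
    # this decouples modelling the sessions from labelling them.
    cluster_label = {}
    for c in range(n_clusters):
        members = y_train[km.labels_ == c]
        cluster_label[c] = int(np.round(members.mean())) if len(members) else 0
    # New sessions inherit the label of their nearest cluster (0=human, 1=bot).
    return np.array([cluster_label[c] for c in km.predict(X_new)])
```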