13 research outputs found

    DNS traffic based classifiers for the automatic classification of botnet domains

    Networks of maliciously compromised computers, known as botnets, often consisting of thousands of hosts, have emerged as a serious threat to Internet security in recent years. These compromised systems, under the control of an operator, are used to steal data, distribute malware and spam, launch phishing attacks, and mount Distributed Denial-of-Service (DDoS) attacks. The operators of these botnets use Command and Control (C2) servers to communicate with the members of the botnet and send commands. The communication channels between the C2 nodes and the endpoints employ numerous detection-avoidance mechanisms to prevent the shutdown of the C2 servers. Two prevalent detection-avoidance techniques used by current botnets are algorithmically generated domain names and DNS Fast-Flux. The use of these mechanisms can, however, be observed and used to create distinct signatures that in turn can be used to detect DNS domains employed for C2 operation. This report details research conducted into the implementation of three classes of classification techniques that exploit these signatures in order to accurately detect botnet traffic. The techniques described make use of the traffic from DNS query responses created when members of a botnet try to contact the C2 servers. Traffic observation and categorisation is passive from the perspective of the communicating nodes. The first set of classifiers explored employs frequency analysis to detect the algorithmically generated domain names used by botnets. These were found to have a high degree of accuracy with a low false positive rate. The characteristics of Fast-Flux domains are used in the second set of classifiers. It is shown that using these characteristics Fast-Flux domains can be accurately identified and differentiated from legitimate domains, such as Content Distribution Networks, which exhibit similar behaviour. The final set of classifiers uses spatial autocorrelation to detect Fast-Flux domains based on the geographic distribution of the botnet C2 servers to which the detected domains resolve. It is shown that botnet C2 servers can be detected solely based on their geographic location, and that this technique clearly distinguishes between malicious and legitimate domains. The implemented classifiers are lightweight and use existing network traffic to detect botnets, and thus do not require major architectural changes to the network. The performance impact of classifying DNS traffic is examined and shown to be at an acceptable level.
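
    The first class of classifiers mentioned above relies on frequency analysis of domain names. As a rough illustration of that idea only, the toy check below flags long, high-entropy labels; the entropy measure, the length cut-off, and the 3.5-bit threshold are assumptions for illustration, not the thesis' actual features or parameters.

```python
# Toy frequency-analysis check for algorithmically generated domain names.
# The threshold and length cut-off are illustrative assumptions.
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a domain label."""
    counts = Counter(label.lower())
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_algorithmic(domain: str, entropy_threshold: float = 3.5) -> bool:
    """Flag domains whose leftmost label is long and has unusually high entropy."""
    label = domain.lower().rstrip(".").split(".")[0]
    return len(label) >= 8 and label_entropy(label) >= entropy_threshold

if __name__ == "__main__":
    for d in ["google.com", "xk2j9qpl0vz3.com", "wikipedia.org"]:
        print(d, looks_algorithmic(d))
```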

    Fast flux botnet detection based on adaptive dynamic evolving spiking neural network

    A botnet, a set of compromised machines controlled remotely by an attacker, is the basis of numerous security threats around the world. Command and Control (C&C) servers are the backbone of botnet communications, through which the bots send reports and the botmaster issues attack orders. Botnets are also categorised according to their C&C protocols. A Domain Name System (DNS) technique known as a Fast-Flux Service Network (FFSN) is used by bot herders to conceal malicious botnet activities and increase the lifetime of malicious servers by rapidly changing the IP addresses of a domain name over time. Although several methods have been suggested for detecting FFSN domains, they suffer from low detection accuracy (especially for zero-day domains), long detection times, and high memory consumption. In this research we propose a new system, called the Fast Flux Killer System (FFKA), that can detect “zero-day” FF-domains in online mode, with an implementation built on an Adaptive Dynamic evolving Spiking Neural Network (ADeSNN), and in an offline mode to enhance the classification process, which is a novelty in this field. The adaptation covers the initial weights, the testing criteria, and parameter customisation and adjustment. The proposed system is expected to detect fast flux domains in online mode with high detection accuracy and low false positive and false negative rates. It is also expected to achieve a high level of performance, and it is designed to operate continuously with low memory usage. Three public datasets are used in the experiments to show the effects of the adaptive ADeSNN algorithm: two are applied to the ADeSNN algorithm itself and the last to the process of detecting fast flux domains. The experiments showed improved accuracy when using the proposed adaptive ADeSNN over the original algorithm. The system also achieved a high detection accuracy of about 99.54% for zero-day fast flux domains in online mode when using the public fast flux dataset. Finally, the improvements made to the performance of the adaptive algorithm are confirmed by the experiments.
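
    As background to the kind of DNS evidence an FFSN detector consumes, the sketch below derives a few response-level features (distinct IPs, distinct ASNs, minimum TTL) and combines them with a crude linear score. These features are common in the fast-flux literature generally; the thesis' actual ADeSNN classifier and its feature set are not reproduced here, and the weights are arbitrary assumptions.

```python
# Illustrative DNS-response features for fast-flux detection; the scoring
# weights are arbitrary and only show how such features might be combined.
from dataclasses import dataclass

@dataclass
class DnsObservation:
    domain: str
    a_records: set   # IP addresses returned across repeated lookups
    asns: set        # autonomous systems those addresses map to
    min_ttl: int     # smallest TTL observed in the responses

def fast_flux_score(obs: DnsObservation) -> float:
    """Crude linear score: many IPs, many ASNs and short TTLs suggest fast flux."""
    return (len(obs.a_records) / 10.0
            + len(obs.asns) / 5.0
            + (1.0 if obs.min_ttl < 300 else 0.0))

if __name__ == "__main__":
    benign = DnsObservation("example.org", {"93.184.216.34"}, {15133}, 86400)
    fluxy = DnsObservation("bad.example",
                           {f"10.0.{i}.1" for i in range(12)},
                           set(range(7)), 60)
    print(f"{benign.domain}: {fast_flux_score(benign):.2f}")
    print(f"{fluxy.domain}: {fast_flux_score(fluxy):.2f}")
```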

    A framework for malicious host fingerprinting using distributed network sensors

    Numerous software agents are responsible for the increasing volumes of malicious traffic observed on the Internet today. From a technical perspective, the existing techniques for monitoring malicious agents and traffic were not developed to allow for the interrogation of the source of malicious traffic. This interrogation or reconnaissance would be considered active analysis, as opposed to existing, mostly passive analysis. Unlike passive analysis, the active techniques are time-sensitive and their results become increasingly inaccurate as the time delta between observation and interrogation increases. In addition to this, some studies have shown that the geographic separation of hosts on the Internet has resulted in pockets of different malicious agents and traffic targeting victims. As such, it is important to perform data collection from various sources and across distributed IP address space. The data gathering and exposure capabilities of sensors such as honeypots and network telescopes were extended through the development of near-realtime Distributed Sensor Network modules that allowed for the near-realtime analysis of malicious traffic from distributed, heterogeneous monitoring sensors. In order to utilise the data exposed by the near-realtime Distributed Sensor Network modules, an Automated Reconnaissance Framework (AR-Framework) was created. This framework was tasked with active and passive information collection and near-realtime analysis of data, and was designed around an adapted Multi Sensor Data Fusion model. The hypothesis was made that if sufficiently distinct characteristics of a host could be identified, then, combined, they could act as a unique fingerprint for that host, potentially allowing for its re-identification even if its IP address had changed. To this end, the concept of Latency Based Multilateration was introduced, acting as an additional metric for remote host fingerprinting. The vast amount of information gathered by the AR-Framework required the development of visualisation tools which could illustrate the data in near-realtime and provide various degrees of interaction to accommodate human interpretation. Ultimately, the data collected through the application of the near-realtime Distributed Sensor Network and AR-Framework provided a unique perspective on the malicious host demographic, allowing new correlations to be drawn between attributes such as common open ports, operating systems, location, and the inferred intent of these malicious hosts. This expands our current understanding of malicious hosts on the Internet and enables further research in the area.
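
    The Latency Based Multilateration idea lends itself to a very small sketch: each distributed sensor contributes an RTT measurement, the ordered vector of RTTs becomes part of the host fingerprint, and candidate re-identifications are scored by vector distance. The sensor names, the Euclidean metric, and the notion of a "small" distance are illustrative assumptions, not the AR-Framework's actual implementation.

```python
# Latency-vector fingerprinting sketch; sensors, values and the distance
# metric are illustrative assumptions.
import math

def latency_fingerprint(rtts_ms: dict) -> tuple:
    """Order RTT measurements by sensor name so fingerprints are comparable."""
    return tuple(rtts_ms[sensor] for sensor in sorted(rtts_ms))

def fingerprint_distance(fp_a: tuple, fp_b: tuple) -> float:
    """Euclidean distance between two latency fingerprints over the same sensors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp_a, fp_b)))

if __name__ == "__main__":
    seen_before = latency_fingerprint({"za": 180.0, "uk": 35.0, "us": 110.0})
    candidate   = latency_fingerprint({"za": 176.0, "uk": 37.0, "us": 108.0})
    unrelated   = latency_fingerprint({"za": 40.0, "uk": 190.0, "us": 25.0})
    print(fingerprint_distance(seen_before, candidate))   # small: possibly the same host
    print(fingerprint_distance(seen_before, unrelated))   # large: a different host
```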

    Fast Detection of Zero-Day Phishing Websites Using Machine Learning

    The recent global growth in the number of internet users and online applications has led to a massive volume of personal data transactions taking place over the internet. In order to gain access to the valuable data and services involved and to undertake various malicious activities, attackers lure users to phishing websites that steal user credentials and other personal data required to impersonate their victims. Sophisticated phishing toolkits and flux networks are increasingly being used by attackers to create and host phishing websites, respectively, in order to increase the number of phishing attacks and evade detection. This has resulted in an increase in the number of new (zero-day) phishing websites. Anti-malware software and web browsers’ anti-phishing filters are widely used to detect phishing websites, thus preventing users from falling victim to phishing. However, these solutions mostly rely on blacklists of known phishing websites. In these techniques, the time lag between the creation of a new phishing website and its reporting as malicious leaves a window during which users are exposed to zero-day phishing websites. This has contributed to a global increase in the number of successful phishing attacks in recent years. To address this shortcoming, this research proposes three Machine Learning (ML)-based approaches for fast and highly accurate prediction of zero-day phishing websites using novel sets of prediction features. The first approach uses a novel set of 26 features based on URL structure, and webpage structure and contents, to predict zero-day phishing webpages that collect users’ personal data. The other two approaches detect, through their hostnames, zero-day phishing webpages that are hosted in Fast Flux Service Networks (FFSNs) and Name Server IP Flux Networks (NSIFNs). These networks consist of frequently changing machines hosting malicious websites and their authoritative name servers, respectively. The machines provide a layer of protection to the actual service hosts against blacklisting in order to prolong the active life span of the services; consequently, the websites in these networks become more harmful than those hosted in normal networks. To address these networks, our second proposed approach predicts zero-day phishing hostnames hosted in FFSNs using a novel set of 56 features based on DNS, network and host characteristics of the hosting networks. Our last approach predicts zero-day phishing hostnames hosted in NSIFNs using a novel set of 11 features based on DNS and host characteristics of the hosting networks. The feature set in each approach is evaluated using 11 ML algorithms, achieving a high prediction performance with most of the algorithms. This indicates the relevance and robustness of the feature sets for their respective detection tasks. The feature sets also perform well against data collected over a later time period without retraining, indicating their long-term effectiveness in detecting the websites. The approaches use highly diversified feature sets, which is expected to enhance resistance to various detection-evasion tactics. The measured prediction times of the first and the third approaches are sufficiently low for potential use in real-time protection of users. This thesis also introduces a multi-class classification technique for evaluating the feature sets in the second and third approaches. The technique predicts each of the hostname types as an independent outcome, thus enabling experts to use type-specific measures in taking down the phishing websites. Lastly, highly accurate methods are proposed for labelling hostnames based on the number of changes of the IP addresses of their authoritative name servers, monitored over a specific period of time.
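
    The abstract does not enumerate the 26 URL and webpage features, so the snippet below only illustrates the general shape of URL-based feature extraction with a handful of generic features from the phishing-detection literature; the names and choices are assumptions, not the thesis' feature set. In practice such a feature dictionary would be vectorised and fed to one of the evaluated ML algorithms.

```python
# Generic URL-based features of the kind used in phishing detection; the
# specific features here are illustrative assumptions.
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "url_length": len(url),
        "num_dots_in_host": host.count("."),
        "has_ip_host": host.replace(".", "").isdigit(),   # raw IP instead of a name
        "num_hyphens_in_host": host.count("-"),
        "has_at_symbol": "@" in url,
        "uses_https": parsed.scheme == "https",
    }

if __name__ == "__main__":
    print(url_features("http://secure-login.example-bank.com.attacker.net/update"))
```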

    Efficient Algorithms to Compute Hierarchical Summaries from Big Data Streams

    Many data stream applications involve hierarchical data containing time, geographic locations, product information, clickstreams, server logs, and IP addresses. A hierarchical summary of such voluminous data offers multiple advantages, including compactness, quick understanding, and abstraction. The goal of this thesis is to design algorithmic approaches for summarizing hierarchical data streams. First, this thesis provides a theoretical analysis of the benchmark hierarchical heavy hitter algorithms and uncovers their shortcomings, such as high theoretical memory and update costs, and a coverage problem. To address these shortcomings, this thesis proposes efficient algorithms which offer deterministic estimation accuracy using O(η/Δ) worst-case memory and O(η) worst-case time complexity per item, where Δ ∈ [0,1] is a user-defined parameter and η is a small constant derived from the data. The proposed hierarchical heavy hitter algorithms are shown to improve significantly over existing algorithms, both theoretically and empirically. Next, this thesis introduces a new concept called hierarchically correlated heavy hitters, which differs from existing hierarchical summarization techniques. The thesis provides a formal definition of the proposed concept and compares it with existing hierarchical summarization approaches both at the definition level and empirically. It also proposes an efficient hierarchy-aware algorithm for computing hierarchically correlated heavy hitters. The proposed algorithm offers deterministic estimation accuracy using O(η/(Δ_p · Δ_s)) worst-case memory and O(η) worst-case time complexity per item, where η is as defined previously and Δ_p ∈ [0,1] and Δ_s ∈ [0,1] are further user-defined parameters. Finally, the thesis proposes a special hierarchical data structure and algorithm to summarize spatiotemporal data. It can be used to extract interesting and useful patterns from high-speed spatiotemporal data streams at multiple spatial and temporal granularities. Theoretical and empirical analyses are provided, which show that the proposed data structure is very efficient in terms of data storage and query response: it updates a single item in O(1) time and answers a point query in O(1) time. Importantly, the memory requirement of the proposed data structure is independent of the size of the data and depends only on the user-supplied parameter vectors ψ and φ. In summary, this thesis provides a general framework consisting of a set of algorithms and data structures to compute hierarchical summaries of big data streams. All of the proposed algorithms exploit a lattice structure built from the hierarchical attributes of the data to compute different hierarchical summaries, which can be used to address various data analytics issues in many emerging applications.
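
    To make the notion of a hierarchical heavy hitter concrete, the toy code below counts every ancestor of each item in a domain-name hierarchy and reports the nodes above a frequency threshold. It is an exact, memory-unbounded illustration of the definition only; the thesis' streaming algorithms achieve O(η/Δ) memory and handle the usual discounting of already-reported descendants, neither of which this sketch attempts.

```python
# Exact-counting illustration of hierarchical heavy hitters over a
# domain-name hierarchy (not a bounded-memory streaming algorithm).
from collections import Counter

def hierarchy_prefixes(domain: str):
    """Yield the ancestors of a domain, e.g. a.b.com -> com, b.com, a.b.com."""
    labels = domain.lower().split(".")
    for i in range(len(labels)):
        yield ".".join(labels[len(labels) - 1 - i:])

def hierarchical_heavy_hitters(stream, threshold_fraction: float):
    """Return every hierarchy node whose count reaches the threshold."""
    counts = Counter()
    for domain in stream:
        for node in hierarchy_prefixes(domain):
            counts[node] += 1
    total = len(stream)
    return {node: c for node, c in counts.items() if c >= threshold_fraction * total}

if __name__ == "__main__":
    stream = (["a.cdn.example.com"] * 40 + ["b.cdn.example.com"] * 30
              + ["mail.other.net"] * 30)
    print(hierarchical_heavy_hitters(stream, 0.25))
```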

    Workload Modeling for Computer Systems Performance Evaluation


    Geo-spatial autocorrelation as a metric for the detection of fast-flux botnet domains

    Botnets consist of thousands of hosts infected with malware. Botnet owners communicate with these hosts using Command and Control (C2) servers. These C2 servers are usually infected hosts to which the botnet owners do not have physical access. For this reason, botnets can be shut down by taking over or blocking the C2 servers. Botnet owners have employed numerous shutdown-avoidance techniques. One of these techniques, DNS Fast-Flux, relies on rapidly changing address records. The addresses returned by Fast-Flux DNS servers belong to geographically widely distributed hosts. The distributed nature of Fast-Flux botnets differs from that of legitimate domains, which tend to have geographically clustered server locations. This paper examines the use of spatial autocorrelation techniques based on the geographic distribution of domain servers to detect Fast-Flux domains. Moran's I and Geary's C are used to build classifiers over multiple geographic co-ordinate systems to produce efficient and accurate results. It is shown how Fast-Flux domains can be detected reliably while producing only a small percentage of false positives.
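
    For orientation, a minimal version of Moran's I, one of the two statistics named above, is sketched below over the latitudes of resolved server locations with inverse-distance weights. The choice of variable, the plain Euclidean distances on latitude/longitude pairs, and the weighting scheme are illustrative assumptions; the paper's co-ordinate systems and weights may differ.

```python
# Minimal Moran's I over server latitudes with inverse-distance weights.
# Variable choice and weighting are illustrative assumptions.
def morans_i(values, weights):
    """values: n observations; weights: n x n spatial weight matrix (zero diagonal)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_total = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_total) * (num / den)

def inverse_distance_weights(points):
    """Simple 1/d weights between distinct points (plain Euclidean, for illustration)."""
    n = len(points)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d = ((points[i][0] - points[j][0]) ** 2 +
                     (points[i][1] - points[j][1]) ** 2) ** 0.5
                w[i][j] = 1.0 / d if d else 0.0
    return w

if __name__ == "__main__":
    clustered = [(52.1, 4.3), (52.2, 4.4), (52.3, 4.5), (52.0, 4.2)]       # CDN-like
    dispersed = [(52.1, 4.3), (-33.9, 18.4), (35.7, 139.7), (40.7, -74.0)]  # flux-like
    for points in (clustered, dispersed):
        lats = [p[0] for p in points]
        print(morans_i(lats, inverse_distance_weights(points)))
```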

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
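
    As a rough illustration of the MR streaming pattern for lattice models, the sketch below implements one generation of the discrete Game of Life as a mapper that emits neighbour contributions and a reducer that applies the birth/survival rules. The line format, the in-process sort standing in for the MR shuffle, and the absence of strip partitioning are all simplifying assumptions relative to the algorithms described above.

```python
# One Game-of-Life generation in a MapReduce-streaming style; the input
# format and in-process shuffle are simplifying assumptions.
from itertools import groupby

def mapper(lines):
    """Emit an 'alive' marker for each live cell and a contribution to each neighbour."""
    for line in lines:
        x, y = map(int, line.split())
        yield f"{x}\t{y}\tA"
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    yield f"{x + dx}\t{y + dy}\tN"

def reducer(sorted_lines):
    """Group by cell, count neighbour contributions, and apply the Life rules."""
    key = lambda line: tuple(line.split("\t")[:2])
    for (x, y), group in groupby(sorted_lines, key=key):
        tags = [line.split("\t")[2] for line in group]
        alive, neighbours = "A" in tags, tags.count("N")
        if neighbours == 3 or (alive and neighbours == 2):
            yield f"{x} {y}"

if __name__ == "__main__":
    live = ["0 1", "1 1", "2 1"]          # a horizontal blinker
    shuffled = sorted(mapper(live))       # stands in for the MR shuffle/sort
    print(list(reducer(shuffled)))        # expect the vertical blinker
```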