12 research outputs found

    Hybrid Epidemics - A Case Study on Computer Worm Conficker

    Full text link
    Conficker is a computer worm that erupted on the Internet in 2008. It is unique in combining three different spreading strategies: local probing, neighbourhood probing, and global probing. We propose a mathematical model that combines these three modes of spreading (local, neighbourhood and global) to capture the worm's spreading behaviour. The parameters of the model are inferred directly from network data obtained during the first day of the Conficker epidemic. The model is then used to explore the trade-off between spreading modes in determining the worm's effectiveness. Our results show that the Conficker epidemic is an example of a critically hybrid epidemic, in which the different modes of spreading in isolation do not lead to successful epidemics. Such hybrid spreading strategies may be used beneficially to provide the most effective means of promulgating information across a large population. When used maliciously, however, they can present a dangerous challenge to current Internet security protocols.
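
    The abstract does not reproduce the model itself. As a rough illustration of what a three-mode spreading process looks like, the following minimal simulation sketch infects hosts through local (same subnet), neighbourhood (adjacent subnet) and global (random) probing; all parameter names, rates and the subnet structure are assumptions for illustration, not the paper's actual formulation.

        # Minimal illustrative simulation of a hybrid (local / neighbourhood / global)
        # SI-style epidemic; rates and topology are assumed, not taken from the paper.
        import random

        N_SUBNETS, HOSTS_PER_SUBNET, STEPS = 100, 50, 200
        BETA_LOCAL, BETA_NEIGH, BETA_GLOBAL = 0.02, 0.01, 0.005

        hosts = [(s, h) for s in range(N_SUBNETS) for h in range(HOSTS_PER_SUBNET)]
        infected = {random.choice(hosts)}                  # single initial infection

        def random_host_in(subnet):
            return (subnet, random.randrange(HOSTS_PER_SUBNET))

        for _ in range(STEPS):
            new = set()
            for (s, h) in infected:
                if random.random() < BETA_LOCAL:           # local probing
                    new.add(random_host_in(s))
                if random.random() < BETA_NEIGH:           # neighbourhood probing
                    new.add(random_host_in((s + random.choice([-1, 1])) % N_SUBNETS))
                if random.random() < BETA_GLOBAL:          # global probing
                    new.add(random.choice(hosts))
            infected |= new

        print(f"infected after {STEPS} steps: {len(infected)} / {len(hosts)}")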

    An Analysis of Internet Background Radiation within an African IPv4 netblock

    Get PDF
    The use of passive network sensors has in the past proven to be quite effective in monitoring and analysing the current state of traffic on a network. Internet traffic destined to a routable yet unused address block is often referred to as Internet Background Radiation (IBR) and characterised as unsolicited. This unsolicited traffic is, however, quite valuable to researchers in that it allows them to study traffic patterns in a covert manner. IBR is largely composed of network and port scanning traffic, backscatter packets from virus and malware activity and, to a lesser extent, misconfiguration of network devices. This research answers the following two questions: (1) What is the current state of IBR within the context of a South African IP address space? (2) Can any anomalies be detected in the traffic, with specific reference to current global malware attacks such as Mirai and similar? Rhodes University operates five IPv4 passive network sensors, commonly known as network telescopes, each monitoring its own /24 IP address block. The oldest of these network telescopes has been collecting traffic for over a decade, with the newest having been established in 2011. This research focuses on an in-depth analysis of the traffic captured by one telescope in the 155/8 range over a 12-month period, from January to December 2017. The traffic was analysed and classified according to protocol, TCP flags, source IP address, destination port, packet count and payload size. Apart from the normal network traffic graphs and tables, a geographic heatmap of source traffic was also created, based on the source IP address. Spikes and noticeable variances in traffic patterns were further investigated and evidence of Mirai-like malware activity was observed. Network and port scanning were found to comprise the largest amount of traffic, accounting for over 90% of the total IBR. Various scanning techniques were identified, including low-level passive scanning and much higher-level active scanning.
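
    As a rough sketch of the classification step described above, the snippet below tallies telescope packets by protocol, TCP flags, destination port and source address; the capture file name and the use of scapy are assumptions (real telescope datasets are typically processed with faster, purpose-built tooling).

        # Illustrative classification of network-telescope traffic by protocol,
        # TCP flags, destination port and source address.
        from collections import Counter
        from scapy.all import rdpcap, IP, TCP, UDP, ICMP

        packets = rdpcap("telescope-2017.pcap")            # hypothetical capture file
        by_proto, by_dport, by_flags, by_src = Counter(), Counter(), Counter(), Counter()

        for pkt in packets:
            if IP not in pkt:
                continue
            by_src[pkt[IP].src] += 1
            if TCP in pkt:
                by_proto["tcp"] += 1
                by_dport[pkt[TCP].dport] += 1
                by_flags[str(pkt[TCP].flags)] += 1         # e.g. 'S' indicates SYN scanning
            elif UDP in pkt:
                by_proto["udp"] += 1
                by_dport[pkt[UDP].dport] += 1
            elif ICMP in pkt:
                by_proto["icmp"] += 1                      # often backscatter

        print("top destination ports:", by_dport.most_common(10))
        print("top source addresses:", by_src.most_common(10))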

    Monitoring Network Telescopes and Inferring Anomalous Traffic Through the Prediction of Probing Rates

    Get PDF
    Network reconnaissance is the first step preceding a cyber-attack. Hence, monitoring probing activities is imperative to help security practitioners enhance their awareness of Internet large-scale events or peculiar events targeting their network. In this paper, we present a framework for improved and efficient monitoring of the probing activities targeting network telescopes. In particular, we model probing rates, which are a good indicator for measuring the cyber-security risk targeting network services. The approach consists of first inferring groups of network ports sharing similar probing characteristics through a new affinity metric capturing both temporal and semantic similarities between ports. Then, sequences of probing rates targeting similar ports are used as inputs to stacked Long Short-Term Memory (LSTM) neural networks to predict probing rates 1 hour and 1 day in advance. Finally, we describe two monitoring indicators that use the prediction models to infer anomalous probing traffic and to raise early threat warnings. We show that LSTM networks can accurately predict probing rates, outperforming the non-stationary autoregressive model, and we demonstrate that the monitoring indicators are efficient in assessing the cyber-security risk related to vulnerability disclosure.
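
    The paper's exact architecture and hyperparameters are not given in the abstract; a minimal stacked-LSTM sketch for predicting the next hourly probing rate from a sliding window might look like the following, where the window length, layer sizes and the synthetic input series are assumptions.

        # Illustrative stacked-LSTM regressor for hourly probing-rate sequences.
        import numpy as np
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import LSTM, Dense

        WINDOW = 24                                        # past 24 hourly rates -> next rate

        def make_windows(series, window=WINDOW):
            X = np.array([series[i:i + window] for i in range(len(series) - window)])
            y = np.array(series[window:])
            return X[..., np.newaxis], y                   # (samples, timesteps, features)

        rates = np.abs(np.random.randn(2000)).astype("float32")   # hypothetical rate series
        X, y = make_windows(rates)

        model = Sequential([
            LSTM(64, return_sequences=True, input_shape=(WINDOW, 1)),   # stacked LSTM layers
            LSTM(32),
            Dense(1),                                      # predicted next-hour probing rate
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X, y, epochs=5, batch_size=64, verbose=0)

        next_hour = model.predict(X[-1:], verbose=0)       # 1-hour-ahead prediction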

    Hybrid epidemic spreading - from Internet worms to HIV infection

    Get PDF
    Epidemic phenomena are ubiquitous, ranging from infectious diseases and computer viruses to information dissemination. Epidemics have traditionally been studied as a single spreading process, either in a fully mixed population or on a network. Many epidemics, however, are hybrid, employing more than one spreading mechanism. For example, the Internet worm Conficker spreads locally, targeting neighbouring computers in local networks, as well as globally, by randomly probing any computer on the Internet. This thesis aims to investigate fundamental questions, such as whether a mix of spreading mechanisms gives hybrid epidemics any advantage, and what the implications are for promoting or mitigating such epidemics. We first propose a general and simple framework to model hybrid epidemics. Based on theoretical analysis and numerical simulations, we show that properties of hybrid epidemics are critically determined by a hybrid tradeoff, which defines the proportion of resources allocated to local and global spreading mechanisms. We then study two distinct examples of hybrid epidemics: the Internet worm Conficker and the Human Immunodeficiency Virus (HIV) infection within the human body. Using Internet measurement data, we reveal how Conficker combines ineffective spreading mechanisms to produce a serious outbreak on the Internet. We propose a mathematical model that can accurately recapitulate the entire HIV infection course as observed in clinical trials. Our study provides novel insights into the two parallel infection channels of HIV, i.e. cell-free spreading and cell-to-cell spreading, and their joint effect on HIV infection and treatment. In summary, this thesis has advanced our understanding of hybrid epidemics. It has provided mathematical frameworks for future analysis. It has demonstrated, with two successful case studies, that such research can have a significant impact on important issues such as cyberspace security and human health.
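
    The thesis's actual equations are not reproduced in the abstract. Purely as an illustration of the hybrid tradeoff idea, a generic SI-type model in which a fraction alpha of a fixed spreading budget beta is devoted to network-constrained (local) spreading and the remainder to fully mixed (global) spreading could be written as:

        % Illustrative hybrid SI model; alpha, beta, <k> and the functional forms
        % are assumptions, not the thesis's actual formulation.
        \frac{dI}{dt}
          = \underbrace{\alpha \, \beta \, \langle k \rangle \, \frac{I (N - I)}{N}}_{\text{local (network) spreading}}
          + \underbrace{(1 - \alpha) \, \beta \, \frac{I (N - I)}{N}}_{\text{global (fully mixed) spreading}},
        \qquad 0 \le \alpha \le 1 .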

    Remote fidelity of Container-Based Network Emulators

    Get PDF
    This thesis examines whether Container-Based Network Emulators (CBNEs) are able to instantiate emulated nodes that provide sufficient realism to be used in information security experiments. The realism measure used is based on the information available from the point of view of a remote attacker. During the evaluation of a Container-Based Network Emulator (CBNE) as a platform to replicate production networks for information security experiments, it was observed that nmap fingerprinting returned Operating System (OS) family and version results inconsistent with those of the host OS. CBNEs utilise Linux namespaces, the technology used for containerisation, to instantiate "emulated" hosts for experimental networks. Linux containers partition resources of the host OS to create lightweight virtual machines that share a single OS kernel. As all emulated hosts share the same kernel in a CBNE network, there is a reasonable expectation that the fingerprints of the host OS and emulated hosts should be the same. Based on how CBNEs instantiate emulated networks and the fact that fingerprinting returned inconsistent results, it was hypothesised that the technologies used to construct CBNEs are capable of influencing fingerprints generated by utilities such as nmap. It was predicted that hosts emulated using different CBNEs would show deviations in remotely generated fingerprints when compared to fingerprints generated for the host OS. An experimental network consisting of two emulated hosts and a Layer 2 switch was instantiated on multiple CBNEs using the same host OS. Active and passive fingerprinting was conducted between the emulated hosts to generate fingerprints and OS family and version matches. Passive fingerprinting failed to produce OS family and version matches, as the fingerprint databases for these utilities are no longer maintained. For active fingerprinting, the OS family results were consistent between the tested systems and the host OS, though the OS version results reported were inconsistent. A comparison of the generated fingerprints revealed that for certain CBNEs, fingerprint features related to network stack optimisations of the host OS deviated from other CBNEs and the host OS. The hypothesis that CBNEs can influence remotely generated fingerprints was partially confirmed: one CBNE system modified Linux kernel networking options, causing a deviation from the fingerprints generated for other tested systems and the host OS. The hypothesis was also partially rejected, as the technologies used by CBNEs do not influence the remote fidelity of emulated hosts. Thesis (MSc) -- Faculty of Science, Computer Science, 202
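
    A minimal sketch of the active-fingerprinting comparison described above is given below: it runs nmap OS detection against the host and a set of emulated nodes, then collects the OS match lines for comparison. The target addresses are hypothetical, and nmap's -O option requires elevated privileges.

        # Illustrative comparison of nmap OS-detection results for the host OS and
        # CBNE-emulated hosts; addresses are placeholders.
        import subprocess

        TARGETS = {
            "host_os": "192.0.2.1",                        # hypothetical addresses
            "cbne_a":  "10.0.0.2",
            "cbne_b":  "10.0.1.2",
        }

        def os_guess(addr):
            out = subprocess.run(["nmap", "-O", "-n", addr],
                                 capture_output=True, text=True, check=False).stdout
            return [line for line in out.splitlines()
                    if line.startswith(("Running:", "OS details:", "Aggressive OS guesses:"))]

        for name, addr in TARGETS.items():
            print(name, os_guess(addr))                    # diff these lists across systems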

    A framework for malicious host fingerprinting using distributed network sensors

    Get PDF
    Numerous software agents exist and are responsible for the increasing volumes of malicious traffic observed on the Internet today. From a technical perspective, the existing techniques for monitoring malicious agents and traffic were not developed to allow for the interrogation of the source of malicious traffic. This interrogation or reconnaissance would be considered active analysis, as opposed to existing, mostly passive analysis. Unlike passive analysis, active techniques are time-sensitive and their results become increasingly inaccurate as the time delta between observation and interrogation increases. In addition to this, some studies have shown that the geographic separation of hosts on the Internet has resulted in pockets of different malicious agents and traffic targeting victims. As such, it is important to perform any kind of data collection over various sources and in distributed IP address space. The data gathering and exposure capabilities of sensors such as honeypots and network telescopes were extended through the development of near-realtime Distributed Sensor Network modules that allowed for the near-realtime analysis of malicious traffic from distributed, heterogeneous monitoring sensors. In order to utilise the data exposed by the near-realtime Distributed Sensor Network modules, an Automated Reconnaissance Framework (AR-Framework) was created; this framework was tasked with active and passive information collection and analysis of data in near-realtime and was designed from an adapted Multi Sensor Data Fusion model. The hypothesis was made that if sufficiently different characteristics of a host could be identified, then combined they could act as a unique fingerprint for that host, potentially allowing for the re-identification of that host even if its IP address had changed. To this end the concept of Latency Based Multilateration was introduced, acting as an additional metric for remote host fingerprinting. The vast amount of information gathered by the AR-Framework required the development of visualisation tools which could illustrate this data in near-realtime and also provide various degrees of interaction to accommodate human interpretation of such data. Ultimately, the data collected through the application of the near-realtime Distributed Sensor Network and AR-Framework provided a unique perspective of a malicious host demographic, allowing for new correlations to be drawn between attributes such as common open ports and operating systems, location, and the inferred intent of these malicious hosts. The result expands our current understanding of malicious hosts on the Internet and enables further research in the area.
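
    As a rough sketch of the Latency Based Multilateration idea described above, each distributed sensor measures a round-trip time to the remote host, and the resulting latency vector is treated as one component of that host's fingerprint; the sensor names, RTT values and similarity threshold below are illustrative assumptions.

        # Illustrative latency-vector comparison between a previously observed host
        # and a candidate host seen later from a different IP address.
        import math

        known_host = {"sensor_za": 182.0, "sensor_eu": 34.5, "sensor_us": 110.2}  # RTTs in ms
        candidate  = {"sensor_za": 180.4, "sensor_eu": 35.1, "sensor_us": 108.9}

        def latency_distance(a, b):
            shared = set(a) & set(b)
            return math.sqrt(sum((a[s] - b[s]) ** 2 for s in shared) / len(shared))

        THRESHOLD_MS = 5.0                                 # purely illustrative cut-off
        if latency_distance(known_host, candidate) < THRESHOLD_MS:
            print("latency vectors are consistent with the same underlying host")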

    An exploration of the overlap between open source threat intelligence and active internet background radiation

    Get PDF
    Organisations and individuals are facing increasing persistent threats on the Internet from worms, port scanners, and malicious software (malware). These threats are constantly evolving as attack techniques are discovered. To aid in the detection and prevention of such threats, and to stay ahead of the adversaries conducting the attacks, security specialists are utilising Threat Intelligence (TI) data in their defence strategies. TI data can be obtained from a variety of different sources, such as private routers, firewall logs, public archives, and public or private network telescopes. However, at the rate and ease at which TI is produced and published, specifically Open Source Threat Intelligence (OSINT), the quality is dropping, resulting in fragmented, context-less and variable data. This research utilised two sets of TI data: a collection of OSINT and active Internet Background Radiation (IBR). The data was collected over a period of 12 months, from 37 publicly available OSINT datasets and five IBR datasets. Through the identification and analysis of common data between the OSINT and IBR datasets, this research was able to gain insight into how effective OSINT is at detecting and potentially reducing ongoing malicious Internet traffic. As part of this research, a minimal framework for the collection, processing/analysis, and distribution of OSINT was developed and tested. The research focused on exploring the areas in common between the two datasets, with the intention of creating an enriched, contextualised, and reduced set of malicious source IP addresses that could be published for consumers to use in their own environments. The findings of this research pointed towards a persistent group of IP addresses observed in both datasets over the period under research. Using these persistent IP addresses, the research was able to identify the specific services being targeted. Amongst the traffic from these persistent IP addresses were significant numbers of packets from Mirai-like IoT malware on ports 23/tcp and 2323/tcp, as well as general scanning activity on port 445/tcp.
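
    A minimal sketch of the overlap analysis described above is shown below: per-month source-IP sets from the OSINT feeds and the telescope (IBR) captures are intersected, and addresses present in every monthly overlap form the persistent set. The loader functions and month keys are placeholders, not the thesis's actual pipeline.

        # Illustrative derivation of the persistent OSINT/IBR source-IP set.
        from functools import reduce

        def load_osint_ips(month):                         # placeholder: parse the OSINT feeds
            return set()

        def load_ibr_ips(month):                           # placeholder: parse telescope captures
            return set()

        months = [f"month-{m:02d}" for m in range(1, 13)]  # 12-month collection period

        monthly_overlap = [load_osint_ips(m) & load_ibr_ips(m) for m in months]
        persistent = reduce(set.intersection, monthly_overlap) if monthly_overlap else set()
        print(f"{len(persistent)} persistent malicious source addresses")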

    Gaining cyber security insight through an analysis of open source intelligence data: an East African case study

    Get PDF
    With each passing year the number of Internet users and connected devices grows, and this is particularly so in Africa. This growth brings with it an increase in the prevalence of cyber-attacks. Looking at the current state of affairs, cybersecurity incidents are likely to increase in African countries, mainly due to the increased prevalence and affordability of broadband connectivity coupled with a lack of online security awareness. The adoption of mobile banking has aggravated the situation, making the continent more attractive to hackers who bank on the malpractices of users. Using Open Source Intelligence (OSINT) data sources such as Sentient Hyper-Optimised Data Access Network (SHODAN) and Internet Background Radiation (IBR), this research explores the prevalence of vulnerabilities and their accessibility to cyber threat actors. The research focuses on the East African Community (EAC), comprising Tanzania, Kenya, Malawi, and Uganda. An IBR data set collected by a Rhodes University network telescope spanning over 72 months was used in this research, along with two snapshot periods of data from the SHODAN project. The findings show that there is a significant risk to systems within the EAC, particularly when using the SHODAN data. The MITRE CVSS threat scoring system was applied in this research, using FREAK and Heartbleed as sample vulnerabilities identified in the EAC. When looking at IBR, the research has shown that attackers can use either destination ports or source IP addresses to perform an attack which, if not attended to, may be reused yearly, later moving into the allocated IP address space once random probing begins; the moment one vulnerable client on the network is found, it spreads throughout like a worm. DDoS is one of the attacks that can be generated from IBR. Since the SHODAN dataset had two collection points, the study shows the changes that occurred in Malawi and Tanzania over a period of 14 months, using three variables, i.e. device type, operating system, and ports. The research also identified vulnerable devices in all four countries. Apart from that, the study identified operating systems, products, OpenSSL, ports and ISPs as some of the variables that can be used to identify vulnerabilities in systems. In the case of OpenSSL and products, this research went further by identifying the type of attack that can occur and its associated CVE-ID.
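
    As an illustration of how banner data can be tied to a known vulnerability, the sketch below groups SHODAN-style records by country and port and flags OpenSSL versions in the Heartbleed-affected range (OpenSSL 1.0.1 through 1.0.1f, CVE-2014-0160); the record format and sample values are assumptions, as the real SHODAN API returns much richer documents.

        # Illustrative flagging of Heartbleed-exposed services in SHODAN-style records.
        from collections import Counter

        records = [                                        # hypothetical banner records
            {"country": "TZ", "port": 443, "product": "OpenSSL", "version": "1.0.1e"},
            {"country": "KE", "port": 22,  "product": "OpenSSH", "version": "7.4"},
            {"country": "MW", "port": 443, "product": "OpenSSL", "version": "1.0.2k"},
        ]

        HEARTBLEED_VERSIONS = {"1.0.1" + s for s in ["", "a", "b", "c", "d", "e", "f"]}

        exposed = Counter()
        for r in records:
            if r["product"] == "OpenSSL" and r["version"] in HEARTBLEED_VERSIONS:
                exposed[(r["country"], r["port"])] += 1

        print("Heartbleed-exposed services by (country, port):", dict(exposed))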

    Bolvedere: a scalable network flow threat analysis system

    Get PDF
    Since the advent of the Internet, and its public availability in the late 1990s, there have been significant advancements in network technologies and thus a significant increase in the bandwidth available to network users, both human and automated. Although this growth is of great value to network users, it has led to an increase in malicious network-based activities, and it is theorised that, as more services become available on the Internet, the volume of such activities will continue to grow. Because of this, there is a need to monitor, comprehend, discern, understand and (where needed) respond to events on networks worldwide. Although this line of thought is simple in its reasoning, undertaking such a task is no small feat. Full packet analysis is a method of network surveillance that seeks out specific characteristics within network traffic that may tell of malicious activity or anomalies in regular network usage. It is carried out within firewalls and implemented through packet classification. In the context of the networks that make up the Internet, this form of packet analysis has become infeasible, as the volume of traffic introduced onto these networks every day is so large that there are simply not enough processing resources to perform such a task on every packet in real time. One could combat this problem by performing post-incident forensics: archiving packets and processing them later. However, as one cannot process all incoming packets, the archive will eventually run out of space. Full packet analysis is also hindered by the fact that some existing, commonly used solutions are designed around a single host and a single thread of execution, an outdated approach that is far slower than necessary on current computing technology. This research explores the conceptual design and implementation of a scalable network traffic analysis system named Bolvedere. Analysis performed by Bolvedere simply asks whether the existence of a connection, coupled with its associated metadata, is enough to conclude something meaningful about that connection. This idea draws away from the traditional processing of every single byte in every single packet monitored on a network link (Deep Packet Inspection) through the concept of working with connection flows. Bolvedere performs its work by leveraging the NetFlow version 9 and IPFIX protocols, but is not limited to these. It is implemented using a modular approach that allows for either complete execution of the system on a single host or the horizontal scaling out of subsystems across multiple hosts. The use of multiple hosts is achieved through the implementation of ZeroMQ (ZMQ), whose ease of interprocess communication allows Bolvedere to scale out horizontally, resulting in an increase in processing resources and thus an increase in analysis throughput. Many underlying mechanisms in Bolvedere have been automated. This is intended to make the system more user-friendly, as the user need only tell Bolvedere what information they wish to analyse, and the system will then rebuild itself in order to achieve the required task. Bolvedere has also been hardware-accelerated through the use of Field-Programmable Gate Array (FPGA) technologies, which more than doubled the total throughput of the system.
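
    The abstract describes a modular, ZMQ-based fan-out of flow records to analysis modules; a minimal sketch of that pattern is given below, where the endpoint, ports, record format and the PUSH/PULL socket choice are assumptions rather than Bolvedere's actual wiring.

        # Illustrative ZeroMQ fan-out: a collector pushes flow records to workers
        # that may run on the same machine or be scaled out across hosts.
        import zmq

        def collector(endpoint="tcp://*:5555"):
            ctx = zmq.Context.instance()
            push = ctx.socket(zmq.PUSH)                    # records are load-balanced to workers
            push.bind(endpoint)
            flow = {"src": "198.51.100.7", "dst": "203.0.113.9",
                    "dport": 445, "bytes": 1240}           # hypothetical NetFlow/IPFIX-derived record
            push.send_json(flow)

        def worker(endpoint="tcp://localhost:5555"):
            ctx = zmq.Context.instance()
            pull = ctx.socket(zmq.PULL)
            pull.connect(endpoint)
            flow = pull.recv_json()
            if flow["dport"] == 445:                       # trivial example analysis module
                print("possible SMB scanning:", flow)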