
    An Innovative Signature Detection System for Polymorphic and Monomorphic Internet Worms Detection and Containment

    Most current anti-worm systems and intrusion-detection systems use signature-based rather than anomaly-based technology. Signature-based technology can only detect known attacks with identified signatures. Existing anti-worm systems cannot detect unknown Internet scanning worms automatically because they rely not on worm behaviour but on the worm's signature. Most detection algorithms used in current detection systems target only monomorphic worm payloads and offer no defence against polymorphic worms, which change their payload dynamically. Anomaly detection systems can detect unknown worms but usually suffer from a high false-alarm rate. Detecting unknown worms is challenging, and worm defence must be automated because worms spread quickly and can flood the Internet in a short time. This research proposes an accurate, robust and fast technique to detect and contain Internet worms (monomorphic and polymorphic). The detection technique uses specific failed-connection statuses on specific protocols, such as UDP, TCP, ICMP, TCP slow scanning and stealth scanning, as characteristics of the worms, while containment uses the flags and labels of the segment header, together with the source and destination ports, to generate the traffic signature of the worms. Experiments using eight different worms (monomorphic and polymorphic) in a testbed environment were conducted to verify the performance of the proposed technique. The results showed that the proposed technique could detect stealth scanning up to 30 times faster than the technique proposed by another researcher and raised no false-positive alarms in any of the scanning detection cases. The experiments also showed that the proposed technique was capable of containing the worm because of the uniqueness of the traffic signature.
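    The abstract attributes detection to failed-connection statuses observed per protocol. A minimal sketch of that idea, assuming hypothetical event kinds, a sliding window and an alarm threshold (none of these values come from the thesis), might count failure events per source host:

```python
from collections import defaultdict, deque

# Hypothetical failure events a sensor might emit, e.g.
# ("tcp_rst", "10.0.0.5"), ("icmp_unreachable", "10.0.0.5"), ("udp_timeout", ...).
WINDOW_SECONDS = 60          # illustrative sliding window
FAILURE_THRESHOLD = 20       # illustrative per-host alarm threshold

class FailureScanDetector:
    """Flags hosts whose failed-connection rate suggests worm scanning."""

    def __init__(self):
        self.events = defaultdict(deque)   # src_ip -> deque of (timestamp, kind)

    def observe(self, timestamp, src_ip, kind):
        q = self.events[src_ip]
        q.append((timestamp, kind))
        # Drop events that fell out of the sliding window.
        while q and timestamp - q[0][0] > WINDOW_SECONDS:
            q.popleft()
        return self.is_suspicious(src_ip)

    def is_suspicious(self, src_ip):
        # A burst of failures across protocols is treated as a scan indicator.
        return len(self.events[src_ip]) >= FAILURE_THRESHOLD

detector = FailureScanDetector()
if detector.observe(timestamp=12.3, src_ip="10.0.0.5", kind="tcp_rst"):
    print("possible scanning worm at 10.0.0.5")
```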

    Novel Analytical Modelling-based Simulation of Worm Propagation in Unstructured Peer-to-Peer Networks

    Millions of users world-wide are sharing content using Peer-to-Peer (P2P) networks, such as Skype and BitTorrent. While such innovations undoubtedly bring benefits, there are nevertheless some associated threats. One of the main hazards is that P2P worms can penetrate the network, even from a single node, and then spread rapidly. Understanding the propagation process of such worms has always been a challenge for researchers. Different techniques, such as simulations and analytical models, have been adopted in the literature. While simulations provide results for specific input parameter values, analytical models are rather more general and potentially cover the whole spectrum of given parameter values. Many attempts have been made to model the worm propagation process in P2P networks. However, the analytical models reported to date have failed to cover the whole spectrum of relevant parameters and have therefore resulted in high false positives. This consequently affects the immunization and mitigation strategies that are adopted to cope with an outbreak of worms. The first key contribution of this thesis is the development of a Susceptible, Exposed, Infectious and Recovered (SEIR) analytical model of the worm propagation process in a P2P network, taking into account different factors such as the configuration diversity of nodes, user behaviour and the infection time-lag. These factors have not previously been considered in an integrated form and have been either ignored or partially addressed in state-of-the-art analytical models. Our proposed SEIR analytical model holistically integrates, for the first time, these key factors in order to capture a more realistic representation of the whole worm propagation process. The second key contribution is the extension of the proposed SEIR model to the mobile M-SEIR model by investigating and incorporating the role of node mobility, the size of the worm and the bandwidth of wireless links in the worm propagation process in mobile P2P networks. The model was designed to be flexible and applicable to both wired and wireless nodes. The third contribution is the exploitation of a promising modelling paradigm, Agent-based Modelling (ABM), in the P2P worm modelling context. Specifically, to exploit the synergies between ABM and P2P, an integrated ABM-based worm propagation model has been built and trialled in this research for the first time. The introduced model combines the implementation of common, complex P2P protocols, such as Gnutella and GIA, with the aforementioned analytical models. Moreover, a comparative evaluation between ABM and conventional modelling tools has been carried out to demonstrate the key benefits of ease of real-time analysis and visualisation. As a fourth contribution, the research was further extended by using the proposed SEIR model to examine and evaluate a real-world data set on one of the most recent worms, namely the Conficker worm. Verification of the model was achieved using ABM and conventional tools and by comparing the results on the same data set with those derived from developed benchmark models. Finally, the research concludes that the worm propagation process is to a great extent affected by different factors such as configuration diversity, user behaviour, the infection time-lag and the mobility of nodes. It was found that the infection propagation values derived from state-of-the-art mathematical models are hypothetical and do not reflect real-world values.
In summary, our comparative study has shown that infection propagation can be reduced through the natural immunity against worms provided by a holistic exploitation of the range of factors proposed in this work.
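    The classical SEIR compartmental structure underlying the proposed model can be sketched as a small simulation; the rates below are purely illustrative, and the thesis's extensions (configuration diversity, user behaviour, infection time-lag, mobility) are not modelled in this sketch:

```python
# Minimal SEIR sketch (Euler integration); beta, sigma and gamma are illustrative values.
def simulate_seir(n=10_000, i0=1, beta=0.4, sigma=0.2, gamma=0.1, dt=0.1, steps=2_000):
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    history = []
    for _ in range(steps):
        new_exposed    = beta * s * i / n   # S -> E: contact with infectious peers
        new_infectious = sigma * e          # E -> I: end of latency (infection time-lag)
        new_recovered  = gamma * i          # I -> R: patching / removal
        s -= new_exposed * dt
        e += (new_exposed - new_infectious) * dt
        i += (new_infectious - new_recovered) * dt
        r += new_recovered * dt
        history.append((s, e, i, r))
    return history

peak_infectious = max(i for _, _, i, _ in simulate_seir())
print(f"peak infectious nodes (illustrative run): {peak_infectious:.0f}")
```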

    Resilience Strategies for Network Challenge Detection, Identification and Remediation

    The enormous growth of the Internet and its use in everyday life make it an attractive target for malicious users. As the network becomes more complex and sophisticated, it becomes more vulnerable to attack. There is a pressing need for the future Internet to be resilient, manageable and secure. Our research is on distributed challenge detection and is part of the EU ResumeNet project (Resilience and Survivability for Future Networking: Framework, Mechanisms and Experimental Evaluation). It aims to make networks more resilient to a wide range of challenges, including malicious attacks, misconfiguration, faults and operational overloads. Resilience means the ability of the network to provide an acceptable level of service in the face of significant challenges; it is a superset of commonly used definitions of survivability, dependability and fault tolerance. Our proposed resilience strategy detects a challenge situation by identifying its occurrence and impact in real time and then initiating appropriate remedial action. Action is taken autonomously to continue operations as far as possible, mitigate the damage and allow an acceptable level of service to be maintained. The contribution of our work is the ability to mitigate a challenge as early as possible and to rapidly detect its root cause. Our proposed multi-stage, policy-based challenge detection system also identifies both existing and unforeseen challenges; this has been studied and demonstrated with an unknown worm attack. The multi-stage approach reduces computational complexity compared with the traditional single-stage approach, in which one managed object is responsible for all the functions. The approach we propose in this thesis has the flexibility, scalability, adaptability, reproducibility and extensibility needed to assist in the identification and remediation of many future network challenges.

    Malware Propagation in Online Social Networks: Modeling, Analysis and Real-world Implementations

    The popularity and widespread usage of online social networks (OSNs) have attracted hackers and cyber criminals to use OSNs as an attack platform to spread malware. Over the last few years, Facebook users have experienced hundreds of malware attacks. A successful attack can lead to tens of millions of OSN accounts being compromised and computers being infected. Cyber criminals can mount massive denial-of-service attacks against Internet infrastructures or systems using compromised accounts and computers. Malware infecting a user's computer has the ability to steal login credentials and other confidential information stored on the computer, install ransomware and infect other computers on the same network. It is therefore important to understand the propagation dynamics of malware in OSNs in order to detect, contain and remove it as early as possible. The objective of this dissertation is thus to model and study the propagation dynamics of various types of malware in social networks such as Facebook, LinkedIn and Orkut. In particular:
    - we propose analytical models that characterize the propagation dynamics of cross-site scripting and Trojan malware, the two major types of malware propagating in OSNs. Our models assume the topological characteristics of real-world social networks, namely low average shortest distance, power-law distribution of node degrees and high clustering coefficient. The proposed models were validated using a real-world social network graph;
    - we present the design and implementation of a cellular botnet named SoCellBot that uses the OSN platform as a means to recruit and control cellular bots on smartphones. SoCellBot utilizes OSN messaging systems as communication channels between bots. We then present a simulation-based analysis of the botnet's strategies to maximize the number of infected victims within a short amount of time while minimizing the risk of being detected;
    - we describe and analyze emerging malware threats in OSNs, namely clickjacking, extension-based and Magnet malware. We discuss their implementations and working mechanics, and analyze their propagation dynamics via simulations;
    - we evaluate the performance of several selective monitoring schemes used for malware detection in OSNs. With selective monitoring, we select a set of important users in the network and monitor their activities and posts, and those of their friends, for malware threats. These schemes differ in how the set of important users is selected, and we evaluate and compare their effectiveness in terms of malware detection in OSNs.
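    As an illustration of the topological assumptions and the selective-monitoring idea, a simple sketch (not the dissertation's actual models) might simulate Trojan-like spread on a scale-free graph while monitoring the highest-degree users; the infection probability and monitoring budget below are arbitrary assumptions:

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(n=5_000, m=3)     # power-law-like degree distribution

# Selective monitoring: watch the k highest-degree users and their friends.
k = 50
monitored = set(sorted(G.nodes, key=G.degree, reverse=True)[:k])
watched = monitored | {nbr for u in monitored for nbr in G.neighbors(u)}

infected = {random.choice(list(G.nodes))}      # single initial victim
p_click = 0.05                                 # arbitrary per-message infection probability
detected_at = None

for step in range(1, 51):
    newly = {nbr for u in infected for nbr in G.neighbors(u)
             if nbr not in infected and random.random() < p_click}
    infected |= newly
    if detected_at is None and infected & watched:
        detected_at = step                     # first time malware reaches a watched user
print(f"infected: {len(infected)}, detected at step: {detected_at}")
```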

    Topology-Aware Vulnerability Mitigation Worms

    In very dynamic Information and Communication Technology (ICT) infrastructures, with rapidly growing applications, malicious intrusions have become very sophisticated, effective and fast. Industries have suffered billions of US dollars in losses due to malicious worm outbreaks alone. Several calls have been issued by governments and industries to the research community to propose innovative solutions that would help prevent malicious breaches, especially as enterprise networks become more complex, large and volatile. In this thesis we approach self-replicating, self-propagating and self-contained network programs (i.e. worms) as vulnerability mitigation mechanisms to eliminate threats to networks. These programs provide distinctive features, including short-distance communication with network nodes, intermittent network node vulnerability probing, and network topology discovery. Such features become necessary, especially for networks with frequent node association and disassociation, dynamically connected links, and hosts that concurrently run multiple operating systems. We propose, to the best of our knowledge, the first computer worm that utilizes the second layer of the OSI model (the Data Link Layer) as its main propagation medium. We name our defensive worm Seawave, a controlled, interactive, self-replicating, self-propagating and self-contained vulnerability mitigation mechanism. We develop, experiment with, and evaluate Seawave under different simulation environments that mimic enterprise networks to a large extent. We also propose a threat analysis model to help identify weaknesses, strengths and threats within and towards our vulnerability mitigation mechanism, followed by a mathematical propagation model to observe Seawave's performance in large-scale enterprise networks. We also propose, in preliminary form, another vulnerability mitigation worm that utilizes the Link Layer Discovery Protocol (LLDP) for its propagation, along with an evaluation of its performance. In addition, we describe a preliminary taxonomy that rediscovers the relationship between different types of self-replicating programs (i.e. viruses, worms and botnets) and redefines these programs based on their properties. The taxonomy provides a classification that can be easily applied within industry and the research community and paves the way for a promising research direction that considers the defensive side of self-replicating programs.

    Macro- and microscopic analysis of the internet economy from network measurements

    The growth of the Internet impacts multiple areas of the world economy, and it has become a permanent part of the economic landscape at both the macro- and the microeconomic level. Online traffic and information are currently assets with large business value. Even though the commercial Internet has been part of our lives for more than two decades, its impact on the global and everyday economy still holds many unknowns. In this work we analyse important macro- and microeconomic aspects of the Internet. First, we investigate the characteristics of interdomain traffic, which is an important part of the macroscopic economy of the Internet. Second, we investigate the microeconomic phenomenon of price discrimination on the Internet. At the macroscopic level, we describe quantitatively the interdomain traffic matrix (ITM), as seen from the perspective of a large research network. The ITM describes the traffic flowing between autonomous systems (ASes) in the Internet. It depicts the traffic between the largest Internet business entities and therefore has an important impact on the Internet economy. In particular, we analyse the sparsity and statistical distribution of the traffic, and observe that the shape of the statistical distribution of the traffic sourced from an AS may be related to congestion within the network. We also investigate the correlations between rows in the ITM. Finally, we propose a novel method to model interdomain traffic that stems from first principles and recognizes that the traffic is a mixture of different Internet applications and can have regional artifacts. We present and evaluate a tool to generate such matrices from open and available data. Our results show that our first-principles approach is a promising alternative to the existing solutions in this area, enabling the investigation of what-if scenarios and their impact on the Internet economy. At the microscopic level, we investigate the rising phenomenon of price discrimination (PD). We find empirical evidence that Internet users can be subject to price and search discrimination. In particular, we present examples of PD on several e-commerce websites and uncover the information vectors facilitating PD. We then show that crowd-sourcing is a feasible method to help users infer whether they are subject to PD. We also build and evaluate a system that allows any Internet user to examine whether they are subject to PD. The system has been deployed and used by multiple users worldwide, and has uncovered more examples of PD. The methods presented in the following papers are backed by thorough data analysis and experiments.
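    The abstract's first-principles idea of modelling the ITM as a mixture of applications with sparse, heavy-tailed entries can be illustrated with a toy generator; the application mix, distribution choice and sparsity levels below are assumptions, not the thesis's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(7)

def synthetic_itm(n_as=200, apps=None):
    """Toy interdomain traffic matrix: a sparse mixture of per-application components."""
    if apps is None:
        # (share of AS pairs using the app, log-normal mean/std of its volume) - illustrative
        apps = {"web": (0.30, 4.0, 1.5), "video": (0.10, 6.0, 1.0), "p2p": (0.05, 5.0, 2.0)}
    itm = np.zeros((n_as, n_as))
    for density, mu, sigma in apps.values():
        mask = rng.random((n_as, n_as)) < density           # sparsity per application
        volumes = rng.lognormal(mean=mu, sigma=sigma, size=(n_as, n_as))
        itm += mask * volumes
    np.fill_diagonal(itm, 0.0)                              # no self-traffic
    return itm

m = synthetic_itm()
print(f"non-zero entries: {np.count_nonzero(m) / m.size:.1%}, "
      f"top-10 ASes source {np.sort(m.sum(axis=1))[-10:].sum() / m.sum():.1%} of traffic")
```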

    A structured approach to malware detection and analysis in digital forensics investigation

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirement for the degree of PhD. Within the World Wide Web (WWW), malware is considered one of the most serious threats to system security, with complex system issues caused by malware and spam. Networks and systems can be accessed and compromised by various types of malware, such as viruses, worms, Trojans, botnets and rootkits, which compromise systems through coordinated attacks. Malware often uses anti-forensic techniques to avoid detection and investigation. Moreover, the results of investigating such attacks are often ineffective and can create barriers to obtaining clear evidence, due to the lack of sufficient tools and the immaturity of forensic methodology. This research addressed various complexities faced by investigators in the detection and analysis of malware. In this thesis, the author identified the need for a new approach to malware detection that focuses on a robust framework, and proposed a solution based on an extensive literature review and market research analysis. The literature review focussed on the different trials and techniques in malware detection to identify the parameters for developing a solution design, while market research was carried out to understand the precise nature of the current problem. The author terms the new approach and framework the triple-tier centralised online real-time environment (tri-CORE) malware analysis (TCMA). The tiers correspond to three distinct phases of detection and analysis, dividing the research into three domains: the malware acquisition function, detection and analysis, and the database operational function. This framework design will contribute to the field of computer forensics by making the investigative process more effective and efficient. By integrating a hybrid method for malware detection, the limitations associated with both static and dynamic methods are eliminated. This helps forensic experts carry out quick investigatory processes to detect the behaviour of the malware and its related elements. The proposed framework will help to ensure system confidentiality, integrity, availability and accountability. The research also focussed on a prototype (artefact) developed to demonstrate this different approach to digital forensics and malware detection. A new toolkit was designed and implemented; it is based on a simple architecture, built from open-source software, and can help investigators develop the skills to respond critically to current cyber incidents and analyses.
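    As a rough illustration of the three-tier structure named in the abstract (acquisition, detection and analysis, database operations), a hedged sketch might chain the tiers as a simple pipeline; the function names and the hash-plus-behaviour hybrid check are hypothetical stand-ins, not the thesis's implementation:

```python
import hashlib

# Hypothetical known-bad signature set (static-analysis tier input); placeholder hash only.
KNOWN_BAD_SHA256 = {"0" * 64}

def acquire(sample_path):
    """Tier 1 - acquisition: read the suspect sample from disk."""
    with open(sample_path, "rb") as f:
        return f.read()

def detect_and_analyse(data):
    """Tier 2 - hybrid detection: static hash lookup plus a crude behavioural heuristic."""
    digest = hashlib.sha256(data).hexdigest()
    static_hit = digest in KNOWN_BAD_SHA256
    behavioural_hit = b"CreateRemoteThread" in data   # toy stand-in for dynamic analysis
    return {"sha256": digest, "malicious": static_hit or behavioural_hit}

def store(report, db):
    """Tier 3 - database operations: persist the analysis report."""
    db.append(report)
    return report

evidence_db = []
sample = b"MZ...CreateRemoteThread..."                # inline demo bytes instead of a real file
report = store(detect_and_analyse(sample), evidence_db)
print(report)
```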