
    Network-aware Active Wardens in IPv6

    Every day the world grows more and more dependent on digital communication. Technologies like e-mail or the World Wide Web, which not so long ago were considered experimental, have first become accepted and then indispensable tools of everyday life. New communication technologies built on top of the existing ones continuously race to provide newer and better functionality. Even established communication media like books, radio, or television have become digital in an effort to avoid extinction. In this torrent of digital communication a constant struggle takes place. On one hand, people, organizations, companies and countries attempt to control the ongoing communications and subject them to their policies and laws. On the other hand, there is often a need to ensure and protect the anonymity and privacy of those very same communications. Neither side in this struggle is necessarily noble or malicious. We can easily imagine that, in the presence of oppressive censorship, two parties might have a legitimate reason to communicate covertly. At the same time, the use of digital communications for business, military, and also criminal purposes gives equally compelling reasons for monitoring them thoroughly. Covert channels are communication mechanisms that were never intended nor designed to carry information. As such, they are often able to act "below" the notice of mechanisms designed to enforce security policies. Therefore, using covert channels it might be possible to establish covert communication that escapes the notice of the enforcement mechanisms in place. Any covert channel present in digital communications offers the possibility of secret, and therefore unmonitored, communication. There have been numerous studies investigating the possibilities of hiding information in digital images, audio streams, videos, etc. We turn our attention to the covert channels that exist in the digital networks themselves, that is, in the digital communication protocols. Currently, one of the most ubiquitous protocols in deployment is the Internet Protocol version 4 (IPv4). Its universal presence and range make it an ideal candidate for covert channel investigation. However, IPv4 is approaching the end of its dominance as its address space nears exhaustion. This imminent exhaustion of the IPv4 address space will soon force a mass migration towards Internet Protocol version 6 (IPv6), expressly designed as its successor. While the protocol itself is already over a decade old, its adoption is still in its infancy. The low acceptance of IPv6 results in an insufficient understanding of its security properties. We investigated the protocols forming the foundation of the next-generation Internet, Internet Protocol version 6 (IPv6) and Internet Control Message Protocol (ICMPv6), and found numerous covert channels. In order to properly assess their capabilities and performance, we built cctool, a comprehensive covert channel tool. Finally, we considered countermeasures capable of defeating the discovered covert channels. For this purpose we extended the previously existing notions of active wardens to equip them with knowledge of the surrounding network and allow them to fulfill their role more effectively.
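
The abstract does not spell out which IPv6 header fields its covert channels use, and cctool itself is not reproduced here. As a minimal, hypothetical sketch of the general technique, assuming the 20-bit Flow Label field as the carrier and Scapy for packet crafting, the sender side of such a channel could look like this:

```python
# Hypothetical sketch (not the thesis's cctool): hiding data in the IPv6
# Flow Label field. Each ordinary-looking ICMPv6 echo request carries up to
# 20 bits of the secret in its flow label.
from scapy.all import IPv6, ICMPv6EchoRequest, send

def send_via_flow_label(secret: bytes, dst: str) -> None:
    bits = int.from_bytes(secret, "big")
    nbits = len(secret) * 8
    total = ((nbits + 19) // 20) * 20          # pad to a multiple of 20 bits
    for shift in range(total - 20, -1, -20):
        chunk = (bits >> shift) & 0xFFFFF      # next 20-bit group
        send(IPv6(dst=dst, fl=chunk) / ICMPv6EchoRequest(), verbose=False)

# send_via_flow_label(b"hi", "2001:db8::1")   # documentation-prefix address
```

A warden can defeat this particular channel simply by rewriting flow labels in transit, which is the kind of protocol-level normalization a network-aware active warden could perform.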

    DNS in Computer Forensics

    The Domain Name System (DNS) is a critical core component of the global Internet and integral to the majority of corporate intranets. It provides resolution services between human-readable name-based addresses and the machine-operable Internet Protocol (IP) addresses required for creating network-level connections. Whilst structured as a globally dispersed, resilient tree data structure, from the Global and Country Code Top Level Domains (gTLD/ccTLD) down to the individual site and system leaf nodes, it remains vulnerable to various attacks, exploits and systematic failures.
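
As a small illustration of the resolution services described above, the forward (name to IP) and reverse (IP to name) lookups that a forensic workflow typically relies on can be performed with Python's standard library; the hostname and address below are documentation placeholders, not examples from the thesis:

```python
# Forward and reverse DNS resolution using only the standard library.
import socket

def forward_lookup(hostname: str) -> list[str]:
    """Resolve a hostname to its IPv4/IPv6 addresses."""
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

def reverse_lookup(address: str) -> str:
    """Resolve an IP address back to its PTR name, if one exists."""
    name, _aliases, _addresses = socket.gethostbyaddr(address)
    return name

# forward_lookup("example.com")   # addresses currently registered for the name
# reverse_lookup("192.0.2.1")     # raises socket.herror if no PTR record exists
```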

    Detection of encrypted traffic generated by peer-to-peer live streaming applications using deep packet inspection

    The number and popularity of applications using the peer-to-peer (P2P) networking paradigm have grown substantially over the last decade. They have evolved from file-sharing applications to media streaming ones. Nowadays these applications commonly encrypt the communication contents or employ protocol obfuscation techniques. In this dissertation, an investigation was conducted to identify encrypted traffic flows generated by three of the most popular P2P live streaming applications: TVUPlayer, Livestation and GoalBit. For this work, a test-bed that could simulate a near-real scenario was created, and traffic was captured from a great variety of applications. The proposed method resorts to Deep Packet Inspection (DPI), so the payload of the packets had to be analysed in order to find repeated patterns, which were later used to create a set of SNORT rules that can detect key network packets generated by these applications. After discovering that Livestation had stopped working over P2P, only the two rules created up to that point were used for it. For TVUPlayer, several rules were created from the traffic it generated, achieving a good accuracy rate. Several rules were also created for GoalBit, covering four scenarios: with and without encryption using tracker-based transmission, and with and without encryption using trackerless transmission (based on the Kademlia protocol). The method was evaluated experimentally on the test-bed created for that purpose, showing that the accuracy of the rule set for GoalBit is 97%. Fundação para a Ciência e a Tecnologia (FCT).
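
The dissertation's actual payload signatures and SNORT rules are not reproduced here. As a hedged sketch of the DPI step only, the following scans a packet capture for a made-up byte pattern and counts matches per flow (in SNORT terms this corresponds to a content match in a rule); the signature, file name and field choices are illustrative assumptions:

```python
# Hedged DPI sketch: flag flows whose payloads contain a repeated byte
# pattern. The signature below is invented for illustration.
from collections import Counter
from scapy.all import rdpcap, IP, TCP, UDP, Raw

HYPOTHETICAL_SIGNATURE = bytes.fromhex("474f414c")   # placeholder pattern

def flag_flows(pcap_path: str) -> Counter:
    hits = Counter()
    for pkt in rdpcap(pcap_path):
        if not (pkt.haslayer(IP) and pkt.haslayer(Raw)):
            continue
        if HYPOTHETICAL_SIGNATURE not in bytes(pkt[Raw].load):
            continue
        l4 = pkt[TCP] if pkt.haslayer(TCP) else pkt[UDP] if pkt.haslayer(UDP) else None
        if l4 is None:
            continue
        hits[(pkt[IP].src, l4.sport, pkt[IP].dst, l4.dport)] += 1
    return hits   # flows with at least one signature match

# flag_flows("capture.pcap")
```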

    Darknet as a Source of Cyber Threat Intelligence: Investigating Distributed and Reflection Denial of Service Attacks

    Cyberspace has become a massive battlefield between computer criminals and computer security experts. In addition, large-scale cyber attacks have matured enormously and become capable of promptly generating significant interruptions and damage to Internet resources and infrastructure. Denial of Service (DoS) attacks are perhaps the most prominent and severe types of such large-scale cyber attacks. Furthermore, the existence of widely available encryption and anonymity techniques greatly increases the difficulty of the surveillance and investigation of cyber attacks. In this context, the availability of relevant cyber monitoring is of paramount importance. An effective approach to gathering DoS cyber intelligence is to collect and analyze traffic destined to allocated, routable, yet unused Internet address space known as darknet. In this thesis, we leverage big darknet data to generate insights on various DoS events, namely, Distributed DoS (DDoS) and Distributed Reflection DoS (DRDoS) activities. First, we present a comprehensive survey of darknet. We primarily define and characterize darknet and indicate its alternative names. We further list other trap-based monitoring systems and compare them to darknet. In addition, we provide a taxonomy of darknet technologies and identify research gaps related to three main darknet categories: deployment, traffic analysis, and visualization. Second, we characterize darknet data. Such information could generate indicators of cyber threat activity as well as provide an in-depth understanding of the nature of its traffic. In particular, we analyze the distribution of darknet packets, the transport, network and application layer protocols they use, and pinpoint the resolved domain names. Furthermore, we identify the IP classes and destination ports involved and geolocate the source countries. We further investigate darknet-triggered threats. The aim is to explore darknet-inferred threats and categorize their severities. Finally, we contribute by exploring the inter-correlation of such threats, applying association rule mining techniques to build threat association rules. Specifically, we generate clusters of threats that co-occur targeting a specific victim. Third, we propose a DDoS inference and forecasting model that aims at providing insights to organizations, security operators and emergency response teams during and after a DDoS attack. Specifically, this work strives to predict, within minutes, the attacks' features, namely, intensity/rate (packets/sec) and size (estimated number of compromised machines/bots). The goal is to understand the future short-term trend of the ongoing DDoS attacks in terms of those features and thus provide the capability to recognize the current as well as future similar situations and hence appropriately respond to the threat. Further, our work aims at investigating DDoS campaigns by proposing a clustering approach to infer the various victims targeted by the same campaign and predicting related features. To achieve our goal, the proposed approach leverages a number of time series and fluctuation analysis techniques, statistical methods and forecasting approaches. Fourth, we propose a novel approach to infer and characterize Internet-scale DRDoS attacks by leveraging the darknet space. Complementary to the pioneering work on inferring DDoS activities using darknet, this work shows that we can extract DoS activities without relying on backscatter analysis. The aim of this work is to extract cyber security intelligence related to DRDoS activities, such as intensity, rate and geographic location, in addition to various network-layer and flow-based insights. To achieve this task, the proposed approach exploits certain DDoS parameters to detect the attacks, and uses the expectation maximization and k-means clustering techniques in an attempt to identify campaigns of DRDoS attacks. Finally, we conclude this work by providing some discussions and pinpointing future work.
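
The thesis's exact features and parameters are not given in this abstract. As a hedged sketch of the clustering step alone, assuming invented per-event features (packet rate, number of distinct sources, duration), grouping inferred DoS events into candidate campaigns with k-means might look like this:

```python
# Hedged sketch: cluster inferred DoS events by simple flow features using
# k-means. Feature choice and values are invented for illustration; the
# thesis combines this with expectation maximization and time-series models.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [packets_per_second, distinct_sources, duration_seconds]
events = np.array([
    [12_000.0, 850.0,  300.0],
    [11_500.0, 900.0,  280.0],
    [   400.0,  20.0, 3600.0],
    [   350.0,  15.0, 3400.0],
])

# Standardize features so no single dimension dominates the distance metric.
scaled = (events - events.mean(axis=0)) / events.std(axis=0)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)   # e.g. [0 0 1 1]: two candidate campaigns
```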

    Cross-VM network attacks & their countermeasures within cloud computing environments

    Cloud computing is a contemporary model in which computing resources are dynamically scaled up and down for customers, hosted within large-scale multi-tenant systems. These resources are delivered to customers as improved, cost-effective services available upon request. As one of the main trends of the IT industry in modern times, cloud computing has gained momentum and started to transform the way enterprises build and offer IT solutions. The primary motivation for using the cloud computing model is cost-effectiveness. This motivation can compel Information and Communication Technologies (ICT) organizations to shift their sensitive data and critical infrastructure onto cloud environments. Because of the complex nature of the underlying cloud infrastructure, cloud environments face a large number of challenges, such as misconfigurations, cyber-attacks, rootkits and malware instances, which manifest themselves as serious threats. These threats noticeably reduce the general trustworthiness, reliability and accessibility of the cloud. Security is the primary concern of a cloud service model. However, a number of significant challenges have revealed that cloud environments are not as secure as one would expect. There is also a limited understanding regarding the offering of secure services in a cloud model that can counter such challenges, which highlights the importance of understanding what constitutes a threat in the cloud model. One of the main threats follows from this cost-effectiveness: cloud providers normally reduce cost by sharing infrastructure between multiple untrusted VMs. This sharing has also led to several problems, including co-location attacks. Cloud providers mitigate co-location attacks by introducing the concept of isolation. Due to this, a guest VM cannot interfere with its host machine or with other guest VMs running on the same system. Such isolation is one of the prime foundations of cloud security for major public providers. However, such logical boundaries are not impenetrable. A myriad of previous studies have demonstrated how co-resident VMs could be vulnerable to attacks through shared file systems, cache side-channels, or through compromise of the hypervisor layer using rootkits. Thus, the threat of cross-VM attacks is still present, because an attacker can use one VM to control or access other VMs on the same hypervisor, and multiple methods have been devised for strategic VM placement in order to exploit co-residency. Despite the clear potential of co-location attacks for abusing shared memory and disk, fine-grained cross-VM network-channel attacks have not yet been demonstrated. Current network-based attacks exploit existing vulnerabilities in networking technologies, such as ARP spoofing and DNS poisoning, which are difficult to use for VM-targeted attacks. The most commonly discussed network-based challenges focus on the fact that cloud providers place more layers of isolation between co-resident VMs than in non-virtualized settings, because the attacker and victim are often assigned to separate segments of virtual networks. However, it has been demonstrated that this is not necessarily sufficient to prevent manipulation of a victim VM's traffic. This thesis presents a comprehensive method and empirical analysis of the advancement of co-location attacks in which a malicious VM can negatively affect the security and privacy of other co-located VMs as it breaches the security perimeter of the cloud model. In such a scenario, it is imperative for a cloud provider to be able to appropriately secure access to data such that it reaches the appropriate destination. The primary contribution of the work presented in this thesis is to introduce two innovative attack models in leading cloud models, impersonation and privilege escalation, that successfully breach the security perimeter of cloud models, and to propose countermeasures that block such attacks. The attack model revealed in this thesis is a combination of impersonation and mirroring. This experimental setting exploits the network channel of the cloud model and successfully redirects the network traffic of other co-located VMs. The main contribution of this attack model is to find a gap in contemporary cloud network architecture that an attacker can exploit. Prior research has also exploited the network channel using ARP poisoning and spoofing, but all such attack schemes have been countered, as modern cloud providers place more layers of security features than in preceding settings. Impersonation relies on already existing, regular network devices in order to mislead the security perimeter of the cloud model. The other contribution of this thesis is a 'privilege escalation' attack in which a non-root user can escalate their privilege level by using a return-oriented programming (ROP) technique on the network channel and take control of the management domain, through which the attacker can control other co-located VMs that they are not authorized to access. Finally, a countermeasure has been proposed, implemented by directly modifying the open-source code of the cloud model, that can inhibit all such attacks.
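
The countermeasure in the thesis is implemented by modifying the cloud platform's own source code and is not reproduced here. As a much simpler, hypothetical illustration of the underlying idea, spotting the conflicting IP-to-MAC bindings that an impersonation or traffic-mirroring attack introduces on the virtual network, a small ARP monitor on the host or virtual bridge could look like this:

```python
# Hypothetical illustration only (not the thesis's countermeasure): flag
# conflicting IP-to-MAC bindings observed in ARP replies on VM traffic.
from scapy.all import ARP, sniff

bindings: dict[str, str] = {}   # ip -> first MAC address seen claiming it

def inspect(pkt) -> None:
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:     # ARP reply ("is-at")
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        first = bindings.setdefault(ip, mac)
        if first != mac:
            print(f"ALERT: {ip} now claimed by {mac}, previously {first}")

# Requires privileges to sniff; run on the interface carrying inter-VM traffic.
# sniff(filter="arp", prn=inspect, store=False)
```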

    Formal Modelling of Intrusion Detection Systems

    The cybersecurity ecosystem continuously evolves in the number, the diversity, and the complexity of cyber attacks; as a result, detection tools become ineffective against certain attacks. Generally, there are three types of Intrusion Detection Systems (IDS): anomaly-based detection, signature-based detection, and hybrid detection. Anomaly-based detection is founded on a characterization of the usual behavior of the system, typically in a statistical manner. It enables detecting known or unknown attacks, but also generates a large number of false positives. Signature-based detection enables detecting known attacks by defining rules that describe known attacker behavior; it requires a good knowledge of attacker behavior. Hybrid detection relies on several detection methods, including the previous ones, and has the advantage of being more precise during detection. Tools like Snort and Zeek offer low-level languages to express rules for recognizing attacks. The number of potential attacks being very large, these rule bases quickly become difficult to manage and maintain. Moreover, expressing stateful rules to recognize a sequence of events is particularly arduous. In this thesis, we propose a stateful approach based on algebraic state-transition diagrams (ASTDs) to identify complex attacks. ASTDs allow a graphical and modular representation of a specification, which facilitates the maintenance and understanding of rules. We extend the ASTD notation with new features to represent complex attacks. Next, we specify several attacks with the extended notation and run the resulting specifications on event streams using an interpreter to identify attacks. We also evaluate the performance of the interpreter against industrial tools such as Snort and Zeek. Then, we build a compiler to generate executable code from an ASTD specification, capable of efficiently identifying sequences of events.
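
ASTD syntax itself is not shown in this abstract. As a plain-Python sketch of the kind of stateful rule that is awkward to express in low-level signature languages, the following tracks, per source, a sequence of failed logins followed by a success (event field names and the threshold are invented for illustration):

```python
# Hedged sketch (not ASTD notation): a stateful rule over an event stream.
from collections import defaultdict

THRESHOLD = 3                      # failures that must precede a success
failed = defaultdict(int)          # source -> consecutive failure count

def on_event(event: dict) -> None:
    src, kind = event["src"], event["kind"]
    if kind == "login_failure":
        failed[src] += 1
    elif kind == "login_success":
        if failed[src] >= THRESHOLD:
            print(f"ALERT: possible brute force followed by success from {src}")
        failed[src] = 0

stream = [
    {"src": "10.0.0.5", "kind": "login_failure"},
    {"src": "10.0.0.5", "kind": "login_failure"},
    {"src": "10.0.0.5", "kind": "login_failure"},
    {"src": "10.0.0.5", "kind": "login_success"},
]
for e in stream:
    on_event(e)
```

An ASTD specification expresses the same sequencing graphically and modularly, with quantification over sources, rather than as ad hoc dictionaries.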

    A framework for malicious host fingerprinting using distributed network sensors

    Numerous software agents exist and are responsible for the increasing volumes of malicious traffic observed on the Internet today. From a technical perspective, the existing techniques for monitoring malicious agents and traffic were not developed to allow for the interrogation of the source of malicious traffic. This interrogation or reconnaissance would be considered active analysis, as opposed to the existing, mostly passive analysis. Unlike passive analysis, the active techniques are time-sensitive and their results become increasingly inaccurate as the time delta between observation and interrogation increases. In addition to this, some studies have shown that the geographic separation of hosts on the Internet has resulted in pockets of different malicious agents and traffic targeting victims. As such, it is important to perform any kind of data collection from various sources and across distributed IP address space. The data gathering and exposure capabilities of sensors such as honeypots and network telescopes were extended through the development of near-realtime Distributed Sensor Network modules that allowed for the near-realtime analysis of malicious traffic from distributed, heterogeneous monitoring sensors. In order to utilise the data exposed by the near-realtime Distributed Sensor Network modules, an Automated Reconnaissance Framework (AR-Framework) was created; this framework was tasked with active and passive information collection and analysis of data in near-realtime, and was designed from an adapted Multi Sensor Data Fusion model. The hypothesis was made that if sufficiently different characteristics of a host could be identified, combined they could act as a unique fingerprint for that host, potentially allowing for the re-identification of that host even if its IP address had changed. To this end the concept of Latency Based Multilateration was introduced, acting as an additional metric for remote host fingerprinting. The vast amount of information gathered by the AR-Framework required the development of visualisation tools which could illustrate this data in near-realtime and also provide various degrees of interaction to accommodate human interpretation of such data. Ultimately the data collected through the application of the near-realtime Distributed Sensor Network and AR-Framework provided a unique perspective of the malicious host demographic, allowing for new correlations to be drawn between attributes such as common open ports and operating systems, location, and the inferred intent of these malicious hosts. The results expand our current understanding of malicious hosts on the Internet and enable further research in the area.
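
The precise metrics behind Latency Based Multilateration are not detailed in this abstract. As a hedged sketch of the general idea, a fingerprint can be formed from round-trip times to a host as measured by several distributed sensors, and two observations compared by the distance between their latency vectors (sensor names, RTT values and the tolerance below are invented for illustration):

```python
# Hedged sketch of a latency-based fingerprint comparison.
import math

def latency_fingerprint(rtts_ms: dict[str, float]) -> list[float]:
    """Order RTT measurements by sensor name to get a comparable vector."""
    return [rtts_ms[sensor] for sensor in sorted(rtts_ms)]

def likely_same_host(fp_a: list[float], fp_b: list[float],
                     tolerance_ms: float = 15.0) -> bool:
    """Euclidean distance between latency vectors, within a tolerance."""
    return math.dist(fp_a, fp_b) <= tolerance_ms

observed_then = latency_fingerprint({"sensor-za": 182.0, "sensor-eu": 41.0, "sensor-us": 120.0})
observed_now  = latency_fingerprint({"sensor-za": 179.5, "sensor-eu": 43.2, "sensor-us": 118.7})
print(likely_same_host(observed_then, observed_now))   # True under this tolerance
```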