
    Obfuscation of Malicious Behaviors for Thwarting Masquerade Detection Systems Based on Locality Features

    In recent years, dynamic user verification has become one of the basic pillars of insider threat detection. Among these threats, the research presented in this paper focuses on masquerader attacks, a category of insider attack conducted intentionally by persons outside the organization who have somehow managed to impersonate legitimate users. Consequently, masqueraders are assumed to be unfamiliar with the protected environment of the targeted organization, so they are expected to move through the compromised systems more erratically than legitimate users. This trait makes them susceptible to discovery by dynamic user verification methods based on user profiling and anomaly-based intrusion detection. However, these approaches can be evaded by imitating the normal, legitimate usage of the protected system (mimicry), a tactic widely exploited by intruders. To contribute to the understanding of such tactics, and to anticipate their evolution, this research studies mimicry from the standpoint of an uncharted terrain: masquerade detection based on analyzing locality traits. To this end, the problem is stated in detail, and a pair of novel obfuscation methods are introduced: locality-based mimicry by action pruning and locality-based mimicry by noise generation. Their modus operandi, effectiveness, and impact are evaluated against a collection of well-known classifiers typically implemented for masquerade detection. The simplicity and effectiveness demonstrated suggest that they constitute attack vectors that should be taken into consideration when hardening real organizations.
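
    As a rough illustration of the locality intuition these detectors exploit (the function names, window size, and threshold below are illustrative assumptions, not the paper's implementation): legitimate users tend to revisit a small working set of resources, masqueraders exploring an unfamiliar system do not, and a noise-generating attacker can artificially restore that locality.

```python
from collections import deque

def locality_score(actions, window=8):
    """Fraction of actions that revisit the recent working set.

    Legitimate users tend to revisit a small set of resources (high
    locality); masqueraders moving erratically score lower. The window
    size is an illustrative choice, not a value from the paper.
    """
    recent = deque(maxlen=window)
    hits = 0
    for action in actions:
        if action in recent:
            hits += 1
        recent.append(action)
    return hits / len(actions) if actions else 0.0

def looks_like_masquerader(actions, threshold=0.35):
    # Threshold is illustrative; a real detector would calibrate it.
    return locality_score(actions) < threshold

# Locality-based mimicry by noise generation (hypothetical attacker side):
# interleave repeated, benign-looking cover actions so the malicious
# trace regains the locality profile of a legitimate session.
def add_locality_noise(malicious_actions, cover_action="read notes.txt", ratio=2):
    noisy = []
    for action in malicious_actions:
        noisy.append(action)
        noisy.extend([cover_action] * ratio)
    return noisy
```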

    A Covert Data Transport Protocol

    Both enterprise and national firewalls filter network connections. For data forensics and botnet removal applications, it is important to establish the information source. In this paper, we describe a data transport layer that allows a client to transfer encrypted data while providing no discernible information regarding the data source. We use a domain generation algorithm (DGA) to encode AES-encrypted data into domain names that current tools are unable to reliably differentiate from valid domain names. The domain names are registered using (free) dynamic DNS services. The data transmission format is not vulnerable to Deep Packet Inspection (DPI). Comment: 8 pages, 10 figures, conference
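
    A minimal sketch of the encoding idea, not the paper's protocol: here the `cryptography` package's Fernet construction (AES-based) stands in for the AES layer, and the dynamic-DNS suffix is hypothetical.

```python
import base64
from cryptography.fernet import Fernet  # AES-128-CBC + HMAC under the hood

MAX_LABEL = 63                    # DNS label length limit (RFC 1035)
SUFFIX = ".example-dyndns.net"    # hypothetical free dynamic-DNS zone

def encode_to_domains(plaintext: bytes, key: bytes) -> list[str]:
    """Encrypt data and pack it into DNS-safe labels.

    Base32 keeps labels in [a-z2-7], which passes simple character-class
    checks; a real DGA-style encoder would also shape label statistics
    to mimic benign domain names.
    """
    blob = Fernet(key).encrypt(plaintext)
    b32 = base64.b32encode(blob).decode().rstrip("=").lower()
    chunks = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    # A separate sequence label ("s0", "s1", ...) lets the receiver
    # reassemble the chunks even if registrations arrive out of order.
    return [f"s{i}.{chunk}{SUFFIX}" for i, chunk in enumerate(chunks)]

key = Fernet.generate_key()
for name in encode_to_domains(b"exfiltrated report contents", key):
    print(name)
```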

    SoK: Making Sense of Censorship Resistance Systems

    An increasing number of countries implement Internet censorship at different scales and for a variety of reasons. Several censorship resistance systems (CRSs) have emerged to help bypass such blocks. The diversity of the censor's attack landscape has led to an arms race and, in turn, to a dramatic pace of evolution in CRSs. The inherent complexity of CRSs and the breadth of work in this area make it hard to contextualize the censor's capabilities and censorship resistance strategies. To address these challenges, we conducted a comprehensive survey of CRSs, covering both deployed tools and those discussed in the academic literature, to systematize censorship resistance systems by their threat model and corresponding defenses. To this end, we first sketch a comprehensive attack model to set out the censor's capabilities, coupled with a discussion of the scope of censorship and the dynamics that influence the censor's decisions. Next, we present an evaluation framework to systematize censorship resistance systems by their security, privacy, performance, and deployability properties, and show how these systems map to the attack model. We do this for each of the functional phases that we identify for censorship resistance systems: communication establishment, which involves distribution and retrieval of the information necessary for a client to join the censorship resistance system; and conversation, where the actual exchange of information takes place. Our evaluation leads us to identify gaps in the literature, question the assumptions at play, and explore possible mitigations.

    Network-modeling-based intrusion detection resistant to evasion by imitation techniques

    Emerging network systems have brought new threats whose modes of operation have grown more sophisticated in order to go unnoticed by security systems, which has motivated the development of more effective intrusion detection systems capable of recognizing anomalous behavior. Despite the effectiveness of these systems, research in this field identifies their constant adaptation to changes in the operating environment as the main challenge to face. This adaptation involves greater analytical difficulty, particularly when dealing with evasion threats based on imitation methods. These threats try to hide malicious actions under a statistical pattern that simulates normal use of the network, thereby acquiring a greater probability of evading defensive systems. To contribute to their mitigation, this article presents an imitation-resistant intrusion detection strategy built on PAYL sensors. The proposal builds models of network usage and, from them, analyzes the binary contents of the payload in search of atypical patterns that may reveal malicious content. Unlike previous proposals, this research goes beyond traditional hardening through randomization, exploiting the similarity of suspicious packets to previously constructed legitimate and evasion models. Its effectiveness was evaluated on the DARPA'99 and UCM 2011 traffic samples, where it proved effective at recognizing imitation-based evasion attacks.
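
    A minimal sketch of a PAYL-style 1-gram byte-frequency model and the dual-model comparison described above; the smoothing constant and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def byte_profile(payloads):
    """Mean and std of per-byte relative frequencies (PAYL's 1-gram model)."""
    freqs = np.array([np.bincount(np.frombuffer(p, dtype=np.uint8),
                                  minlength=256) / max(len(p), 1)
                      for p in payloads])
    return freqs.mean(axis=0), freqs.std(axis=0)

def payl_distance(payload, mean, std, smooth=1e-3):
    """Simplified Mahalanobis distance used by PAYL-like sensors."""
    f = np.bincount(np.frombuffer(payload, dtype=np.uint8),
                    minlength=256) / max(len(payload), 1)
    return np.sum(np.abs(f - mean) / (std + smooth))

# Dual-model check in the spirit of the paper: a payload that sits
# closer to a previously built evasion model than to the legitimate
# model is flagged even if it would pass a plain anomaly threshold.
def classify(payload, legit_model, evasion_model):
    d_legit = payl_distance(payload, *legit_model)
    d_evade = payl_distance(payload, *evasion_model)
    return "suspicious" if d_evade < d_legit else "benign"
```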

    Design requirements for generating deceptive content to protect document repositories

    For nearly 30 years, fake digital documents have been used to identify external intruders and malicious insider threats. Unfortunately, while fake files hold potential to assist in data theft detection, there is little evidence of their application outside of niche organisations and academic institutions. The barrier to wider adoption appears to be the difficulty of constructing deceptive content. The current generation of solutions principally: (1) use unrealistic random data; (2) output heavily formatted or specialised content that is difficult to apply to other environments; (3) require users to manually build the content, which is not scalable; or (4) employ an existing production file, which creates a protection paradox. This paper introduces a set of requirements for generating automated fake file content: (1) enticing; (2) realistic; (3) minimise disruption; (4) adaptive; (5) scalable protective coverage; (6) minimise sensitive artefacts and copyright infringement; and (7) contain no distinguishable characteristics. These requirements have been drawn from literature on natural science, magical performances, human deceit, military operations, intrusion detection, and previous fake file solutions. They guide the design of an automated fake file content construction system, providing an opportunity for the next generation of solutions to find greater commercial application and widespread adoption.
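
    A minimal sketch of how a generator might address a few of these requirements; all templates, topics, and paths are invented for illustration, and a real solution would pair this with access monitoring (audit logs or canary alerts).

```python
import random
from pathlib import Path

# Template fragments are invented on purpose: requirement (6) rules out
# reusing real production content, and (1)/(2) ask for enticing but
# plausible text rather than random bytes.
TOPICS = ["Q3 budget forecast", "acquisition shortlist", "salary bands"]
TEMPLATE = ("CONFIDENTIAL DRAFT\n\nSubject: {topic}\n\n"
            "Summary figures are pending review; see appendix for detail.\n")

def plant_decoys(root, count=10, seed=None):
    """Write decoy documents across a directory tree (requirement 5:
    scalable protective coverage). Names and layout are hypothetical."""
    rng = random.Random(seed)
    planted = []
    for i in range(count):
        topic = rng.choice(TOPICS)
        path = Path(root) / f"drafts/{i:02d}_{topic.split()[0].lower()}.txt"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(TEMPLATE.format(topic=topic))
        planted.append(path)
    return planted
```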

    A systematic literature review on insider threats

    Insider threats are among the most concerning cybersecurity problems, and one poorly addressed by widely used security solutions. Although there have been several scientific publications in this area, our classification and structural taxonomy proposals aim to provide more complete information about insider threats and the defense measures used to counter them. Adopting the grounded theory method for a thorough literature evaluation, our categorization's goal is to organize knowledge in insider threat research. Along with an analysis of major recent studies on detecting insider threats, the main goal of the study is to develop a classification of current types of insiders, their levels of access, their motivations, insider profiling, security properties, and the methods they use to attack. This includes the use of machine learning algorithms, behavior analysis, and methods of detection and evaluation. Moreover, actual incidents related to insider attacks are also analyzed.

    PTPerf: On the performance evaluation of Tor Pluggable Transports

    Tor, one of the most popular censorship circumvention systems, faces regular blocking attempts by censors. To facilitate access, it therefore relies on "pluggable transports" (PTs) that disguise Tor's traffic and make it hard for the adversary to block Tor. However, PTs have not yet been well studied and compared in terms of the performance they provide to users. Thus, we conduct the first comparative performance evaluation of a total of 12 PTs: the ones currently supported by the Tor project and those that could be integrated in the future. Our results reveal multiple facets of the PT ecosystem. (1) PTs' download times vary significantly even under similar network conditions. (2) Not all PTs are equally reliable; clients who regularly suffer censorship may therefore falsely believe that such PTs are blocked. (3) PT performance depends on the underlying communication primitive. (4) PT performance depends significantly on the website access method (browser or command line). Surprisingly, for some PTs, website access time was even lower than with vanilla Tor. Based on our findings from more than 1.25M measurements, we provide recommendations for selecting PTs and believe that our study can facilitate access for users who face censorship. Comment: 25 pages, 12 figures
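
    A minimal sketch of how such a download-time measurement could look from the client side, assuming a PT-enabled Tor client already exposes a local SOCKS port; the port, URL, and run count are illustrative, and this is not the PTPerf tooling itself.

```python
import statistics
import time

import requests  # pip install requests[socks] for SOCKS proxy support

# Assumes a Tor client configured with the PT under test, exposing a
# local SOCKS port (9050 here is illustrative; PT setups often differ).
PROXIES = {"http": "socks5h://127.0.0.1:9050",
           "https": "socks5h://127.0.0.1:9050"}

def download_times(url, runs=5, timeout=60):
    """Repeatedly fetch a URL through the PT and record elapsed time.

    Failures are counted separately, since the paper notes that
    unreliable PTs can look "blocked" to censored users.
    """
    times, failures = [], 0
    for _ in range(runs):
        start = time.monotonic()
        try:
            r = requests.get(url, proxies=PROXIES, timeout=timeout)
            r.raise_for_status()
            times.append(time.monotonic() - start)
        except requests.RequestException:
            failures += 1
    return times, failures

times, failures = download_times("https://example.com/")
if times:
    print(f"median {statistics.median(times):.2f}s, "
          f"failures {failures}/{failures + len(times)}")
```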