Obfuscation of Malicious Behaviors for Thwarting Masquerade Detection Systems Based on Locality Features
In recent years, dynamic user verification has become one of the basic pillars of insider threat detection. Among these threats, the research presented in this paper focuses on masquerader attacks, a category of insider characterized by being intentionally conducted by persons outside the organization who somehow managed to impersonate legitimate users. Consequently, it is assumed that masqueraders are unaware of the protected environment within the targeted organization, so they are expected to move in a more erratic manner than legitimate users across the compromised systems. This trait makes them susceptible to discovery by dynamic user verification methods based on user profiling and anomaly-based intrusion detection. However, these approaches can be evaded by imitating the normal legitimate usage of the protected system (mimicry), a tactic widely exploited by intruders. In order to contribute to their understanding, as well as to anticipate their evolution, the conducted research studies mimicry from the standpoint of an uncharted terrain: masquerade detection based on analyzing locality traits. With this purpose, the problem is stated in detail, and a pair of novel obfuscation methods are introduced: locality-based mimicry by action pruning and locality-based mimicry by noise generation. Their modus operandi, effectiveness, and impact are evaluated with a collection of well-known classifiers typically implemented for masquerade detection. The simplicity and effectiveness demonstrated suggest that they constitute attack vectors that should be taken into consideration for the proper hardening of real organizations.
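As a rough illustration of locality-based mimicry by action pruning, the sketch below drops attack actions that fall outside a profiled victim working set, so the surviving trace stays within the victim's usual locality. The session representation, profile construction, and all names are assumptions for illustration, not the paper's actual method.

```python
from collections import Counter

def locality_profile(sessions, top_k=5):
    """Build a victim locality profile: the top_k most-visited resources."""
    counts = Counter(res for session in sessions for _, res in session)
    return {res for res, _ in counts.most_common(top_k)}

def prune_by_locality(attack_session, profile):
    """Locality-based mimicry by action pruning: discard attack actions
    touching resources outside the victim's profiled working set."""
    return [(act, res) for act, res in attack_session if res in profile]

victim = [[("open", "/home/a/notes.txt"), ("open", "/home/a/report.doc")],
          [("edit", "/home/a/notes.txt")]]
profile = locality_profile(victim)
attack = [("read", "/etc/passwd"), ("open", "/home/a/notes.txt")]
# Only the in-profile action survives the pruning step
assert prune_by_locality(attack, profile) == [("open", "/home/a/notes.txt")]
```

Pruning trades attack completeness for stealth: actions outside the locality profile are simply never executed.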
A Covert Data Transport Protocol
Both enterprise and national firewalls filter network connections. For data
forensics and botnet removal applications, it is important to establish the
information source. In this paper, we describe a data transport layer which
allows a client to transfer encrypted data that provides no discernible
information regarding the data source. We use a domain generation algorithm
(DGA) to encode AES encrypted data into domain names that current tools are
unable to reliably differentiate from valid domain names. The domain names are
registered using (free) dynamic DNS services. The data transmission format is
not vulnerable to Deep Packet Inspection (DPI).
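The encoding step can be sketched as follows. This is a simplified illustration that assumes base32 over the ciphertext bytes split into DNS-safe labels; the paper's DGA additionally shapes the output so it is hard to distinguish from valid domain names, which this sketch does not attempt. All names here are hypothetical.

```python
import base64

MAX_LABEL = 63  # DNS limit on a single label's length

def encode_to_domain(ciphertext: bytes, tld: str = "example.com") -> str:
    """Encode opaque (e.g. AES-encrypted) bytes into a DNS-safe domain name
    using base32, split into labels of at most 63 characters."""
    b32 = base64.b32encode(ciphertext).decode().rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join(labels + [tld])

def decode_from_domain(domain: str, tld: str = "example.com") -> bytes:
    """Recover the original bytes from the encoded domain."""
    data = domain[: -len(tld) - 1].replace(".", "").upper()
    data += "=" * (-len(data) % 8)  # restore base32 padding
    return base64.b32decode(data)

secret = b"covert payload"
assert decode_from_domain(encode_to_domain(secret)) == secret
```

Because the data rides in the queried name itself, the channel needs no payload bytes at all, which is what keeps it out of reach of DPI on packet contents.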
SoK: Making Sense of Censorship Resistance Systems
An increasing number of countries implement Internet censorship at different scales and for a variety of reasons. Several censorship resistance systems (CRSs) have emerged to help bypass such blocks. The diversity of the censor’s attack landscape has led to an arms race, leading to a dramatic speed of evolution of CRSs. The inherent complexity of CRSs and the breadth of work in this area make it hard to contextualize the censor’s capabilities and censorship resistance strategies. To address these challenges, we conducted a comprehensive survey of CRSs, covering deployed tools as well as those discussed in the academic literature, to systematize censorship resistance systems by their threat model and corresponding defenses. To this end, we first sketch a comprehensive attack model to set out the censor’s capabilities, coupled with discussion on the scope of censorship and the dynamics that influence the censor’s decision. Next, we present an evaluation framework to systematize censorship resistance systems by their security, privacy, performance, and deployability properties, and show how these systems map to the attack model. We do this for each of the functional phases that we identify for censorship resistance systems: communication establishment, which involves distribution and retrieval of information necessary for a client to join the censorship resistance system; and conversation, where the actual exchange of information takes place. Our evaluation leads us to identify gaps in the literature, question the assumptions at play, and explore possible mitigations.
Towards Effective Masquerade Attack Detection
Data theft has been the main goal of the cybercrime community for many years, increasingly so as the community, motivated by financial gain, has established a thriving underground economy. Masquerade attacks are a common security problem that is a consequence of identity theft and is generally motivated by data theft. Such attacks are characterized by a system user illegitimately posing as another legitimate user. Prevention-focused solutions such as access control and Data Loss Prevention tools have failed to prevent these attacks, making detection not a mere desideratum, but a necessity. Detecting masqueraders, however, is very hard. Prior work has focused on user command modeling to identify abnormal behavior indicative of impersonation. These approaches suffered from high miss and false positive rates, and none could be packaged into an easily deployable, privacy-preserving, and effective masquerade attack detector. In this thesis, I present a machine learning-based technique using a set of novel features that aim to reveal user intent. I hypothesize that each individual user knows his or her own file system well enough to search in a limited, targeted, and unique fashion in order to find information germane to the current task. Masqueraders, on the other hand, are not likely to know the file system and layout of another user's desktop, and would likely search more extensively and broadly in a manner different from that of the victim user being impersonated. Based on this assumption, I model a user's search behavior and monitor deviations from it that could indicate fraudulent behavior. I identify user search events using a taxonomy of Windows applications, DLLs, and user commands. The taxonomy abstracts user commands and actions and enriches them with contextual information.
Experimental results show that modeling search behavior reliably detects all simulated masquerade activity with a very low false positive rate of 1.12%, far better than any previously published results. The limited set of features used for search behavior modeling also results in considerable performance gains over the same modeling techniques applied to larger feature sets, both during sensor training and deployment. While an anomaly- or profiling-based detection approach, such as the one used in the user search profiling sensor, has the advantage of detecting unknown attacks and fraudulent masquerade behaviors, it suffers from a relatively high number of false positives and remains potentially vulnerable to mimicry attacks. To further improve the accuracy of the user search profiling approach, I supplement it with a trap-based detection approach. I monitor user actions directed at decoy documents embedded in the user's local file system. The decoy documents, which contain information enticing to the attacker, are known to the legitimate user of the system, and therefore should not be touched by him or her. Access to these decoy files, therefore, strongly suggests the presence of a masquerader. A decoy document access sensor detects any action that requires loading the decoy document into memory, such as reading the document, copying it, or zipping it. I conducted human subject studies to investigate the deployment-related properties of decoy documents and to determine how decoys should be strategically deployed in a file system in order to maximize their masquerade detection ability. The user study results show that effective deployment of decoys allows for the detection of all masquerade activity within at most ten minutes of its onset. I use the decoy access sensor as an oracle for the user search profiling sensor.
If abnormal search behavior is detected, I hypothesize that suspicious activity is taking place and validate the hypothesis by checking for accesses to decoy documents. Combining the two sensors and detection techniques reduces the false positive rate to 0.77% and hardens the sensor against mimicry attacks. The overall sensor has very limited resource requirements (40 KB) and does not introduce any noticeable delay to the user when performing its monitoring actions. Finally, I seek to expand the search behavior profiling technique to detect not only malicious masqueraders, but any other system user. I propose a diversified and personalized user behavior profiling approach to improve the accuracy of user behavior models. The ultimate goal is to augment existing computer security features such as passwords with user behavior models, as behavior information is not readily available to be stolen, and its use could substantially raise the bar for malefactors seeking to perpetrate masquerade attacks.
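The two-stage combination described above can be sketched as follows, assuming a simple z-score on a search-volume feature and a boolean decoy-access signal. The features, threshold, and function names are illustrative assumptions, not the thesis's actual sensor design.

```python
def is_masquerader(search_rate, baseline_mean, baseline_std,
                   decoy_accessed, threshold=3.0):
    """Two-stage detection sketch: flag abnormal search volume first, then
    use decoy-document access as an oracle to confirm the hypothesis."""
    z = abs(search_rate - baseline_mean) / baseline_std  # search anomaly score
    if z < threshold:
        return False          # search behavior looks normal; no alert
    return decoy_accessed     # confirm only if a decoy file was touched

# Legitimate user: normal search rate, no decoy touched
assert is_masquerader(12, 10, 2, decoy_accessed=False) is False
# Masquerader: wide, extensive search plus decoy access
assert is_masquerader(40, 10, 2, decoy_accessed=True) is True
```

Requiring both signals is what drives the false positive rate down: a legitimate user who merely searches unusually hard will not trip the decoy oracle.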
Detección de intrusiones basada en modelado de red resistente a evasión por técnicas de imitación (Intrusion detection based on network modeling resistant to evasion by mimicry techniques)
Emerging network systems have brought new threats whose modes of operation have grown more sophisticated in order to go unnoticed by security systems, which has motivated the development of more effective intrusion detection systems capable of recognizing anomalous behaviors. Despite the effectiveness of these systems, research in this field identifies their constant adaptation to changes in the operating environment as the main challenge to face. This adaptation involves greater analytical difficulty, particularly when dealing with evasion threats based on imitation (mimicry) methods. These threats try to hide malicious actions under a statistical pattern that simulates normal network usage, thereby acquiring a greater probability of evading defensive systems. In order to contribute to their mitigation, this article presents a mimicry-resistant intrusion detection strategy built on PAYL sensors. The proposal is based on building network usage models and, from them, analyzing the binary contents of the payload in search of atypical patterns that may reveal malicious content. Unlike previous proposals, this research goes beyond the traditional hardening through randomization by taking advantage of the similarity of suspicious packets to previously constructed legitimate and evasion models. Its effectiveness was evaluated on the DARPA'99 and UCM 2011 traffic samples, where it proved effective at recognizing mimicry evasion attacks.
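The PAYL-style payload analysis described above can be illustrated with a minimal sketch: a 1-gram model of byte frequencies learned from normal payloads, with an L1 distance standing in for the simplified Mahalanobis distance used by the original PAYL sensor. The training payloads and the distance choice here are illustrative assumptions.

```python
def byte_profile(payloads):
    """Train a 1-gram model: mean relative frequency of each byte value
    over the normal training payloads."""
    freqs = []
    for p in payloads:
        counts = [0] * 256
        for b in p:
            counts[b] += 1
        freqs.append([c / len(p) for c in counts])
    n = len(freqs)
    return [sum(f[i] for f in freqs) / n for i in range(256)]

def distance(payload, mean):
    """Anomaly score: L1 distance between the payload's byte distribution
    and the trained mean distribution (larger = more atypical)."""
    counts = [0] * 256
    for b in payload:
        counts[b] += 1
    freq = [c / len(payload) for c in counts]
    return sum(abs(freq[i] - mean[i]) for i in range(256))

normal = [b"GET /index.html HTTP/1.1", b"GET /style.css HTTP/1.1"]
mean = byte_profile(normal)
# A shellcode-like NOP sled scores far higher than HTTP-like traffic
assert distance(b"GET /about.html HTTP/1.1", mean) < distance(b"\x90" * 24, mean)
```

A mimicry attack targets exactly this model by padding the payload until its byte distribution matches the profile, which is why the article's strategy also compares suspicious packets against previously built evasion models rather than relying on randomization alone.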
Design requirements for generating deceptive content to protect document repositories
For nearly 30 years, fake digital documents have been used to identify external intruders and malicious insider threats. Unfortunately, while fake files hold potential to assist in data theft detection, there is little evidence of their application outside of niche organisations and academic institutions. The barrier to wider adoption appears to be the difficulty of constructing deceptive content. The current generation of solutions principally: (1) use unrealistic random data; (2) output heavily formatted or specialised content that is difficult to apply to other environments; (3) require users to manually build the content, which is not scalable; or (4) employ an existing production file, which creates a protection paradox. This paper introduces a set of requirements for generating automated fake file content: (1) enticing, (2) realistic, (3) minimise disruption, (4) adaptive, (5) scalable protective coverage, (6) minimise sensitive artefacts and copyright infringement, and (7) contain no distinguishable characteristics. These requirements have been drawn from literature on natural science, magical performances, human deceit, military operations, intrusion detection, and previous fake file solutions. They guide the design of an automated fake file content construction system, providing an opportunity for the next generation of solutions to find greater commercial application and widespread adoption.
A systematic literature review on insider threats
Insider threats are among the most pressing cybersecurity problems, and they are
poorly addressed by widely used security solutions. Although there have been
several scientific publications in this area, our classification and structural
taxonomy proposals aim to provide more information about insider threats and
the defense measures used to counter them. Adopting the grounded theory method
for a thorough literature evaluation, our categorization's goal is to organize
knowledge in insider threat research. Along with an analysis of major recent
studies on detecting insider threats, the main goal of the study is to develop
a classification of current types of insiders, their levels of access, the
motivations behind their actions, insider profiling, security properties, and
the methods they use to attack. This includes the use of machine learning
algorithms, behavior analysis, and methods of detection and evaluation.
Moreover, actual incidents related to insider attacks have also been analyzed.
PTPerf: On the performance evaluation of Tor Pluggable Transports
Tor, one of the most popular censorship circumvention systems, faces regular
blocking attempts by censors. Thus, to facilitate access, it relies on
"pluggable transports" (PTs) that disguise Tor's traffic and make it hard for
the adversary to block Tor. However, PTs have not yet been well studied and
compared for the performance they provide to users. Thus, we conduct the
first comparative performance evaluation of a total of 12 PTs -- the ones
currently supported by the Tor project and those that could be integrated in
the future.
Our results reveal multiple facets of the PT ecosystem. (1) PTs' download
time varies significantly even under similar network conditions. (2) Not all
PTs are equally reliable, so clients who regularly suffer censorship may
falsely believe that such PTs are blocked. (3) PT performance depends on the
underlying communication primitive. (4) PT performance significantly depends
on the website access method (browser or command-line). Surprisingly, for some
PTs, website access time was even lower than with vanilla Tor.
Based on our findings from more than 1.25M measurements, we provide
recommendations about selecting PTs and believe that our study can facilitate
access for users who face censorship.
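A measurement of the kind described can be sketched as timing repeated fetches through a PT's local proxy and summarizing the samples. The fetch callable below is a stand-in assumption; a real run would point urllib or curl at the PT client's local SOCKS port.

```python
import time
import statistics

def measure_download(fetch, url, trials=5):
    """Time repeated downloads through a given fetch callable (e.g. one
    configured to use a PT's local SOCKS proxy) and summarize the samples."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        fetch(url)
        samples.append(time.perf_counter() - start)
    return {"median": statistics.median(samples), "max": max(samples)}

# A stand-in fetch to illustrate usage; it merely simulates network delay.
stats = measure_download(lambda url: time.sleep(0.01), "https://example.com")
assert stats["median"] >= 0.01
```

Repeating the measurement per PT under comparable network conditions is what makes the download-time comparison across transports meaningful.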