Modeling User Search-Behavior for Masquerade Detection
Masquerade attacks are a common security problem that is a consequence of identity theft. Prior work has focused on user command modeling to identify abnormal behavior indicative of impersonation. This paper extends prior work by modeling user search behavior to detect deviations indicating a masquerade attack. We hypothesize that each individual user knows their own file system well enough to search in a limited, targeted and unique fashion in order to find information germane to their current task. Masqueraders, on the other hand, will likely not know the file system and layout of another user's desktop, and would likely search more extensively and broadly in a manner that is different from the victim user being impersonated. We extend prior research by devising taxonomies of UNIX commands and Windows applications that are used to abstract sequences of user commands and actions. The experimental results show that modeling search behavior reliably detects all masqueraders with a very low false positive rate of 0.13%, far better than prior published results. The limited set of features used for search behavior modeling also results in large performance gains over the same modeling techniques that use larger sets of features.
Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems
This paper presents one-class Hellinger distance-based and one-class SVM modeling techniques that use a set of features to reveal user intent. The specific objective is to model user command profiles and detect deviations indicating a masquerade attack. The approach aims to model user intent, rather than only modeling sequences of user issued commands. We hypothesize that each individual user will search in a targeted and limited fashion in order to find information germane to their current task. Masqueraders, on the other hand, will likely not know the file system and layout of another user's desktop, and would likely search more extensively and broadly. Hence, modeling a user search behavior to detect deviations may more accurately detect masqueraders. To that end, we extend prior research that uses UNIX command sequences issued by users as the audit source by relying upon an abstraction of commands. We devised a taxonomy of UNIX commands that is used to abstract command sequences. The experimental results show that the approach does not lose information and performs comparably to or slightly better than the modeling approach based on simple UNIX command frequencies.
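The one-class Hellinger distance idea above can be sketched in a few lines: treat a user's training commands and a test session as frequency profiles and measure how far apart the two distributions are. This is an illustrative simplification, not the paper's implementation; the command names, the 0.4 threshold, and the session data below are all hypothetical:

```python
from collections import Counter
from math import sqrt

def hellinger(profile_a, profile_b):
    """Hellinger distance between two command-frequency profiles.

    Profiles are mappings from command (or taxonomy category) to count.
    Returns a value in [0, 1]; larger means more dissimilar behavior.
    """
    total_a = sum(profile_a.values()) or 1
    total_b = sum(profile_b.values()) or 1
    keys = set(profile_a) | set(profile_b)
    s = sum(
        (sqrt(profile_a.get(k, 0) / total_a) - sqrt(profile_b.get(k, 0) / total_b)) ** 2
        for k in keys
    )
    return sqrt(s / 2)

# Hypothetical session data: a user's training profile vs. a new session.
train = Counter(["ls", "cd", "grep", "vim", "ls", "cd", "grep"])
session = Counter(["find", "locate", "grep", "ls", "find", "find"])
score = hellinger(train, session)
flag = score > 0.4  # threshold would be tuned per user on held-out data
```

A per-user threshold on this distance would be calibrated on held-out sessions of the same user; distances near 1 indicate behavior very unlike the training profile.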
Masquerade Attack Detection Using a Search-Behavior Modeling Approach
Masquerade attacks are unfortunately a familiar security problem that is a consequence of identity theft. Detecting masqueraders is very hard. Prior work has focused on user command modeling to identify abnormal behavior indicative of impersonation. This paper extends prior work by presenting one-class Hellinger distance-based and one-class SVM modeling techniques that use a set of novel features to reveal user intent. The specific objective is to model user search profiles and detect deviations indicating a masquerade attack. We hypothesize that each individual user knows their own file system well enough to search in a limited, targeted and unique fashion in order to find information germane to their current task. Masqueraders, on the other hand, will likely not know the file system and layout of another user's desktop, and would likely search more extensively and broadly in a manner that is different from the victim user being impersonated. We extend prior research that uses UNIX command sequences issued by users as the audit source by relying upon an abstraction of commands. We devise taxonomies of UNIX commands and Windows applications that are used to abstract sequences of user commands and actions. We also gathered our own normal and masquerader data sets captured in a Windows environment for evaluation. The datasets are publicly available for other researchers who wish to study masquerade attacks rather than author identification as in much of the prior reported work. The experimental results show that modeling search behavior reliably detects all masqueraders with a very low false positive rate of 0.1%, far better than prior published results. The limited set of features used for search behavior modeling also results in huge performance gains over the same modeling techniques that use larger sets of features.
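The taxonomy-based abstraction described above can be illustrated with a toy mapping from raw commands to behavioral categories. The entries below are hypothetical stand-ins for the papers' hand-built taxonomies of UNIX commands and Windows applications:

```python
# Hypothetical, tiny taxonomy; the papers' taxonomies of UNIX commands and
# Windows applications are far larger and were constructed by hand.
TAXONOMY = {
    "find": "search", "locate": "search", "grep": "search", "which": "search",
    "vim": "edit", "emacs": "edit",
    "gcc": "build", "make": "build",
    "scp": "network", "wget": "network", "ssh": "network",
}

def abstract_sequence(commands, default="other"):
    """Abstract a raw command sequence into taxonomy categories.

    This shrinks the alphabet that behavior models must learn, so sequences
    from different sessions become comparable at the level of intent.
    """
    return [TAXONOMY.get(cmd, default) for cmd in commands]

categories = abstract_sequence(["find", "grep", "vim", "make"])
# e.g. a burst of "search" categories hints at exploratory file-system use
```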
Masquerade Detection Based On UNIX Commands
In this paper, we consider the problem of masquerade detection based on a UNIX system. A masquerader is an intruder who tries to remain undetected by impersonating a legitimate user. Masquerade detection is a special case of the general intrusion detection problem. We have collected data from a large number of users. This data includes information on user commands and a variety of other aspects of user behavior that can be used to construct a profile of a given user. Hidden Markov models have been used to train user profiles, and the various attack strategies have been analyzed. The results are compared to a standard dataset that offers a more limited view of user behavior.
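As a rough illustration of likelihood-based profiling, the sketch below scores a test session against a model trained on a user's command history. It uses a smoothed first-order Markov chain rather than a full HMM (no hidden states), and all command data and the threshold are invented for the example:

```python
from collections import defaultdict
from math import log

def train_markov(sequence, smoothing=1.0):
    """Estimate smoothed transition log-probabilities from a command sequence."""
    counts = defaultdict(lambda: defaultdict(float))
    vocab = set(sequence)
    for prev, cur in zip(sequence, sequence[1:]):
        counts[prev][cur] += 1
    model = {}
    for prev in vocab:
        total = sum(counts[prev].values()) + smoothing * len(vocab)
        model[prev] = {
            cur: log((counts[prev][cur] + smoothing) / total) for cur in vocab
        }
    return model

def avg_log_likelihood(model, sequence, floor=-10.0):
    """Average per-transition log-likelihood; unseen transitions get a floor score."""
    score, n = 0.0, 0
    for prev, cur in zip(sequence, sequence[1:]):
        score += model.get(prev, {}).get(cur, floor)
        n += 1
    return score / max(n, 1)

# Hypothetical data: train on the legitimate user, score a test session, and
# flag it as a possible masquerade if the likelihood falls below a threshold.
train_seq = ["ls", "cd", "vim", "ls", "cd", "vim", "ls", "cd"]
model = train_markov(train_seq)
threshold = -3.0  # would be calibrated on held-out sessions of the same user
suspicious = avg_log_likelihood(model, ["find", "scp", "tar"]) < threshold
```

A real HMM adds hidden states and the forward algorithm on top of this idea, but the detection decision has the same shape: low likelihood under the owner's model suggests a different user.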
Modeling User Search Behavior for Masquerade Detection
Masquerade attacks are a common security problem that is a consequence of identity theft. This paper extends prior work by modeling user search behavior to detect deviations indicating a masquerade attack. We hypothesize that each individual user knows their own file system well enough to search in a limited, targeted and unique fashion in order to find information germane to their current task. Masqueraders, on the other hand, will likely not know the file system and layout of another user's desktop, and would likely search more extensively and broadly in a manner that is different from the victim user being impersonated. We identify actions linked to search and information access activities, and use them to build user models. The experimental results show that modeling search behavior reliably detects all masqueraders with a very low false positive rate of 1.1%, far better than prior published results. The limited set of features used for search behavior modeling also results in large performance gains over the same modeling techniques that use larger sets of features.
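One simple way to turn "actions linked to search" into model features is sketched below. The action names and the fixed-window featurization are illustrative assumptions, not the paper's exact feature set:

```python
# Hypothetical set of actions treated as "search" events; the paper's
# taxonomy of search and information-access actions is richer than this.
SEARCH_ACTIONS = {"find", "locate", "grep", "dir_listing", "desktop_search"}

def search_features(events, window=10):
    """Split an event stream into fixed-size windows and count search events.

    Returns one feature per window: the fraction of actions that are searches.
    A masquerader exploring an unfamiliar file system should produce windows
    with unusually high search fractions compared to the owner's baseline.
    """
    features = []
    for i in range(0, len(events), window):
        chunk = events[i:i + window]
        n_search = sum(1 for e in chunk if e in SEARCH_ACTIONS)
        features.append(n_search / len(chunk))
    return features

# Hypothetical stream: heavy searching early, normal editing afterwards.
stream = ["find", "grep", "locate", "find", "grep", "vim", "vim", "vim", "vim", "vim"]
profile = search_features(stream, window=5)
```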
Dueling-HMM Analysis on Masquerade Detection
Masquerade detection is the ability to detect attackers, known as masqueraders, who intrude on another user's system and pose as legitimate users. Once a masquerader obtains access to a user's system, the masquerader has free rein over whatever data is on that system. In this research, we focus on masquerade detection and user classification using the following approaches: the heavy hitter approach and two different approaches based on hidden Markov models (HMMs), the dueling-HMM and threshold-HMM strategies.
The heavy hitter approach computes the frequent elements seen in the training and test data sequences and measures the distance between them to decide whether the test sequence is masqueraded. The results show very misleading classifications, suggesting that the approach is not viable for masquerade detection.
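The heavy hitter comparison can be sketched as follows: keep each session's k most frequent commands and compare the sets. The session data below is hypothetical, and a set distance such as Jaccard stands in for whatever distance the authors used; as noted above, this approach turned out not to be viable:

```python
from collections import Counter

def heavy_hitters(sequence, k=5):
    """Return the k most frequent elements of a command sequence."""
    return {cmd for cmd, _ in Counter(sequence).most_common(k)}

def jaccard_distance(a, b):
    """1 - Jaccard similarity between two heavy-hitter sets."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Hypothetical usage: compare the frequent commands of a training session
# with those of a test session; a large distance suggests a different user.
train_hh = heavy_hitters(["ls", "cd", "vim", "ls", "cd", "ls", "make"])
test_hh = heavy_hitters(["find", "scp", "tar", "find", "wget", "find"])
dist = jaccard_distance(train_hh, test_hh)
```

The weakness the abstract reports is plausible from the sketch itself: frequent-element sets discard ordering and most of the distribution, so very different users can share similar heavy hitters.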
A hidden Markov model is a tool for representing probability distributions over sequences of observations [9]. Previous research has shown that a threshold-based hidden Markov model (HMM) approach is successful in a variety of categories: malware detection, intrusion detection, pattern recognition, etc. We have verified that a threshold-based HMM approach produces high accuracy with a low false positive rate. Using the dueling-HMM approach, which utilizes multiple training HMMs, we obtain an overall accuracy of 81.96%. With the introduction of a bias in the dueling-HMM approach, we produce results similar to those obtained with the threshold-based HMM approach: much non-masqueraded data is detected while much masqueraded data avoids detection, yet the overall accuracy remains high.
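The dueling idea, attributing a session to whichever per-user model assigns it the highest likelihood, can be illustrated with smoothed unigram models standing in for trained HMMs. The user names and command data are invented for the example:

```python
from collections import Counter
from math import log

def train_unigram(sequence, smoothing=1.0):
    """Smoothed unigram log-probabilities; a toy stand-in for a per-user HMM."""
    counts = Counter(sequence)
    vocab = set(sequence)
    total = sum(counts.values()) + smoothing * (len(vocab) + 1)
    model = {w: log((counts[w] + smoothing) / total) for w in vocab}
    model[None] = log(smoothing / total)  # score for unseen symbols
    return model

def loglik(model, sequence):
    """Total log-likelihood of a sequence under one user's model."""
    return sum(model.get(w, model[None]) for w in sequence)

def duel(models, sequence):
    """Dueling idea: attribute the session to the model that scores it highest."""
    return max(models, key=lambda name: loglik(models[name], sequence))

# Hypothetical two-user duel; a real dueling-HMM compares HMM likelihoods.
models = {
    "alice": train_unigram(["ls", "cd", "vim", "ls", "cd"]),
    "bob": train_unigram(["gcc", "make", "gdb", "make"]),
}
who = duel(models, ["make", "gdb", "gcc"])  # attributed to "bob"
```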
Towards Effective Masquerade Attack Detection
Data theft has been the main goal of the cybercrime community for many years, increasingly so as cybercriminals, motivated by financial gain, have established a thriving underground economy. Masquerade attacks are a common security problem that is a consequence of identity theft and that is generally motivated by data theft. Such attacks are characterized by a system user illegitimately posing as another legitimate user. Prevention-focused solutions such as access control solutions and Data Loss Prevention tools have failed to prevent these attacks, making detection not a mere desideratum, but rather a necessity. Detecting masqueraders, however, is very hard. Prior work has focused on user command modeling to identify abnormal behavior indicative of impersonation. These approaches suffered from high miss and false positive rates. None of these approaches could be packaged into an easily-deployable, privacy-preserving, and effective masquerade attack detector. In this thesis, I present a machine learning-based technique using a set of novel features that aim to reveal user intent. I hypothesize that each individual user knows his or her own file system well enough to search in a limited, targeted, and unique fashion in order to find information germane to the current task. Masqueraders, on the other hand, are not likely to know the file system and layout of another user's desktop, and would likely search more extensively and broadly in a manner that is different from that of the victim user being impersonated. Based on this assumption, I model a user's search behavior and monitor deviations from it that could indicate fraudulent behavior. I identify user search events using a taxonomy of Windows applications, DLLs, and user commands. The taxonomy abstracts the user commands and actions and enriches them with contextual information.
Experimental results show that modeling search behavior reliably detects all simulated masquerade activity with a very low false positive rate of 1.12%, far better than any previously published results. The limited set of features used for search behavior modeling also results in considerable performance gains over the same modeling techniques that use larger sets of features, both during sensor training and deployment. While an anomaly- or profiling-based detection approach, such as the one used in the user search profiling sensor, has the advantage of detecting unknown attacks and fraudulent masquerade behaviors, it suffers from a relatively high number of false positives and remains potentially vulnerable to mimicry attacks. To further improve the accuracy of the user search profiling approach, I supplement it with a trap-based detection approach. I monitor user actions directed at decoy documents embedded in the user's local file system. The decoy documents, which contain enticing information to the attacker, are known to the legitimate user of the system, and therefore should not be touched by him or her. Access to these decoy files, therefore, should highly suggest the presence of a masquerader. A decoy document access sensor detects any action that requires loading the decoy document into memory such as reading the document, copying it, or zipping it. I conducted human subject studies to investigate the deployment-related properties of decoy documents and to determine how decoys should be strategically deployed in a file system in order to maximize their masquerade detection ability. The user study results show that effective deployment of decoys allows for the detection of all masquerade activity within at most ten minutes of its onset. I use the decoy access sensor as an oracle for the user search profiling sensor.
If abnormal search behavior is detected, I hypothesize that suspicious activity is taking place and validate the hypothesis by checking for accesses to decoy documents. Combining the two sensors and detection techniques reduces the false positive rate to 0.77%, and hardens the sensor against mimicry attacks. The overall sensor has very limited resource requirements (40 KB) and does not introduce any noticeable delay to the user when performing its monitoring actions. Finally, I seek to expand the search behavior profiling technique to identify not only malicious masqueraders but any other system user. I propose a diversified and personalized user behavior profiling approach to improve the accuracy of user behavior models. The ultimate goal is to augment existing computer security features such as passwords with user behavior models, as behavior information is not readily available to be stolen and its use could substantially raise the bar for malefactors seeking to perpetrate masquerade attacks.
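The two-sensor combination described above reads as a simple conjunction: the search-profiling sensor raises a hypothesis, and the decoy-access sensor confirms it. A minimal sketch of that assumed logic (the score scale and threshold are hypothetical):

```python
def combined_alert(search_anomaly_score, decoy_touched, threshold=0.5):
    """Combine the two sensors as described in the thesis abstract:

    the search-profiling sensor raises a hypothesis when its anomaly score
    exceeds a threshold, and the decoy-access sensor acts as an oracle that
    validates it. Requiring both reduces false positives and resists mimicry,
    since an attacker must both match the owner's search style and avoid
    touching any decoy document.
    """
    suspicious_search = search_anomaly_score > threshold
    return suspicious_search and decoy_touched
```

The numeric score and the 0.5 threshold are placeholders; in the thesis the search sensor's decision comes from the one-class models described earlier.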
Cloud Computing Security, An Intrusion Detection System for Cloud Computing Systems
Cloud computing is widely considered as an attractive service model because it minimizes investment since its costs are in direct relation to usage and demand. However, the distributed nature of cloud computing environments, their massive resource aggregation, wide user access and efficient and automated sharing of resources enable intruders to exploit clouds for their advantage. To combat intruders, several security solutions for cloud environments adopt Intrusion Detection Systems. However, most IDS solutions are not suitable for cloud environments, because of problems such as single point of failure, centralized load, high false positive alarms, insufficient coverage for attacks, and inflexible design. The thesis defines a framework for a cloud based IDS to face the deficiencies of current IDS technology. This framework deals with threats that exploit vulnerabilities to attack the various service models of a cloud system. The framework integrates behaviour based and knowledge based techniques to detect masquerade, host, and network attacks and provides efficient deployments to detect DDoS attacks.
This thesis has three main contributions. The first is a Cloud Intrusion Detection Dataset (CIDD) to train and test an IDS. The second is the Data-Driven Semi-Global Alignment (DDSGA) approach and three behavior based strategies to detect masquerades in cloud systems. The third and final contribution is signature-based detection. We introduce two deployments, a distributed and a centralized one, to detect host, network, and DDoS attacks. Furthermore, we discuss the integration and correlation of alerts from any component to build a summarized attack report. The thesis describes in detail and experimentally evaluates the proposed IDS and alternative deployments.
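The alignment idea behind DDSGA can be illustrated with a minimal semi-global alignment score, where a test session is allowed to align anywhere inside a longer user "signature" sequence. This is a textbook sketch, not the published DDSGA algorithm, and the scoring parameters are arbitrary:

```python
def semi_global_score(signature, session, match=2, mismatch=-1, gap=-1):
    """Minimal semi-global alignment score between a user signature and a
    test session: leading and trailing gaps in the signature are free, so
    the session may align anywhere within the longer signature sequence.
    A higher score means the session resembles the user's past behavior.
    """
    m, n = len(signature), len(session)
    # dp[i][j]: best score aligning signature[:i] with session[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + gap  # gaps in the session are penalized
    # first column stays 0: leading signature symbols may be skipped for free
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = dp[i - 1][j - 1] + (match if signature[i - 1] == session[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    # trailing signature symbols are also free: best score in the last column
    return max(dp[i][n] for i in range(m + 1))

# Hypothetical usage: a session that appears verbatim inside the signature
# scores highly; a session of never-seen commands scores poorly.
good = semi_global_score(list("abcabc"), list("bca"))
bad = semi_global_score(list("abc"), list("xyz"))
```

The published DDSGA adds data-driven scoring parameters and cloud-specific strategies on top of the alignment core.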
Acknowledgment:
===============
• This Ph.D. was completed through an international joint program between the University of Pisa in Italy (Department of Computer Science, Galileo Galilei Ph.D. School) and the University of Arizona in the USA (College of Electrical and Computer Engineering).
• The Ph.D. topic falls under both Computer Engineering and Information Engineering.
• The thesis author is also known as "Hisham A. Kholidy"