ToLeRating UR-STD
A new, emerging paradigm of Uncertain Risk of Suspicion, Threat and Danger (UR-STD), observed across the field of information security, is described. Based on this paradigm, a novel approach to anomaly detection is presented. Our approach rests on a simple yet powerful analogy from the innate part of the human immune system: the Toll-Like Receptors. We argue that such receptors, incorporated as part of an anomaly detector, enhance the detector's ability to distinguish between normal and anomalous behaviour. In addition, we propose that Toll-Like Receptors enable detected anomalies to be classified by the type of attack that perpetrates the anomalous behaviour. Classification of this kind is either missing from the existing literature or is not fit for the purpose of reducing the burden on the administrator of an intrusion detection system. For our model to work, we propose the creation of a taxonomy of the digital Acytota, from which our receptors are created.
Impact of IT Monoculture on Behavioral End Host Intrusion Detection
In this paper, we study the impact of today's IT policies, defined based upon a monoculture approach, on the performance of end-host anomaly detectors. This approach leads to the uniform configuration of host intrusion detection systems (HIDS) across all hosts in an enterprise network. We assess the performance impact this policy has from the individual's point of view by analyzing network traces collected from 350 enterprise users. We uncover a great deal of diversity in the user population in terms of "tail" behavior, i.e., the component that matters for anomaly detection systems. We demonstrate that the monoculture approach to HIDS configuration results in users experiencing wildly different false positive and false negative rates. We then introduce new policies based upon leveraging this diversity and show that not only do they dramatically improve performance for the vast majority of users, but they also reduce the number of false positives arriving in centralized IT operations centers, and can reduce attack strength.
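The contrast between a uniform HIDS policy and a diversity-aware, per-user policy can be sketched roughly as follows (a toy illustration with invented scores and a nearest-rank percentile; this is not the paper's actual calibration procedure):

```python
# Hypothetical sketch: monoculture HIDS policy (one global alarm threshold
# for every host) versus per-user thresholds calibrated to each user's own
# traffic tail. All scores and the percentile target are invented.

def percentile(samples, q):
    """q-th percentile by nearest rank (no external dependencies)."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(q / 100 * (len(ordered) - 1))))
    return ordered[rank]

def calibrate_thresholds(per_user_scores, q=95):
    """Monoculture: one threshold over the pooled population scores.
    Diversity-aware: one threshold per user from that user's own scores."""
    pooled = [s for scores in per_user_scores.values() for s in scores]
    global_thr = percentile(pooled, q)
    per_user = {u: percentile(scores, q) for u, scores in per_user_scores.items()}
    return global_thr, per_user

scores = {"light_user": [0.1, 0.2, 0.2, 0.3], "heavy_user": [0.5, 0.7, 0.8, 0.9]}
global_thr, per_user = calibrate_thresholds(scores)
# The single pooled threshold sits far above the light user's whole range,
# so attacks within his range go unnoticed, while the per-user thresholds
# track each user's own tail behavior.
```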
Employing Opportunistic Diversity for Detecting Injection Attacks in Web Applications
Web-based applications are becoming increasingly popular due to their lower demand on client-side resources
and easier maintenance than desktop counterparts. On the other hand, larger attack surfaces
and developers’ lack of security proficiency or awareness leave Web applications particularly vulnerable
to security attacks. One existing approach to preventing security attacks is to compose
a redundant system using functionally similar but internally different variants, which will likely
respond to the same attack in different ways. However, most diversity-by-design approaches are
rarely used in practice due to the cost they imply in development and maintenance; a significant false
alarm rate is another limitation. In this work, we employ the opportunistic diversity inherent
to Web applications and their database backends to prevent injection attacks. We first conduct a
case study of common vulnerabilities to confirm the effectiveness of opportunistic diversity for
preventing potential attacks. We then devise a multi-stage approach to examine database queries,
their effect on the database, query results, and user-end results. Next, we combine the results
obtained from different stages using a learning-based approach to further improve the detection
accuracy. Finally, we evaluate our approach using a real-world Web application.
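The staged comparison and learned combination might look roughly like the following sketch (the stage checks, weights, and threshold are hypothetical placeholders, not the authors' implementation; in their approach the combination is fit by a learning algorithm):

```python
# Hypothetical sketch: combine divergence signals from functionally similar
# variants (e.g., two database backends) across the stages named in the
# abstract: query structure, database effect, query result, user-end result.

def stage_scores(variant_a, variant_b):
    """Return one divergence score per stage; 1.0 means the two
    variants disagree at that stage, 0.0 means they agree."""
    return [
        float(variant_a["query_shape"] != variant_b["query_shape"]),
        float(variant_a["rows_affected"] != variant_b["rows_affected"]),
        float(variant_a["result"] != variant_b["result"]),
        float(variant_a["page"] != variant_b["page"]),
    ]

def is_injection(variant_a, variant_b, weights=(0.4, 0.2, 0.2, 0.2), threshold=0.5):
    """A fixed weighted sum stands in for the learning-based combination;
    in practice the weights would be fit by a trained classifier."""
    score = sum(w * s for w, s in zip(weights, stage_scores(variant_a, variant_b)))
    return score >= threshold

# A benign request produces identical behaviour in both variants:
benign = {"query_shape": "SELECT?1", "rows_affected": 1, "result": "row", "page": "ok"}
assert not is_injection(benign, dict(benign))

# An injected payload that only one variant interprets diverges early:
attacked = dict(benign, query_shape="SELECT?1;DROP", rows_affected=0)
assert is_injection(benign, attacked)
```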
Network Security Metrics: Estimating the Resilience of Networks Against Zero Day Attacks
Computer networks play the role of nervous systems in many critical infrastructures, governmental and military organizations, and enterprises today. Protecting such mission-critical networks means more than just patching known vulnerabilities and deploying firewalls or IDSs. Proper metrics are needed to evaluate the security level of networks and to provide security-enhanced solutions. However, without considering unknown zero-day vulnerabilities, security metrics are insufficient to capture the true security level of a network. My Ph.D. work aims to develop a series of novel network security metrics, with a special focus on modeling zero-day attacks and on studying the relationships between software features and vulnerabilities.
In the first work, we take the first step toward formally modeling network diversity as a security metric by designing and evaluating a series of diversity metrics. In particular, we first devise a biodiversity-inspired metric based on the effective number of distinct resources. We then propose two complementary diversity metrics, based on the least and the average attacking efforts, respectively.
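The biodiversity-inspired metric alludes to ecology's "effective number of species", the exponential of the Shannon index; the sketch below illustrates that underlying formula applied to network resources (an illustration of the standard ecological idea, not necessarily the thesis's exact metric):

```python
import math
from collections import Counter

def effective_resource_number(resources):
    """Effective number of distinct resources: the exponential of the
    Shannon index (ecology's 'true diversity'). Equal shares of k
    resource types give exactly k; skewed shares give a value
    between 1 and k."""
    counts = Counter(resources)
    total = sum(counts.values())
    shannon = -sum((c / total) * math.log(c / total) for c in counts.values())
    return math.exp(shannon)

# A network of four hosts split evenly over two OS images:
print(round(effective_resource_number(["linux", "linux", "bsd", "bsd"]), 2))  # 2.0
# A monoculture collapses to an effective diversity of 1:
print(round(effective_resource_number(["linux"] * 4), 2))                     # 1.0
```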
In the second topic, we lift the attack surface concept, which captures intrinsic properties of software applications, to the network level as a security metric for evaluating the resilience of networks against potential zero-day attacks. First, we develop models for aggregating the attack surface among different resources inside a network. Second, we design heuristic algorithms to avoid the costly calculation of the attack surface.
Predicting and studying software vulnerabilities helps administrators improve the security deployment of their organizations and choose the right applications among those with similar functionality, and helps software vendors estimate the security level of their applications. In the third topic, we perform a large-scale empirical study on datasets from GitHub and from different versions of Chrome to study the relationship between software features and the number of vulnerabilities. Based on machine learning techniques, this study quantitatively demonstrates the importance of such features in the vulnerability discovery process; the features could serve as inputs for future network-level security metrics.
Towards Effective Masquerade Attack Detection
Data theft has been the main goal of the cybercrime community for many years, increasingly so as the community, motivated by financial gain, establishes a thriving underground economy. Masquerade attacks are a common security problem that is a consequence of identity theft and is generally motivated by data theft. Such attacks are characterized by a system user illegitimately posing as another legitimate user. Prevention-focused solutions such as access control and Data Loss Prevention tools have failed to prevent these attacks, making detection not a mere desideratum but a necessity. Detecting masqueraders, however, is very hard. Prior work has focused on user command modeling to identify abnormal behavior indicative of impersonation. These approaches suffered from high miss and false positive rates, and none could be packaged into an easily deployable, privacy-preserving, and effective masquerade attack detector. In this thesis, I present a machine learning-based technique using a set of novel features that aim to reveal user intent. I hypothesize that each individual user knows his or her own file system well enough to search in a limited, targeted, and unique fashion to find information germane to the current task. Masqueraders, on the other hand, are not likely to know the file system and layout of another user's desktop, and would likely search more extensively and broadly, in a manner different from that of the victim user being impersonated. Based on this assumption, I model a user's search behavior and monitor deviations from it that could indicate fraudulent behavior. I identify user search events using a taxonomy of Windows applications, DLLs, and user commands. The taxonomy abstracts the user commands and actions and enriches them with contextual information.
Experimental results show that modeling search behavior reliably detects all simulated masquerade activity with a very low false positive rate of 1.12%, far better than any previously published results. The limited set of features used for search behavior modeling also yields considerable performance gains over the same modeling techniques applied to larger feature sets, both during sensor training and deployment. While an anomaly- or profiling-based detection approach, such as the one used in the user search profiling sensor, has the advantage of detecting unknown attacks and fraudulent masquerade behaviors, it suffers from a relatively high number of false positives and remains potentially vulnerable to mimicry attacks. To further improve the accuracy of the user search profiling approach, I supplement it with a trap-based detection approach: I monitor user actions directed at decoy documents embedded in the user's local file system. The decoy documents, which contain information enticing to an attacker, are known to the legitimate user of the system and therefore should not be touched by him or her. Access to these decoy files should thus strongly suggest the presence of a masquerader. A decoy document access sensor detects any action that requires loading the decoy document into memory, such as reading, copying, or zipping it. I conducted human subject studies to investigate the deployment-related properties of decoy documents and to determine how decoys should be strategically deployed in a file system to maximize their masquerade detection ability. Our user study results show that effective deployment of decoys allows for the detection of all masquerade activity within at most ten minutes of its onset. I use the decoy access sensor as an oracle for the user search profiling sensor.
If abnormal search behavior is detected, I hypothesize that suspicious activity is taking place and validate the hypothesis by checking for accesses to decoy documents. Combining the two sensors and detection techniques reduces the false positive rate to 0.77% and hardens the sensor against mimicry attacks. The overall sensor has very limited resource requirements (40 KB) and does not introduce any noticeable delay for the user when performing its monitoring actions. Finally, I seek to expand the search behavior profiling technique to detect not only malicious masqueraders but any other system user. I propose a diversified and personalized user behavior profiling approach to improve the accuracy of user behavior models. The ultimate goal is to augment existing computer security features such as passwords with user behavior models, as behavior information is not readily available to be stolen, and its use could substantially raise the bar for malefactors seeking to perpetrate masquerade attacks.
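The oracle-style combination of the two sensors can be illustrated with a toy sketch (the threshold and scores below are invented; the real sensors model search behavior and decoy access in far more detail):

```python
# Hypothetical sketch of the two-sensor combination: the search-profiling
# sensor raises a hypothesis, and the decoy-access sensor acts as an oracle
# that confirms or discards it before an alert is issued.

SEARCH_ANOMALY_THRESHOLD = 0.8  # assumed value, not from the thesis

def masquerader_alert(search_anomaly_score, decoy_files_touched):
    """Alert only when abnormal search behavior is corroborated by access
    to at least one decoy document, cutting down false positives."""
    suspicious = search_anomaly_score >= SEARCH_ANOMALY_THRESHOLD
    return suspicious and decoy_files_touched > 0

# Abnormal search alone is only a hypothesis, not yet an alert:
assert not masquerader_alert(0.95, decoy_files_touched=0)
# Abnormal search corroborated by a decoy touch triggers the alert:
assert masquerader_alert(0.95, decoy_files_touched=1)
```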