
    Deep Learning and Dempster-Shafer Theory Based Insider Threat Detection

    Organizations' own personnel now have a greater ability than ever before to misuse their access to critical organizational assets. Insider threat detection, which aims to identify rare anomalies in context, is therefore a growing concern for many organizations, and existing perimeter security mechanisms are proving ineffective against such threats. As a prospective filter for human analysts, a new deep learning based insider threat detection method that uses Dempster-Shafer theory is proposed to handle both accidental and intentional insider threats arising through an organization's communication channels in real time. A recurrent neural network (RNN) with a long short-term memory (LSTM) architecture is used to detect anomalous patterns of network behavior. Belief is then updated with Dempster's conditional rule and used to fuse evidence for improved prediction. The behavior model is trained on the CERT Insider Threat Dataset v6.2, and the performance evaluation shows the proposed method to be effective as an insider threat detection technique.
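    The abstract's fusion step rests on Dempster-Shafer evidence combination. As a minimal sketch of that idea only, the snippet below implements the classic Dempster rule of combination over a two-element frame {threat, benign}; the channel names and mass values are invented for illustration, and the paper's actual conditional-rule update may differ.

        from itertools import product

        def dempster_combine(m1, m2):
            """Fuse two mass functions (dict: frozenset -> mass) with
            Dempster's rule of combination."""
            combined, conflict = {}, 0.0
            for (b, mb), (c, mc) in product(m1.items(), m2.items()):
                inter = b & c
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + mb * mc
                else:
                    conflict += mb * mc  # mass falling on the empty set
            if conflict >= 1.0:
                raise ValueError("totally conflicting evidence; rule undefined")
            return {a: m / (1.0 - conflict) for a, m in combined.items()}

        # Frame of discernment: {threat, benign}. Hypothetical masses that two
        # per-channel anomaly scores might induce on this frame.
        THREAT, BENIGN = frozenset({"threat"}), frozenset({"benign"})
        EITHER = THREAT | BENIGN  # residual ignorance

        m_email = {THREAT: 0.6, BENIGN: 0.1, EITHER: 0.3}
        m_logon = {THREAT: 0.5, BENIGN: 0.2, EITHER: 0.3}
        print(dempster_combine(m_email, m_logon))
        # mass on "threat" rises to ~0.76 after fusion, illustrating how
        # agreeing evidence sharpens the prediction
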

    Adversarial Attacks on Remote User Authentication Using Behavioural Mouse Dynamics

    Mouse dynamics is a potential means of authenticating users. Typically, the authentication process is based on classical machine learning techniques, but recently, deep learning techniques have also been introduced for this purpose. Although prior research has demonstrated how machine learning and deep learning algorithms can be bypassed by carefully crafted adversarial samples, there has been very little research on behavioural biometrics in the adversarial domain. To address this gap, we built a set of attacks, based on several generative approaches, that construct adversarial mouse trajectories to bypass authentication models; these generated mouse sequences serve as the adversarial samples in our experiments. We also analyse the attack approaches we explored and explain their limitations. In contrast to previous work, we consider the attacks in a more realistic and challenging setting in which an attacker has access to recorded user data but not to the authentication model or its outputs. We explore three attack strategies: 1) statistics-based, 2) imitation-based, and 3) surrogate-based; we show that they are able to evade the authentication models, adversely impacting their robustness. Imitation-based attacks often outperform surrogate-based attacks unless the attacker can guess the architecture of the authentication model, and for that case we propose a potential detection mechanism against surrogate-based attacks.

    Comment: Accepted at the 2019 International Joint Conference on Neural Networks (IJCNN)
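    As a sketch of the first (statistics-based) strategy only, under the assumption that the forger tries to match low-order displacement statistics of the victim: fit a Gaussian to per-step cursor displacements pooled from recorded data, then sample a synthetic trajectory from it. All function names and the data layout here are hypothetical; the paper's generative approaches are more elaborate.

        import numpy as np

        def fit_displacement_stats(trajectories):
            """Fit a Gaussian to per-step (dx, dy) displacements pooled from
            recorded trajectories (each an (n, 2) array of cursor points)."""
            steps = np.vstack([np.diff(t, axis=0) for t in trajectories])
            return steps.mean(axis=0), np.cov(steps, rowvar=False)

        def synthesize_trajectory(mean, cov, n_steps, start=(0.0, 0.0), seed=None):
            """Sample displacements from the fitted Gaussian and integrate
            them into a forged trajectory mimicking the victim's statistics."""
            rng = np.random.default_rng(seed)
            steps = rng.multivariate_normal(mean, cov, size=n_steps)
            start = np.asarray(start, dtype=float)
            return np.vstack([start, start + np.cumsum(steps, axis=0)])

        # Stand-in for recorded victim data: two short random-walk trajectories.
        demo_rng = np.random.default_rng(1)
        recorded = [np.cumsum(demo_rng.standard_normal((50, 2)), axis=0),
                    np.cumsum(demo_rng.standard_normal((80, 2)), axis=0)]
        mean, cov = fit_displacement_stats(recorded)
        forged = synthesize_trajectory(mean, cov, n_steps=100, seed=0)

    A trajectory forged this way can fool detectors that score only aggregate speed and direction statistics, which is precisely why the paper also studies imitation- and surrogate-based attacks.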