54,133 research outputs found

    A Study on Verification of CCTV Image Data through Unsupervised Learning Model of Deep Learning

    Get PDF
    Abnormal behavior refers to behavior that deviates from the normal standard, i.e., from the average. Public CCTV installations intended to prevent crime continue to increase, yet the crime rate has recently risen as well. Against this background, artificial intelligence research that uses deep learning to automatically detect abnormal behavior in CCTV footage is growing. Deep learning is a type of artificial intelligence based on artificial neural networks, and the quality of the training data is critical for achieving high accuracy. This paper verifies whether a training dataset being constructed for abnormal behavior detection is suitable, using an autoencoder-based MPED-RNN model that performs binary classification on a person's skeleton data to determine, frame by frame, whether abnormal behavior is present. The experiments show that the unsupervised MPED-RNN model used in this paper is not suitable for verifying videos in which the numbers of frames with and without abnormal behavior are similar, as in this dataset, and that appropriate results can be obtained only when the data are verified with a supervised learning-based model.
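    The abstract above describes an autoencoder-style model that scores skeleton data frame by frame. The following is a minimal sketch of that general idea, not the MPED-RNN implementation itself: the 34-dimensional input (17 joints x 2 coordinates), the layer sizes, and the 0.95-quantile threshold are illustrative assumptions.

```python
# Minimal sketch (not MPED-RNN): a per-frame skeleton autoencoder whose
# reconstruction error serves as an abnormality score. Input dimension and
# threshold are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class SkeletonAutoencoder(nn.Module):
    def __init__(self, in_dim=34, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def frame_scores(model, frames):
    """Return per-frame reconstruction errors; higher = more abnormal."""
    with torch.no_grad():
        recon = model(frames)
        return ((frames - recon) ** 2).mean(dim=1)

# Usage: train on normal frames only, then flag frames whose error
# exceeds a threshold chosen on held-out normal data.
model = SkeletonAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
normal_frames = torch.randn(256, 34)   # placeholder for real skeleton data
for _ in range(10):                    # small illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(normal_frames), normal_frames)
    loss.backward()
    opt.step()
threshold = frame_scores(model, normal_frames).quantile(0.95)
```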

    Towards predictive behavior analysis for smart environments

    Get PDF
    Predictive behavior analysis allows prediction of (human) behavior based on the analysis of historical data. Efficient approaches for predictive behavior analysis exist for scenarios with structured processes (e.g., those based on ERP systems). Prediction becomes an obstacle when unstructured (decision-making) processes underlie the scenario. Scenarios with unstructured processes are found in smart environments that log sensor (event) streams, such as Smart Home or Connected Cars. No efficient solutions exist to identify abnormal behavior (anomalies) in such smart environments. To provide a solution for anomaly detection in unstructured processes, we suggest crossing process engineering with deep learning: methods from process engineering allow deviations to be identified, while deep learning improves the robustness of anomaly detection and prediction. This combination is a promising approach towards an efficient solution.
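    Since the abstract only proposes this combination, the following is a toy sketch of one way it could look: allowed transitions mined from historical traces stand in for the process model, and a simple frequency-based next-event estimate stands in for the deep predictor the authors envision. The event names and the 0.05 probability threshold are illustrative assumptions.

```python
# Toy sketch (not the authors' implementation): flag rare or unseen
# transitions in a smart-environment event stream using transition
# frequencies mined from historical traces.
from collections import defaultdict

def mine_transitions(traces):
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    return counts

def anomalies(trace, counts, min_prob=0.05):
    flagged = []
    for a, b in zip(trace, trace[1:]):
        total = sum(counts[a].values())
        prob = counts[a][b] / total if total else 0.0
        if prob < min_prob:            # unseen or rare transition
            flagged.append((a, b, prob))
    return flagged

history = [["door_open", "light_on", "coffee_on"],
           ["door_open", "light_on", "tv_on"]]
counts = mine_transitions(history)
print(anomalies(["door_open", "window_open", "coffee_on"], counts))
```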

    A robust abnormal behavior detection method using convolutional neural network

    Get PDF
    A behavior is considered abnormal when it is unusual in a given context. The definition of abnormal behavior varies with the situation: for example, people running in a field is considered normal, but the same activity is deemed abnormal if it takes place in a mall. Similarly, loitering in alleys and fighting or pushing each other in public areas are considered abnormal under specific circumstances. Abnormal behavior detection is crucial given the increasing crime rate in society; if abnormal behavior can be detected early, tragedies can be avoided. In recent years, deep learning has been widely applied in computer vision and has achieved great success in human detection. In particular, the Convolutional Neural Network (CNN) has achieved state-of-the-art performance in human detection. In this paper, a CNN-based abnormal behavior detection method is presented. The proposed approach automatically learns the most discriminative characteristics of human behavior from a large pool of videos containing normal and abnormal behaviors. Since the interpretation of abnormal behavior varies across contexts, extensive experiments have been carried out to assess various conditions and scopes, including crowd and single-person behavior detection and recognition. The proposed method represents an end-to-end solution for abnormal behavior under different conditions, including variations in background, number of subjects (individual, two persons, or crowd), and a range of diverse unusual human activities. Experiments on five benchmark datasets validate the performance of the proposed approach.
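    As a rough illustration of the kind of classifier described above (the paper's actual architecture is not given in the abstract), the following is a minimal sketch of a CNN that labels a video frame as normal or abnormal; the 3x64x64 input size, layer sizes, and two-class head are illustrative assumptions.

```python
# Minimal sketch (not the paper's network): a small CNN classifying a
# frame as normal vs. abnormal, trained with cross-entropy on labelled data.
import torch
import torch.nn as nn

class BehaviorCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage on a placeholder batch of 64x64 RGB frames.
model = BehaviorCNN()
frames = torch.randn(8, 3, 64, 64)
logits = model(frames)
predictions = logits.argmax(dim=1)   # 0 = normal, 1 = abnormal
```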

    Insider’s Misuse Detection: From Hidden Markov Model to Deep Learning

    Get PDF
    Malicious insiders increasingly affect organizations by leaking classified data to unauthorized entities. Detecting insiders' misuse in computer systems is a challenging problem. In this dissertation, we propose two approaches to detect such threats: a probabilistic graphical model-based approach and a deep learning-based approach. We investigate logs of computer-based activities to discover patterns of misuse, modeling a user's behavior as sequences of computer-based events. For the probabilistic graphical model-based approach, we propose an unsupervised model for insider misuse detection: we develop a Stochastic Gradient Descent method to learn Hidden Markov Models (SGD-HMM) with the goal of analyzing user log data. We propose the use of varying granularity levels to represent users' log data: session-based, day-based, and week-based. A user's normal behavior is modeled using SGD-HMM, and the model is used to detect any deviation from that normal behavior. We also propose a Sliding Window Technique (SWT) that identifies malicious activity by considering the recent history of the user's activities. We evaluate the experimental results in terms of the Receiver Operating Characteristic (ROC); the area under the curve (AUC) represents the model's performance with respect to the separability of normal and abnormal behaviors, with higher AUC scores indicating better performance. Combining SGD-HMM with SWT resulted in AUC values between 0.81 and 0.9 depending on the window size, which is superior to solutions presented by other researchers. For the deep learning-based approach, we propose a supervised model for insider misuse detection based on natural language processing with deep learning. We examine textual event logs to investigate the semantic meaning behind a user's behavior. The proposed approach consists of character embeddings and deep learning networks involving a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). We develop three deep learning models: CNN, LSTM, and CNN-LSTM, and run a 10-fold subject-independent cross-validation procedure to evaluate them. Our deep learning-based approach shows promising behavior. The first model, CNN, performs well at classifying normal samples, with an AUC score of 0.85, a false-negative rate of 29%, and a false-positive rate of 26%. The second model, LSTM, shows the best performance at detecting malicious samples, with an AUC score of 0.873, a false-negative rate of 0%, and a false-positive rate of 37%. The third model, CNN-LSTM, shows moderate performance at detecting both normal and insider samples, with an AUC score of 0.862, a false-negative rate of 16%, and a false-positive rate of 17%. Moreover, we use the proposed approach to investigate networks with deeper and wider structures by studying the impact of increasing the number of CNN or LSTM layers, the number of nodes per layer, and both at the same time on model performance. Our results indicate that machine learning approaches can be effectively deployed to detect insider misuse. However, it is difficult to obtain labeled data, and the high prevalence of normal behavior together with limited misuse activities creates a highly unbalanced data set, which impacts the performance of our models.
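    As an illustration of the character-embedding CNN-LSTM idea described above (not the dissertation's models), the following is a minimal sketch of a character-level CNN-LSTM classifier over textual event-log lines; the vocabulary size, embedding width, binary normal/misuse head, and the example log line are all illustrative assumptions.

```python
# Minimal sketch: character embeddings -> 1D convolution -> LSTM -> binary
# normal/misuse logits, for a single textual event-log line.
import torch
import torch.nn as nn

class CharCNNLSTM(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, char_ids):                      # (batch, seq_len)
        x = self.embed(char_ids).transpose(1, 2)      # (batch, embed, seq)
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq, 64)
        _, (h, _) = self.lstm(x)                      # final hidden state
        return self.head(h[-1])                       # logits: normal vs. misuse

# Usage: encode a (hypothetical) log line as ASCII codes, train with
# cross-entropy on labelled normal/misuse samples.
model = CharCNNLSTM()
log_line = "user42 copied report.xlsx to removable media"
char_ids = torch.tensor([[min(ord(c), 127) for c in log_line]])
print(model(char_ids).softmax(dim=1))
```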

    Learning Deep Representations of Appearance and Motion for Anomalous Event Detection

    Full text link
    We present a novel unsupervised deep learning framework for anomalous event detection in complex video scenes. While most existing works merely use hand-crafted appearance and motion features, we propose Appearance and Motion DeepNet (AMDN), which utilizes deep neural networks to automatically learn feature representations. To exploit the complementary information of both appearance and motion patterns, we introduce a novel double fusion framework, combining the benefits of traditional early fusion and late fusion strategies. Specifically, stacked denoising autoencoders are proposed to separately learn appearance and motion features as well as a joint representation (early fusion). Based on the learned representations, multiple one-class SVM models are used to predict the anomaly scores of each input, which are then integrated with a late fusion strategy for final anomaly detection. We evaluate the proposed method on two publicly available video surveillance datasets, showing competitive performance with respect to state-of-the-art approaches. Comment: Oral paper in BMVC 201
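    The scoring stage described above (one-class SVMs over learned features, combined by late fusion) can be sketched roughly as follows. This is not the AMDN implementation: the random arrays stand in for the learned autoencoder representations, and the equal-weight average is just one possible late-fusion rule.

```python
# Rough sketch of late fusion over per-stream one-class SVM scores.
# Feature arrays are placeholders for learned appearance/motion features.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train_app = rng.normal(size=(500, 64))   # appearance features (normal data)
train_mot = rng.normal(size=(500, 64))   # motion features (normal data)
test_app = rng.normal(size=(10, 64))
test_mot = rng.normal(size=(10, 64))

# One one-class SVM per feature stream, trained on normal samples only.
svm_app = OneClassSVM(nu=0.1, gamma="scale").fit(train_app)
svm_mot = OneClassSVM(nu=0.1, gamma="scale").fit(train_mot)

# Late fusion: average the negated decision scores so higher = more anomalous.
score = -(svm_app.decision_function(test_app) +
          svm_mot.decision_function(test_mot)) / 2.0
anomalous = score > np.quantile(score, 0.9)   # illustrative threshold
```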