5 research outputs found

    Enhanced Prediction of Network Attacks Using Incomplete Data

    For years, intrusion detection has been considered a key component of many organizations' network defense capabilities. Although a number of approaches to intrusion detection have been tried, few have been capable of providing security personnel responsible for the protection of a network with sufficient information to make adjustments and respond to attacks in real time. Because intrusion detection systems rarely have complete information, false negatives and false positives are extremely common, and valuable resources are wasted responding to irrelevant events. To provide better actionable information for security personnel, a mechanism for quantifying the confidence level in predictions is needed. This work presents an approach that combines a primary prediction model with a novel secondary confidence-level model, which provides a measurement of the confidence in a given attack prediction. The ability to accurately identify an attack and quantify the confidence in that prediction could serve as the basis for a new generation of intrusion detection devices: devices that provide earlier and better alerts for administrators and allow a more proactive response to events as they occur.
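    The general pattern described here, a primary attack predictor paired with a secondary model that scores how trustworthy each prediction is, can be sketched as below. This is a minimal illustration of that pattern, not the paper's implementation; the scikit-learn models, the held-out split, and all names are assumptions.

```python
# Sketch: primary attack classifier + secondary confidence model.
# Hypothetical design, not the paper's method: the secondary model is
# trained to predict whether the primary's label on held-out data was correct.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fit_primary_and_confidence(X, y):
    """Fit a primary predictor, then a confidence model on held-out data."""
    X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.5, random_state=0)

    primary = RandomForestClassifier(n_estimators=200, random_state=0)
    primary.fit(X_a, y_a)

    # Target for the confidence model: 1 if the primary was right, else 0.
    correct = (primary.predict(X_b) == y_b).astype(int)
    feats = np.hstack([X_b, primary.predict_proba(X_b)])

    confidence = LogisticRegression(max_iter=1000)
    confidence.fit(feats, correct)
    return primary, confidence

def predict_with_confidence(primary, confidence, X_new):
    """Return the attack prediction and an estimate of P(prediction correct)."""
    label = primary.predict(X_new)
    feats = np.hstack([X_new, primary.predict_proba(X_new)])
    conf = confidence.predict_proba(feats)[:, 1]
    return label, conf
```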

    Two-stage online inference model for traffic pattern analysis and anomaly detection

    In this paper, we propose a method for modeling trajectory patterns from both regional and velocity observations through a probabilistic topic model. By embedding Gaussian models into the discrete topic model framework, our method uses continuous velocity as well as regional observations, unlike existing approaches. In addition, the proposed framework, combined with a Hidden Markov Model, can cover the temporal transition of the scene state, which is useful for checking violations of the rule that conflicting topics (e.g., two cross-traffic patterns) should not occur at the same time. To achieve online learning despite the complexity of the proposed model, we suggest a novel learning scheme in place of collapsed Gibbs sampling. The proposed two-stage greedy learning scheme is not only efficient at reducing the search space but also accurate, in that the accuracy of online learning is no worse than that of batch learning. To validate the performance of our method, experiments were conducted on various datasets. Experimental results show that our model satisfactorily explains trajectory patterns with respect to scene understanding, anomaly detection, and prediction.
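    A minimal sketch of the general idea follows, assuming a grid of regional cells and 2-D velocities; all class and function names are hypothetical and this is not the authors' code. Each topic couples a discrete distribution over cells with a Gaussian over velocities, and a two-stage greedy step hard-assigns each observation to its best topic and then updates only that topic's statistics, avoiding collapsed Gibbs sampling.

```python
# Sketch: topics over (region cell, continuous velocity) observations with
# greedy online assignment instead of collapsed Gibbs sampling. Hypothetical.
import numpy as np

class RegionVelocityTopic:
    def __init__(self, n_cells, dim=2):
        self.cell_counts = np.ones(n_cells)  # Dirichlet-smoothed cell counts
        self.mu = np.zeros(dim)              # velocity mean
        self.var = np.ones(dim)              # diagonal velocity variance
        self.n = 1.0                         # pseudo-count prior

    def log_lik(self, cell, vel):
        log_cell = np.log(self.cell_counts[cell] / self.cell_counts.sum())
        log_vel = -0.5 * np.sum((vel - self.mu) ** 2 / self.var
                                + np.log(2 * np.pi * self.var))
        return log_cell + log_vel

    def update(self, cell, vel):
        # Welford-style online update of the velocity mean and variance.
        self.cell_counts[cell] += 1
        self.n += 1
        delta = vel - self.mu
        self.mu += delta / self.n
        self.var += (delta * (vel - self.mu) - self.var) / self.n

def greedy_assign(topics, cell, vel):
    """Stage 1: pick the most likely topic; stage 2: update only that topic."""
    k = int(np.argmax([t.log_lik(cell, vel) for t in topics]))
    topics[k].update(cell, vel)
    return k

# Usage (hypothetical):
#   topics = [RegionVelocityTopic(n_cells=100) for _ in range(8)]
#   k = greedy_assign(topics, cell=42, vel=np.array([1.0, -0.5]))
```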

    Bayesian Prediction and Regression Analysis for Visual Data (영상 데이터에 대한 베이지안 예측 및 회귀 분석)

    Ph.D. dissertation, Seoul National University, Department of Electrical and Computer Engineering, February 2017. Advisor: 최진영.

    This dissertation proposes a new high-dimensional regression/prediction method for diverse visual data pairs. In contrast to other regression/prediction methods, the proposed method focuses on the case where output responses lie on a complex high-dimensional manifold, such as images. To handle such complex data, a latent space embedding the information of the data is used for efficient regression/prediction. Both the dimensionality reduction into the latent space and the regression/prediction methods are designed within a Bayesian framework. For the prediction problem, the dissertation proposes a method to extract latent semantics of the motion dynamics given in visual sequences. To this end, a Bayesian inference model is developed to capture the regional and temporal semantics of the dynamics data. The proposed Bayesian model is a hierarchical fusion of a Gaussian mixture model and a topic mixture model: it finds regional pattern information through the topic mixture model and derives the temporal co-occurrence of regional patterns through the Gaussian mixture model. To infer the proposed model, the dissertation introduces a new sampling method that enables efficient inference. For the regression problem, we propose a method that performs regression in the latent space for general and complex visual data pairs, allowing the latent space to capture the essential properties of the data pairs required for regression. To this end, the regression model is designed so that regression in the latent space coincides with regression in the data space. The whole model is formulated in a Bayesian framework and inferred with the variational autoencoder framework.

    Contents: 1 Introduction; 2 Preliminaries (Bayesian statistics, sampling-based and optimization-based inference, Gaussian process regression); 3 Prediction from Visual Data (hierarchical topic-Gaussian mixture model); 4 Regression of Visual Data (variational autoencoded regression); 5 Experiments; 6 Conclusion.
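    The latent-space regression idea can be sketched roughly as follows, assuming PyTorch and a plain (non-variational) autoencoder for brevity; module names and sizes are hypothetical, and this is not the dissertation's architecture. Outputs y are encoded into a low-dimensional code, and a regressor maps inputs x into that same code, so regression happens in the latent space and predictions are decoded back to data space.

```python
# Sketch: regression through a latent space (simplified, non-variational).
import torch
import torch.nn as nn

class LatentRegression(nn.Module):
    def __init__(self, x_dim, y_dim, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(y_dim, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))       # y -> z
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, y_dim))       # z -> y
        self.reg = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))       # x -> z

    def loss(self, x, y):
        z = self.enc(y)
        recon = nn.functional.mse_loss(self.dec(z), y)            # autoencoding
        align = nn.functional.mse_loss(self.reg(x), z.detach())   # latent regression
        return recon + align

    def predict(self, x):
        # Regress in the latent space, then decode back to data space.
        return self.dec(self.reg(x))
```

    Detaching the code in the alignment term is one simple way to make the regressor chase the autoencoder's representation rather than distort it; the dissertation's variational formulation would instead use stochastic codes with a KL regularizer.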

    Deep Hypernetworks for Learning from Dynamic Multimodal Data (동적 멀티모달 데이터 학습을 위한 심층 하이퍼네트워크)

    Ph.D. dissertation, Seoul National University, Department of Electrical and Computer Engineering, February 2015. Advisor: 장병탁.

    Recent advances in information and communication technology have led to an explosive increase in data. Unlike traditional data, which are structured and unimodal, data generated in dynamic environments are characterized by high dimensionality, multimodality, and lack of structure, as well as huge scale. Learning from non-stationary multimodal data is essential for solving many difficult problems in artificial intelligence. However, despite many successful reports, existing machine learning methods have mainly focused on problems backed by large-scale but static databases, such as image classification, tagging, and retrieval. Hypernetworks are a probabilistic graphical model that represents an empirical distribution with a hypergraph structure: a large collection of hyperedges encoding the associations among variables. This representation makes the model suitable for characterizing complex relationships between features with a population of building blocks. However, since a hypernetwork is defined over a huge combinatorial feature space, the model requires a large number of hyperedges to handle multimodal large-scale data and thus faces a scalability problem. In this dissertation, we propose a deep architecture of hypernetworks, i.e., deep hypernetworks, to address this scalability issue in learning from multimodal data with non-stationary properties, such as videos. Deep hypernetworks handle the issue through abstraction at multiple levels, using a hierarchy of multiple hypergraphs. We use a stochastic method based on Monte-Carlo simulation, graph Monte-Carlo, to efficiently construct hypergraphs representing the empirical distribution of the observed data. The structure of a deep hypernetwork changes continuously as learning proceeds, a flexibility that contrasts with other deep learning models. The proposed model learns incrementally from the data, thus handling non-stationary properties such as concept drift. The abstract representations in the learned models serve as multimodal knowledge about the data, which is used for content-aware crossmodal transformation, including vision-language conversion. We view vision-language conversion as machine translation and thus formulate it in terms of statistical machine translation. Since knowledge of the video stories is used for translation, we call this story-aware vision-language translation. We evaluate deep hypernetworks on large-scale vision-language multimodal data, including benchmark datasets and cartoon video series. The experimental results show that deep hypernetworks effectively represent visual-linguistic information abstracted at multiple levels of the data contents, as well as the associations between vision and language. We explain how the introduction of a hierarchy deals with scalability and non-stationarity. In addition, we present story-aware vision-language translation on cartoon videos, generating scene images from sentences and descriptive subtitles from scene images. Furthermore, we discuss the implications of our model for lifelong learning and directions for improvement toward human-level artificial intelligence.

    Contents: 1 Introduction; 2 Related Work (multimodal learning, higher-order graphical models); 3 Multimodal Hypernetworks for Text-to-Image Retrieval; 4 Deep Hypernetworks for Multimodal Concept Learning from Cartoon Videos; 5 Story-aware Vision-Language Translation using Deep Concept Hierarchies; 6 Concluding Remarks.
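    A rough sketch of the hyperedge-population idea described above, under stated assumptions (feature-value pairs as variables, a fixed hyperedge order, uniform Monte-Carlo sampling of hyperedges); all names are hypothetical and this is not the dissertation's implementation.

```python
# Sketch: a hypernetwork as a population of sampled hyperedges, each a small
# set of (feature, value) pairs, learned incrementally from examples.
import random
from collections import Counter

class Hypernetwork:
    def __init__(self, order=3, edges_per_example=20, seed=0):
        self.order = order          # number of variables per hyperedge
        self.k = edges_per_example  # hyperedges sampled per training example
        self.edges = Counter()      # hyperedge -> weight
        self.rng = random.Random(seed)

    def learn(self, example):
        """Incrementally sample hyperedges from one example (dict of
        feature -> value), Monte-Carlo style."""
        feats = list(example.items())
        for _ in range(self.k):
            edge = frozenset(self.rng.sample(feats, self.order))
            self.edges[edge] += 1

    def score(self, example):
        """Score an example by the total weight of hyperedges it matches."""
        items = set(example.items())
        return sum(w for e, w in self.edges.items() if e <= items)

# Usage (hypothetical): call learn() on each incoming example so the edge
# population, and hence the model structure, keeps changing as data arrive.
```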