98,182 research outputs found

    Towards Building Deep Networks with Bayesian Factor Graphs

    We propose a Multi-Layer Network based on the Bayesian framework of Factor Graphs in Reduced Normal Form (FGrn) applied to a two-dimensional lattice. The Latent Variable Model (LVM) is the basic building block of a quadtree hierarchy built on top of a bottom layer of random variables that represent pixels of an image, a feature map, or more generally a collection of spatially distributed discrete variables. The multi-layer architecture implements a hierarchical data representation that, via belief propagation, can be used for learning and inference. Typical uses are pattern completion, correction and classification. The FGrn paradigm provides great flexibility and modularity and appears to be a promising candidate for building deep networks: the system can be easily extended by introducing new variables that differ in cardinality and type. Prior knowledge, or supervised information, can be introduced at different scales. The FGrn paradigm provides a handy way of building all kinds of architectures by interconnecting only three types of units: Single Input Single Output (SISO) blocks, Sources and Replicators. The network is designed like a circuit diagram, and belief messages flow bidirectionally through the whole system. The learning algorithms operate only locally within each block. The framework is demonstrated in this paper in a three-layer structure applied to images extracted from a standard data set. Comment: Submitted for journal publication.
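    As a rough illustration of the three unit types (not the authors' implementation), the sketch below wires a Source, two SISO blocks and a Replicator into a toy two-level hierarchy and passes belief messages up and down it; the class names, conditional matrices and variable cardinalities are hypothetical assumptions.

```python
# Minimal sketch of FGrn-style units: a SISO block (conditional probability matrix
# between two discrete variables), a Source (prior) and a Replicator (equality node
# combining messages by element-wise product). Illustrative only.
import numpy as np

def normalize(m):
    """Normalize a belief message so it sums to one."""
    return m / m.sum()

class SISOBlock:
    """Single Input Single Output block: P(out | in) as a row-stochastic matrix."""
    def __init__(self, cond_matrix):
        self.P = np.asarray(cond_matrix, dtype=float)   # shape (|in|, |out|)

    def forward(self, msg_in):
        """Propagate a belief message from input to output."""
        return normalize(msg_in @ self.P)

    def backward(self, msg_out):
        """Propagate a belief message from output back to input."""
        return normalize(self.P @ msg_out)

def source(prior):
    """Source unit: injects a prior belief over a discrete variable."""
    return normalize(np.asarray(prior, dtype=float))

def replicator(*messages):
    """Replicator unit: combines incoming messages by element-wise product."""
    out = np.ones_like(messages[0])
    for m in messages:
        out *= m
    return normalize(out)

# Toy hierarchy: one latent variable with 3 states explains two binary "pixels"
# through two SISO blocks.
prior = source([1.0, 1.0, 1.0])                         # uniform prior on the latent state
block_a = SISOBlock([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
block_b = SISOBlock([[0.8, 0.2], [0.2, 0.8], [0.5, 0.5]])

evidence_a = np.array([1.0, 0.0])                       # pixel A observed in state 0
evidence_b = np.array([0.0, 1.0])                       # pixel B observed in state 1

# Upward messages from the pixels, fused with the prior at the replicator.
posterior = replicator(prior, block_a.backward(evidence_a), block_b.backward(evidence_b))
print("posterior over latent state:", posterior)

# Downward message: predicted belief for pixel A from everything else (pattern completion).
msg_down = replicator(prior, block_b.backward(evidence_b))
print("predicted belief for pixel A:", block_a.forward(msg_down))
```

    The same three units, replicated over a quadtree of latent variables, would give the multi-layer lattice described in the abstract; learning would then amount to locally re-estimating each SISO matrix from the messages crossing it.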

    Intrusion Detection System using Bayesian Network Modeling

    Computer network security has become a critical issue due to ever-increasing cybercrime, which spans from simple piracy to information theft in international terrorism. Defence agencies and other military-related organizations are highly concerned about the confidentiality and access control of their stored data. It is therefore important to investigate Intrusion Detection Systems (IDS) that detect and prevent cybercrimes and protect these systems. This research proposes a novel distributed IDS to detect and prevent attacks such as denial-of-service, probe, user-to-root and remote-to-user attacks. In this work, we propose an IDS based on Bayesian network classification modelling. Bayesian networks are popular for adaptive learning and for modelling diverse network traffic data to obtain meaningful classifications. The proposed model is an anomaly-based IDS with an adaptive learning process, and Bayesian networks are applied to build a robust and accurate IDS. The proposed IDS has been evaluated against the KDD DARPA dataset, which was designed for network IDS evaluation. The research methodology consists of four different Bayesian networks as classification models; these classifiers are interconnected and communicate to make predictions on incoming network traffic data. Each Bayesian network model is capable of detecting a major attack category, such as denial of service (DoS), and all four networks work together, passing classification information to calibrate the IDS. The proposed IDS shows the ability to detect novel attacks through continued learning on different datasets. The testing dataset was constructed by sampling the original KDD dataset to contain a balanced number of attacks and normal connections. The experiments show that the proposed system is effective in detecting attacks in the test dataset and is highly accurate in detecting all major attacks recorded in the DARPA dataset. The proposed IDS constitutes a promising approach for anomaly-based intrusion detection in distributed systems. Furthermore, a practical implementation of the proposed IDS can be used to train on and detect attacks in live network traffic.
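    A hedged sketch of the central idea, not the paper's implementation: one simple Bayesian classifier per attack category, each producing a posterior probability of attack for an incoming connection, with the highest-scoring category winning. The feature discretization, toy training data and 0.5 decision threshold below are illustrative assumptions rather than properties of the KDD/DARPA data.

```python
# Per-category detectors implemented as hand-rolled naive Bayes classifiers
# (the simplest Bayesian network classifier), combined by taking the highest
# posterior probability of attack. Illustrative only.
import numpy as np

class NaiveBayesDetector:
    """Binary naive Bayes over discrete features: P(attack | features)."""
    def __init__(self, n_values):
        self.n_values = n_values                        # discrete levels per feature

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.prior = np.array([(y == c).mean() for c in (0, 1)])
        # Laplace-smoothed per-class, per-feature value counts.
        self.likelihood = np.ones((2, X.shape[1], self.n_values))
        for c in (0, 1):
            for j in range(X.shape[1]):
                vals, counts = np.unique(X[y == c, j], return_counts=True)
                self.likelihood[c, j, vals] += counts
        self.likelihood /= self.likelihood.sum(axis=2, keepdims=True)
        return self

    def posterior_attack(self, x):
        """P(attack = 1 | x) by Bayes' rule with the naive independence assumption."""
        logp = np.log(self.prior).copy()
        for c in (0, 1):
            logp[c] += np.log(self.likelihood[c, np.arange(len(x)), x]).sum()
        p = np.exp(logp - logp.max())
        return p[1] / p.sum()

# Toy discretized connections: [duration_level, bytes_level, failed_logins_level].
X_train = [[0, 2, 0], [0, 2, 0], [1, 0, 0], [2, 0, 2], [0, 1, 0], [2, 2, 2]]
y_dos   = [1, 1, 0, 0, 0, 0]                            # labels for the DoS detector
y_r2l   = [0, 0, 0, 1, 0, 1]                            # labels for the R2L detector

detectors = {
    "dos": NaiveBayesDetector(n_values=3).fit(X_train, y_dos),
    "r2l": NaiveBayesDetector(n_values=3).fit(X_train, y_r2l),
}

connection = np.array([0, 2, 0])
scores = {name: d.posterior_attack(connection) for name, d in detectors.items()}
verdict = max(scores, key=scores.get) if max(scores.values()) > 0.5 else "normal"
print(scores, "->", verdict)
```

    In the paper the four category-specific models are full Bayesian networks trained on KDD features; the naive Bayes detectors here only stand in for them to show how per-category posteriors can be combined into a single verdict.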

    Measures of Variability for Bayesian Network Graphical Structures

    The structure of a Bayesian network includes a great deal of information about the probability distribution of the data, and it is uniquely identified given some general distributional assumptions. It is therefore important to study its variability, which can be used to compare the performance of different learning algorithms and to measure the strength of any arbitrary subset of arcs. In this paper we introduce some descriptive statistics, and the corresponding parametric and Monte Carlo tests, on the undirected graph underlying the structure of a Bayesian network, modeled as a multivariate Bernoulli random variable. A simple numeric example and a comparison of the performance of some structure learning algorithms on small samples then illustrate their use. Comment: 19 pages, 4 figures. arXiv admin note: substantial text overlap with arXiv:0909.168
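    A hedged illustration of the underlying idea rather than the paper's exact statistics: each possible undirected arc is treated as a Bernoulli variable across repeated structure-learning runs (for instance bootstrap resamples), and its presence frequency and variance summarize how stable the learned skeleton is. The adjacency matrices below are made-up stand-ins for the output of any structure learning algorithm.

```python
# Treat each undirected arc as a Bernoulli variable over repeated learned skeletons
# and compute per-arc presence frequency, per-arc variance, and a total-variance summary.
import numpy as np

def arc_variability(adjacency_matrices):
    """Per-arc presence frequency and Bernoulli variance over learned skeletons."""
    A = np.asarray(adjacency_matrices, dtype=float)     # shape (runs, n, n), symmetric 0/1
    n = A.shape[1]
    iu = np.triu_indices(n, k=1)                        # count each undirected arc once
    p = A[:, iu[0], iu[1]].mean(axis=0)                 # estimated P(arc present)
    var = p * (1.0 - p)                                 # Bernoulli variance per arc
    return p, var, var.sum()                            # total variance as a summary statistic

# Three learned skeletons over 4 nodes (illustrative, e.g. from bootstrap samples).
G1 = [[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]]
G2 = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
G3 = [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]]

p, var, total = arc_variability([G1, G2, G3])
print("arc presence frequencies:", p)
print("per-arc Bernoulli variances:", np.round(var, 3))
print("total variability:", round(total, 3))
```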