
    Detection of Covert Channel Encoding in Network Packet Delays

    Covert channels are mechanisms for communicating information in ways that are difficult to detect. Data exfiltration can be an indication that a computer has been compromised by an attacker even when other intrusion detection schemes have failed to detect a successful attack. Covert timing channels use packet inter-arrival times, not information embedded in headers or payloads, to encode covert messages. This paper investigates the channel capacity of Internet-based timing channels and proposes a methodology for detecting covert timing channels based on how close a source comes to achieving that channel capacity. A statistical approach is then applied to the special case of binary codes.
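
    To make the encoding concrete, here is a minimal sketch of a binary timing channel: the sender chooses one of two inter-packet delays per bit, and the receiver thresholds the observed inter-arrival times. The delay values, jitter model, and detection hint are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a binary covert timing channel, assuming a simple
# two-delay code. All constants below are hypothetical.
import random

D0, D1 = 0.05, 0.15   # hypothetical delays (seconds) encoding 0 and 1
JITTER = 0.02          # hypothetical network jitter (std. dev., seconds)

def encode(bits):
    """Map each bit to an inter-packet delay, plus Gaussian jitter."""
    return [(D1 if b else D0) + random.gauss(0.0, JITTER) for b in bits]

def decode(delays):
    """Threshold inter-arrival times at the midpoint of the two delays."""
    threshold = (D0 + D1) / 2.0
    return [1 if d > threshold else 0 for d in delays]

message = [random.randint(0, 1) for _ in range(1000)]
received = decode(encode(message))
errors = sum(m != r for m, r in zip(message, received))
print(f"bit error rate: {errors / len(message):.3f}")

# A detector in the spirit of the paper would compare the empirical
# inter-arrival distribution against a capacity-achieving one, e.g.
# flagging sources whose delays cluster tightly around a few values.
```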

    A program for the Bayesian Neural Network in the ROOT framework

    We present a Bayesian Neural Network algorithm implemented in the TMVA package, within the ROOT framework. Compared to the conventional use of a Neural Network as a discriminator, this implementation has advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost function selection, complexity control, and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available in ROOT releases later than 5.29. Comment: 12 pages, 6 figures
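
    A hedged sketch of how such a Bayesian MLP might be booked from PyROOT follows. It is written against the ROOT 6 TMVA API (Factory plus DataLoader, which postdates the 5.29 release the abstract mentions); the toy variable, tree names, and the assumption that the `UseRegulator` MLP option enables the Bayesian treatment are ours, not guaranteed by the paper.

```python
# Sketch: booking a Bayesian-regularized MLP in TMVA via PyROOT.
# Toy data and option choices are illustrative assumptions.
import random
import ROOT

out = ROOT.TFile("tmva_bnn.root", "RECREATE")

# Toy signal/background trees with a single discriminating variable x.
sig = ROOT.TNtuple("sig", "signal", "x")
bkg = ROOT.TNtuple("bkg", "background", "x")
for _ in range(5000):
    sig.Fill(random.gauss(1.0, 1.0))
    bkg.Fill(random.gauss(-1.0, 1.0))

factory = ROOT.TMVA.Factory("BNNJob", out,
                            "AnalysisType=Classification:Silent")
loader = ROOT.TMVA.DataLoader("dataset")
loader.AddVariable("x", "F")
loader.AddSignalTree(sig, 1.0)
loader.AddBackgroundTree(bkg, 1.0)
loader.PrepareTrainingAndTestTree(ROOT.TCut(""),
                                  "SplitMode=Random:NormMode=NumEvents")

# UseRegulator (assumed here) turns on the Bayesian complexity control;
# the remaining options are ordinary MLP settings.
factory.BookMethod(loader, ROOT.TMVA.Types.kMLP, "BNN",
                   "NCycles=300:HiddenLayers=N+1:"
                   "TrainingMethod=BFGS:UseRegulator")

factory.TrainAllMethods()
factory.TestAllMethods()
factory.EvaluateAllMethods()
out.Close()
```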

    Equivalence Proofs for Multi-Layer Perceptron Classifiers and the Bayesian Discriminant Function

    This paper presents a number of proofs that equate the outputs of a Multi-Layer Perceptron (MLP) classifier and the optimal Bayesian discriminant function for asymptotically large sets of statistically independent training samples. Two broad classes of objective functions are shown to yield Bayesian discriminant performance. The first class consists of “reasonable error measures,” which achieve Bayesian discriminant performance by engendering classifier outputs that asymptotically equate to a posteriori probabilities. This class includes the mean-squared error (MSE) objective function as well as a number of information-theoretic objective functions. The second class consists of classification figures of merit (CFM_mono), which yield a qualified approximation to Bayesian discriminant performance by engendering classifier outputs that asymptotically identify the maximum a posteriori probability for a given input. Conditions and relationships for Bayesian discriminant functional equivalence are given for both classes of objective functions. Differences between the two classes are then discussed briefly in the context of how they might affect MLP classifier generalization, given relatively small training sets.
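
    For the MSE member of the first class, the heart of the equivalence argument can be sketched in two lines; this is the standard bias-variance decomposition stated for a 0/1 target, with our notation rather than the paper's.

```latex
% Let y = 1 when input x belongs to class C_1 and y = 0 otherwise.
% The expected squared error of a network output F(x) decomposes as
\[
  \mathbb{E}\big[(F(x) - y)^2\big]
    = \mathbb{E}\big[(F(x) - \mathbb{E}[y \mid x])^2\big]
    + \mathbb{E}\big[\operatorname{Var}(y \mid x)\big].
\]
% The second term is independent of F, so the minimizer over all F is
\[
  F^*(x) = \mathbb{E}[y \mid x] = P(C_1 \mid x),
\]
% i.e., with enough data and capacity, MSE training drives the MLP
% output toward the a posteriori probability, which is exactly the
% quantity the Bayesian discriminant thresholds.
```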

    Elementary Derivative Tasks and Neural Net Multiscale Analysis of Tasks

    Neural nets are known to be universal approximators. In particular, formal neurons implementing wavelets have been shown to build nets able to approximate any multidimensional task. Such very specialized formal neurons may, however, be difficult to obtain biologically and/or industrially. In this paper we relax the constraint of a strict "Fourier analysis" of tasks. Rather, we use a finite number of more realistic formal neurons implementing elementary tasks such as "window" or "Mexican hat" responses, with adjustable widths. This is shown to provide a reasonably efficient, practical, and robust multifrequency analysis. A training algorithm, optimizing the task with respect to the widths of the responses, reveals two distinct training modes. The first mode induces some of the formal neurons to become identical, hence promotes "derivative tasks". The other mode keeps the formal neurons distinct. Comment: latex neurondlt.tex, 7 files, 6 figures, 9 pages [SPhT-T01/064], submitted to Phys. Rev.
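
    A minimal sketch of the idea, assuming a 1-D task: approximate a target function with a small sum of "Mexican hat" neurons whose amplitudes and widths are trained by gradient descent, in the spirit of the paper's width-optimizing algorithm. The centers, learning rate, and toy target below are illustrative choices, not the paper's.

```python
# Fit a 1-D toy task with Mexican-hat neurons of trainable width.
import numpy as np

def mexican_hat(x, c, w):
    """Second-derivative-of-Gaussian response centered at c, width w."""
    u = (x - c) / w
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 200)
target = np.sin(2.0 * x) * np.exp(-0.2 * x**2)   # toy task

centers = np.linspace(-2.5, 2.5, 6)               # fixed centers
widths = np.full(6, 1.0)                          # trainable widths
amps = rng.normal(0.0, 0.1, 6)                    # trainable amplitudes

lr, eps = 0.01, 1e-4
for step in range(5000):
    phi = np.stack([mexican_hat(x, c, w) for c, w in zip(centers, widths)])
    err = amps @ phi - target
    # Gradient of mean squared error w.r.t. amplitudes (analytic) and
    # widths (finite differences, to keep the sketch short).
    grad_a = 2.0 * phi @ err / x.size
    grad_w = np.empty_like(widths)
    for i in range(widths.size):
        phi_p = mexican_hat(x, centers[i], widths[i] + eps)
        err_p = err + amps[i] * (phi_p - phi[i])
        grad_w[i] = (np.mean(err_p**2) - np.mean(err**2)) / eps
    amps -= lr * grad_a
    widths -= lr * grad_w
    widths = np.clip(widths, 0.1, 5.0)            # keep widths positive

phi = np.stack([mexican_hat(x, c, w) for c, w in zip(centers, widths)])
print("final MSE:", float(np.mean((amps @ phi - target) ** 2)))
# Watching whether trained widths (and centers, if also freed) collapse
# onto each other is how one would look for the paper's first training
# mode, in which neurons become identical and promote "derivative tasks".
```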