
    Keyed Non-Parametric Hypothesis Tests

    The recent popularity of machine learning calls for a deeper understanding of AI security. Among the numerous AI threats published so far, poisoning attacks currently attract considerable attention. In a poisoning attack the opponent partially tampers with the dataset used for learning in order to mislead the classifier during the testing phase. This paper proposes a new protection strategy against poisoning attacks. The technique relies on a new primitive called keyed non-parametric hypothesis tests, which makes it possible to evaluate, under adversarial conditions, the training input's conformance with a previously learned distribution $\mathfrak{D}$. To do so we use a secret key $\kappa$ unknown to the opponent. Keyed non-parametric hypothesis tests differ from classical tests in that the secrecy of $\kappa$ prevents the opponent from misleading the keyed test into concluding that a (significantly) tampered dataset belongs to $\mathfrak{D}$.
    Comment: Paper published in NSS 2019
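    The abstract does not spell out a concrete construction, but the idea can be sketched by keying a classical two-sample test. The toy below keys a Kolmogorov-Smirnov test with a secret random projection derived from $\kappa$: an opponent who does not know the key does not know along which direction conformance with $\mathfrak{D}$ will be checked. The function `keyed_ks_test`, the projection-based keying, and all parameters are illustrative assumptions, not the paper's actual primitive.

```python
import numpy as np
from scipy.stats import ks_2samp

def keyed_ks_test(reference, batch, key, alpha=0.01):
    """Hypothetical keyed test: a two-sample KS test computed on a secret
    random projection whose direction is derived from the key. An opponent
    who cannot guess the key cannot tune a poisoned batch to match the
    reference distribution along the tested direction."""
    rng = np.random.default_rng(key)              # secret key -> secret direction
    direction = rng.normal(size=reference.shape[1])
    direction /= np.linalg.norm(direction)
    _, p_value = ks_2samp(reference @ direction, batch @ direction)
    return p_value >= alpha                       # True: batch conforms to D

# Toy usage: a clean batch passes, a shifted (poisoned) batch is flagged.
rng = np.random.default_rng(0)
reference = rng.normal(size=(2000, 10))           # sample from the learned D
clean = rng.normal(size=(500, 10))
poisoned = clean + 0.5                            # adversarial mean shift
key = 0xC0FFEE                                    # secret, unknown to the opponent
print(keyed_ks_test(reference, clean, key))       # expected: True
print(keyed_ks_test(reference, poisoned, key))    # expected: False
```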

    Robust Loss Functions under Label Noise for Deep Neural Networks

    In many applications of classifier learning, training data suffers from label noise. Deep networks are learned from huge training sets, where the problem of noisy labels is particularly relevant. Current techniques for learning deep networks under label noise focus on modifying the network architecture and on algorithms for estimating true labels from noisy ones. An alternative approach is to look for loss functions that are inherently noise-tolerant. For binary classification there exist theoretical results on loss functions that are robust to label noise. In this paper, we provide sufficient conditions on a loss function so that risk minimization under that loss function is inherently tolerant to label noise for multiclass classification problems. These results generalize the existing results on noise-tolerant loss functions for binary classification. We study some of the widely used loss functions in deep networks and show that the loss function based on the mean absolute value of error is inherently robust to label noise. Thus standard backpropagation is enough to learn the true classifier even under label noise. Through experiments, we illustrate the robustness of risk minimization with such loss functions for learning neural networks.
    Comment: Appeared in AAAI 2017
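    The sufficient condition at the heart of this result is symmetry: a loss $L$ is noise-tolerant when $\sum_{j=1}^{k} L(f(x), j)$ is the same constant for every prediction $f(x)$. Mean absolute error over softmax outputs satisfies this (the sum is always $2k - 2$), while cross-entropy does not. A minimal numerical check of that property, written for this summary rather than taken from the paper:

```python
import numpy as np

def mae_loss(probs, label):
    """MAE between the softmax output and the one-hot label;
    for a probability vector p this equals 2 - 2 * p[label]."""
    one_hot = np.eye(len(probs))[label]
    return np.abs(one_hot - probs).sum()

def cross_entropy_loss(probs, label):
    return -np.log(probs[label])

probs = np.array([0.7, 0.2, 0.1])  # arbitrary softmax output, k = 3 classes

# Symmetric: the sum over all labels is 2k - 2 = 4 regardless of probs.
print(sum(mae_loss(probs, j) for j in range(3)))
# Not symmetric: the sum depends on probs, so noise tolerance is not implied.
print(sum(cross_entropy_loss(probs, j) for j in range(3)))
```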

    A Keyed Anomaly Detection System - Key-Recovery Attacks

    The Keyed Intrusion Detection System (KIDS) is an application-layer anomaly detection scheme that extracts a number of features ("words") from each payload. KIDS's resistance to evasion attacks builds on the notion of a key, this being a secret element used to determine how classification features are extracted from the payload. Our focus is on recovering the key through efficient query strategies, showing that the classification process leaks information about the key that can be leveraged by an attacker. In our work, we analyze the strength of KIDS against key-recovery attacks. We show that recovering the key is remarkably simple provided that the attacker can interact with KIDS and obtain feedback about the classification of probing requests.
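    To make the leakage concrete, the toy model below invents a drastically simplified "keyed" detector whose key is a secret set of delimiter characters; one probe per candidate character, each answered only with accept/reject, recovers the whole key. The scheme, the `detector` and `recover_key` functions, and the threshold are illustrative assumptions, not the actual KIDS design or the paper's attacks.

```python
import re
import string

SECRET_KEY = {",", ";", "|"}    # toy key: a secret set of delimiter characters

def detector(payload, max_words=3):
    """Toy keyed detector: split the payload on the secret delimiters and
    reject it when it yields too many words (a stand-in for 'anomalous')."""
    pattern = "|".join(re.escape(d) for d in SECRET_KEY)
    words = [w for w in re.split(pattern, payload) if w]
    return len(words) <= max_words              # True = accepted

def recover_key(candidates=string.punctuation):
    """Query-based key recovery: a probe containing three copies of a
    candidate character splits into four words, and is therefore rejected,
    exactly when the candidate is a delimiter. Each accept/reject answer
    thus leaks whether one character belongs to the key."""
    return {c for c in candidates if not detector(c.join(["a"] * 4))}

print(recover_key() == SECRET_KEY)              # expected: True
```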

    Comprehensive Literature Review on Machine Learning Structures for Web Spam Classification

    Various Web spam features and machine learning structures have been proposed for classifying Web spam in recent years. The aim of this paper is to provide a comprehensive comparison of machine learning algorithms within the Web spam detection community. Several machine learning algorithms and ensemble meta-algorithms were evaluated as classifiers, using the area under the receiver operating characteristic curve (AUC) as the performance measure, on two publicly available datasets (WEBSPAM-UK2006 and WEBSPAM-UK2007). The results show that random forest combined with variations of AdaBoost achieved an AUC of 0.937 on WEBSPAM-UK2006 and 0.852 on WEBSPAM-UK2007.
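    A sketch of the review's best-performing configuration, AdaBoost over random-forest base learners scored by AUC. Synthetic data stands in for the WEBSPAM corpora, which must be obtained separately; the `estimator` parameter assumes scikit-learn 1.2 or later, and the hyperparameters are illustrative rather than those of the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in data: an imbalanced binary task playing the role of the
# spam/ham feature matrices extracted from WEBSPAM-UK2006/UK2007.
X, y = make_classification(n_samples=3000, n_features=40,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# AdaBoost boosting random-forest base learners, mirroring the
# "random forest with variations of AdaBoost" combination.
model = AdaBoostClassifier(
    estimator=RandomForestClassifier(n_estimators=50, random_state=0),
    n_estimators=10,
    random_state=0,
)
model.fit(X_tr, y_tr)

# Area under the ROC curve, the evaluation measure used in the study.
scores = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, scores))
```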