
    Keyed Non-Parametric Hypothesis Tests

    The recent popularity of machine learning calls for a deeper understanding of AI security. Among the numerous AI threats published so far, poisoning attacks currently attract considerable attention. In a poisoning attack the opponent partially tampers with the dataset used for learning in order to mislead the classifier during the testing phase. This paper proposes a new protection strategy against poisoning attacks. The technique relies on a new primitive called keyed non-parametric hypothesis tests, which allows evaluating, under adversarial conditions, the training input's conformance with a previously learned distribution $\mathfrak{D}$. To do so we use a secret key $\kappa$ unknown to the opponent. Keyed non-parametric hypothesis tests differ from classical tests in that the secrecy of $\kappa$ prevents the opponent from misleading the keyed test into concluding that a (significantly) tampered dataset belongs to $\mathfrak{D}$.

    Comment: Paper published in NSS 2019
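
    The abstract does not spell out a concrete keying mechanism, so the following is only a minimal sketch of the general idea, under the assumption that the secret key seeds a hidden random projection applied to the data before a classical two-sample Kolmogorov-Smirnov test. Function and parameter names (keyed_ks_test, kappa, alpha) are hypothetical, not taken from the paper.

    ```python
    # Sketch of a keyed non-parametric test: the secret key kappa seeds a
    # random projection (an illustrative assumption, not the paper's exact
    # construction) applied to both the reference sample drawn from D and
    # the incoming training batch, followed by a classical KS test.
    import numpy as np
    from scipy.stats import ks_2samp

    def keyed_ks_test(reference, batch, kappa, alpha=0.01):
        """Accept `batch` only if its keyed projection matches `reference`.

        reference : (n, d) array of samples from the learned distribution D
        batch     : (m, d) array of incoming training data
        kappa     : secret integer key unknown to the opponent
        alpha     : significance level of the test
        """
        rng = np.random.default_rng(kappa)           # key-dependent randomness
        direction = rng.normal(size=reference.shape[1])
        direction /= np.linalg.norm(direction)       # secret unit projection

        # Project both samples onto the secret direction (1-D marginals).
        ref_proj = reference @ direction
        batch_proj = batch @ direction

        # Classical (unkeyed) two-sample Kolmogorov-Smirnov test on the
        # keyed projections; the opponent cannot target this direction
        # without knowing kappa.
        stat, p_value = ks_2samp(ref_proj, batch_proj)
        return p_value >= alpha, p_value             # True = batch accepted

    # Usage example on synthetic data: a partially poisoned batch is rejected.
    if __name__ == "__main__":
        gen = np.random.default_rng(0)
        clean = gen.normal(0.0, 1.0, size=(1000, 8))
        poisoned = np.vstack([clean[:800], gen.normal(3.0, 1.0, size=(200, 8))])
        accepted, p = keyed_ks_test(clean[:500], poisoned, kappa=123456)
        print(f"accepted={accepted}, p-value={p:.4g}")
    ```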