5,039 research outputs found

    Detecting adversarial manipulation using inductive Venn-ABERS predictors

    Inductive Venn-ABERS predictors (IVAPs) are probabilistic predictors with the theoretical guarantee that their predictions are perfectly calibrated. In this paper, we propose to exploit this calibration property for the detection of adversarial examples in binary classification tasks. By rejecting predictions when the uncertainty of the IVAP is too high, we obtain an algorithm that is both accurate on the original test set and resistant to adversarial examples. This robustness is observed both for adversarial examples crafted against the underlying model and for those generated by taking the IVAP into account. The method appears to offer robustness competitive with the state of the art in adversarial defense while being computationally much more tractable.
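The reject-if-uncertain rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the IVAP has already produced a calibrated probability interval (p0, p1) for the positive class, and the function names and the width threshold are hypothetical.

```python
# Sketch of the reject-if-uncertain rule (hypothetical names and threshold).
# An IVAP outputs a calibrated probability interval (p0, p1) for the positive
# class; the interval width is a natural uncertainty measure.

def merge_interval(p0: float, p1: float) -> float:
    """Combine the multiprobability (p0, p1) into a single score, using the
    standard Venn-ABERS merging rule p1 / (1 - p0 + p1)."""
    return p1 / (1.0 - p0 + p1)

def predict_or_reject(p0: float, p1: float, width_threshold: float = 0.1):
    """Return a class label, or None to abstain when the IVAP interval is
    too wide (the proposed adversarial-detection criterion)."""
    if p1 - p0 > width_threshold:
        return None  # uncertainty too high: flag as potentially adversarial
    return int(merge_interval(p0, p1) >= 0.5)

# A clean input typically yields a tight interval; an adversarial one, a wide interval.
print(predict_or_reject(0.88, 0.93))  # tight interval: a label is returned
print(predict_or_reject(0.30, 0.75))  # wide interval: prediction rejected
```

In practice the threshold would be tuned on held-out data to trade off rejection rate on clean inputs against detection rate on adversarial ones.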

    Detecting adversarial examples with inductive Venn-ABERS predictors

    Inductive Venn-ABERS predictors (IVAPs) are probabilistic predictors with the theoretical guarantee that their predictions are perfectly calibrated. We propose to exploit this calibration property for the detection of adversarial examples in binary classification tasks. By rejecting predictions when the uncertainty of the IVAP is too high, we obtain an algorithm that is both accurate on the original test set and significantly more robust to adversarial examples. The method appears to be competitive with the state of the art in adversarial defense, both in terms of robustness and scalability.

    Conformal Prediction: a Unified Review of Theory and New Challenges

    In this work we provide a review of the basic ideas and novel developments of Conformal Prediction -- an innovative distribution-free, non-parametric forecasting method based on minimal assumptions -- which yields, in a very straightforward way, prediction sets that are statistically valid even in the finite-sample case. The in-depth discussion covers the theoretical underpinnings of Conformal Prediction and then proceeds to the more advanced developments and adaptations of the original idea.
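The finite-sample validity the abstract refers to can be illustrated with the simplest variant, split conformal prediction for regression. The sketch below assumes absolute residuals as the non-conformity score and made-up calibration numbers; it is an illustration of the general idea, not code from the reviewed paper.

```python
import math

def split_conformal_interval(cal_residuals, y_hat, alpha=0.1):
    """Split conformal prediction: given absolute residuals |y_i - f(x_i)|
    on a held-out calibration set, return a prediction interval for a new
    point with finite-sample coverage >= 1 - alpha (under exchangeability)."""
    n = len(cal_residuals)
    scores = sorted(cal_residuals)
    # conformal quantile: the ceil((n + 1)(1 - alpha))-th smallest score
    k = math.ceil((n + 1) * (1 - alpha))
    q = scores[min(k, n) - 1]
    return (y_hat - q, y_hat + q)

# Toy usage: calibration residuals from some fitted model (made-up numbers).
residuals = [0.2, 0.5, 0.1, 0.9, 0.4, 0.3, 0.7, 0.6, 0.8, 1.0]
lo, hi = split_conformal_interval(residuals, y_hat=5.0, alpha=0.2)
print(lo, hi)
```

The coverage guarantee needs no distributional assumptions beyond exchangeability of calibration and test points, which is the "distribution-free" property emphasized above.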

    Quantum Conformal Prediction for Reliable Uncertainty Quantification in Quantum Machine Learning

    In this work, we aim to augment the decisions output by quantum models with "error bars" that provide finite-sample coverage guarantees. Quantum models implement implicit probabilistic predictors that produce multiple random decisions for each input through measurement shots. Randomness arises not only from the inherent stochasticity of quantum measurements, but also from quantum gate noise and quantum measurement noise caused by noisy hardware. Furthermore, quantum noise may be correlated across shots and may drift over time. This paper proposes to leverage such randomness to define prediction sets, for both classification and regression, that provably capture the uncertainty of the model. The approach builds on probabilistic conformal prediction (PCP) while accounting for the unique features of quantum models. Among the key technical innovations, we introduce a new general class of non-conformity scores that address the presence of quantum noise, including possible drifts. Experimental results, using both simulators and current quantum computers, confirm the theoretical calibration guarantees of the proposed framework.
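One way to turn measurement shots into conformal prediction sets is sketched below. This is a simplified, hypothetical score choice (one minus the empirical shot frequency of a label), not the paper's general class of noise-aware scores; all names and numbers are illustrative.

```python
import math
from collections import Counter

def shot_score(shots, label):
    """Non-conformity score built from measurement shots: one minus the
    empirical frequency of `label` among the shot outcomes (a simple,
    hypothetical choice; drift-robust scores are more involved)."""
    return 1.0 - Counter(shots)[label] / len(shots)

def calibrate(cal_shots, cal_labels, alpha=0.2):
    """Conformal quantile of the calibration scores."""
    scores = sorted(shot_score(s, y) for s, y in zip(cal_shots, cal_labels))
    n = len(scores)
    k = min(math.ceil((n + 1) * (1 - alpha)), n)
    return scores[k - 1]

def prediction_set(shots, labels, qhat):
    """All labels whose shot-based score falls below the threshold."""
    return {y for y in labels if shot_score(shots, y) <= qhat}

# Toy calibration data: 4 inputs, 8 shots each, with their true labels.
cal_shots = [[0]*6 + [1]*2, [1]*7 + [0], [0]*5 + [1]*3, [1]*6 + [0]*2]
cal_labels = [0, 1, 0, 1]
qhat = calibrate(cal_shots, cal_labels, alpha=0.25)
print(prediction_set([0]*7 + [1], labels=[0, 1], qhat=qhat))
```

Because the scores are computed from the same random shots that the deployed model would produce, hardware noise is baked into the calibration rather than treated as a nuisance.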

    Hardening DGA classifiers utilizing IVAP

    Domain Generation Algorithms (DGAs) are used by malware to generate a deterministic set of domains, usually by utilizing a pseudo-random seed. A malicious botmaster can establish connections between their command-and-control center (C&C) and any malware-infected machines by registering domains that will be DGA-generated given a specific seed, rendering traditional domain blacklisting ineffective. Given the nature of this threat, the real-time detection of DGA domains based on incoming DNS traffic is highly important. The use of neural network machine learning (ML) models for this task has been well studied, but there is still substantial room for improvement. In this paper, we propose to use Inductive Venn-ABERS predictors (IVAPs) to calibrate the output of existing ML models for DGA classification. The IVAP is a computationally efficient procedure which consistently improves the predictive accuracy of classifiers, at the expense of withholding predictions for a small subset of inputs and requiring additional training data.
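The calibration step that IVAPs add on top of an existing classifier can be sketched in pure Python. This is a minimal, illustrative version of the standard Venn-ABERS multiprobability construction (isotonic regression via pool-adjacent-violators, refit once with each hypothetical label for the test point); production implementations use a more efficient two-pass algorithm and careful tie handling.

```python
def pava(values, weights):
    """Pool Adjacent Violators: weighted isotonic (non-decreasing) fit."""
    blocks = []  # each block: [mean, total_weight, count]
    for v, w in zip(values, weights):
        blocks.append([v, w, 1])
        # merge backwards while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            w_tot = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w_tot, w_tot, c1 + c2])
    out = []
    for m, _, c in blocks:
        out.extend([m] * c)
    return out

def venn_abers(cal_scores, cal_labels, s):
    """Multiprobability (p0, p1) for a test score s: append (s, 0) and then
    (s, 1) to the calibration data, refit isotonic regression each time,
    and read off the fitted value at s."""
    interval = []
    for hypothetical in (0, 1):
        pts = sorted(zip(cal_scores + [s], cal_labels + [hypothetical]))
        fit = pava([y for _, y in pts], [1.0] * len(pts))
        idx = [x for x, _ in pts].index(s)
        interval.append(fit[idx])
    return tuple(interval)

# Toy usage: four calibration scores with labels, one test score.
p0, p1 = venn_abers([0.1, 0.2, 0.6, 0.8], [0, 0, 1, 1], 0.7)
print(p0, p1)
```

The width p1 - p0 is exactly the uncertainty measure that drives abstention: wide intervals mark the small subset of DNS queries for which the calibrated classifier withholds a prediction.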

    Random sets and exact confidence regions

    An important problem in statistics is the construction of confidence regions for unknown parameters. In most cases, asymptotic distribution theory is used to construct confidence regions, so any coverage probability claims hold only approximately, for large samples. This paper describes a new approach, using random sets, which allows users to construct exact confidence regions without appeal to asymptotic theory. In particular, if the user-specified random set satisfies a certain validity property, confidence regions obtained by thresholding the induced data-dependent plausibility function are shown to have the desired coverage probability.
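The thresholding construction can be made concrete with the textbook example of a normal mean. The plausibility function below is the one induced by the default random set for U = Phi(X - theta) with X ~ N(theta, 1); this specific choice is an assumption for illustration, not necessarily the paper's running example.

```python
import math

def plausibility(theta, x):
    """Plausibility of theta given one observation X ~ N(theta, 1),
    induced by the default symmetric random set for U = Phi(X - theta)."""
    u = 0.5 * (1.0 + math.erf((x - theta) / math.sqrt(2.0)))  # Phi(x - theta)
    return 1.0 - abs(2.0 * u - 1.0)

def confidence_region(x, alpha=0.05, grid=None):
    """Threshold the plausibility function at alpha; the resulting region
    has exact coverage 1 - alpha, with no asymptotics involved."""
    if grid is None:
        grid = [x - 5 + 0.001 * i for i in range(10001)]
    return [t for t in grid if plausibility(t, x) >= alpha]

region = confidence_region(x=0.0, alpha=0.05)
# the region endpoints approach +/- 1.96, the exact 95% interval
print(min(region), max(region))
```

In this example the thresholded region coincides with the classical exact interval, which is the sanity check one would expect: the random-set construction recovers exact coverage where a closed-form answer is available.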