1,138 research outputs found

    Privacy-Preserving Adversarial Networks

    We propose a data-driven framework for optimizing privacy-preserving data release mechanisms to attain the information-theoretically optimal tradeoff between minimizing distortion of useful data and concealing specific sensitive information. Our approach employs adversarially-trained neural networks to implement randomized mechanisms and to perform a variational approximation of mutual information privacy. We validate our Privacy-Preserving Adversarial Networks (PPAN) framework via proof-of-concept experiments on discrete and continuous synthetic data, as well as the MNIST handwritten digits dataset. For synthetic data, our model-agnostic PPAN approach achieves tradeoff points very close to the optimal tradeoffs that are analytically derived from model knowledge. In experiments with the MNIST data, we visually demonstrate a learned tradeoff between minimizing pixel-level distortion and concealing the written digit. Comment: 16 pages
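    A minimal sketch of the adversarial setup the abstract describes, in PyTorch: a randomized release mechanism is trained to keep distortion low while an adversary network, acting as a variational posterior over the sensitive attribute, supplies the mutual-information penalty. The class names, network sizes, and the Lagrangian weight `lam` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class Privatizer(nn.Module):
    """Randomized release mechanism: maps data plus noise to a released version."""
    def __init__(self, dim, noise_dim=4):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(nn.Linear(dim + noise_dim, 32), nn.ReLU(),
                                 nn.Linear(32, dim))

    def forward(self, x):
        z = torch.randn(x.size(0), self.noise_dim, device=x.device)
        return self.net(torch.cat([x, z], dim=1))

class Adversary(nn.Module):
    """Variational posterior over the sensitive attribute given the release."""
    def __init__(self, dim, n_sensitive=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(),
                                 nn.Linear(32, n_sensitive))

    def forward(self, y):
        return self.net(y)

def train_step(x, s, priv, adv, opt_p, opt_a, lam=1.0):
    ce = nn.CrossEntropyLoss()
    # 1) Adversary tightens the variational bound by predicting s from the release.
    adv_loss = ce(adv(priv(x).detach()), s)
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()
    # 2) Privatizer trades distortion against leakage: subtracting the adversary's
    #    cross-entropy pushes its posterior away from the true sensitive attribute.
    y = priv(x)
    priv_loss = ((y - x) ** 2).mean() - lam * ce(adv(y), s)
    opt_p.zero_grad(); priv_loss.backward(); opt_p.step()
```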

    Predictive Uncertainty through Quantization

    High-risk domains require reliable confidence estimates from predictive models. Deep latent variable models provide these, but suffer from the rigid variational distributions used for tractable inference, which err on the side of overconfidence. We propose Stochastic Quantized Activation Distributions (SQUAD), which imposes a flexible yet tractable distribution over discretized latent variables. The proposed method is scalable, self-normalizing, and sample-efficient. We demonstrate that the model fully utilizes the flexible distribution, learns interesting non-linearities, and provides predictive uncertainty of competitive quality.
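    The core idea, as a rough sketch: instead of a fixed-form posterior, each latent unit carries a categorical distribution over a fixed grid of quantized activation values, so the distribution can be multi-modal while remaining tractable. Everything below (layer name, grid range, number of bins, the Gumbel-softmax relaxation) is an assumption for illustration, not the paper's exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantizedActivation(nn.Module):
    """Each output unit holds a categorical distribution over a value grid."""
    def __init__(self, in_dim, out_dim, n_bins=16, lo=-3.0, hi=3.0):
        super().__init__()
        self.out_dim, self.n_bins = out_dim, n_bins
        self.logits = nn.Linear(in_dim, out_dim * n_bins)  # per-unit, per-bin logits
        self.register_buffer("grid", torch.linspace(lo, hi, n_bins))

    def forward(self, x):
        logits = self.logits(x).view(-1, self.out_dim, self.n_bins)
        # Differentiable sample over bins, then map bin weights to grid values.
        sample = F.gumbel_softmax(logits, tau=1.0, hard=False)
        return (sample * self.grid).sum(-1)

# Predictive uncertainty from repeated stochastic passes on the same input:
layer = QuantizedActivation(10, 32)
x = torch.randn(8, 10)
outs = torch.stack([layer(x) for _ in range(20)])
print(outs.std(0).shape)  # per-unit spread across samples: (8, 32)
```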

    A Survey on Uncertainty Estimation in Deep Learning Classification Systems from a Bayesian Perspective

    Decision-making based on machine learning systems, especially when this decision-making can affect human lives, is a subject of maximum interest in the Machine Learning community. It is, therefore, necessary to equip these systems with a means of estimating uncertainty in the predictions they emit in order to help practitioners make more informed decisions. In the present work, we introduce the topic of uncertainty estimation, and we analyze the peculiarities of such estimation when applied to classification systems. We analyze different methods that have been designed to provide classification systems based on deep learning with mechanisms for measuring the uncertainty of their predictions. We will take a look at how this uncertainty can be modeled and measured using different approaches, as well as practical considerations of different applications of uncertainty. Moreover, we review some of the properties that should be borne in mind when developing such metrics. All in all, the present survey aims at providing a pragmatic overview of the estimation of uncertainty in classification systems that can be very useful for both academic research and deep learning practitioners.
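    One concrete recipe from this literature, to make the ideas tangible: Monte Carlo dropout, where dropout stays active at test time, the softmax is averaged over several stochastic passes, and predictive entropy splits into an aleatoric and an epistemic part. This is a generic sketch of a standard technique surveyed in such works, not code from the survey itself; `model` stands for any dropout-equipped classifier.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_uncertainty(model, x, T=50, eps=1e-12):
    model.train()  # keeps dropout layers stochastic at inference time
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(T)])  # (T, N, C)
    mean = probs.mean(0)
    predictive_entropy = -(mean * (mean + eps).log()).sum(-1)           # total
    expected_entropy = -(probs * (probs + eps).log()).sum(-1).mean(0)   # aleatoric
    mutual_info = predictive_entropy - expected_entropy                 # epistemic
    return mean, predictive_entropy, mutual_info
```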

    Further Details on Predicting IRT Difficulty

    This supplementary material serves as the technical appendix of the paper "When AI Difficulty is Easy: The Explanatory Power of Predicting IRT Difficulty" (Martínez-Plumed et al. 2022), published in The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22). The following sections give detailed information about: 1) data gathering for benchmarks; 2) IRT properties and the methodology followed; 3) learning model configurations and hyperparameter settings; 4) differences between difficulty prediction and class prediction; 5) the deployment and results of alternative approaches for difficulty estimation; 6) specifics and results of using a generic difficulty metric in different applications; and 7) extended IRT applications.
    Martínez Plumed, F.; Castellano Falcón, D.; Monserrat Aranda, C.; Hernández Orallo, J. (2022). Further Details on Predicting IRT Difficulty. http://hdl.handle.net/10251/18133
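    For context on what "IRT difficulty" means operationally, here is the standard three-parameter logistic (3PL) item response function: the probability that a respondent of ability theta answers an item correctly, given discrimination a, difficulty b, and guessing floor c. This is the textbook formulation, not code from the appendix.

```python
import math

def irt_3pl(theta, a=1.0, b=0.0, c=0.0):
    """P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Higher difficulty b shifts the curve right: the same ability succeeds less often.
# e.g. irt_3pl(0.0, b=-1.0) ~= 0.73 vs irt_3pl(0.0, b=1.0) ~= 0.27
```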