308 research outputs found

    On the Differential Privacy of Bayesian Inference

    We study how to communicate findings of Bayesian inference to third parties while preserving the strong guarantee of differential privacy. Our main contributions are four different algorithms for private Bayesian inference on probabilistic graphical models. These include two mechanisms for adding noise to the Bayesian updates, either directly to the posterior parameters, or to their Fourier transform so as to preserve update consistency. We also utilise a recently introduced posterior sampling mechanism, for which we prove bounds for the specific but general case of discrete Bayesian networks; and we introduce a maximum-a-posteriori private mechanism. Our analysis includes utility and privacy bounds, with a novel focus on the influence of graph structure on privacy. Worked examples and experiments with Bayesian naïve Bayes and Bayesian linear regression illustrate the application of our mechanisms. Comment: AAAI 2016, Feb 2016, Phoenix, Arizona, United States
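
    The snippet below is a minimal illustrative sketch of the simplest flavour of mechanism the abstract mentions (noise added directly to posterior parameters), not the paper's algorithms: it perturbs the sufficient statistics of a Beta-Bernoulli update with Laplace noise, assuming the add/remove-one neighbouring relation so the count vector has L1 sensitivity 1. The function name and defaults are hypothetical.

    ```python
    import numpy as np

    def private_beta_posterior(data, prior_alpha=1.0, prior_beta=1.0, epsilon=1.0, rng=None):
        """Beta-Bernoulli posterior with Laplace noise added to the sufficient
        statistics. Under the add/remove-one neighbouring relation the count
        vector has L1 sensitivity 1, so scale = 1/epsilon gives
        epsilon-differential privacy for releasing the noisy posterior.
        Illustrative sketch only, not the paper's exact mechanism."""
        rng = np.random.default_rng() if rng is None else rng
        data = np.asarray(data)
        successes = float(data.sum())
        failures = float(len(data) - data.sum())
        # Perturb the counts, then clamp so the posterior parameters stay positive.
        noisy_s = max(0.0, successes + rng.laplace(scale=1.0 / epsilon))
        noisy_f = max(0.0, failures + rng.laplace(scale=1.0 / epsilon))
        return prior_alpha + noisy_s, prior_beta + noisy_f

    # Example: release a private posterior over a binary attribute.
    rng = np.random.default_rng(0)
    data = rng.binomial(1, 0.3, size=200)
    print(private_beta_posterior(data, epsilon=0.5, rng=rng))
    ```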

    Encrypted accelerated least squares regression.

    Information that is stored in an encrypted format is, by definition, usually not amenable to statistical analysis or machine learning methods. In this paper we present a detailed analysis of coordinate and accelerated gradient descent algorithms which are capable of fitting least squares and penalised ridge regression models, using data encrypted under a fully homomorphic encryption scheme. Gradient descent is shown to dominate in terms of encrypted computational speed, and theoretical results are proven to give parameter bounds which ensure correctness of decryption. The characteristics of encrypted computation are empirically shown to favour a non-standard acceleration technique. This demonstrates the possibility of approximating conventional statistical regression methods using encrypted data without compromising privacy.
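
    As a hedged sketch of the style of solver the abstract describes, the snippet below runs Nesterov-accelerated gradient descent for ridge regression on plaintext data. The point of interest is that each update uses only additions and multiplications of fixed depth, which is what makes such iterations amenable to fully homomorphic encryption; the function name, step-size choice, and momentum schedule here are assumptions of this sketch, not the paper's.

    ```python
    import numpy as np

    def ridge_nesterov(X, y, lam=1.0, lr=None, iters=200):
        """Nesterov-accelerated gradient descent for ridge regression.
        The iteration involves only additions and multiplications, the
        property exploited by encrypted solvers; here it runs on plaintext."""
        n, d = X.shape
        if lr is None:
            # Safe step size: inverse Lipschitz constant of the gradient.
            lr = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + lam)
        beta = np.zeros(d)
        momentum = np.zeros(d)
        for t in range(1, iters + 1):
            lookahead = beta + momentum
            grad = X.T @ (X @ lookahead - y) / n + lam * lookahead
            new_beta = lookahead - lr * grad
            momentum = (t - 1) / (t + 2) * (new_beta - beta)
            beta = new_beta
        return beta

    # Example on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)
    print(ridge_nesterov(X, y, lam=0.1))
    ```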

    Differentially Private Statistical Inference through β-Divergence One Posterior Sampling

    Differential privacy guarantees allow the results of a statistical analysis involving sensitive data to be released without compromising the privacy of any individual taking part. Achieving such guarantees generally requires the injection of noise, either directly into parameter estimates or into the estimation process. Instead of artificially introducing perturbations, sampling from Bayesian posterior distributions has been shown to be a special case of the exponential mechanism, producing consistent and efficient private estimates without altering the data generative process. The application of current approaches has, however, been limited by their strong bounding assumptions, which do not hold for basic models such as simple linear regressors. To ameliorate this, we propose βD-Bayes, a posterior sampling scheme from a generalised posterior targeting the minimisation of the β-divergence between the model and the data generating process. This provides private estimation that is generally applicable without requiring changes to the underlying model and consistently learns the data generating parameter. We show that βD-Bayes produces more precise estimation for the same privacy guarantees, and further facilitates differentially private estimation via posterior sampling for complex classifiers and continuous regression models such as neural networks for the first time.
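
    A rough sketch of the "one posterior sample" idea with a β-divergence loss is given below for a Bernoulli model on a parameter grid with a uniform prior. It uses one common parameterisation of the β-divergence (density power divergence) loss and does not compute the bound needed to calibrate the privacy guarantee, so it illustrates the sampling scheme rather than the authors' βD-Bayes algorithm; all names are assumptions of this sketch.

    ```python
    import numpy as np

    def beta_div_loss(x, theta, beta=1.5):
        """β-divergence (density power divergence) loss for a Bernoulli model,
        in one common parameterisation with beta > 1."""
        f_x = theta ** x * (1 - theta) ** (1 - x)
        integral_term = theta ** beta + (1 - theta) ** beta  # sum over z in {0, 1}
        return -f_x ** (beta - 1) / (beta - 1) + integral_term / beta

    def one_posterior_sample(data, beta=1.5, grid_size=2000, rng=None):
        """Draw a single sample from the generalised (β-divergence) posterior of a
        Bernoulli parameter on a grid, under a uniform prior. Releasing one such
        sample is the 'one posterior sample' style of private estimation; the
        privacy level depends on a bound on the loss that is not computed here."""
        rng = np.random.default_rng() if rng is None else rng
        theta = np.linspace(1e-3, 1 - 1e-3, grid_size)
        log_post = np.zeros(grid_size)  # uniform prior contributes a constant
        for x in np.asarray(data):
            log_post -= beta_div_loss(x, theta, beta)
        log_post -= log_post.max()
        probs = np.exp(log_post)
        probs /= probs.sum()
        return rng.choice(theta, p=probs)

    # Example: release one sample as the private estimate.
    rng = np.random.default_rng(0)
    data = rng.binomial(1, 0.7, size=100)
    print(one_posterior_sample(data, beta=1.5, rng=rng))
    ```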

    Privacy-preserving pandemic monitoring


    Regularised Volterra series models for modelling of nonlinear self-excited forces on bridge decks

    Volterra series models are considered an attractive approach for modelling nonlinear aerodynamic forces on bridge decks since they extend the convolution integral to higher dimensions. Optimal identification of nonlinear systems is a challenging task since there are typically many unknown variables that need to be determined, and it is vital to avoid overfitting. Several methods exist for identifying Volterra kernels from experimental data, but a large class of them places restrictions on the system inputs, making them infeasible for section model tests of bridge decks. A least-squares identification method does not restrict the inputs, but the identified model often struggles with noisy (non-smooth) kernels, which are deemed unphysical and a sign of overfitting. In this work, regularised least-squares identification is introduced to improve the performance of least-squares model identification. Standard Tikhonov regularisation and other penalty techniques that impose decaying kernels are also explored. The performance of the methodology is studied using experimental data from wind tunnel tests of a twin deck section. The regularised Volterra models show equal or better results in terms of modelling the self-excited forces, and the regularisation makes the models less prone to overfitting.
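
    As a schematic of regularised least-squares kernel identification (first-order only, with a plain Tikhonov penalty rather than the decay-enforcing penalties also explored in the paper), the sketch below identifies a convolution kernel from input-output data; the variable names and the synthetic example are assumptions of this sketch, not the paper's setup.

    ```python
    import numpy as np

    def first_order_regressor(u, memory):
        """Regressor matrix whose columns are delayed copies of the input, so
        y ≈ Phi @ h is a discrete first-order Volterra (convolution) model.
        Higher-order kernels would add columns of products of delayed inputs."""
        n = len(u)
        Phi = np.zeros((n, memory))
        for k in range(memory):
            Phi[k:, k] = u[: n - k]
        return Phi

    def tikhonov_identify(u, y, memory=50, lam=1e-2):
        """Identify the kernel by Tikhonov-regularised least squares:
        h = argmin ||Phi h - y||^2 + lam ||h||^2. The penalty suppresses the
        noisy, non-smooth kernels that plain least squares tends to produce."""
        Phi = first_order_regressor(u, memory)
        A = Phi.T @ Phi + lam * np.eye(memory)
        return np.linalg.solve(A, Phi.T @ y)

    # Example: recover a decaying impulse response from noisy data.
    rng = np.random.default_rng(1)
    true_h = np.exp(-0.1 * np.arange(50))
    u = rng.normal(size=2000)
    y = np.convolve(u, true_h)[:2000] + 0.05 * rng.normal(size=2000)
    h_hat = tikhonov_identify(u, y, memory=50, lam=1.0)
    print(np.max(np.abs(h_hat - true_h)))
    ```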
