Perturbing Inputs to Prevent Model Stealing
We show how perturbing inputs to a machine learning service (ML-service)
deployed in the cloud can protect against model stealing attacks. In our
formulation, there is an ML-service that receives inputs from users and returns
the output of the model. An attacker is interested in learning the
parameters of the ML-service's model. We use linear and logistic regression
models to illustrate how strategically adding noise to the inputs fundamentally
alters the attacker's estimation problem. We show that even with infinite
samples, the attacker would not be able to recover the true model parameters.
We focus on characterizing the trade-off between the error in the attacker's
estimate of the parameters and the error in the ML-service's output.
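The core effect for the logistic case can be sketched numerically. The snippet below is an illustrative toy, not the paper's actual perturbation scheme: the slope `true_w` and noise scale `sigma` are made-up values, and the "attacker" simply grid-searches for the logistic slope that best fits the noise-averaged service responses. Because input noise flattens the effective response curve, the best-fitting slope stays biased even in the infinite-sample limit, mirroring the claim in the abstract.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
true_w, sigma = 3.0, 2.0          # hypothetical model slope and input-noise scale
xs = np.linspace(-4.0, 4.0, 81)   # attacker's query points

# Service response averaged over input noise: E_e[sigmoid(w * (x + e))].
# Perturbing the input flattens the effective response curve.
noise = rng.normal(0.0, sigma, size=100_000)
avg_out = np.array([sigmoid(true_w * (x + noise)).mean() for x in xs])

# Attacker's best logistic fit over a grid of candidate slopes: even with
# noiseless averaged responses (the infinite-sample limit), the fit is biased.
cands = np.linspace(0.1, 5.0, 300)
errs = [np.mean((sigmoid(w * xs) - avg_out) ** 2) for w in cands]
w_hat = cands[int(np.argmin(errs))]
print(w_hat < true_w)   # the recovered slope is attenuated below true_w
```

Note that for a purely linear model, independent additive input noise would average out of the attacker's least-squares estimate, which is why the abstract emphasizes that the noise must be added strategically.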
Deep partial least squares for instrumental variable regression
In this paper, we propose deep partial least squares for the estimation of high-dimensional nonlinear instrumental variable regression. As a precursor to a flexible deep neural network architecture, our methodology uses partial least squares for dimension reduction and feature selection from the set of instruments and covariates. A central theoretical result, due to Brillinger (2012), Selected Works of David Brillinger, 589-606, shows that the feature selection provided by partial least squares is consistent and the weights are estimated up to a proportionality constant. We illustrate our methodology with synthetic datasets with a sparse and correlated network structure and draw applications to the effect of childbearing on the mother's labor supply based on classic data of Chernozhukov et al., Ann Rev Econ (2015b): 649-688. The results on synthetic data as well as the applications show that the deep partial least squares method significantly outperforms other related methods. Finally, we conclude with directions for future research.
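The Brillinger-type result cited in the abstract can be checked in a small simulation. This is a sketch under simplifying assumptions, not the paper's full deep PLS pipeline: the dimension `p`, the nonlinearity `tanh`, and the weight vector are all invented for illustration. With Gaussian inputs and an outcome depending on a single index w·x through an unknown nonlinearity, the first PLS weight vector (proportional to X'y) recovers w up to a proportionality constant.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50_000, 10                 # hypothetical sample size and dimension

# Single-index model: y depends on x only through w·x via an unknown
# nonlinearity g (tanh here), plus noise.
w = rng.normal(size=p)
X = rng.normal(size=(n, p))
y = np.tanh(X @ w) + 0.1 * rng.normal(size=n)

# First PLS weight direction is proportional to X'y (X is centered by
# construction); by a Stein's-lemma argument it aligns with w for Gaussian X.
w1 = X.T @ y / n
cos = (w1 @ w) / (np.linalg.norm(w1) * np.linalg.norm(w))
print(round(cos, 3))   # close to 1: direction recovered up to scale
```

The deep network in the paper's method would then model the nonlinearity on the low-dimensional PLS scores rather than on the raw instruments and covariates.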