
    Differential Privacy for Deep Learning-based Online Energy Disaggregation System


    Trustworthy AI for Huge Data Generation and Process from IoT Devices (White Paper)


    Perturbing Inputs to Prevent Model Stealing

    We show how perturbing the inputs to machine learning services (ML-services) deployed in the cloud can protect against model-stealing attacks. In our formulation, an ML-service receives inputs from users and returns the output of its model, while an attacker seeks to learn the model's parameters. Using linear and logistic regression models, we illustrate how strategically adding noise to the inputs fundamentally alters the attacker's estimation problem: even with infinitely many samples, the attacker cannot recover the true model parameters. We then characterize the trade-off between the error in the attacker's estimate of the parameters and the error in the ML-service's output.
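
    The following is a minimal sketch, not the paper's exact construction, of how input perturbation can bias a model-stealing attacker's estimate. The assumed setup: the service evaluates a logistic regression on a noisy copy of each query and returns a sampled label, while the attacker fits logistic regression on the clean queries it submitted. The names (noise_std, w_true) and the choice of label outputs rather than probabilities are illustrative assumptions; the effect shown is classical errors-in-variables attenuation.

```python
# Sketch: input perturbation as a defense against model stealing.
# ASSUMED setup (not the paper's exact mechanism): the service adds
# Gaussian noise to each query before evaluating a logistic model and
# returns a sampled label; the attacker fits on the clean queries.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n, noise_std = 5, 200_000, 1.0   # hypothetical dimension, sample size, noise scale
w_true = rng.normal(size=d)         # the service's private parameters

# Attacker's queries (clean) and the service's perturbed evaluations.
X = rng.normal(size=(n, d))
X_perturbed = X + noise_std * rng.normal(size=(n, d))
p = 1.0 / (1.0 + np.exp(-X_perturbed @ w_true))
y = rng.binomial(1, p)              # service returns one label per query

# Attacker fits an effectively unregularized logistic regression on what
# it actually observes: clean inputs paired with perturbed-model labels.
attacker = LogisticRegression(C=1e6, fit_intercept=False, max_iter=1000)
attacker.fit(X, y)

w_hat = attacker.coef_.ravel()
print("true w   :", np.round(w_true, 3))
print("attacker :", np.round(w_hat, 3))
print("shrinkage:", round(np.linalg.norm(w_hat) / np.linalg.norm(w_true), 3))
# Even as n grows, w_hat converges to an attenuated (shrunk toward zero)
# version of w_true rather than to w_true: the noise has changed the
# attacker's estimation problem, so infinite samples do not help.
```

    In this sketch the attenuation grows with noise_std, which illustrates the trade-off the abstract describes: larger perturbations push the attacker's estimate further from the true parameters, but they also move the service's outputs further from what the clean model would have returned.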