
    Integrally private model selection for decision trees

    Privacy attacks targeting machine learning models are evolving. One of the primary goals of such attacks is to infer information about the training data used to construct the models. “Integral privacy” focuses on machine learning and statistical models, and explains how an intruder’s uncertainty can be exploited to provide a privacy guarantee against model comparison attacks. Through experimental results, we show how the distribution of models can be used to achieve integral privacy. We observe two categories of machine learning models based on their frequency of occurrence in the model space, and explain the privacy implications of selecting each of them under a new attack model, supported by empirical results. We also provide recommendations for private model selection based on the accuracy and stability of the models, along with the diversity of training data that can be used to generate them.
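The idea of selecting models by their frequency of occurrence in the model space can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: it trains decision trees on many resamples of the data, groups trees by their exported text representation (an assumed, simplified notion of model equality), and reports how often the most recurrent model appears.

```python
# Sketch of frequency-based (recurrent) model selection over resampled training sets.
# Assumptions: sklearn decision trees; exact-match on export_text() as model identity.
import random
from collections import Counter

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
n = len(X)
rng = random.Random(0)

counts = Counter()   # model representation -> number of generating resamples
models = {}          # model representation -> one fitted tree with that structure
for _ in range(200):
    # Draw a random subsample of the training data.
    idx = [rng.randrange(n) for _ in range(n // 2)]
    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[idx], y[idx])
    key = export_text(clf)  # textual form used as a (simplified) canonical identity
    counts[key] += 1
    models[key] = clf

# A model generated by many distinct training sets gives an attacker
# more uncertainty about which data produced it.
most_common_key, freq = counts.most_common(1)[0]
print(f"most recurrent model occurs in {freq} of 200 resamples")
```

In practice, model equality would need a more robust notion than exact text matching (e.g. tolerances on split thresholds), and the selection would also weigh accuracy and stability as the abstract recommends.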