2 research outputs found

    Enhancing Transparency and Control when Drawing Data-Driven Inferences about Individuals

    Recent studies have shown that information disclosed on social network sites (such as Facebook) can be used to predict personal characteristics with surprisingly high accuracy. In this paper we examine a method to give online users transparency into why certain inferences are made about them by statistical models, and control to inhibit those inferences by hiding ("cloaking") certain personal information from inference. We use this method to examine whether such transparency and control would be a reasonable goal by assessing how difficult it would be for users to actually inhibit inferences. Applying the method to data from a large collection of real Facebook users, we show that a user must cloak only a small portion of her Facebook Likes in order to inhibit inferences about her personal characteristics. However, we also show that, in response, a firm could change its modeling of users to make cloaking more difficult.
    Comment: presented at the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY
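    To make the cloaking mechanism concrete, here is a minimal sketch (not the paper's implementation) of greedy cloaking against a hypothetical linear model: assume the model scores a user as a weighted sum of binary Like indicators plus an intercept, and draws the inference when the score exceeds a threshold; the user then repeatedly hides the most heavily weighted remaining Like until the inference no longer fires. All names here (coef, intercept, threshold) are illustrative assumptions.

        def cloak_likes(user_likes, coef, intercept, threshold):
            """Greedily hide the highest-weight Likes until the model's score
            drops to the decision threshold. Returns the set of Likes to cloak,
            or None if removing every positively weighted Like is not enough."""
            active = set(user_likes)
            cloaked = set()
            score = intercept + sum(coef.get(like, 0.0) for like in active)
            while score > threshold:
                # Evidence-counterfactual step: pick the remaining Like that
                # contributes most to the score.
                best = max(active, key=lambda like: coef.get(like, 0.0), default=None)
                if best is None or coef.get(best, 0.0) <= 0.0:
                    return None  # no positive evidence left to remove
                active.remove(best)
                cloaked.add(best)
                score -= coef[best]
            return cloaked

        # Hypothetical example: the score (0.9) exceeds the threshold (0.5)
        # until the single most predictive Like is cloaked.
        coef = {"like_a": 0.6, "like_b": 0.3, "like_c": 0.1}
        print(cloak_likes({"like_a", "like_b", "like_c"}, coef, -0.1, 0.5))  # {'like_a'}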

    Enhancing Transparency and Control when Drawing Data-Driven Inferences about Individuals

    Recent studies show the remarkable power of information disclosed by users on social network sites to infer the users' personal characteristics via predictive modeling. In response, attention is turning increasingly to the transparency that sites provide to users as to what inferences are drawn and why, as well as to what sort of control users can be given over inferences that are drawn about them. We draw on the evidence counterfactual as a means for providing transparency into why particular inferences are drawn about individual users. We then introduce the idea of a "cloaking device" as a vehicle to provide (and to study) control. Specifically, the cloaking device provides a mechanism for users to inhibit the use of particular pieces of information in inference; combined with the transparency provided by the evidence counterfactual, a user can control model-driven inferences while minimizing the disruption to her normal activity. Using these analytical tools we ask two main questions: (1) How much information must users cloak in order to significantly affect inferences about their personal traits? We find that usually a user must cloak only a small portion of her actions in order to inhibit inference. We also find that, encouragingly, false-positive inferences are significantly easier to cloak than true-positive inferences. (2) Can firms change their modeling behavior to make cloaking more difficult? The answer is a definitive yes. In our main results we replicate the methodology of Kosinski et al. (2013) for modeling personal traits; we then demonstrate a simple modeling change that still gives accurate inferences of personal traits, but requires users to cloak substantially more information to affect the inferences drawn. The upshot is that organizations can provide transparency and control even into complicated, predictive model-driven inferences, but they can also make modeling choices that make control easier or harder for their users.
    Columbia University; New York University; NYU Stern School of Business; NYU Center for Data Science
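    One hedged illustration of why a modeling change can make cloaking harder (not necessarily the paper's exact change): under greedy cloaking, a model that concentrates its evidence on one highly predictive Like is defeated by hiding a single Like, while a model that spreads the same total evidence across many weakly predictive Likes forces the user to hide far more of them. The weights and threshold below are made up for the example.

        def greedy_cloak_count(weights, threshold):
            """Number of Likes a user must hide, removing the highest-weight
            active Like each step, before the score falls to the threshold."""
            score, cloaked = sum(weights), 0
            for w in sorted(weights, reverse=True):
                if score <= threshold:
                    break
                score -= w
                cloaked += 1
            return cloaked

        # The same total evidence (2.0) against a threshold of 1.0,
        # allocated in two different ways.
        concentrated = [2.0] + [0.0] * 9   # one highly predictive Like
        spread = [0.2] * 10                # many weakly predictive Likes

        print(greedy_cloak_count(concentrated, 1.0))  # -> 1 Like cloaked
        print(greedy_cloak_count(spread, 1.0))        # -> 5 Likes cloaked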
