5 research outputs found

    Counteracting the Negative Effect of Form Auto-completion on the Privacy Calculus

    When filling out web forms, people typically do not want to submit every piece of requested information to every website. Instead, they selectively disclose information after weighing the potential benefits and risks of disclosure: a process called “privacy calculus”. Giving users control over what to enter is a prerequisite for this selective disclosure behavior. Exercising this control by manually filling out a form is a burden, though. Modern browsers therefore offer an auto-completion feature that automatically fills out forms with previously stored values. This feature is convenient, but it makes it so easy to submit a fully completed form that users seem to skip the privacy calculus altogether. In an experiment, we compare this traditional auto-completion tool with two alternative tools that give users more control. While users of the traditional tool indeed forego their selective disclosure behavior, the alternative tools effectively reinstate the privacy calculus.
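
    The privacy calculus described here is, at its core, a per-field cost-benefit comparison. The sketch below is purely illustrative (the field names and benefit/risk scores are hypothetical, not values from the study); it models selective disclosure as revealing a field only when its perceived benefit outweighs its perceived risk.

        # Illustrative model of the privacy calculus as a per-field cost-benefit rule.
        # Benefit and risk scores are hypothetical placeholders, not values from the study.
        form_fields = {
            # field: (perceived_benefit, perceived_risk) on an arbitrary 0-10 scale
            "email":        (8.0, 3.0),
            "phone_number": (2.0, 6.0),
            "birth_date":   (1.0, 7.0),
            "postal_code":  (5.0, 2.0),
        }

        def selective_disclosure(fields):
            """Return only the fields whose perceived benefit exceeds the perceived risk."""
            return [name for name, (benefit, risk) in fields.items() if benefit > risk]

        print(selective_disclosure(form_fields))  # -> ['email', 'postal_code']

    Under this reading, traditional auto-completion submits every field regardless of the comparison, whereas the alternative tools keep the per-field decision in the user's hands.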

    Thinking Styles and Privacy Decisions: Need for Cognition, Faith into Intuition, and the Privacy Calculus

    Investigating the cognitive processes that underlie privacy-related decisions, prior research has primarily adopted a privacy calculus view, treating privacy-related decisions as rational anticipations of the risks and benefits connected to data disclosure. Referring to psychological limitations and heuristic thinking, however, recent research has discussed notions of bounded rationality in this context. Adopting this view, the current research argues that privacy decisions are guided by thinking styles, i.e. individual preferences for deciding in either a rational or an intuitive way. Results of a survey indicated that individuals high in rational thinking, as reflected by a high need for cognition, anticipated and weighed risks and benefits more thoroughly. In contrast, individuals relying on experiential thinking (as reflected by a high faith into intuition) skipped rational considerations and relied on their hunches rather than a privacy calculus when assessing intentions to disclose information. Theoretical and practical implications of these findings are discussed.
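
    A common way to test this kind of moderation claim is to regress disclosure intention on perceived benefits, perceived risks, and their interactions with the two thinking-style scores. The sketch below uses simulated data; the variable names, effect sizes, and the use of statsmodels are assumptions for illustration, not details from the paper.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated survey data; variable names are hypothetical stand-ins for the constructs.
        rng = np.random.default_rng(0)
        n = 300
        df = pd.DataFrame({
            "benefits": rng.normal(size=n),            # anticipated benefits of disclosure
            "risks": rng.normal(size=n),               # anticipated risks of disclosure
            "need_for_cognition": rng.normal(size=n),  # rational thinking style
            "faith_in_intuition": rng.normal(size=n),  # experiential thinking style
        })
        # Simulated outcome: benefits and risks matter more for high need-for-cognition respondents.
        df["intention"] = (
            0.4 * df["benefits"] - 0.4 * df["risks"]
            + 0.3 * df["need_for_cognition"] * (df["benefits"] - df["risks"])
            + rng.normal(scale=0.5, size=n)
        )

        # Interaction terms test whether thinking style moderates the privacy calculus.
        model = smf.ols(
            "intention ~ (benefits + risks) * (need_for_cognition + faith_in_intuition)",
            data=df,
        ).fit()
        print(model.summary())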

    Enhancing Transparency and Control when Drawing Data-Driven Inferences about Individuals

    Recent studies show the remarkable power of information disclosed by users on social network sites to infer the users' personal characteristics via predictive modeling. In response, attention is turning increasingly to the transparency that sites provide to users as to what inferences are drawn and why, as well as to what sort of control users can be given over inferences that are drawn about them. We draw on the evidence counterfactual as a means for providing transparency into why particular inferences are drawn about them. We then introduce the idea of a "cloaking device" as a vehicle to provide (and to study) control. Specifically, the cloaking device provides a mechanism for users to inhibit the use of particular pieces of information in inference; combined with the transparency provided by the evidence counterfactual, a user can control model-driven inferences while minimizing the amount of disruption to her normal activity. Using these analytical tools we ask two main questions: (1) How much information must users cloak in order to significantly affect inferences about their personal traits? We find that usually a user must cloak only a small portion of her actions in order to inhibit inference. We also find that, encouragingly, false positive inferences are significantly easier to cloak than true positive inferences. (2) Can firms change their modeling behavior to make cloaking more difficult? The answer is a definitive yes. In our main results we replicate the methodology of Kosinski et al. (2013) for modeling personal traits; then we demonstrate a simple modeling change that still gives accurate inferences of personal traits, but requires users to cloak substantially more information to affect the inferences drawn. The upshot is that organizations can provide transparency and control even into complicated, predictive model-driven inferences, but they also can make modeling choices to make control easier or harder for their users.
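
    For a linear classifier, both the evidence counterfactual and the cloaking device can be sketched as a greedy search: repeatedly cloak the active feature that contributes most to the positive prediction until the predicted class flips. The sketch below illustrates that idea under a linear-model assumption; it is not the authors' implementation, and the feature names and weights are hypothetical.

        def greedy_cloak(weights, bias, active_features):
            """Greedily cloak the highest-weight active features until a linear
            classifier no longer predicts the positive class. Returns the
            (approximately minimal) cloaked set and the resulting score."""
            active = set(active_features)
            cloaked = []
            score = bias + sum(weights[f] for f in active)
            while score > 0 and active:
                # Cloak the action (e.g. a Like or page follow) with the largest positive weight.
                f = max(active, key=lambda feat: weights[feat])
                if weights[f] <= 0:
                    break  # removing negative-weight evidence cannot lower the score
                active.remove(f)
                cloaked.append(f)
                score -= weights[f]
            return cloaked, score

        # Hypothetical weights for a trait classifier over binary "likes".
        weights = {"like_A": 1.2, "like_B": 0.9, "like_C": 0.3, "like_D": -0.4}
        cloaked, final_score = greedy_cloak(
            weights, bias=-1.0,
            active_features=["like_A", "like_B", "like_C", "like_D"],
        )
        print(cloaked, final_score)  # -> ['like_A'] -0.2

    The modeling change the paper alludes to would, in this framing, spread predictive weight across many small-weight features, so that far more actions must be cloaked before the score drops below the decision threshold.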
