
    A Behavioral Economics Perspective on the Formation and Effects of Privacy Risk Perceptions in the Context of Privacy-Invasive Information Systems

    In recent years, a growing number of information systems have proliferated that gather, process and analyze data about the environment they are deployed in. This data often refers to individuals who use these systems or are located in their surroundings, in which case it is referred to as personal information. Once such personal information is gathered by an information system, it is usually beyond the user's control how and for which purpose this information is processed or stored. Users are well aware that this loss of control over their personal information can be associated with negative long-term effects due to exploitation and misuse of the information they provided. This makes using information systems that gather this kind of information a double-edged sword. One can either use such systems and realize their utility, thereby threatening one's own privacy, or keep one's privacy intact but forego the benefits provided by the information system. The decision whether to adopt this type of information system therefore represents a tradeoff between benefits and risks. The vast majority of information systems privacy research to date has assumed that this tradeoff is dominated by deliberate analyses and rational considerations, which lead to fully informed privacy-related attitudes and behaviors. However, models based on these assumptions often fail to accurately predict real-life behaviors and lead to confounding empirical observations. This thesis therefore investigates to what extent the risk associated with disclosing personal information to privacy-invasive information systems influences user behavior against the background of more complex models of human decision-making. The results of these investigations have been published in the three scientific publications of which this cumulative doctoral thesis is composed. 
These publications are based on three large-scale empirical studies that employ experimental approaches and are underpinned by qualitative as well as quantitative pre-studies. The studies are guided by, and focus on, different stages of the process of perceiving, evaluating and mentally processing privacy risk perceptions in considering whether to disclose personal information and ultimately use privacy-invasive information systems. The first study addresses two conceptualizations of privacy-related behaviors that are often used interchangeably in privacy research, although it has never been investigated whether they are indeed equivalent: intentions to disclose personal information to an information system and intentions to use an information system (and thereby disclose information). By transferring the multiple-selves problem to information systems privacy research, theoretical arguments are developed and empirical evidence is provided that those two intentions are (1) conceptually different and (2) formed in different cognitive processes. A vignette-based factorial survey with 143 participants shows that, while risk perceptions have more impact on disclosure intentions than on usage intentions, the opposite holds for the hedonic benefits provided by the information system: these have more impact on usage intentions than on disclosure intentions. The second study moves one step further by addressing the systematically different mental processing of perceived risks and benefits of information disclosure when considering only one dependent variable. In particular, it investigates the assumption that the perceived benefits and risks of information disclosure possess additive utility and are therefore weighed against each other by evaluating a simple utility function such as “Utility = Benefit – Cost”. 
Based on regulatory focus theory and an experimental pre-study with 59 participants, theoretical arguments are developed that (1) the perception of high privacy risks evokes a state of heightened vigilance called a prevention focus and (2) this heightened vigilance in turn changes the weighting of the perceived benefits and risks in the deliberation whether to disclose personal information. Results from a second survey-based study with 208 participants then provide empirical evidence that perceptions of high risks of information disclosure do in fact evoke a prevention focus in individuals. This prevention focus in turn increases the negative effect of the perceived risks and reduces the positive effect of the perceived benefits of information disclosure on an individual's intention to disclose personal information. Instead of investigating the processing of risk perceptions, the third study presented in this thesis focuses on the formation of such perceptions. The focus is therefore on the process of selecting, organizing and interpreting objective cues or properties of information systems when forming perceptions about how much privacy risk is associated with using a system. Based on an experimental survey study among 233 participants, the findings show that individuals in fact have difficulties evaluating privacy risks. In particular, (1) the formation of privacy risk perceptions depends on external reference information, and (2) when such external reference information is available, individuals are able to form more confident risk judgments, which in turn have a stronger impact on an individual's privacy-related behavior. These findings suggest a reconceptualization of privacy risk as characterized not only by an extremity (how much risk is perceived) but also by a dimension of confidence in one's own risk perception. 
Overall, the research findings of the three studies presented in this thesis show that widely accepted assumptions underlying information systems privacy research are severely oversimplified. The results therefore contribute significantly to an improved understanding of the mental processes and mechanisms leading to the acceptance of privacy-invasive information systems.
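The additive calculus questioned in the second study, together with the prevention-focus moderation it proposes, can be summarized in a weighted utility function. This is an illustrative sketch only; the linear form and the symbols (U, B, R, P, and the weight functions) are not taken from the publications themselves:

```latex
% Simple additive privacy calculus assumed in prior research:
%   U = B - R
% Moderated form suggested by the regulatory-focus findings: a prevention
% focus P (evoked by high perceived risk) shifts the weights on benefits
% and risks rather than leaving them fixed.
U = \beta(P)\,B - \rho(P)\,R,
\qquad \frac{\partial \rho}{\partial P} > 0,
\qquad \frac{\partial \beta}{\partial P} < 0
```

Under this reading, high perceived risk does not merely enter the sum as a larger R; it also changes the weighting itself, so benefits count for less and risks count for more in the same decision.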

    DISTINGUISHING USAGE AND DISCLOSURE INTENTIONS IN PRIVACY RESEARCH: HOW OUR TWO SELVES BRING ABOUT DIFFERENCES IN THE EFFECTS OF BENEFITS AND RISKS

    Two conceptualizations of behavioral intentions are often used interchangeably as dependent variables in privacy research: intentions to disclose personal information to an information system (IS) and intentions to use an IS (and thereby disclose information). However, the assumption that those two conceptualizations are indeed interchangeable has not yet been tested and, if rebutted, imposes limitations when comparing and integrating the results of studies using either of them. By transferring the multiple-selves problem to IS privacy research, we develop theoretical arguments and provide empirical evidence that those two intentions are (a) conceptually different and (b) formed in different cognitive processes. A vignette-based factorial survey with 143 participants shows that, while risk perceptions have more impact on disclosure intentions than on usage intentions, the opposite holds for hedonic benefits.

    Privacy Risk Perceptions in the Connected Car Context

    Connected car services are rapidly diffusing as they promise to significantly enhance the overall driving experience. Because they rely on the collection and exploitation of car data, however, such services are associated with significant privacy risks. Following guidelines on contextualized theorizing, this paper examines how individuals perceive these risks and how their privacy risk perceptions in turn influence their decision-making, i.e., their willingness to share car data with the car manufacturer or other service providers. We conducted a multi-method study, including interviews and a survey in Germany. We found that individuals’ level of perceived privacy risk is determined by their evaluation of the general likelihood of IS-specific threats and their belief that they are personally exposed to such threats. Two cognitive factors, need for cognition and institutional trust, are found to moderate the effect that perceived privacy risk has on individuals’ willingness to share car data in exchange for connected car services.

    Information Disclosure and Online Social Networks: From the Case of Facebook News Feed Controversy to a Theoretical Understanding

    Based on the insights learned from the case analysis of the Facebook News Feed outcry, we develop a theoretical understanding that identifies major drivers and impediments of information disclosure in Online Social Networks (OSNs). Research propositions are derived to highlight the roles of privacy behavioral responses, privacy concerns, perceived information control, trust in OSN providers, trust in social ties, and organizational privacy interventions. The synthesis of privacy literature, bounded rationality and trust theories provides a rich understanding of how the adoption of OSNs creates privacy and security vulnerabilities and, therefore, informs privacy research in the context of OSNs. The findings are also potentially useful to privacy advocates, regulatory bodies, OSN providers, and marketers in shaping or justifying their decisions concerning OSNs.

    Ethical guidelines for nudging in information security & privacy

    There has recently been an upsurge of interest in the deployment of behavioural economics techniques in the information security and privacy domain. In this paper, we first consider the nature of one particular intervention, the nudge, and the way it exercises its influence. We contemplate the ethical ramifications of nudging, in its broadest sense, deriving general principles for ethical nudging from the literature. We extrapolate these principles to the deployment of nudging in information security and privacy. We explain how researchers can use these guidelines to ensure that they satisfy ethical requirements during nudge trials in information security and privacy. Our guidelines also provide guidance to ethics review boards that are required to evaluate nudge-related research.

    Shared Benefits and Information Privacy: What Determines Smart Meter Technology Adoption?

    An unexplored gap in IT adoption research concerns the positive role of shared benefits even when personal information is exposed. To explore the evaluation paradigm of shared benefits versus the forfeiture of personal information, we analyze how utility consumers use smart metering technology (SMT). In this context, utility companies can monitor electricity usage and directly control consumers’ appliances to disable them during peak load conditions. Such information could reveal consumers’ habits and lifestyles and thus stimulate concerns about their privacy and the loss of control over their appliances. Responding to calls for theory contextualization, we assess the efficacy of applying extant adoption theories in this emergent context while adding the perspective of the psychological ownership of information. We use the factorial survey method to assess consumers’ intentions to adopt SMT in the presence of specific conditions that could reduce the degree of their privacy or their control over their appliances and electricity usage data. Our findings suggest that, although the shared benefit of avoiding disruptions in electricity supply (brownouts) is a significant factor in electricity consumers’ decisions to adopt SMT, concerns about control and information privacy are also factors. Our findings extend previous adoption research by exploring the role of shared benefits and could provide utility companies with insights into the best ways to present SMT to alleviate consumers’ concerns and maximize its adoption.

    Beyond the Privacy Calculus: Dynamics Behind Online Self-Disclosure

    Self-disclosure is ubiquitous in today’s digitized world, as Internet users are constantly sharing their personal information with other users and providers online, for example when communicating via social media or shopping online. Despite offering tremendous benefits (e.g., convenience, personalization, and other social rewards) to users, the act of self-disclosure also raises massive privacy concerns. In this regard, Internet users often feel they have lost control over their privacy because sophisticated technologies are monitoring, processing, and circulating their personal information in real-time. Thus, they are faced with the challenge of making intelligent privacy decisions about when, how, to whom, and to what extent they should divulge personal information. They feel the tension between being able to obtain benefits from online disclosure and wanting to protect their privacy. At the same time, firms rely on massive amounts of data divulged by their users to offer personalized services, perform data analytics, and pursue monetization. Traditionally, privacy research has applied the privacy calculus model when studying self-disclosure decisions online. It assumes that self-disclosure (or, sometimes, usage) is the result of a rational privacy risk–benefit analysis. Even though the privacy calculus is a plausible model that has been validated in many cases, it does not reflect the complex nuances of privacy-related judgments against the background of real-life behavior, which sometimes leads to paradoxical research results. This thesis seeks to understand and disentangle the complex nuances of Internet users’ privacy-related decision making in order to help firms design data-gathering processes, guide Internet users who wish to make sound privacy decisions in line with their preferences, and lay the groundwork for future research in this field. 
Using six empirical studies and two literature reviews, this thesis presents additional factors that influence self-disclosure decisions beyond the well-established privacy risk–benefit analysis. All the studies have been published in peer-reviewed journals or conference proceedings. They focus on different contexts and are grouped into three parts accordingly: monetary valuation of privacy, biases in disclosure decisions, and social concerns when self-disclosing on social networking sites. The first part deals with the value Internet users place on their information privacy as a proxy for their perceived privacy risks when confronted with a decision to self-disclose. A structured literature review reveals that users’ monetary valuation of privacy is very context-dependent, which leads to scattered or occasionally even contradictory research results. A subsequent conjoint analysis, supplemented by a qualitative pre-study, shows that the amount of compensation, the type of data, and the origin of the platform are the major antecedents of Internet users’ willingness to sell their data on data selling platforms. Additionally, an experimental survey study contrasts the value users ascribe to divulging personal information (benefits minus risks) with the value the provider gets from personal information. Building on equity theory, this study shows that the extent to which providers monetize the data needs to be taken into account in addition to a fair data-handling process. In other words, firms cannot monetize their collected user data indefinitely without compensating their users, because users might feel exploited and thus reject the service afterwards. The second part delineates the behavioral and cognitive biases overriding the rational tradeoff between benefits and privacy risks that has traditionally been assumed in privacy research. In particular, evaluability bias and overconfidence are identified as moderators of the link between privacy risks and self-disclosure intentions. 
In single evaluation mode (i.e., when no reference information is available) and when they are overconfident, Internet users do not take their perceived privacy risks into account when facing a self-disclosure decision. By contrast, in joint evaluation mode of two information systems and when users are realistic about their privacy-related knowledge, the privacy risks that they perceive play a major role. This evidence that mental shortcuts interact with privacy-related judgments adds to studies that question the rationality assumption of the privacy calculus. Moving beyond privacy risks, the third part examines the social factors influencing disclosure decisions. A structured literature review identifies privacy risks as the predominantly studied impediment to self-disclosure on social networking sites (SNS). However, a subsequent large-scale survey study shows that on SNS, privacy risks play no role when users decide whether to self-disclose. It is rather the social aspects, such as the fear of receiving a negative evaluation from others, that inform disclosure decisions. Furthermore, based on a dyadic study among senders and receivers of messages on SNS, it is shown that senders are subject to a perspective-taking bias: they overestimate the hedonic and utilitarian value of their message for others. In this vein, these studies combine insights from the social psychology literature with the uniqueness of online data disclosure and show that, beyond the potential misuse of personal information by providers, the risk of misperception in the eyes of other users is crucial when explaining self-disclosure decisions. All in all, this thesis draws from different perspectives – including value measuring approaches, behavioral economics, and social psychology – to explain self-disclosure decisions. 
Specifically, it shows that the privacy calculus is oversimplified and, ultimately, needs to be extended with other factors, such as mental shortcuts and social concerns, to portray Internet users’ actual privacy decision making.
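The moderation described above (perceived risk counting fully only in joint evaluation mode and for realistic users) can be sketched as a toy decision model. This is a minimal, hypothetical illustration; the function, variable names, and all coefficients are invented for this sketch and are not taken from the studies:

```python
# Toy moderated privacy calculus: the (negative) effect of perceived risk
# on disclosure intention is dampened when a user judges a system in
# isolation (single evaluation mode) or is overconfident about their
# privacy-related knowledge. All weights are illustrative assumptions.

def disclosure_intention(benefit, risk, joint_evaluation, overconfident,
                         b_benefit=0.5, b_risk=0.6):
    """Linear sketch of intention = weighted benefit minus weighted risk."""
    # Risk receives its full weight only when reference information is
    # available (joint evaluation) and the user is realistic; otherwise
    # it is heavily discounted, mimicking the observed bias.
    risk_weight = b_risk if (joint_evaluation and not overconfident) else 0.1
    return b_benefit * benefit - risk_weight * risk

# Identical benefit/risk profiles yield different intentions depending on
# evaluation mode and confidence:
realistic_joint = disclosure_intention(5, 4, joint_evaluation=True,
                                       overconfident=False)
overconfident_single = disclosure_intention(5, 4, joint_evaluation=False,
                                            overconfident=True)
print(realistic_joint < overconfident_single)  # risk is discounted in the latter
```

The design choice here is deliberately crude: a single gating condition on the risk weight stands in for what the studies model as statistical interaction effects, but it conveys the core claim that the calculus's weights are not fixed across evaluation contexts.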

    Perceived privacy risk in the Internet of Things: determinants, consequences, and contingencies in the case of connected cars

    The Internet of Things (IoT) is permeating all areas of life. However, connected devices are associated with substantial risks to users’ privacy, as they rely on the collection and exploitation of personal data. The case of connected cars demonstrates that these risks may be more profound in the IoT than in extant contexts, as both a user's informational and physical space are intruded upon. We leverage this unique setting to collect rich context-immersive interview (n = 33) and large-scale survey data (n = 791). Our work extends prior theory by providing a better understanding of the formation of users’ privacy risk perceptions, the effect such perceptions have on users’ willingness to share data, and how these relationships in turn are affected by inter-individual differences in regulatory focus, thinking style, and institutional trust.