92,825 research outputs found

    Research on Privacy Paradox in Social Networks Based on Evolutionary Game Theory and Data Mining

    In pursuit of commercial benefits, social network platforms have begun to profit from their users' private information. Although users have grown increasingly concerned about the risk of privacy disclosure, they generally continue to disclose personal information despite those concerns, a contradiction known as the privacy paradox. The expansion and generalization of the privacy paradox indicate that privacy protection in social networks remains in a dilemma; studying and resolving the paradox is therefore conducive to ensuring the healthy development of the social network industry. On this basis, this study designs a research framework that analyzes the privacy paradox in social networks along three dimensions: cause, existence, and form. After reviewing existing research on the privacy paradox in social networks, evolutionary game theory is introduced into the cause analysis, while data mining serves as the data-analysis method for the empirical research. First, an evolutionary game model of the privacy paradox in social networks is built, and the necessary conditions for the paradox to arise are derived from the evolutionarily stable strategy. Second, a questionnaire survey is used to collect privacy-related data from active users of both Weibo and WeChat. Lastly, the Apriori and CHAID algorithms are used to determine the relationships among user privacy concerns, privacy behavior, and other factors, which confirms the existence of the privacy paradox on both social networks and compares the specific forms it takes on each. This research provides a systematic and in-depth analysis of the privacy paradox in social networks and is useful for enterprises establishing a hierarchical protection system for users' privacy.
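The cause analysis above derives the paradox's necessary conditions from an evolutionarily stable strategy. A minimal replicator-dynamics sketch, with purely illustrative payoffs (not the payoffs derived in the study's model), shows how a population of users can settle into a stable mix in which most disclose:

```python
# Replicator dynamics for a two-strategy population game
# (strategy 0 = disclose, strategy 1 = withhold).
# The payoff matrix is an illustrative assumption, not the
# payoffs derived in the study's evolutionary game model.

PAYOFF = [[1.0, 2.0],   # disclose vs. (disclose, withhold)
          [1.5, 0.5]]   # withhold vs. (disclose, withhold)

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of dx_i/dt = x_i * (f_i - f_bar)."""
    f = [sum(payoff[i][j] * x[j] for j in range(2)) for i in range(2)]
    f_bar = sum(x[i] * f[i] for i in range(2))
    return [x[i] + dt * x[i] * (f[i] - f_bar) for i in range(2)]

x = [0.5, 0.5]                  # initial population shares
for _ in range(20000):
    x = replicator_step(x, PAYOFF)
# x[0] converges to 0.75: a stable mix in which most users disclose
# even though the withhold strategy is available
```

With these assumed payoffs the interior rest point at a 0.75 disclose share is evolutionarily stable, so the population returns to it after small perturbations.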

    Unpacking Privacy Paradox: A Dual Process Theory Approach

    Prior studies have shown that some users tend to act against their stated privacy concerns (a phenomenon commonly known as the privacy paradox). In this study, we adopt dual process theory as our theoretical basis, accounting for the conscious and unconscious modes of individual decision making, to examine the privacy paradox and to understand the reasons for the inconsistency between privacy concerns and information disclosure. We also posit that the privacy paradox can occur due to the conscious mode (affected by bounded rationality) as well as unconscious biases.

    The Privacy Paradox


    The paradox of wanting privacy but behaving as if it didn't matter


    Explaining the Privacy Paradox through Identifying Boundary Conditions of the Relationship between Privacy Concerns and Disclosure Behaviors

    The privacy paradox phenomenon suggests that individuals tend to make privacy decisions (i.e., disclosure of personal information) that contradict their dispositional privacy concerns. Despite emerging research attempting to explain this phenomenon, it remains unclear why the privacy paradox exists. In order to explain why it exists, and to be able to predict occurrences of privacy-paradoxical decisions, this dissertation emphasizes the need to identify boundary conditions of the relationship between privacy concerns and disclosure behaviors. Across three empirical studies varying in context, this dissertation presents a total of seven boundary conditions (i.e., cognitive absorption, cognitive resource depletion, positive mood state, privacy control, convenience, empathic concern, and social nudging) that can explain why privacy concerns sometimes do not predict disclosure behaviors (i.e., the privacy paradox). Identifying these boundary conditions advances privacy theories by establishing a theoretically sounder causal link between privacy concerns and disclosure behaviors, while contributing to better privacy policies, organizational privacy practices, and individuals' privacy decisions.
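A boundary condition is, in effect, a moderator of the concern-disclosure link. The small simulation below uses synthetic data and an invented moderator loosely modeled on "cognitive resource depletion" (names and effect sizes are assumptions, not the dissertation's estimates) to show how the correlation between concern and disclosure can vanish under a boundary condition:

```python
import random

random.seed(0)

def disclosure(concern, depleted):
    """Simulated disclosure score; under the boundary condition
    (cognitive resource depletion), concern stops driving behavior.
    The effect sizes are invented for illustration."""
    effect = 0.0 if depleted else -0.8
    return effect * concern + random.gauss(0, 0.1)

def corr(xs, ys):
    """Pearson correlation, standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

concerns = [random.random() for _ in range(500)]
r_normal = corr(concerns, [disclosure(c, depleted=False) for c in concerns])
r_depleted = corr(concerns, [disclosure(c, depleted=True) for c in concerns])
# r_normal is strongly negative (concern predicts less disclosure);
# r_depleted is near zero: concern no longer predicts behavior,
# which is exactly the paradoxical pattern the boundary condition explains
```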


    The Myth of the Privacy Paradox

    In this article, Professor Daniel Solove deconstructs and critiques the privacy paradox and the arguments made about it. The “privacy paradox” is the phenomenon where people say that they value privacy highly, yet in their behavior relinquish their personal data for very little in exchange or fail to use measures to protect their privacy. Commentators typically make one of two types of arguments about the privacy paradox. On one side, the “behavior valuation argument” contends behavior is the best metric to evaluate how people actually value privacy. Behavior reveals that people ascribe a low value to privacy or readily trade it away for goods or services. The argument often goes on to contend that privacy regulation should be reduced. On the other side, the “behavior distortion argument” argues that people’s behavior isn’t an accurate metric of preferences because behavior is distorted by biases and heuristics, manipulation and skewing, and other factors. In contrast to both of these camps, Professor Solove argues that the privacy paradox is a myth created by faulty logic. The behavior involved in privacy paradox studies involves people making decisions about risk in very specific contexts. In contrast, people’s attitudes about their privacy concerns or how much they value privacy are much more general in nature. It is a leap in logic to generalize from people’s risk decisions involving specific personal data in specific contexts to reach broader conclusions about how people value their own privacy. The behavior in the privacy paradox studies doesn’t lead to a conclusion for less regulation. On the other hand, minimizing behavioral distortion will not cure people’s failure to protect their own privacy. It is perfectly rational for people — even without any undue influences on behavior — to fail to make good assessments of privacy risks and to fail to manage their privacy effectively. 
Managing one’s privacy is a vast, complex, and never-ending project that does not scale; it becomes virtually impossible to do comprehensively. Privacy regulation often seeks to give people more privacy self-management, such as the recent California Consumer Privacy Act. Professor Solove argues that giving individuals more tasks for managing their privacy will not provide effective privacy protection. Instead, regulation should employ a different strategy: focus on regulating the architecture that structures the way information is used, maintained, and transferred.

    THE PERSONALIZATION PRIVACY PARADOX: THE IMPACT OF PERCEIVED DATA SENSITIVITY ON THE EFFECT OF TRANSPARENCY FEATURES

    Personalized services and advertisements are becoming more common as larger amounts of data become available, enabling reshaped business models. Personalization can benefit users, for example by reducing information overload. In contrast, the data collection that personalization requires can increase users’ privacy concerns. This tension between the benefits of personalization and privacy concerns is called the personalization privacy paradox. Investigating this paradox can help personalization providers better understand when to use personalization, and when not to, for their business. To gain a better understanding of the paradox, this work first conducts a structured literature review, providing a broad overview of the current literature on the personalization privacy paradox. Building on the outcomes of the literature review, a quantitative analysis investigates the role of transparency features in different data-sensitivity contexts within the personalization privacy paradox. The outcomes indicate that the impact of transparency features differs depending on the data-sensitivity setting.
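The interaction that the quantitative analysis tests can be expressed as a simple difference-of-differences over cell means from a 2×2 design; the numbers below are hypothetical, not the paper's results:

```python
# Hypothetical cell means of willingness to disclose (1-7 scale) from a
# 2x2 design: transparency feature (on/off) x data sensitivity (low/high).
# These numbers are invented for illustration, not the paper's findings.
means = {
    ("on",  "low"):  5.1, ("on",  "high"): 4.6,
    ("off", "low"):  4.8, ("off", "high"): 3.2,
}

def transparency_effect(sensitivity):
    """Mean lift in disclosure from showing the transparency feature."""
    return means[("on", sensitivity)] - means[("off", sensitivity)]

# A nonzero difference of differences is the interaction: here the
# feature matters far more when the data are sensitive (~1.4 vs. ~0.3).
interaction = transparency_effect("high") - transparency_effect("low")
```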

    The Role of the Privacy Calculus and the Privacy Paradox in the Acceptance of Wearables for Health and Wellbeing

    The Internet, along with innovations in technology, has inspired an industry focused on designing portable devices, known as wearables, that can track users’ personal activities and wellbeing. While such technologies have many benefits, they also carry risks, especially regarding information privacy and security, and these concerns become even more pronounced with healthcare-related wearables. Consequently, users must weigh the benefits against the risks (privacy calculus); however, users often opt for wearables despite their disclosure concerns (privacy paradox). In this study, we investigate the multidimensional role that privacy (in particular, the privacy calculus and the privacy paradox) plays in consumers’ intention to disclose their personal information, whether health status moderates that relationship, and the influence of privacy on acceptance. To do so, we evaluated a research model that explicitly focused on the privacy calculus and the privacy paradox in the healthcare-wearables acceptance domain. We used a survey-oriented approach to collect data from 225 users and examined relationships among privacy, health, and acceptance constructs. Our research confirmed significant evidence of the influence of the privacy calculus on disclosure and acceptance, as well as evidence of the privacy paradox when considering health status. We found that consumers felt less inclined to disclose their personal information when the risks to privacy outweighed the benefits; however, health status moderated this behavior such that people with worse health tipped the scale towards disclosure. This study expands our previous knowledge about the healthcare wearables privacy/acceptance paradigm and, thus, the influences that affect healthcare wearables’ acceptance in the privacy context.
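The privacy calculus with a health-status moderator can be sketched as a simple scoring function; the linear form and the coefficient below are illustrative assumptions, not the study's fitted model:

```python
def disclosure_intention(benefit, risk, poor_health, beta=0.6):
    """Privacy-calculus sketch (assumed linear form, not the fitted model).
    benefit, risk: perceived benefit/risk of disclosure, each in [0, 1].
    poor_health: [0, 1], higher = worse self-reported health.
    Worse health discounts perceived risk, tipping the calculus toward
    disclosure, mirroring the moderation reported in the study."""
    return benefit - risk * (1 - beta * poor_health)

# Same benefit and risk, different health status:
healthy = disclosure_intention(0.5, 0.7, poor_health=0.0)  # negative: withhold
unwell  = disclosure_intention(0.5, 0.7, poor_health=1.0)  # positive: disclose
```

The sign flip between `healthy` and `unwell` illustrates how a moderator can make risk-dominant users disclose anyway.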

    THE PERSONALIZATION-PRIVACY PARADOX EXPLORED THROUGH A PRIVACY CALCULUS MODEL AND HOFSTEDE’S MODEL OF CULTURAL DIMENSIONS

    The Personalization-Privacy Paradox is a relevant issue for companies today, as it concerns customers who on the one hand want to keep their personal data private, but on the other hand desire the personalization benefits that can be gained by giving up that privacy. Many past studies have observed the Personalization-Privacy Paradox, but not thoroughly through the lens of a privacy calculus model. This paper uses a privacy calculus model to examine the Personalization-Privacy Paradox through Hofstede’s six dimensions of culture, taking the United States, Germany, and China as case studies of three different cultures. These three cultures all have a great deal of influence in the world and are world opinion leaders, but have vast differences in cultural values and beliefs. This paper shows how important it is for marketers, designers, and implementers of personalization services to understand diverse cultures and how varied idioms, beliefs, and values affect how users perceive the benefits and costs of personalization services in their internal privacy calculus. The marked differences in cultural scores, and how those cultural beliefs shape perceptions of personalization and privacy, demonstrate that companies looking to expand their services and applications into new markets cannot rely on universal approaches.
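As a sketch of how cultural dimensions might enter the calculus, the snippet below weights perceived cost by the published Hofstede country scores for individualism (IDV) and uncertainty avoidance (UAI); the weighting scheme itself is a hypothetical illustration, not the paper's model:

```python
# IDV/UAI values are the published Hofstede country scores; the
# cost-weighting formula is a hypothetical illustration only.
HOFSTEDE = {
    "United States": {"idv": 91, "uai": 46},
    "Germany":       {"idv": 67, "uai": 65},
    "China":         {"idv": 20, "uai": 30},
}

def perceived_net_value(country, benefit, cost):
    """Privacy calculus with an assumed cultural weighting: higher
    individualism and uncertainty avoidance inflate perceived cost."""
    d = HOFSTEDE[country]
    cost_weight = 0.5 + 0.25 * d["idv"] / 100 + 0.25 * d["uai"] / 100
    return benefit - cost_weight * cost

# Same service, same raw benefit and cost, a different calculus per culture
nets = {c: perceived_net_value(c, benefit=0.6, cost=0.5) for c in HOFSTEDE}
```

Under these assumptions, the same offer nets out most favorably in China and least favorably in the United States, echoing the paper's point that a universal approach to personalization will misjudge some markets.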