
    Perceived privacy risk in the Internet of Things: determinants, consequences, and contingencies in the case of connected cars

    The Internet of Things (IoT) is permeating all areas of life. However, connected devices pose substantial risks to users’ privacy, as they rely on the collection and exploitation of personal data. The case of connected cars demonstrates that these risks may be more profound in the IoT than in previously studied contexts, as both a user’s informational and physical space are intruded upon. We leverage this unique setting to collect rich context-immersive interview data (n = 33) and large-scale survey data (n = 791). Our work extends prior theory by providing a better understanding of how users’ privacy risk perceptions form, how such perceptions affect users’ willingness to share data, and how these relationships are in turn shaped by inter-individual differences in regulatory focus, thinking style, and institutional trust.

    Privacy Risk Perceptions in the Connected Car Context

    Connected car services are rapidly diffusing as they promise to significantly enhance the overall driving experience. Because they rely on the collection and exploitation of car data, however, such services are associated with significant privacy risks. Following guidelines on contextualized theorizing, this paper examines how individuals perceive these risks and how their privacy risk perceptions in turn influence their decision-making, i.e., their willingness to share car data with the car manufacturer or other service providers. We conducted a multi-method study, including interviews and a survey in Germany. We found that individuals’ level of perceived privacy risk is determined by their evaluation of the general likelihood of IS-specific threats and their belief that they are personally exposed to such threats. Two cognitive factors, need for cognition and institutional trust, are found to moderate the effect that perceived privacy risk has on individuals’ willingness to share car data in exchange for connected car services.
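
    For illustration only, a moderation effect of this kind is commonly tested as an interaction term in a regression model. The following minimal sketch assumes synthetic data and Python’s statsmodels; the variable names are hypothetical and this is not the paper’s actual analysis.

    # Illustrative sketch: do need for cognition and institutional trust
    # moderate the effect of perceived privacy risk on willingness to share?
    # All data below are simulated; variable names are assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500  # arbitrary synthetic sample size

    df = pd.DataFrame({
        "privacy_risk": rng.normal(size=n),
        "need_for_cognition": rng.normal(size=n),
        "institutional_trust": rng.normal(size=n),
    })
    # Simulated outcome: risk lowers willingness; trust dampens that effect.
    df["willingness_to_share"] = (
        -0.5 * df["privacy_risk"]
        + 0.3 * df["privacy_risk"] * df["institutional_trust"]
        + rng.normal(scale=0.5, size=n)
    )

    # The interaction terms (privacy_risk:need_for_cognition and
    # privacy_risk:institutional_trust) capture the hypothesized moderation.
    model = smf.ols(
        "willingness_to_share ~ privacy_risk * need_for_cognition"
        " + privacy_risk * institutional_trust",
        data=df,
    ).fit()
    print(model.summary())

    A significant interaction coefficient would indicate that the strength of the risk-willingness relationship varies with the moderator, which is the pattern the abstract describes.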

    Building Trust in Intelligent Automation: Insights into Structural Assurance Mechanisms for Autonomous Vehicles

    Intelligent automation is increasingly taking over tasks that normally require substantial human experience and intuition. However, for individuals to delegate full control to applications like autonomous vehicles (AVs), they need to establish sufficient initial trust in the automation’s functionality, reliability, and transparency. Manufacturers and external institutions may build users’ initial trust by providing structural assurance. Answering calls for a more context-specific, theoretically substantiated investigation of trust in AVs, we investigate how five different forms of structural assurance can be designed and how effective they are in trust-building: technical, provider, legal, certifier, and social protection. Extending previous survey-based research, we conducted a choice-based conjoint experiment (n = 220). We find that external structural assurance in the form of legal and certifier protection may even outperform manufacturers’ trust-building efforts. This is especially the case for some user groups, as a cluster analysis reveals individuals’ heterogeneous preferences for structural assurance mechanisms.
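
    As an illustration of the segmentation step described above (not the authors’ actual pipeline), the sketch below clusters respondents by hypothetical part-worth utilities for the five structural assurance mechanisms, using k-means from scikit-learn; the simulated data and the number of clusters are assumptions.

    # Illustrative sketch: grouping respondents by simulated conjoint
    # part-worth utilities for five structural assurance mechanisms.
    # The data and k = 3 are assumptions, not the study's actual values.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    mechanisms = ["technical", "provider", "legal", "certifier", "social"]

    # Simulated part-worth utilities, one row per respondent (n = 220).
    part_worths = rng.normal(size=(220, len(mechanisms)))

    # Standardize so no single mechanism dominates the distance metric.
    X = StandardScaler().fit_transform(part_worths)

    # Partition respondents into preference segments.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

    # Mean part-worths per segment reveal heterogeneous preferences.
    for label in range(kmeans.n_clusters):
        segment_mean = part_worths[kmeans.labels_ == label].mean(axis=0)
        print(f"Segment {label}:", dict(zip(mechanisms, segment_mean.round(2))))

    Segments with clearly different mean utilities would correspond to the heterogeneous preference groups that the paper’s cluster analysis reveals.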

    Bibliography
