37 research outputs found

    Robbing Peter to Pay Paul: Surrendering Privacy for Security’s Sake in an Identity Ecosystem

    Despite individuals’ and organizations’ best efforts, many significant information security threats exist. To alleviate these threats, researchers and policy makers have proposed new digital environments called identity ecosystems. These ecosystems would provide protection against attackers because a third-party intermediary would need to authenticate users of the ecosystem. While the additional security may help alleviate security threats, significant concern exists regarding ecosystem users’ privacy. For example, the possibility of targeted attacks against the centralized identity repository, potential mismanagement of the verified credentials of millions of users, and the threat of activity monitoring and surveillance become serious privacy considerations. Thus, individuals must be willing to surrender personal privacy to a known intermediary to obtain the additional levels of protection that the proposed ecosystems promise. We investigate why individuals would use a future identity ecosystem that exhibits such a privacy-security tradeoff. Specifically, we adopted a mixed-methods approach to elicit and assess the major factors associated with such decisions. We show that 1) intrapersonal characteristics, 2) perceptions of the controlling agent, and 3) perceptions of the system are the key categories driving intentions to use ecosystems. We found that trustworthiness of the controlling agent, perceived inconvenience, system efficacy, behavioral-based inertia, censorship attitude, and previous similar experience significantly explained variance in intentions. Interestingly, general privacy concerns failed to exhibit significant relationships with intentions in any of our use contexts. We discuss what these findings mean for research and practice and provide guidance for future research that investigates identity ecosystems and the AIS Bright ICT Initiative.

    Mining App Reviews for Security and Privacy Research


    A Meta-Analytic Review of More than a Decade of Research on General Computer Self-Efficacy: Research in Progress

    In their seminal work, Compeau and Higgins (1995) provided the IS research community with a measure of computer self-efficacy (CSE) based on Bandura’s (1986) Social Cognitive Theory. The use of this CSE measure has since flourished across various academic literatures. Recent research (Marakas, Johnson, & Clay, 2007; Thatcher, Zimmer, Gundlach et al., 2008), however, challenges the continued application and analysis of Compeau and Higgins’ (1995) measure despite its widespread adoption. This paper presents the results of a meta-analysis of general CSE grounded in technology adoption research. The results should stimulate future dialogue regarding general CSE and its application. We show evidence of moderate associations (r = |0.32| to |0.59|) between general CSE and several technology adoption research constructs. Guidance is offered for future moderator analyses, which may provide empirical evidence either supporting or refuting current research claims regarding general CSE.
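The pooled correlations reported above (r = |0.32| to |0.59|) come from aggregating effect sizes across studies. As an illustration only (the correlations and sample sizes below are hypothetical, not taken from the paper), a common fixed-effect way to pool correlations is the Fisher z transform, weighting each study by n − 3:

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform, which approximately normalizes
    the sampling distribution of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform an averaged z value to a correlation."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def pooled_correlation(studies):
    """Fixed-effect pooled correlation: each study's z is weighted
    by n - 3, the inverse of the sampling variance of z."""
    num = sum((n - 3) * fisher_z(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return inverse_fisher_z(num / den)

# Hypothetical (r, n) pairs for three studies
studies = [(0.32, 120), (0.45, 200), (0.59, 80)]
print(round(pooled_correlation(studies), 3))  # pooled r ~= 0.444 for these inputs
```

Larger studies pull the pooled estimate toward their own correlation, which is why the result here sits closest to the n = 200 study.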

    Job Applicants’ Information Privacy Protection Responses: Using Social Media for Candidate Screening

    For human resource (HR) departments, screening job applicants is an integral part of acquiring talent. Many HR departments have begun to turn to social networks to better understand job candidates’ character. Using social networks as a screening tool might provide insights not readily available from resumes or initial interviews. However, requiring access to an applicant’s social networks and the private activities occurring therein—a practice currently legal in 29 U.S. states (Deschenaux, 2015)—could induce strong moral reactions from job candidates because of a perceived loss of information privacy. Subsequently, such disclosure requests could induce job candidates to respond in a multitude of ways to protect their privacy. Given that an estimated 2.55 billion individuals will use social media worldwide by 2017 (eMarketer, 2013), the repercussions of requests for access to social media environments have potentially far-reaching effects. In this research, we examine how one such disclosure request affected six information privacy protective responses (IPPRs) (Son & Kim, 2008) based on the job candidates’ perceived moral judgment and the perceived moral intensity of the HR disclosure request. These responses occurred when we asked respondents to provide personal login information during a hypothetical interview. By modeling data from a sample of 250 participants with PLS-SEM, we found that five IPPRs (i.e., refusal, negative word of mouth, complaining to friends, complaining to the company, and complaining to third parties) were all significant responses when one judged the request to be immoral and perceived the moral intensity concept of immediate harm. The variance explained in these five IPPRs ranged from 17.7 percent to 38.7 percent, which indicates a solid initial foundation from which future research can expand on this HR issue. Implications for academia and practice are discussed.


    How Explanation Adequacy of Security Policy Changes Decreases Organizational Computer Abuse

    We use Fairness Theory to help explain why security policy changes sometimes backfire and increase security violations. Explanation adequacy—a key component of Fairness Theory—is expected to increase employees’ trust in their organization. This trust should decrease internal computer abuse incidents following the implementation of security changes. The results of our analysis support Fairness Theory as applied to our context of computer abuse. First, the simple act of giving employees advance notification of future information security changes positively influences employees’ perceptions of organizational communication efforts. The adequacy of these explanations is also buoyed by security education, training, and awareness (SETA) programs. Second, explanation adequacy and SETA programs work in unison to foster organizational trust. Finally, organizational trust significantly decreases internal computer abuse incidents. Our findings show how organizational communication can influence the overall effectiveness of information security changes among employees and how organizations can avoid becoming victims of their own efforts.

    Multiple Indicators and Multiple Causes (MIMIC) Models as a Mixed-Modeling Technique: A Tutorial and an Annotated Example

    Formative modeling of latent constructs has generated great interest and discussion among scholars in recent years. However, confusion surrounds researchers’ ability to validate these models, especially with covariance-based structural equation modeling (CB-SEM) techniques. With this paper, we help clarify these issues and explain how researchers can rigorously assess formatively modeled constructs using CB-SEM capabilities. In particular, we explain and provide an applied example of a mixed-modeling technique termed multiple indicators and multiple causes (MIMIC) models. Using this approach, researchers can assess formatively modeled constructs as the final, distal dependent variable in CB-SEM structural models—something previously impossible because of CB-SEM’s mathematical identification rules. Moreover, we assert that researchers can use MIMIC models to assess the content validity of a set of formative indicators quantitatively—something conventionally considered only from a qualitative standpoint. The research example in this manuscript, involving protection-motivated behaviors (PMBs), details the entire process of MIMIC modeling and provides a set of detailed guidelines for researchers to follow when developing new constructs modeled as MIMIC structures.
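As background (standard LISREL-style notation for the general MIMIC formulation, not drawn from this paper), a MIMIC model combines formative cause indicators x and reflective effect indicators y of a single latent construct η:

```latex
\eta = \gamma_1 x_1 + \gamma_2 x_2 + \cdots + \gamma_q x_q + \zeta
\qquad \text{(formative causes)}
```
```latex
y_i = \lambda_i \eta + \varepsilon_i, \quad i = 1, \dots, p
\qquad \text{(reflective indicators)}
```

The reflective block is what makes the model estimable in CB-SEM: with at least two reflective indicators, the latent construct is identified even though it is simultaneously caused by the formative indicators.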