7 research outputs found

    Decoding Complexity: Exploring Human-AI Concordance in Qualitative Coding

    Qualitative data analysis provides insight into the underlying perceptions and experiences within unstructured data. However, the time-consuming nature of the coding process, especially for larger datasets, calls for innovative approaches, such as the integration of Large Language Models (LLMs). This short paper presents initial findings from a study investigating the integration of LLMs for coding tasks of varying complexity in a real-world dataset. Our results highlight the challenges inherent in coding with extensive codebooks and contexts, both for human coders and LLMs, and suggest that integrating LLMs into the coding process requires a task-by-task evaluation. We examine factors influencing the complexity of coding tasks and initiate a discussion on the usefulness and limitations of incorporating LLMs in qualitative research.

    Privacy Impact Assessments for Digital Repositories

    Trustworthy data repositories ensure the security of their collections. We argue they should also ensure the security of researcher and human subject data. Here we demonstrate the use of a privacy impact assessment (PIA) to evaluate potential privacy risks to researchers, using the ICPSR’s Open Badges Research Credential System as a case study. We present our workflow and discuss potential privacy risks and mitigations for those risks. [This paper is a conference pre-print presented at IDCC 2020 after lightweight peer review.]

    Privacy Impact Assessments for Digital Repositories

    This is a preprint of a paper under review at the International Journal of Digital Curation. A shorter version of this paper was presented at the International Digital Curation Conference (IDCC) 2020 in Dublin, and is available here: http://www.ijdc.net/article/view/692. Trustworthy data repositories ensure the security of their collections. We argue they should also ensure the privacy of researcher and human subject data. We demonstrate the use of a privacy impact assessment (PIA) to evaluate potential privacy risks to researchers, using the ICPSR’s Researcher Passport as a case study. We present our workflow and discuss potential privacy risks and mitigations for those risks.
    National Science Foundation grant number 1839868
    http://deepblue.lib.umich.edu/bitstream/2027.42/163509/1/IDCC___IJDC_Open_Badges_paper_FULL_paper.pdf

    Anticipating the Manipulative Risks of Advertising in Virtual Reality

    As Virtual Reality (VR) technologies become increasingly popular, so too will VR advertising - advertising that takes place in a VR medium. The defining features of VR devices, such as the immersiveness of VR and the ability of VR devices to recreate and replace reality, could be exploited to create manipulative VR advertisements that trick and deceive VR users. Even though VR advertisements are not yet mainstream, to understand and mitigate these risks it is imperative to study them now, rather than wait until VR advertising (and harms within it) is established and mainstream, and thus difficult to address. In this thesis, I studied the risks that VR advertising poses. I focus on one specific risk, that of manipulation, and answer two research questions: (1) What are the manipulative risks that VR advertisements pose? and (2) What are VR users’ attitudes and concerns regarding VR advertisements? This thesis presents three studies that address these questions. In the first study, I used scenario construction to identify the key features of VR advertising and uncover key ways through which VR advertising can be manipulative. I highlight that VR advertisements will have increased immersiveness and increased realism; they will allow VR users to interact with and preview products before buying them; and they will likely be hyperpersonalized and customized towards individual VR users. I subsequently discuss how these techniques can be used to manipulate VR users through the use of misleading experience marketing, appeals to emotion, and targeting of consumer vulnerabilities through hyperpersonalization. In the second study, I examined existing VR advertisements through walkthroughs to understand the manipulative risks present in them.
    I confirm the use of gamification and product previews in VR, and uncover three additional manipulative risks: the use of distressing events to advertise products (shockvertising); how VR advertising can allow users to embody characters with certain traits; and a lack of appropriate exit options in VR. I also discover new risks, such as the risk of physical and emotional harms, and inconsistencies in how VR advertisements disclose their data practices. In the final study, I interviewed VR users (n=22) to understand their concerns regarding VR advertising. I find that the largest concern is with forced, unskippable VR advertisements and how in-app VR advertisements can interrupt the user experience and ruin the immersiveness of VR experiences. With regard to manipulation, participants were worried about how VR advertising might manipulate vulnerable populations (such as children or compulsive shoppers). However, many participants did not consider manipulation a concern or a serious risk. This was mediated by resignation towards manipulative advertising and an illusion of invulnerability. Through this work, I contribute a list of manipulative risks that VR advertisements present and contextualize these risks with how VR users perceive them. This in turn provides key insights to improve the VR advertising space and create VR ads that are non-manipulative and best align with VR users’ needs and wants.
    PhD, Information, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/192414/1/mhaidli_1.pd

    Listen Only When Spoken To: Interpersonal Communication Cues as Smart Speaker Privacy Controls

    Internet of Things and smart home technologies pose challenges for providing effective privacy controls to users, as smart devices lack both traditional screens and input interfaces. We investigate the potential for leveraging interpersonal communication cues as privacy controls in the IoT context, in particular for smart speakers. We propose privacy controls based on two kinds of interpersonal communication cues - gaze direction and voice volume level - that only selectively activate a smart speaker’s microphone or voice recognition when the device is being addressed, in order to avoid constant listening and speech recognition by the smart speaker microphones and to reduce false device activations. We implement these privacy controls in a smart speaker prototype and assess their feasibility, usability, and user perception in two lab studies. We find that privacy controls based on interpersonal communication cues are practical, do not impair the smart speaker’s functionality, and can be easily used by users to selectively mute the microphone. Based on our findings, we discuss insights regarding the use of interpersonal cues as privacy controls for smart speakers and other IoT devices.
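    The cue-based control described in this abstract can be pictured as a simple gate: the microphone activates only while both cues indicate the device is being addressed. The sketch below is a hypothetical illustration, not the paper’s prototype; the class name, gaze tolerance, and volume threshold are all assumptions chosen for the example.

    ```python
    # Minimal sketch of cue-gated microphone activation, assuming the device can
    # estimate the user's gaze angle relative to itself and the voice volume.
    GAZE_TOLERANCE_DEG = 15.0   # assumed: how directly the user must face the device
    VOLUME_THRESHOLD_DB = 55.0  # assumed: minimum volume to count as "addressing"

    class CueGatedMicrophone:
        """Hypothetical gate: listen only while both interpersonal cues hold."""

        def __init__(self, gaze_tolerance=GAZE_TOLERANCE_DEG,
                     volume_threshold=VOLUME_THRESHOLD_DB):
            self.gaze_tolerance = gaze_tolerance
            self.volume_threshold = volume_threshold
            self.active = False

        def update(self, gaze_angle_deg, voice_volume_db):
            # Activate the microphone only when the user faces the device
            # closely enough AND speaks loudly enough to be addressing it.
            self.active = (abs(gaze_angle_deg) <= self.gaze_tolerance
                           and voice_volume_db >= self.volume_threshold)
            return self.active

    mic = CueGatedMicrophone()
    mic.update(5.0, 60.0)   # facing the device, speaking up: mic listens
    mic.update(40.0, 60.0)  # looking away: mic stays muted
    mic.update(5.0, 40.0)   # too quiet: mic stays muted
    ```

    The point of the design, as the abstract notes, is that the default state is muted: speech recognition runs only during the brief windows when both cues co-occur, rather than continuously.
    
    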

    Novel Challenges of Safety, Security and Privacy in Extended Reality

    Extended Reality (AR/VR/MR) technology is becoming increasingly affordable and capable, becoming ever more interwoven with everyday life. HCI research has focused largely on innovation around XR technology, exploring new use cases and interaction techniques, understanding how this technology is used and appropriated, etc. However, equally important is the investigation and consideration of risks posed by such advances, specifically in contributing to new vulnerabilities and attack vectors with regard to security, safety, and privacy that are unique to XR. For example, perceptual manipulations in VR, such as redirected walking or haptic retargeting, have been developed to enhance interaction, yet subversive use of such techniques has been demonstrated to unlock new harms, such as redirecting the VR user into a collision. This workshop will convene researchers focused on HCI, XR, safety, security, and privacy, with the intention of exploring the safety, privacy, and security challenges of XR technology. With an HCI lens, workshop participants will engage in critical assessment of emerging XR technologies and develop an XR research agenda that integrates research on interaction technologies and techniques with safety, security, and privacy research.