2 research outputs found

    Elicitation and Empathy with AI-enhanced Adaptive Assistive Technologies (AATs)

    Efforts to include people with disabilities in design education are difficult to scale, and the dynamics of participation need to be carefully planned to avoid putting unnecessary burdens on users. However, given the scale of emerging AI-enhanced technologies and their potential for creating new vulnerabilities for marginalized populations, new methods for generating empathy and self-reflection in technology design students (as the future creators of such technologies) are needed. We report on a study with Information Systems graduate students who used a participatory elicitation toolkit to reflect on two cases of end-user privacy perspectives towards AI-enhanced tools in the age of surveillance capitalism: their own when using tools to support learning, and those of older adults using AI-enhanced adaptive assistive technologies (AATs) that help with pointing and typing difficulties. In drawing on the experiences of students with intersectional identities, our exploratory study aimed to incorporate intersectional thinking in privacy elicitation and to further understand its role in enabling sustainable, inclusive design practice and education. While aware of the risks to their own privacy and the role of identity and power in shaping experiences of bias, students who used the toolkit were more sanguine about the risks faced by AAT users, assuming that more data equates to better technology. Our tool proved valuable for eliciting reflection but not empathy.

    Understanding How to Inform Blind and Low-Vision Users about Data Privacy through Privacy Question Answering Assistants

    Understanding and managing data privacy in the digital world can be challenging for sighted users, let alone blind and low-vision (BLV) users. There is limited research on how BLV users, who have special accessibility needs, navigate data privacy, and how potential privacy tools could assist them. We conducted an in-depth qualitative study with 21 US BLV participants to understand their data privacy risk perception and mitigation, as well as their information behaviors related to data privacy. We also explored BLV users' attitudes towards potential privacy question answering (Q&A) assistants that could enable them to better navigate data privacy information. We found that BLV users face heightened security and privacy risks, but their risk mitigation is often insufficient. They do not necessarily seek data privacy information but clearly recognize the benefits of a potential privacy Q&A assistant. They also expect privacy Q&A assistants to possess cross-platform compatibility, support multi-modality, and demonstrate robust functionality. Our study sheds light on BLV users' expectations regarding usability, accessibility, trust, and equity in digital data privacy.

    Comment: This research paper is accepted by USENIX Security '2