16 research outputs found

    Live Captions in Virtual Reality (VR)

    Few VR applications and games implement captioning of speech and audio cues, which limits or prevents access to these applications for deaf or hard of hearing (DHH) users, new language learners, and other caption users. Additionally, little guidance exists on how to implement live captioning on VR headsets and how it may differ from traditional television captioning. To help fill this gap in what is known about user preferences for different VR captioning styles, we conducted a study with eight DHH participants to test three caption movement behaviors (headlocked, lag, and appear) while they watched live-captioned, single-speaker presentations in VR. Participants answered a series of Likert-scale and open-ended questions about their experience. Participant preferences were split, but the majority reported feeling comfortable using live captions in VR and enjoyed the experience. When participants ranked the caption behaviors, the three types tested were almost equally divided. IPQ (igroup Presence Questionnaire) results indicated each behavior had similar immersion ratings; however, participants found headlocked and lag captions more user-friendly than appear captions. We suggest that caption preference may vary with how participants use captions, and that providing opportunities for caption customization is best.
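    The three movement behaviors amount to three different update rules for where a caption sits relative to the viewer's gaze. A minimal sketch of that distinction in Python follows; the function, names, and constants are illustrative assumptions, not the study's implementation:

    ```python
    # Hypothetical per-frame update for the three caption movement behaviors.
    # All names and constants are illustrative, not taken from the study.

    LERP_SPEED = 2.0          # how quickly "lag" captions catch up (per second)
    APPEAR_THRESHOLD = 30.0   # degrees of head turn before "appear" captions jump

    def update_caption_yaw(behavior: str, caption_yaw: float,
                           head_yaw: float, dt: float) -> float:
        """Return the caption's new yaw angle (degrees) for one frame."""
        if behavior == "headlocked":
            # Rigidly fixed in the field of view: moves with every head turn.
            return head_yaw
        if behavior == "lag":
            # Smoothly interpolates toward the view direction, trailing
            # behind fast head movements.
            t = min(1.0, LERP_SPEED * dt)
            return caption_yaw + (head_yaw - caption_yaw) * t
        if behavior == "appear":
            # World-fixed until the viewer looks far enough away, then the
            # caption reappears centered in the new view direction.
            if abs(head_yaw - caption_yaw) > APPEAR_THRESHOLD:
                return head_yaw
            return caption_yaw
        raise ValueError(f"unknown behavior: {behavior!r}")
    ```

    A real renderer would apply the same rule to pitch and depth and handle yaw wraparound at 360 degrees; the sketch only shows how the three behaviors differ.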

    Automatic Speech Recognition Systems as Tools to Enhance Spoken Communication in the Workplace

    The workplace presents many challenges for deaf and hard-of-hearing individuals, partly because upward mobility demands a wide array of strategies, used flexibly, for accommodating communication with people who have typical hearing (Foster & Walter, 1992). Job-related demands also make the workplace a more difficult communication situation for those who are deaf than for those who are hard of hearing (Boutin & Wilson, 2009). Both groups, however, tend to experience less success in securing higher-level jobs than their peers with typical hearing, and their advancement is limited by their level of college degree (Kelly, Quagliata, DeMartino, & Perotti, 2015). For both deaf and hard-of-hearing workers, communication on the job reportedly involves English about 80% of the time, whether through writing, speech, or sign language with speech (Kelly et al., 2015). Given the spoken-language communication requirements of the workplace, to what extent does current speech recognition technology, especially as available in mobile apps, enhance access by deaf and hard-of-hearing individuals? Are speech recognition apps usable tools for enhancing exchanges between deaf or hard-of-hearing persons and individuals who have typical hearing, whether a coworker or a boss? To investigate the capabilities of newer Automatic Speech Recognition (ASR) applications and software as tools to support auditory access to spoken communication, we asked 26 deaf and hard-of-hearing college students to use a variety of applications and software in everyday, job-related settings and to provide evaluative feedback on their experiences. In this workshop our evaluators' findings will be shared. Additionally, participants will learn about the outcomes of trials with a beta app called Ava by Transcense Labs. Ava focuses on a seamless conversational experience for deaf and hard-of-hearing persons and is described as being like Siri, but for group conversations. The app shows a real-time, color-coded transcript of a discussion for use in situations such as meetings and on-the-job conferences.
    References: Boutin, D. L., & Wilson, K. B. (2009). Professional jobs and hearing loss: A comparison of deaf and hard of hearing consumers. Journal of Rehabilitation, 75(1), 36–40. Kelly, R., Quagliata, A., DeMartino, R., & Perotti, V. (2015). Deaf workers: Educated and employed, but limited in career growth. In Proceedings of the 22nd International Conference on Education of the Deaf, Athens, Greece.
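    As a rough illustration of what such apps do under the hood, the loop below transcribes microphone speech using the open-source Python SpeechRecognition package. This is a generic sketch, not the implementation of Ava or any app from the study, and it assumes the SpeechRecognition and PyAudio packages are installed:

    ```python
    # Generic ASR transcription loop (illustrative; not Ava's implementation).
    # Requires: pip install SpeechRecognition pyaudio
    import speech_recognition as sr

    recognizer = sr.Recognizer()

    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
        print("Listening... (Ctrl+C to stop)")
        while True:
            # Capture up to ~5 seconds of speech per phrase.
            audio = recognizer.listen(source, phrase_time_limit=5)
            try:
                # Send the captured phrase to a cloud ASR service.
                print(recognizer.recognize_google(audio))
            except sr.UnknownValueError:
                pass  # speech was unintelligible; skip this phrase
            except sr.RequestError as err:
                print(f"ASR service error: {err}")
                break
    ```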

    Captions versus transcripts for online video content

    Captions provide deaf and hard of hearing (DHH) users access to the audio component of web videos and television. While hearing consumers can watch and listen simultaneously, the transformation of audio to text requires deaf viewers to watch two simultaneous visual streams: the video and the textual representation of the audio. This can be a problem when the video has a lot of text or the content is dense, e.g., in Massive Open Online Courses. We explore the effect of providing caption history on users' ability to follow captions and be more engaged. We compare traditional on-video captions that display a few words at a time to off-video transcripts that can display many more words at once, and investigate the trade-off of requiring more effort to switch between the transcript and visuals versus being able to review more content history. We find a significant difference in users' preferences for viewing video with on-screen captions over off-screen transcripts in terms of readability, but no significant difference in users' preferences for following and understanding the video and narration content. We attribute this to viewers' perceived understanding improving significantly when using transcripts rather than captions, even though transcripts were less easy to track. We then discuss the implications of these results for online education, and conclude with an overview of potential methods for combining the benefits of both on-screen captions and transcripts.
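    The display trade-off compared here can be pictured as two views fed by the same word stream: a caption line that holds only the most recent words, and a transcript that accumulates full history. A small sketch follows; the window size is an illustrative assumption, not the study's parameter:

    ```python
    from collections import deque

    CAPTION_WINDOW = 8  # roughly "a few words at a time" (assumed value)

    caption_line: deque[str] = deque(maxlen=CAPTION_WINDOW)  # old words scroll off
    transcript: list[str] = []                               # keeps everything

    def on_new_words(words: list[str]) -> None:
        """Feed newly spoken words into both displays."""
        for word in words:
            caption_line.append(word)
            transcript.append(word)

    on_new_words("captions give viewers access to the audio track of a video".split())
    print("Caption:   ", " ".join(caption_line))  # last 8 words only
    print("Transcript:", " ".join(transcript))    # full reviewable history
    ```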

    Breaking The Exclusionary Boundary Between User Experience And Access: Steps Toward Making UX Inclusive Of Users With Disabilities

    This research paper points out that we as designers have failed to come up with a model of UX that would approximate a satisfying user experience for users with disabilities. It underscores the gaps in designer knowledge about disabled bodies. The paper also draws the attention of the designer community to the limited understanding we presently possess of disabled people's notions of, and expectations from, satisfying user experiences. It proposes a multi-step process for shifting the focus of design activity away from a medical model of accessibility design, which retrofits normative designs to the needs of users with disabilities, toward an accessible user experience (AUX) model of design that counts these users as design collaborators, possessors of special knowledge about disabled bodies, and untapped sources of innovative designs that might offer additional features for all users.

    The Right To Language

    We argue for the existence of a state constitutional legal right to language. Our purpose here is to develop a legal framework for protecting the civil rights of the deaf child, with the ultimate goal of calling for legislation that requires all levels of government to fund programs for deaf children and their families to learn a fully accessible language: a sign language.