
    How level and type of deafness affects user perception of multimedia video clips

    Our research investigates the impact that hearing has on the perception of digital video clips, with and without captions, by discussing how hearing loss, captions and deafness type affect user QoP (Quality of Perception). QoP encompasses not only a user's satisfaction with the quality of a multimedia presentation, but also their ability to analyse, synthesise and assimilate its informational content. Results show that hearing has a significant effect on participants' ability to assimilate information, independent of video type and use of captions. Captions do not necessarily provide deaf users with a 'greater level of information' from video; rather, they change user QoP, depending on deafness type, by providing a 'greater level of context' for the video. It is also shown that post-lingual mild and moderately deaf participants predict their level of information assimilation less accurately than post-lingual profoundly deaf participants, despite their residual hearing. A positive correlation was identified between level of enjoyment (LOE) and self-predicted level of information assimilation (PIA), independent of hearing level or hearing type. When this is considered in a QoP quality framework, it calls into question how the user perceives factors such as 'informative' and 'quality'.
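    As an illustration, here is a minimal sketch of how a correlation between LOE and PIA scores such as the one reported might be computed. The participant scores are invented placeholders, not the study's data, and the choice of a Pearson test via scipy is an assumption, not the paper's stated method.

    # Hypothetical sketch: correlating level of enjoyment (LOE) with
    # self-predicted information assimilation (PIA). Scores are invented.
    from scipy.stats import pearsonr

    loe = [4, 5, 3, 4, 2, 5, 3, 4]          # enjoyment rating per participant
    pia = [70, 85, 55, 75, 40, 90, 60, 80]  # predicted assimilation (%)

    r, p = pearsonr(loe, pia)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a positive r mirrors the reported trend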

    Impact of captions on deaf and hearing perception of multimedia video clips

    We investigate the impact of captions on deaf and hearing perception of multimedia video clips. We measure perception using a parameter called Quality of Perception (QoP), which encompasses not only a user's satisfaction with multimedia clips, but also his/her ability to perceive, synthesise and analyse the informational content of such presentations. By studying perceptual diversity, we aim to identify trends that will help future implementation of adaptive multimedia technologies. Results show that although hearing level has a significant effect on information assimilation, the effect of captions on the objective level of information assimilated is not significant. Deaf participants predict that captions significantly improve their level of information assimilation, although no significant objective improvement was measured. The level of enjoyment is unaffected by a participant's level of hearing or use of captions.

    Composing music more accessible to the hearing-impaired

    "A hearing-impaired individual's perception of the world is not devoid of music. Unable to hear sounds the way most people can, deaf persons often receive audio information through other means including: the tactile sensation of sound vibrations, residual hearing (the limited range of pitches that hearing-impaired persons may possess), and visual clues. Presenting existing music in a manner that takes these means of extended listening into account is not a new idea: visual representations of music through sign language or some form of mixed media have existed for many years, and in recent years technology has enhanced the experience of music through vibrotactile cues. However, there is little precedent for creating original music for the express purpose of appealing to those who are hearing-impaired. Taking these factors into account, a composer can create music that will be meaningful to someone with a hearing impairment. This document provides an insight into hearing-impaired individuals' perception of sound as it pertains to music. An evaluation of the abilities and limitations that a hearing-impairment creates is presented, and suggestions for creating original music that caters to a deaf audience are made. Application of these ideas results in an original work with the express purpose of appealing to the hearing-impaired. This work, Collapse, is explored and analyzed through the criteria defined by the rest of the paper; and at its conclusion, recommendations are made for further research and musical applications of this research. The score to Collapse is provided (Appendix A) with a synthesized recording for reference (Appendix B), and a video that may be used in the performance of this work is provided as well (Appendix C)"--From author-supplied metadata

    Toward A Theory of Media Reconciliation: An Exploratory Study of Closed Captioning

    This project is an interdisciplinary empirical study that explores the emotional experiences resulting from the use of the assistive technology closed captioning. More specifically, this study focuses on documenting the user experiences of both D/deaf and Hearing multimedia users in an effort to better identify and understand the variables and processes involved in facilitating and supporting connotative and emotional meaning making. There is an ever-present gap in closed captioning studies thus far: an emphasis on understanding and measuring denotative meaning-making behavior while largely ignoring connotative meaning-making behavior, which is necessarily an equal participant in a user's viewing experience. This study explores connotative and emotional meaning-making behaviors so as to better understand the behavior exhibited by users engaged with captioned multimedia. To that end, a mixed-methods design was developed that uses qualitative methods from the field of User Experience (UX) to explore connotative equivalence between D/deaf and Hearing users, and an augmented version of S. R. Gulliver and G. Ghinea's (2003) quantitative measure Information Assimilation (IA) from the field of Human-Computer Interaction (HCI) to measure denotative equivalence between the two user types. To measure denotative equivalence, a quiz containing open-ended questions to measure IA was used. To measure connotative equivalence, the following measures were used: 1) Likert scales to measure users' confidence in answers to the open-ended questions; 2) a Likert scale to measure a user's interest in the stimulus; 3) open-ended questions to identify scenes that elicited the strongest emotional responses from users; 4) four-level response questions with accompanying Likert scales to determine the strength of emotional reaction to three select excerpts from the stimulus; and 5) an interview consisting of three open-ended questions and one fixed-choice question. This study found no major differences in denotative equivalence between the D/deaf and Hearing groups; however, there were important differences in emotional reactions to the stimulus, indicating that there was not connotative equivalence between the groups in response to the emotional content. More importantly, this study found that the strategies used to create both denotative and connotative meaning from the presented information differed between groups and between individuals within groups. To explain the behaviors observed, this work offers a theory of Media Reconciliation based on Wolfgang Iser's (1980) phenomenological theory of the 'virtual text'.
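    A minimal sketch of how the denotative-equivalence comparison might look in practice: the IA quiz scores below are invented placeholders, and the use of a Mann-Whitney U test via scipy is an assumption; the study does not specify its statistical test here.

    # Hypothetical sketch: comparing IA quiz scores between D/deaf and
    # Hearing groups. Scores are invented; the test choice is an assumption.
    from scipy.stats import mannwhitneyu

    ia_deaf    = [6, 7, 5, 8, 6, 7, 5, 6]   # correct answers per participant
    ia_hearing = [7, 6, 6, 8, 7, 5, 6, 7]

    u, p = mannwhitneyu(ia_deaf, ia_hearing, alternative="two-sided")
    print(f"U = {u}, p = {p:.3f}")  # a large p would indicate no group difference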

    On the design of visual feedback for the rehabilitation of hearing-impaired speech


    Emotional engineering of artificial representations of sign languages

    The fascination and challenge of making an appropriate digital representation of sign language for a highly specialised and culturally rich community such as the Deaf has brought about the development and production of several digital representations of sign language (DRSLs). These range from pictorial depictions of sign language and filmed video recordings to animated avatars (virtual humans). However, issues relating to translating and representing sign language in the digital domain, and the effectiveness of the various approaches, have divided the opinion of the target audience. As a result, there is still no universally accepted digital representation of sign language. For systems to reach their full potential, researchers have postulated that further investigation is needed into the interaction and representational issues associated with mapping sign language into the digital domain. This dissertation contributes a novel approach that investigates the comparative effectiveness of digital representations of sign language within different information delivery contexts. The empirical studies presented support the characterisation of the properties that make a DRSL an effective communication system, properties which, when defined by the Deaf community, were often referred to as "emotion". This has led to, and supported, the development of the proposed design methodology for the "Emotional Engineering of Artificial Sign Languages", which forms the main contribution of this thesis.

    An eye tracking study on the perception and comprehension of unimodal and bimodal linguistic inputs by deaf adolescents

    An eye tracking experiment explored the gaze behavior of deaf individuals when perceiving language in spoken language only, in sign language only, and in sign-supported speech (SSS). Participants were deaf (n = 25) and hearing (n = 25) Spanish adolescents. Deaf students were prelingually profoundly deaf individuals with cochlear implants (CIs) used by age 5 or earlier, or prelingually profoundly deaf native signers with deaf parents. The effectiveness of SSS has rarely been tested within the same group of children for discourse-level comprehension. Here, video-recorded texts, including spatial descriptions, were alternately transmitted in spoken language, sign language and SSS. The capacity of these communicative systems to equalize comprehension in deaf participants with that of spoken language in hearing participants was tested. Within-group analyses of deaf participants tested whether the bimodal linguistic input of SSS favored discourse comprehension compared to unimodal languages. Deaf participants with CIs achieved comprehension equal to hearing controls in all communicative systems, while deaf native signers without CIs achieved comprehension equal to hearing participants when tested in their native sign language. Comprehension of SSS was not increased compared to spoken language, even when spatial information was communicated. Eye movements of deaf and hearing participants were tracked, and dwell times spent looking at the face or body area of the sign model were analyzed. Within-group analyses focused on differences between native and non-native signers. Dwell times of hearing participants were equally distributed across upper and lower areas of the face, while deaf participants mainly looked at the mouth area; this could enable information to be obtained from mouthings in sign language and from lipreading in SSS and spoken language. Few fixations were directed toward the signs, although these were more frequent when spatial language was transmitted. Both native and non-native signers looked mainly at the face when perceiving sign language, although non-native signers looked significantly more at the body than native signers. This distribution of gaze fixations suggests that deaf individuals – particularly native signers – mainly perceived signs through peripheral vision.
    Funding: European Union's Seventh Framework Program for research, technological development and demonstration, grant 31674.
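    A minimal sketch of how per-AOI dwell times like those analyzed here could be aggregated from raw fixation records. The fixation data and AOI labels (mouth, eyes, body) are invented placeholders standing in for the study's face and body areas, not its actual coding scheme.

    # Hypothetical sketch: summing fixation durations into dwell-time
    # proportions per area of interest (AOI). Records are invented.
    from collections import defaultdict

    # (AOI label, fixation duration in ms) for one participant
    fixations = [("mouth", 420), ("eyes", 180), ("body", 90),
                 ("mouth", 510), ("body", 60), ("eyes", 220)]

    dwell = defaultdict(int)
    for aoi, dur in fixations:
        dwell[aoi] += dur

    total = sum(dwell.values())
    for aoi, ms in sorted(dwell.items(), key=lambda kv: -kv[1]):
        print(f"{aoi:>5}: {ms:4d} ms ({ms / total:.0%})")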