
    TR-2015001: A Survey and Critique of Facial Expression Synthesis in Sign Language Animation

    Sign language animations can improve the accessibility of information and services for people who are deaf and have low literacy in spoken/written languages. Because sign languages differ from spoken/written languages in word order, syntax, and lexicon, many deaf people find it difficult to comprehend text on a computer screen or captions on a television. Animated characters performing sign language in a comprehensible way could make this information accessible. Facial expressions and other non-manual components play an important role in the naturalness and understandability of these animations, and their coordination with the manual signs is crucial for the interpretation of the signed message. Software that advances the support of facial expressions in the generation of sign language animation could make this technology more acceptable to deaf people. In this survey, we discuss the challenges in facial expression synthesis and compare and critique the state-of-the-art projects on generating facial expressions in sign language animations. Beginning with an overview of facial expression linguistics, sign language animation technologies, and background on animating facial expressions, we then discuss the search strategy and criteria used to select the five projects that are the primary focus of this survey. The survey continues by introducing the work from the five projects under consideration. Their contributions are compared in terms of support for a specific sign language, categories of facial expressions investigated, focus range in the animation generation, use of annotated corpora, input data or hypotheses for their approach, and other factors. Strengths and drawbacks of the individual projects are identified from these perspectives. The survey concludes with our current research focus in this area and future prospects.

    Comparing Map Learning between Touchscreen-Based Visual and Haptic Displays: A Behavioral Evaluation with Blind and Sighted Users

    The ubiquity of multimodal smart devices affords new opportunities for eyes-free applications that convey graphical information to both sighted and visually impaired users. Using previously established haptic design guidelines for generic rendering of graphical content on touchscreen interfaces, the current study evaluates the learning and mental representation of digital maps, representing a key real-world translational eyes-free application. Two experiments involving 12 blind participants and 16 sighted participants compared cognitive map development and test performance on a range of spatio-behavioral tasks across three information-matched learning-mode conditions: (1) our prototype vibro-audio map (VAM), (2) traditional hardcopy tactile maps, and (3) visual maps. Results demonstrated that when perceptual parameters of the stimuli were matched between modalities during haptic and visual map learning, test performance was highly similar (functionally equivalent) between the learning modes and participant groups. These results suggest equivalent cognitive map formation between blind and sighted users and between maps learned from different sensory inputs, providing compelling evidence for the development of amodal spatial representations in the brain. The practical implications of these results include empirical evidence supporting a growing interest in the efficacy of multisensory interfaces as a primary interaction style for people both with and without vision. The findings challenge the long-held assumption that blind people exhibit deficits on global spatial tasks compared to their sighted peers, and also provide empirical support for the methodological use of sighted participants in studies of technologies primarily aimed at supporting blind users.
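As a rough illustration of how a touchscreen-based vibro-audio map can operate, the sketch below triggers vibration whenever the touch point falls within a rendered map line. The segment representation, line width, and function names are hypothetical illustrations; the actual VAM follows the haptic design guidelines cited in the abstract rather than this code.

```python
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    """Distance from touch point (px, py) to the segment from (ax, ay) to (bx, by)."""
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:  # degenerate segment: a single point
        return math.hypot(px - ax, py - ay)
    # Project the touch point onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def vibrate_on_touch(touch, segments, line_width_px=10.0):
    """Return True (vibrate) if the touch point lies on any rendered map line."""
    px, py = touch
    return any(point_segment_distance(px, py, *seg) <= line_width_px / 2
               for seg in segments)
```

In a real app this hit test would run on every touch-move event and drive the device's vibration motor, with audio labels announced for named map features.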

    EquiP: A Method to Co-Design for Cooperation

    In Participatory Design (PD), the design of a cooperative digital solution should involve all stakeholders in the co-design. When one stakeholder’s position is weaker due to socio-cultural structures or differences in knowledge or abilities, PD methods should help designers balance the power in the design process at both the macro and micro levels. We present EquiP, a PD method that addresses the power relations arising during the design process and draws on theories about participation and power in the design and organisation of change processes. We contribute to Computer Supported Cooperative Work (CSCW) by using the method to design computer support for cooperation on cognitive rehabilitation between people with Mild Acquired Brain Injuries (MACI) and their healthcare professionals, where strengthening the cooperation is considered an element of patient empowerment. The method is presented as a contribution to the intersection between PD and CSCW, and the discussion of power in PD contributes to the discussion of cooperation in CSCW. We found that EquiP supported the creation of choices, and hence the ‘power to’ influence the design. The method can contribute to a power ‘equilibrium’ and a positive-sum power relation in PD sessions involving all stakeholders.

    A competencies framework of visual impairments for enabling shared understanding in design

    Existing work in Human Computer Interaction and accessibility research has long sought to investigate the experiences of people with visual impairments in order to address their needs through technology design and integrate their participation into different stages of the design process. Yet challenges remain regarding how disabilities are framed in technology design and the extent of involvement of disabled people within it. Furthermore, accessibility is often considered a specialised job and misunderstandings or assumptions about visually impaired people’s experiences and needs occur outside dedicated fields. This thesis presents an ethnomethodology-informed design critique for supporting awareness and shared understanding of visual impairments and accessibility that centres on their experiences, abilities, and participation in early-stage design. This work is rooted in an in-depth empirical investigation of the interactional competencies that people with visual impairments exhibit through their use of technology, which informs and shapes the concept of a Competencies Framework of Visual Impairments. Although past research has established stances for considering the individual abilities of disabled people and other social and relational factors in technology design, by drawing on ethnomethodology and its interest in situated competence this thesis employs an interactional perspective to investigate the practical accomplishments of visually impaired people. Thus, this thesis frames visual impairments in terms of competencies to be considered in the design process, rather than a deficiency or problem to be fixed through technology. Accordingly, this work favours supporting awareness and reflection rather than the design of particular solutions, which are also strongly needed for advancing accessible design at large. This PhD thesis comprises two main empirical studies branched into three different investigations. 
    The first and second investigations are based on a four-month ethnographic study with visually impaired participants examining their everyday technology practices. The third investigation comprises the design and implementation of a workshop study developed to include people with and without visual impairments in collaborative reflections about technology and accessibility. As such, each investigation informed the ones that followed, revisiting and refining concepts and design materials throughout the thesis. Although ethnomethodology is the overarching approach running through this PhD project, each investigation has a different focus of enquiry:
    • The first is focused on analysing participants’ technology practices and unearthing the interactional competencies enabling them.
    • The second is focused on analysing technology demonstrations, which were a pervasive phenomenon recorded during fieldwork, and the work of demonstrating as exhibited by visually impaired participants.
    • Lastly, the third investigation defines a workshop approach employing video demonstrations and a deck of reflective design cards as building blocks for enabling shared understanding among people with and without visual impairments from different technology backgrounds; that is, users, technologists, designers, and researchers.
    Overall, this thesis makes several contributions to audiences within and outside academia, such as the detailed accounts of some of the main technology practices of people with visual impairments and the methodological analysis of demonstrations in empirical Human Computer Interaction and accessibility research.
Moreover, the main contribution lies in the conceptualisation of a Competencies Framework of Visual Impairments from the empirical analysis of interactional competencies and their practical exhibition through demonstrations, as well as the creation and use of a deck of cards that encapsulates the competencies and external elements involved in the everyday interactional accomplishments of people with visual impairments. All these contributions are finally brought together in the implementation of the workshop approach, which enabled participants to interact with and learn from each other. Thus, this thesis builds upon and advances contemporary strands of work in Human Computer Interaction that call for re-orienting how visual impairments and, overall, disabilities are framed in technology design, and ultimately for re-shaping the design practice itself.

    Multi-Sensory Interaction for Blind and Visually Impaired People

    This book conveys the visual elements of artwork to the visually impaired through various sensory elements, opening a new perspective for appreciating visual artwork. In addition, it explores a technique for expressing a color code by integrating patterns, temperatures, scents, music, and vibrations, and presents future research topics. A holistic experience using multi-sensory interaction conveys the meaning and contents of a work to people with visual impairment through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork using a variety of senses, including touch, temperature, tactile pattern, and sound, helps them appreciate artwork at a deeper level than can be achieved with hearing or touch alone. The development of such art-appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. These new aids also expand opportunities for non-visually-impaired as well as visually impaired people to enjoy works of art and, through continuous efforts to enhance accessibility, break down the boundaries between disabled and non-disabled people in the field of culture and the arts. In addition, the developed multi-sensory expression and delivery tool can be used as an educational tool to increase product and artwork accessibility and usability through multi-modal interaction. Training with the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind’s eye.
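The color-code idea above (mapping a color onto patterns, temperatures, and vibrations) can be sketched as a simple translation function. All of the cue ranges and mappings below are hypothetical illustrations, not the encodings defined in the book.

```python
import colorsys

def color_to_multisensory(r, g, b):
    """Map an RGB color (0-255 per channel) to illustrative multi-sensory cues:
    hue -> vibration frequency, saturation -> tactile pattern density,
    brightness -> temperature. All ranges are hypothetical examples."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return {
        "vibration_hz": 50 + h * 250,      # sweep 50-300 Hz across the hue circle
        "pattern_density": round(s * 10),  # 0 (smooth) to 10 (densely dotted)
        "temperature_c": 20 + v * 15,      # 20 C (dark) to 35 C (bright)
    }
```

A tactile display or art-appreciation aid could render these cues simultaneously, so that a saturated bright red, for example, feels warm, densely textured, and slowly vibrating.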

    Apraxia World: Deploying a Mobile Game and Automatic Speech Recognition for Independent Child Speech Therapy

    Children with speech sound disorders typically improve pronunciation quality by undergoing speech therapy, which must be delivered frequently and with high intensity to be effective. As such, clinic sessions are supplemented with home practice, often under caregiver supervision. However, traditional home practice can grow boring for children due to its monotony, and practice frequency is limited by caregiver availability, making it difficult for some children to reach the required therapy dosage. To address these issues, this dissertation presents a novel speech therapy game to increase engagement, and explores automatic pronunciation evaluation techniques to afford children independent practice. The therapy game, called Apraxia World, delivers customizable, repetition-based speech therapy while children play through platformer-style levels using typical on-screen tablet controls; children complete in-game speech exercises to collect assets required to progress through the levels. Additionally, Apraxia World provides pronunciation feedback according to an automated pronunciation evaluation system running locally on the tablet.
    Apraxia World offers two advantages over current commercial and research speech therapy games: first, the game provides extended gameplay to support long therapy treatments; second, it affords some practice independence via automatic pronunciation evaluation, allowing caregivers to lightly supervise rather than directly administer the practice. Pilot testing indicated that children enjoyed the game-based therapy much more than traditional practice and that the exercises did not interfere with gameplay. During a longitudinal study, children made clinically significant pronunciation improvements while playing Apraxia World at home. Furthermore, children remained engaged in the game-based therapy over the two-month testing period, and some even wanted to continue playing post-study. The second part of the dissertation explores word- and phoneme-level pronunciation verification for child speech therapy applications. Word-level pronunciation verification is accomplished using a child-specific template-matching framework, where an utterance is compared against correctly and incorrectly pronounced examples of the word. This framework identified mispronounced words better than both a standard automated baseline and co-located caregivers. Phoneme-level mispronunciation detection is investigated using a technique from the second-language-learning literature: training phoneme-specific classifiers on phonetic posterior features. This method also outperformed the standard baseline and, more significantly, identified mispronunciations better than student clinicians.
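The word-level template-matching idea described above can be illustrated with a minimal sketch: score an utterance against correctly and incorrectly pronounced templates and label it by the nearest match. The dissertation's actual framework operates on acoustic features of child speech; the dynamic-time-warping distance and the 1-D feature sequences below are simplifying assumptions for illustration, not the authors' implementation.

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two feature sequences
    (1-D values here for brevity; real systems compare frame vectors,
    e.g. MFCCs, with a vector distance)."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest warping path: match, insertion, or deletion.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def verify_word(utterance, correct_templates, incorrect_templates):
    """Label an utterance 'correct' if its closest template (by DTW distance)
    is a correctly pronounced example of the word, else 'mispronounced'."""
    best_correct = min(dtw_distance(utterance, t) for t in correct_templates)
    best_incorrect = min(dtw_distance(utterance, t) for t in incorrect_templates)
    return "correct" if best_correct <= best_incorrect else "mispronounced"
```

The child-specific aspect of the framework would enter through the templates themselves, i.e. using recordings of the same child (or similar children) as the comparison examples.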