20 research outputs found
Viewing angle matters in British Sign Language processing
The impact of adverse listening conditions on spoken language perception is well established, but the role of suboptimal viewing conditions in signed language processing is less clear. Viewing angle, i.e. the physical orientation of a perceiver relative to a signer, varies in many everyday deaf community settings for L1 signers and may impact comprehension. Further, processing from various viewing angles may be more difficult for late L2 learners of a signed language, who experience less variation in sign input while learning. Using a semantic decision task in a distance priming paradigm, we show that British Sign Language signers are slower and less accurate at comprehending signs shown from side viewing angles, with L2 learners in particular making disproportionately more errors when viewing signs from side angles. We also investigated how individual differences in mental rotation ability modulate the processing of signs from different angles. Speed and accuracy on the BSL task correlated with mental rotation ability, suggesting that signers may mentally represent signs from a frontal view and use mental rotation to process signs from other viewing angles. Our results extend the literature on viewpoint specificity in visual recognition to linguistic stimuli. The data suggest that L2 signed language learners should maximise their exposure to diverse signed language input, both in terms of viewing angle and other challenging viewing conditions, to improve comprehension.
Video Relay Service for Deaf people using WebRTC
This paper reports on an experimental open-source video relay service prototype that helps Deaf people communicate with hearing people by accessing a third-party sign language interpreter on a mobile device. Deaf people are disadvantaged in many ways when communicating with the hearing world in real-world scenarios, such as hospital visits and cases of emergency. When possible, Deaf people can enlist the assistance of a family member, community worker or sign language interpreter in such scenarios; however, this assistance must be pre-arranged, and Deaf people would prefer on-the-fly assistance. Our application assists Deaf people to contact any available sign language interpreter to facilitate communication between the Deaf person and a hearing person using a split-screen model, effectively creating a three-way conversation between the Deaf person, the hearing person and the sign language interpreter. The prototype was developed on the WebRTC platform, with JavaScript for browser operability and hardware platform independence. Our hope is that the research can be used to persuade mobile network operators of the need for free or heavily discounted data connections to relay services for Deaf mobile customers.
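The core of the relay model described above is a matchmaking step: before any WebRTC signalling can begin, a free interpreter must be joined to the Deaf and hearing parties. A minimal sketch of that step follows; the names `RelayService`, `register_interpreter` and `start_call` are illustrative assumptions, not the prototype's actual API (which runs as browser JavaScript on WebRTC).

```python
# Illustrative sketch (not the paper's code): pairing a Deaf caller and a
# hearing party with the first available sign language interpreter, producing
# the three-way session that the split-screen UI would then render.
from dataclasses import dataclass, field

@dataclass
class RelayService:
    available_interpreters: list = field(default_factory=list)
    sessions: list = field(default_factory=list)

    def register_interpreter(self, interpreter_id: str) -> None:
        """An interpreter signals that they are free to take calls."""
        self.available_interpreters.append(interpreter_id)

    def start_call(self, deaf_user: str, hearing_user: str):
        """Create a three-way session, or return None if no interpreter is free."""
        if not self.available_interpreters:
            return None
        interpreter = self.available_interpreters.pop(0)  # first-come, first-served
        session = {"deaf": deaf_user, "hearing": hearing_user,
                   "interpreter": interpreter}
        self.sessions.append(session)
        return session
```

In a real deployment this logic would live on the signalling server, and each returned session would trigger the exchange of WebRTC offers and answers among the three peers.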
Assistive technologies for severe and profound hearing loss: beyond hearing aids and implants
Assistive technologies offer capabilities that were previously inaccessible to individuals with severe and profound hearing loss who have no or limited access to hearing aids and implants. This literature review aims to explore existing assistive technologies and identify what still needs to be done. It is found that there is a lack of focus on the overall objectives of assistive technologies. In addition, several other issues are identified: only a very small number of assistive technologies developed within a research context have led to commercial devices; there is a predisposition to use the latest expensive technologies; and there is a tendency to avoid designing products universally. Finally, further development of plug-ins that translate the text content of a website into various sign languages is needed to make information on the internet more accessible.
Guidelines for and evaluation of the design of technology-supported lessons to teach basic programming principles to deaf and hard of hearing learners: a case study of a school for the deaf
Deaf and Hard of Hearing (DHH) learners are part of a diverse population with unique learning challenges, strengths and needs. Learning material should be developed specifically for them to provide for their needs and capitalise on their strengths. These materials should include visual material and strategies as well as sign language. Furthermore, DHH learners have the same capacity for learning as hearing learners. However, in South Africa, DHH learners do not have adequate access to training in computer-related subjects, and therefore no material exists that has been developed specifically for DHH learners who want to learn a programming language. This research provides guidelines on the way technology-supported lessons can be designed to teach basic programming principles using the programming language Scratch, to DHH learners. Provision was made for the South African context where limited technology is available at most schools for DHH learners, but where most educators have access to Microsoft Office applications – specifically MS PowerPoint. Two goals were pursued. The primary goal of this research project was to determine the user experience (UX) of the participants (both learners and educators) during and after using and attending the technology-supported lessons. This was achieved through a case study. Four UX evaluation elements were evaluated in this project. They were: usability, accessibility, emotional user reaction, and hedonic aspects. Questionnaires, semi-structured interviews as well as participant-observation were used to determine the UX of participants. The UX evaluation provided sufficient evidence to claim that UX of participants was satisfactory, and therefore the guidelines that were developed to create technology-supported lessons to teach basic programming principles to DHH learners were appropriate. 
The secondary goal was to develop guidelines for the design of technology-supported lessons to teach programming to DHH learners, and to apply these guidelines to develop a high-fidelity, fully functional prototype: a set of technology-supported lessons. This was achieved through a prototype construction research strategy. The lessons consisted of two vocabulary lessons and one programming lesson. The words taught in the vocabulary lessons were either terms appearing in the interface of Scratch, or words needed to explain programming principles and the Scratch context. The programming lesson (a PowerPoint slide show) was a guide for the educator to present the content in a logical way and not leave out important information. It used multimedia techniques (colour, pictures, animation) to explain programming concepts and to display the tasks to be completed to the learners, so that they could remember the sequence of the steps. Practical strategies were included in the guidelines to address the learning challenges DHH learners experience in the following areas: comprehension skills, application of knowledge and knowledge organisation, relational and individual-item orientations, metacognition, memory, and distractibility. The guidelines refer to techniques and principles that can be followed to design the interface and navigation tools of a technology-supported lesson; enhance communication with DHH learners and provide support for them to work independently; specify the educator's role and attitude when facilitating or presenting programming lessons; and structure a programming lesson.
Data-Driven Synthesis and Evaluation of Syntactic Facial Expressions in American Sign Language Animation
Technology to automatically synthesize linguistically accurate and natural-looking animations of American Sign Language (ASL) would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf and have low English literacy skills. State-of-art sign language animation tools focus mostly on accuracy of manual signs rather than on the facial expressions. We are investigating the synthesis of syntactic ASL facial expressions, which are grammatically required and essential to the meaning of sentences. In this thesis, we propose to: (1) explore the methodological aspects of evaluating sign language animations with facial expressions, and (2) examine data-driven modeling of facial expressions from multiple recordings of ASL signers. In Part I of this thesis, we propose to conduct rigorous methodological research on how experiment design affects study outcomes when evaluating sign language animations with facial expressions. Our research questions involve: (i) stimuli design, (ii) effect of videos as upper baseline and for presenting comprehension questions, and (iii) eye-tracking as an alternative to recording question-responses from participants. In Part II of this thesis, we propose to use generative models to automatically uncover the underlying trace of ASL syntactic facial expressions from multiple recordings of ASL signers, and apply these facial expressions to manual signs in novel animated sentences. We hypothesize that an annotated sign language corpus, including both the manual and non-manual signs, can be used to model and generate linguistically meaningful facial expressions, if it is combined with facial feature extraction techniques, statistical machine learning, and an animation platform with detailed facial parameterization. 
To further improve sign language animation technology, we will assess the quality of the animation generated by our approach with ASL signers through the rigorous evaluation methodologies described in Part I.
SignDIn: Designing and assessing a generalisable mobile interface for Sign support
SignSupport is a collaborative project between the Computer Science departments of the University of Cape Town (UCT) and the University of the Western Cape (UWC), South Africa. The intention of the software is to assist Deaf users to communicate with those who can hear in domain-specific scenarios. The penultimate version of this software is a mobile application that facilitates communication between Deaf patients and hearing pharmacists through the use of Sign Language videos stored locally on the mobile device. In this iteration, adding any new content to the mobile application necessitates redevelopment, and this is seen as a limitation of SignSupport: the architecture hinders the addition of new domains of use as well as the extension of existing domains. This dissertation presents the development and assessment of a new mobile application and data structure, together called SignDIn, named as an amalgamation of the words 'Sign', 'Display' and 'Input'. The data structure facilitates the easy transfer of information to the mobile application in such a way as to extend its use to new domains. The mobile application parses the data structure and presents the information held therein to the user. In this development, the dissertation sets out to address the following: 1. How to develop a generalisable data structure that can be used in multiple contexts of Sign Language use. 2. How to test and evaluate the resulting application to ensure that parsing the data structure does not hinder performance. The first objective of this research is to develop a data structure in a generalised format so that it is applicable to multiple domains of use. Firstly, data structure technologies were evaluated and XML was selected as the most appropriate of three candidates (relational databases and JSON being the other two) with which to build the data structure.
Information was collected from the International Computer Driving Licence (ICDL) and pharmacy domains, and an XML data structure was designed, passing through three stages of development. The final data structure comprises two XML types: display XMLs, holding the display information in a general format of screen, video, image, capture and input; and input XMLs, holding the list of input options available to users. The second objective is to test the performance of the mobile application to ensure that parsing the XML does not slow it down. Three XML parsers were evaluated: SAX parsing, DOM parsing, and the XML Pull Parser. These were evaluated against the time taken to parse a screen object as well as the screen-object throughput per second. The XML Pull Parser was found to be the most efficient and appropriate for use in SignDIn.
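The display-XML idea described above can be illustrated with a short sketch. The element and attribute names below are hypothetical stand-ins, not the actual SignDIn schema, and the parsing here uses Python's standard-library ElementTree for brevity rather than the Android XML Pull Parser the thesis settled on.

```python
# Hypothetical example of a "display XML": each screen is composed of
# video/image/capture/input elements, which the mobile application would
# parse into screen objects and render in sequence.
import xml.etree.ElementTree as ET

DISPLAY_XML = """
<display domain="pharmacy">
  <screen id="1">
    <video src="take_twice_daily.mp4"/>
    <input ref="yes_no_options"/>
  </screen>
  <screen id="2">
    <image src="pill_diagram.png"/>
    <capture field="patient_name"/>
  </screen>
</display>
"""

def parse_screens(xml_text):
    """Return one dict per <screen>: its id and an ordered list of
    (element_tag, attributes) pairs describing what to display."""
    root = ET.fromstring(xml_text)
    screens = []
    for screen in root.findall("screen"):
        elements = [(child.tag, dict(child.attrib)) for child in screen]
        screens.append({"id": screen.get("id"), "elements": elements})
    return screens
```

Separating the display XML (what to show) from the input XML (the answer options a user can pick) is what lets a new domain be added by shipping new XML files instead of redeveloping the application.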
Usability and content verification of a mobile tool to help a deaf person with pharmaceutical instruction
Magister Scientiae - MSc
This thesis describes a multi-disciplinary collaboration towards the iterative development of a mobile communication tool to support a Deaf person in understanding usage directions for medication dispensed at a pharmacy. We are improving the usability and correctness of the user interface. The tool translates medicine instructions given in English text into South African Sign Language videos, which are relayed to a Deaf user on a mobile phone. Communication between pharmacists and Deaf patients was studied to extract relevant exchanges between the two users. We incorporated the common elements of these dialogues to represent content in a verifiable manner, ensuring that the mobile tool relays the correct information to the Deaf user. Instructions are made available to a Deaf patient as sign language videos on a mobile device. A pharmacy setup was created to conduct trials of the tool with groups of end users, in order to collect usability data through recorded participant observation, questionnaires and focus group discussions. Subsequently, pre-recorded sign language videos, stored on a phone's memory card, were tested for correctness. Lastly, we discuss the results and implications of the study and provide a conclusion to our research.
Comparison and evaluation of mass video notification methods used to assist Deaf people
Magister Scientiae - MSc
In South Africa, Deaf people communicate with one another and the broader community by means of South African Sign Language. The majority of Deaf people who have access to a mobile phone (cell phone) use the Short Message Service (SMS) to communicate and share information with hearing people, but seldom use it among themselves. It is assumed that video messaging will be more accessible to Deaf people, since their level of literacy may prevent them from making effective use of information disseminated via texting/SMS. The principal objective of the research was to explore a cost-effective and efficient mass multimedia messaging system. The intention was to adapt a successful text-based mass notification system, developed by a local nongovernmental organization (NGO), to accommodate efficient and affordable mass video messaging for Deaf people. The questions that underpin this research are: How should video-streaming mass-messaging methods be compared and evaluated to find the most suitable method to deliver an affordable and acceptable service to Deaf people? Which transport vehicles should be considered: Multimedia Message Service (MMS), the web, electronic mail, or a cell-phone-resident push/pull application? Which is the most cost-effective? And, finally: How does the video quality of the various transport vehicles differ in terms of the clarity of the sign language as perceived by the Deaf? The soft-systems methodology was followed to manage the research process, and a mixed-methods methodology was followed to collect data, by means of experiments and semi-structured interviews. A prototype for mobile phone usage was developed and evaluated with Deaf members of the NGO Deaf Community of Cape Town. The technology and internet usage of the Deaf participants provided background information.
The Statistical Package for the Social Sciences (SPSS) was used to analyse the quantitative data, and content analysis was used to analyse the documents and interviews. All of the Deaf participants used their mobile phones for SMS, and the majority (81.25%) used English to type messages; however, all indicated that they would have preferred to use South African Sign Language on their mobile phones if it were available, and they were quite willing to pay between 75c and 80c per message for such a video-messaging service. Of the transport vehicles demonstrated, most Deaf people indicated that they preferred the SMS prototype (with a web link to the video) over the MMS prototype with the video attached. They were, however, very concerned about the cost of using the system, as well as the quality of the sign language videos.