    Multimedia Interfaces for BSL Using Lip Readers

    How level and type of deafness affects user perception of multimedia video clips

    Our research investigates the impact that hearing has on the perception of digital video clips, with and without captions, by discussing how hearing loss, captions and deafness type affect user QoP (Quality of Perception). QoP encompasses not only a user's satisfaction with the quality of a multimedia presentation, but also their ability to analyse, synthesise and assimilate its informational content. Results show that hearing has a significant effect on participants’ ability to assimilate information, independent of video type and use of captions. It is shown that captions do not necessarily provide deaf users with a ‘greater level of information’ from video, but cause a change in user QoP, depending on deafness type, which provides a ‘greater level of context of the video’. It is also shown that post-lingual mild and moderately deaf participants predict their level of information assimilation less accurately than post-lingual profoundly deaf participants, despite residual hearing. A positive correlation was identified between level of enjoyment (LOE) and self-predicted level of information assimilation (PIA), independent of hearing level or hearing type. When this is considered in a QoP quality framework, it calls into question how the user perceives certain factors, such as ‘informative’ and ‘quality’.
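
    As a rough illustration of the correlation analysis mentioned above, ordinal ratings such as LOE and PIA are commonly compared with a rank correlation; the sketch below uses invented placeholder ratings, not data from the study.

        # Hedged sketch: rank correlation between enjoyment (LOE) and predicted
        # information assimilation (PIA). The ratings are invented placeholders.
        from scipy.stats import spearmanr

        loe = [4, 5, 3, 2, 4, 5, 3, 1, 4, 2]   # hypothetical per-participant LOE (1-5)
        pia = [4, 4, 3, 2, 5, 5, 2, 2, 3, 2]   # hypothetical per-participant PIA (1-5)

        rho, p_value = spearmanr(loe, pia)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")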

    FAMAID: A TOOL FOR AIDING PEOPLE WITH DISABILITY

    People with disabilities face discrimination and obstacles every day that restrict them from participating in society on an equal basis with others. They are deprived of their rights to be included in ordinary school systems and even in the labour market. In the process of raising awareness, facilitating daily routines and developing guidance, the idea of assisting such people with handy tools/software arose and was implemented in the FamAid tool. FamAid offers people with hearing disability the opportunity to engage with society through many facilities. In this work, we implemented a web application that serves as a community for people with disability, who can use sign language to access the app. The app uses hand gesture recognition, an active research field in Human-Computer Interaction, to translate sign language to text. The text is then provided as input to the app, and the output is generated based on the user's request. This research presents an application intended to make life easier for people with a speaking and/or hearing disability.
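
    The pipeline the abstract outlines (hand gestures recognised from video, translated to text, and the text then passed to the web application) could be sketched roughly as below; the use of MediaPipe hand landmarks and the classify_sign model are assumptions for illustration, not FamAid's actual implementation.

        # Hedged sketch of a gesture-to-text front end: webcam frames -> hand
        # landmarks -> sign token -> accumulated text handed to the application.
        import cv2
        import mediapipe as mp

        hands = mp.solutions.hands.Hands(max_num_hands=1)

        def classify_sign(hand_landmarks):
            """Placeholder: a trained model mapping 21 landmarks to a text token."""
            return "<sign>"  # stand-in output; a real classifier would go here

        capture = cv2.VideoCapture(0)
        recognised = []
        while capture.isOpened():
            ok, frame = capture.read()
            if not ok:
                break
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:
                recognised.append(classify_sign(result.multi_hand_landmarks[0]))
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        capture.release()
        text_input = " ".join(recognised)  # this text would be sent to the web app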

    Content-prioritised video coding for British Sign Language communication.

    Video communication of British Sign Language (BSL) is important for remote interpersonal communication and for the equal provision of services for deaf people. However, the use of video telephony and video conferencing applications for BSL communication is limited by inadequate video quality. BSL is a highly structured, linguistically complete, natural language system that expresses vocabulary and grammar visually and spatially using a complex combination of facial expressions (such as eyebrow movements, eye blinks and mouth/lip shapes), hand gestures, body movements and finger-spelling that change in space and time. Accurate natural BSL communication places specific demands on visual media applications which must compress video image data for efficient transmission. This thesis presents novel video image coding methods developed to achieve the conflicting requirements for high image quality and efficient coding. Novel methods of prioritising visually important video image content for optimised video coding are developed to exploit the HVS spatial and temporal response mechanisms of BSL users (determined by Eye Movement Tracking) and the characteristics of BSL video image content. The methods implement an accurate model of HVS foveation, applied in the spatial and temporal domains, at the pre-processing stage of a current standard-based system (H.264). Comparison of the performance of the developed and standard coding systems, using methods of video quality evaluation developed for this thesis, demonstrates improved perceived quality at low bit rates. BSL users, broadcasters and service providers benefit from the perception of high quality video over a range of available transmission bandwidths. The research community benefits from a new approach to video coding optimisation and better understanding of the communication needs of deaf people.
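
    As a crude illustration of this kind of content prioritisation (not the thesis's actual HVS model), a foveation-style pre-filter can keep full detail near an assumed fixation point, such as the signer's face, and attenuate detail elsewhere before the frame is passed to a standard encoder such as H.264.

        # Toy foveation pre-filter: blend the original frame with a blurred copy,
        # weighted by distance from an assumed fixation point. Illustration only.
        import cv2
        import numpy as np

        def foveate(frame, fixation, sigma_px=120):
            h, w = frame.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w]
            dist2 = (xs - fixation[0]) ** 2 + (ys - fixation[1]) ** 2
            weight = np.exp(-dist2 / (2 * sigma_px ** 2))[..., None]  # 1 at fixation
            blurred = cv2.GaussianBlur(frame, (31, 31), 0)
            return (weight * frame + (1 - weight) * blurred).astype(np.uint8)

        # Example: fixate on the upper-centre of a 720p frame (roughly where a
        # signer's face would be), then feed the result to any standard encoder.
        frame = np.zeros((720, 1280, 3), dtype=np.uint8)
        preprocessed = foveate(frame, fixation=(640, 200))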

    Sign Language Recognition

    This chapter covers the key aspects of sign-language recognition (SLR), starting with a brief introduction to the motivations and requirements, followed by a précis of sign linguistics and its impact on the field. The types of data available and their relative merits are explored, allowing examination of the features which can be extracted. Classifying the manual aspects of sign (similar to gestures) is then discussed from a tracking and non-tracking viewpoint, before summarising some of the approaches to the non-manual aspects of sign languages. Methods for combining the sign classification results into full SLR are given, showing the progression towards speech recognition techniques and the further adaptations required for the sign-specific case. Finally, the current frontiers are discussed and the recent research presented. This covers the task of continuous sign recognition, the work towards true signer independence, how to effectively combine the different modalities of sign, making use of the current linguistic research, and adapting to larger, noisier data sets.
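
    The speech recognition techniques referred to are classically HMM-based; a minimal sketch of isolated-sign classification with one Gaussian HMM per sign is given below, using the hmmlearn library and invented feature sequences in place of real per-frame hand features.

        # Hedged sketch: one Gaussian HMM per sign class, recognition by the
        # highest log-likelihood. Feature sequences here are random stand-ins.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def train_sign_model(sequences, n_states=5):
            X = np.vstack(sequences)
            lengths = [len(s) for s in sequences]
            model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            model.fit(X, lengths)
            return model

        def recognise(models, sequence):
            # Pick the sign whose HMM gives the highest log-likelihood.
            return max(models, key=lambda sign: models[sign].score(sequence))

        rng = np.random.default_rng(0)
        signs = ["hello", "thanks", "please"]
        train = {s: [rng.normal(loc=i, size=(30, 8)) for _ in range(10)]
                 for i, s in enumerate(signs)}
        models = {s: train_sign_model(seqs) for s, seqs in train.items()}
        print(recognise(models, rng.normal(loc=1, size=(25, 8))))  # likely "thanks"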

    Comparing heterogeneous visual gestures for measuring the diversity of visual speech signals

    Visual lip gestures observed whilst lipreading have a few working definitions; the two most common are ‘the visual equivalent of a phoneme’ and ‘phonemes which are indistinguishable on the lips’. To date there is no formal definition, in part because we have not yet established a two-way relationship or mapping between visemes and phonemes. Some evidence suggests that visual speech is highly dependent upon the speaker. So here, we use a phoneme-clustering method to form new phoneme-to-viseme maps for both individual and multiple speakers. We test these phoneme-to-viseme maps to examine how similarly speakers talk visually, and we use signed rank tests to measure the distance between individuals. We conclude that, broadly speaking, speakers have the same repertoire of mouth gestures; where they differ is in their use of those gestures.
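
    A hedged sketch of a phoneme-clustering step of this general kind is shown below: phonemes are grouped into viseme classes by clustering a symmetrised confusion matrix. The phoneme set and confusion counts are invented for illustration and are not the paper's data.

        # Illustrative phoneme-to-viseme mapping via hierarchical clustering of a
        # (symmetrised) phoneme confusion matrix. All counts below are invented.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        phonemes = ["p", "b", "m", "f", "v", "t", "d"]
        confusion = np.array([            # confusion[i, j]: how often i is seen as j
            [30, 25, 20,  2,  1,  1,  1],
            [25, 28, 22,  1,  2,  1,  1],
            [20, 22, 26,  1,  1,  2,  1],
            [ 2,  1,  1, 30, 24,  2,  1],
            [ 1,  2,  1, 24, 29,  1,  2],
            [ 1,  1,  2,  2,  1, 27, 23],
            [ 1,  1,  1,  1,  2, 23, 26],
        ], dtype=float)

        similarity = (confusion + confusion.T) / 2     # symmetric visual similarity
        distance = similarity.max() - similarity       # similar phonemes -> close
        np.fill_diagonal(distance, 0)
        labels = fcluster(linkage(squareform(distance), method="average"),
                          t=3, criterion="maxclust")

        visemes = {}
        for phoneme, label in zip(phonemes, labels):
            visemes.setdefault(label, []).append(phoneme)
        print(visemes)  # e.g. {1: ['p', 'b', 'm'], 2: ['f', 'v'], 3: ['t', 'd']}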

    A study of process improvement activities for web development processes within a small company

    This thesis describes activities carried out in order to improve a small company's web development process, specifically focusing on the areas of reuse and web accessibility. CACDP are the examinations board for British Sign Language and other related disciplines. Within the domain of web development they have no formal processes and no skills or knowledge with which to improve them. They wish to develop four new web-based products, and to apply accessibility guidelines to both these and their existing web site. The areas of web development, process improvement and reuse are investigated, specifically in relation to their suitability for CACDP, and an action list is drawn up of tasks that will assist them in achieving their aims. A formal process is defined and implemented in an iterative procedure, designed to gradually improve their working practices and to work towards improvements in some of the Key Process Areas of the Capability Maturity Model. Reuse is targeted as a specific way to achieve efficiency within the development, and web accessibility is particularly important to CACDP as they work with many people who are affected by the lack of accessibility. The thesis describes the production of the applications using the defined process, and the problems faced during the implementation. These problems are reviewed and suggested improvements are integrated into the next implementation of the process. This project has resulted in the successful introduction of a formal process for the development of web-based applications. Reuse is now being used within the company to reduce cost and improve productivity. Accessibility standards have been implemented in all products. CACDP have benefited from increased services for their customers, increased profitability, mature development and maintenance procedures, the introduction of a reuse programme for their future development, and technical learning and training for their staff.

    Inclusive Augmented and Virtual Reality: A Research Agenda

    Augmented and virtual reality experiences present significant barriers for disabled people, making it challenging to fully engage with immersive platforms. Whilst researchers have started to explore potential solutions addressing these accessibility issues, we currently lack a comprehensive understanding of research areas requiring further investigation to support the development of inclusive AR/VR systems. To address current gaps in knowledge, we led a series of multidisciplinary sandpits with relevant stakeholders (i.e., academic researchers, industry specialists, people with lived experience of disability, assistive technologists, and representatives from disability organisations, charities, and special needs educational institutions) to collaboratively explore research challenges, opportunities, and solutions. Based on insights shared by participants, we present a research agenda identifying key areas where further work is required in relation to specific forms of disability (i.e., across the spectrum of physical, visual, cognitive, and hearing impairments), including wider considerations associated with the development of more accessible immersive platforms.

    Machine learning methods for sign language recognition: a critical review and analysis.

    Sign language is an essential tool for bridging the communication gap between hearing and hearing-impaired people. However, the diversity of over 7000 present-day sign languages, with variability in motion, hand shape, and position of body parts, makes automatic sign language recognition (ASLR) complex. To overcome this complexity, researchers are investigating better ways of developing ASLR systems to seek intelligent solutions, and have demonstrated remarkable success. This paper analyses the research published on intelligent systems in sign language recognition over the past two decades. A total of 649 publications related to decision support and intelligent systems on sign language recognition (SLR) are extracted from the Scopus database and analysed. The extracted publications are analysed using the bibliometric VOSviewer software to (1) obtain the publications' temporal and regional distributions, and (2) map the cooperation networks between affiliations and authors and identify productive institutions in this context. Moreover, reviews of techniques for vision-based sign language recognition are presented. Various feature extraction and classification techniques used in SLR to achieve good results are discussed. The literature review presented in this paper shows the importance of incorporating intelligent solutions into sign language recognition systems and reveals that perfect intelligent systems for sign language recognition are still an open problem. Overall, it is expected that this study will facilitate knowledge accumulation and the creation of intelligent-based SLR, and provide readers, researchers, and practitioners with a roadmap to guide future directions.
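
    A simplified sketch of the kind of bibliometric breakdown described (publication counts by year and a rough ranking of productive institutions from a Scopus CSV export) is given below; the file name and column names follow a typical Scopus export and are assumptions, not the paper's exact pipeline.

        # Hedged sketch of a bibliometric summary over a hypothetical Scopus export.
        # Column names ("Year", "Affiliations") are assumed, not taken from the paper.
        import pandas as pd

        records = pd.read_csv("scopus_slr_export.csv")

        # (1) Temporal distribution: publications per year.
        per_year = records.groupby("Year").size().sort_index()

        # (2) Productive institutions: count each record's first listed affiliation.
        first_affiliation = records["Affiliations"].str.split(";").str[0].str.strip()
        top_institutions = first_affiliation.value_counts().head(10)

        print(per_year.tail())
        print(top_institutions)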