105 research outputs found

    Parametric synthesis of sign language

    The isolation of the Deaf community from mainstream society is in part due to the lack of knowledge most hearing people have of sign language. To most, there seems to be little need to learn a language used by such a small minority unless, perhaps, a relative is unable to hear. Even with a desire to learn, the task may seem insurmountable due to the unique formational and grammatical rules of the language. This linguistic rift has led to the call for an automatic translation system able to take voice or written text as input and produce a comprehensive sequence of signed gestures through computing. This thesis focused on developing the foundation of a system that would receive English-language input and generate a sequence of related signed gestures, each synthesized from its basic kinematic parameters. A technique of sign specification for a computer-based translation system was developed using Python objects and functions. Sign definitions, written as Python algorithms, were used to drive the simulation engine of a human-modeling software package known as Jack. This research suggests that 3-dimensional computer graphics can be used to produce sign representations that are intelligible and natural in appearance.
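The thesis's own Python sign definitions are not reproduced in the abstract; below is a minimal sketch of what a parametric sign specification of this kind might look like. Every class, field, and function name here is a hypothetical illustration, not taken from the thesis or from the Jack software's API.

```python
from dataclasses import dataclass, field

@dataclass
class SignParameters:
    """Hypothetical kinematic parameters for one signed gesture."""
    handshape: str                                 # e.g. "flat-B", "fist-S"
    location: tuple                                # (x, y, z) start position relative to torso
    movement: list = field(default_factory=list)   # sequence of movement waypoints
    palm_orientation: str = "forward"

def synthesize(sign: SignParameters) -> list:
    """Expand a sign definition into per-frame targets an animation engine could consume."""
    waypoints = [sign.location] + list(sign.movement)
    return [{"handshape": sign.handshape,
             "palm": sign.palm_orientation,
             "target": pos} for pos in waypoints]

# A toy definition: a sign that moves from chest level outward.
hello = SignParameters(handshape="flat-B",
                       location=(0.0, 1.2, 0.2),
                       movement=[(0.2, 1.3, 0.4)])
print(synthesize(hello))
```

The point of such a scheme is that each sign is data, not hand-animated keyframes, so a translation front end can assemble gesture sequences programmatically.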

    A Representation of Selected Nonmanual Signals in American Sign Language

    Computer-generated three-dimensional animation holds great promise for synthesizing utterances in American Sign Language (ASL) that are not only grammatical but also believable to members of the Deaf community. Animation poses several challenges stemming from the massive amounts of data necessary to specify the movement of three-dimensional geometry, and no current system facilitates the synthesis of nonmanual signals. However, the linguistics of ASL can help surmount this challenge by providing structure and rules for organizing the data. This work presents a first method for representing ASL linguistic and extralinguistic processes that involve the face. Any such representation must be capable of expressing the subtle nuances of ASL. Further, it must be able to represent co-occurrences, because many ASL signs require that two or more nonmanual signals be used simultaneously; in fact, simultaneity of multiple nonmanual signals can occur on the same facial feature. Additionally, such a system should allow both binary and incremental nonmanual signals to display the full range of adjectival and adverbial modifiers. Validating such a representation requires both affirming that nonmanual signals are indeed necessary in the animation of ASL and evaluating the effectiveness of the new representation in synthesizing them. In this study, members of the Deaf community viewed animations created with the new representation and answered questions concerning the influence of selected nonmanual signals on the perceived meaning of the synthesized utterances. Results reveal that the representation is not only capable of effectively portraying nonmanual signals, but can also be used to combine various nonmanual signals in the synthesis of complete ASL sentences. In a study with Deaf users, participants viewing the synthesized animations consistently identified the intended nonmanual signals correctly.
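The abstract does not show the representation itself; the following is a minimal sketch of how co-occurring binary and incremental nonmanual signals might be recorded. The field names, the 0-to-1 intensity scale, and the normalized timing scheme are all assumptions for illustration, not the work's actual notation.

```python
from dataclasses import dataclass

@dataclass
class NonmanualSignal:
    """Hypothetical record for one nonmanual signal on a facial feature."""
    feature: str        # e.g. "brows", "head", "mouth"
    action: str         # e.g. "raise", "furrow", "tilt-forward"
    intensity: float    # 0.0-1.0; binary signals use 0.0 or 1.0
    start: float        # onset, as a fraction of the utterance duration
    end: float          # offset

def cooccurring(signals, t):
    """Return all signals active at normalized time t; several may
    co-occur, including on the same facial feature."""
    return [s for s in signals if s.start <= t <= s.end]

# Yes/no-question brow raise layered with a head tilt and an
# incremental signal on the same feature (the brows).
utterance = [
    NonmanualSignal("brows", "raise", 1.0, 0.0, 1.0),        # binary
    NonmanualSignal("head", "tilt-forward", 0.6, 0.0, 1.0),  # incremental
    NonmanualSignal("brows", "widen", 0.4, 0.3, 0.7),        # same feature
]
print([s.action for s in cooccurring(utterance, 0.5)])
# ['raise', 'tilt-forward', 'widen']
```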

    Emotional engineering of artificial representations of sign languages

    The fascination and challenge of making an appropriate digital representation of sign language for a highly specialised and culturally rich community such as the Deaf has brought about the development and production of several digital representations of sign language (DRSL). These range from pictorial depictions of sign language and filmed video recordings to animated avatars (virtual humans). However, issues relating to translating and representing sign language in the digital domain, and the effectiveness of the various approaches, have divided the opinion of the target audience, and as a result there is still no universally accepted digital representation of sign language. For systems to reach their full potential, researchers have postulated that further investigation is needed into the interaction and representational issues associated with mapping sign language into the digital domain. This dissertation contributes a novel approach that investigates the comparative effectiveness of digital representations of sign language within different information delivery contexts. The empirical studies presented have supported the characterisation of the properties of a DRSL that make it an effective communication system, which, when defined by the Deaf community, was often referred to as "emotion". This has led to and supported the development of the proposed design methodology for the "Emotional Engineering of Artificial Sign Languages", which forms the main contribution of this thesis.

    American sign language finger challenge

    Current websites and computer-assisted learning programs offer no interactive products that truly present real-time fingerspelling in American Sign Language (ASL). At best, a site displays an array of illustrated static manual letters side by side, like Roman alphabet letters, to form a word. Another site flashes sequential photos of manual letters, as in the word S-W-E-A-T-E-R, after which you type what you think you saw. In either case this is not real-time fingerspelling. Drawing on the reference publication Expressive and Receptive Fingerspelling for Hearing Adults by Lavera M. Guillory, and built with Macromedia Director MX 2004, this thesis is an interactive computer-assisted instructional product designed to improve students' receptive abilities when using ASL fingerspelling. This was achieved by incorporating the transitions from letter to letter using real-time animation, which provides a realistic representation. The result is a dynamic user experience that is clean, innovative, and easy to navigate. Adobe Photoshop and Adobe Illustrator were used to generate the simple gray-on-white vector line art from images extracted from video clips. The SWF animations were compiled in Flash, while Director was used to create challenges for one- to seven-letter words and finger combinations.

    Towards an Integrative Information Society: Studies on Individuality in Speech and Sign

    The flow of information within modern information society has increased rapidly over the last decade. The major part of this information flow relies on the individual's ability to handle text or speech input. For the majority of us this presents no problem, but some individuals would benefit from other means of conveying information, e.g. a signed information flow. Over the last decades, new results from various disciplines have pointed towards a common background and processing basis for sign and speech, and this was one of the key issues I wanted to investigate further in this thesis. The basis of this thesis is firmly within speech research, which is why I wanted to design, for signers, test batteries analogous to widely used speech perception tests, to find out whether the results for signers would be the same as in speakers' perception tests. One of the key findings within biology, and more precisely in its effects on speech and communication research, is the mirror neuron system. That finding has enabled new theories about the evolution of communication, and the evidence seems to converge on the hypothesis that all human communication has a common core. In this thesis speech and sign are discussed as equal and analogous counterparts of communication, and all research methods used in speech are modified for sign. Both speech and sign are thus investigated using similar test batteries, and both production and perception are studied separately. An additional framework for studying production is given by gesture research using cry sounds. Results of cry-sound research are then compared to results from children acquiring sign language; these results show that individuality manifests itself very early in human development. Articulation in adults, both in speech and sign, is studied from two perspectives: normal production, and re-learning production when the articulatory apparatus has been changed.
Normal production is studied in both speech and sign, and the effects of changed articulation are studied with regard to speech. Both of these studies use carrier sentences; sign production is additionally studied by giving the informants the possibility of spontaneous signing. The production data from the signing informants is also used as the basis for the sign synthesis stimuli in the sign perception test battery. Speech and sign perception were studied using the informants' answers to forced-choice identification and discrimination tasks, and these answers were then compared across language modalities. Three different informant groups participated in the sign perception tests: native signers, sign language interpreters, and Finnish adults with no knowledge of any signed language. This made it possible to investigate which characteristics of the results were due to the language per se and which were due to the change of modality itself. As the analogous test batteries yielded similar results over different informant groups, some common threads could be observed. From very early on in the acquisition of speech and sign, the results were highly individual; however, the results were consistent within one individual when the same test was repeated. This individuality followed the same patterns across different language modalities and, on some occasions, across language groups. As both modalities yield similar answers to analogous study questions, this has led us to methods for providing basic input for sign language applications, i.e. signing avatars. It has also given us answers to questions on the precision of the animation and its intelligibility for the users: what parameters govern the intelligibility of synthesised speech or sign, and how precise must the animation or synthetic speech be in order to be intelligible?
The results also give additional support to the well-known fact that intelligibility is not the same as naturalness. In some cases, as shown within the sign perception test battery design, naturalness decreases intelligibility; this also has to be taken into account when designing applications. All in all, results from each of the test batteries, be they for signers or speakers, yield strikingly similar patterns, offering yet further support for a common core of all human communication. Thus, we can modify and deepen phonetic framework models for human communication based on the knowledge obtained from the test batteries in this thesis.

    A Grammar of Italian Sign Language (LIS)

    A Grammar of Italian Sign Language (LIS) is a comprehensive presentation of the grammatical properties of LIS. It has been conceived as a tool for students, teachers, interpreters, the Deaf community, researchers, linguists, and whoever is interested in the study of LIS, and is one output of the Horizon 2020 SIGN-HUB project. It is composed of six Parts: Part 1, devoted to the social and historical background in which the language has developed, and five Parts covering the main properties of Phonology, Lexicon, Morphology, Syntax, and Pragmatics. Thanks to the electronic format of the grammar, text and videos are highly interconnected and are designed to fit the description of a visual language.

    Making an Online Dictionary of New Zealand Sign Language

    The Online Dictionary of New Zealand Sign Language (ODNZSL), launched in 2011, is an example of a contemporary sign language dictionary that leverages the 21st-century advantages of a digital medium and an existing body of descriptive research on the language, including a small electronic corpus of New Zealand Sign Language. Innovations in recent online dictionaries of other signed languages informed the development of this bilingual, bi-directional, multimedia dictionary. Video content and search capabilities in an online medium are a huge advance in more directly representing a signed lexicon and enabling users to access content in versatile ways, yet they do not resolve all of the theoretical challenges that face sign language dictionary makers. Considerations in the editing and production of the ODNZSL are discussed in this article, including issues of determining lexemes and word class in a polysynthetic language, deriving usage examples from a small corpus, and dealing with sociolinguistic variation in the selection and performance of content.

    Keywords: sign language lexicography, online dictionaries, multimedia dictionaries, bilingual dictionaries, learner dictionaries, New Zealand Sign Language, video content, sign language corpus, polysynthetic morphology, polysemy, sociolinguistic variation, sign language linguistics, user profiles
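As an illustration of the kind of record a bilingual, bi-directional multimedia dictionary might store, here is a hypothetical entry shape. Every field name and the example URL are assumptions for the sketch, not the ODNZSL's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    """Hypothetical record for one entry in a multimedia sign dictionary."""
    headword_gloss: str                                   # citation gloss, e.g. "HOUSE"
    video_url: str                                        # clip of the citation form
    english_equivalents: list                             # keys for English-to-sign lookup
    word_classes: list                                    # a sign may take several classes
    usage_examples: list = field(default_factory=list)    # drawn from a corpus
    variants: list = field(default_factory=list)          # sociolinguistic variants

def search(entries, keyword):
    """Bi-directional lookup: find entries by an English keyword."""
    return [e for e in entries if keyword in e.english_equivalents]

entry = DictionaryEntry(
    headword_gloss="HOUSE",
    video_url="https://example.org/signs/house.mp4",
    english_equivalents=["house", "home"],
    word_classes=["noun", "verb"],
)
print([e.headword_gloss for e in search([entry], "home")])  # ['HOUSE']
```

Keeping several `word_classes` and `variants` per entry reflects the article's point that word class and sociolinguistic variation resist a single canonical choice.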

    Machine learning methods for sign language recognition: a critical review and analysis.

    Sign language is an essential tool to bridge the communication gap between hearing and hearing-impaired people. However, the diversity of over 7000 present-day sign languages, with variability in motion, hand shape, and position of body parts, makes automatic sign language recognition (ASLR) a complex problem. To overcome such complexity, researchers are investigating better ways of developing ASLR systems and have demonstrated remarkable success. This paper analyses the research published on intelligent systems in sign language recognition over the past two decades. A total of 649 publications related to decision support and intelligent systems for sign language recognition (SLR) are extracted from the Scopus database and analysed. The extracted publications are analysed with the bibliometric software VOSviewer to (1) obtain their temporal and regional distributions and (2) map the cooperation networks between affiliations and authors and identify productive institutions in this context. Moreover, reviews of techniques for vision-based sign language recognition are presented, and the various feature extraction and classification techniques used in SLR to achieve good results are discussed. The literature review presented in this paper shows the importance of incorporating intelligent solutions into sign language recognition systems and reveals that a perfect intelligent system for sign language recognition is still an open problem. Overall, it is expected that this study will facilitate knowledge accumulation and creation in intelligent-based SLR and provide readers, researchers, and practitioners with a roadmap to guide future directions.
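As one illustrative baseline among the many feature extraction and classification techniques such a review covers, a sign class can be assigned by nearest-neighbour matching over extracted feature vectors. The "landmark" features below are invented toy data, and this sketch stands in for no particular method from the surveyed literature.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbour(train, query):
    """Classify a feature vector by the label of its closest training
    example; `train` is a list of (feature_vector, label) pairs."""
    return min(train, key=lambda item: euclidean(item[0], query))[1]

# Toy hand-landmark features (e.g. normalized fingertip coordinates)
# for two fingerspelled letters.
train = [
    ([0.1, 0.9, 0.1, 0.8], "A"),
    ([0.9, 0.1, 0.8, 0.2], "B"),
]
print(nearest_neighbour(train, [0.2, 0.8, 0.2, 0.7]))  # A
```

In practice the feature vectors come from a vision pipeline (hand detection, pose or landmark estimation) and the classifier is typically a learned model rather than raw nearest-neighbour search, but the recognize-by-distance idea is the same.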