
    Adaptation of a Vocabulary Test from British Sign Language to American Sign Language

    This study describes the adaptation process of a vocabulary knowledge test from British Sign Language (BSL) into American Sign Language (ASL) and presents results from the first round of pilot testing with twenty deaf native ASL signers. The web-based test assesses the strength of deaf children’s vocabulary knowledge by means of different mappings of phonological form and meaning of signs. The adaptation from BSL to ASL involved nine stages, which included forming a panel of deaf/hearing experts, developing a set of new items and revising/replacing items considered ineffective, and piloting the new version. Results provide new evidence in support of the use of this methodology for assessing sign language, making a useful contribution toward the availability of tests to assess deaf children’s signed language skills.

    A study of multimedia in an environmental science course

    This study examined the use of sign movies, content movies, and adjunct questions in an instructional unit for Deaf students taking an environmental science course. Pre- and post-tests of factual recall, as well as an attitude measure, were administered to determine whether web-based technology benefits Deaf students taking environmental science courses. The results indicate that deaf students learning through web-based technology benefit from embedded American Sign Language explanations and questions as adjunct instructional aids. Implications of using web-based technology are discussed for science teachers who have deaf students in their classes.

    Learning to Generate Understandable Animations of American Sign Language

    Motivations & Methods: Standardized testing has revealed that many deaf adults in the U.S. have lower levels of English literacy; providing American Sign Language (ASL) on websites can make information more accessible. Unfortunately, video recordings of human signers are difficult to update when information changes, and there is no way to support just-in-time generation of web content from a query.

    A new web interface to facilitate access to corpora: development of the ASLLRP data access interface

    A significant obstacle to broad utilization of corpora is the difficulty in gaining access to the specific subsets of data and annotations that may be relevant for particular types of research. With that in mind, we have developed a web-based Data Access Interface (DAI) to provide access to the expanding datasets of the American Sign Language Linguistic Research Project (ASLLRP). The DAI facilitates browsing the corpora, viewing videos and annotations, searching for phenomena of interest, and downloading selected materials from the website. Compared to providing videos and annotation files off-line, the web interface also greatly increases access by people who have no prior experience in working with linguistic annotation tools, and it opens the door to integrating the data with third-party applications on the desktop and in the mobile space. In this paper we give an overview of the available videos, annotations, and search functionality of the DAI, as well as plans for future enhancements. We also summarize best practices and key lessons learned that are crucial to the success of similar projects.
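    To make the kind of third-party integration mentioned above concrete, the sketch below shows a small Python client that queries a corpus data-access service and downloads annotation files. The base URL, endpoint paths, query parameters, and response fields are hypothetical placeholders for illustration only; they are not the actual ASLLRP DAI API.

```python
# Hypothetical sketch of querying a corpus data-access interface like the DAI
# from a third-party application. The endpoint, parameters, and response
# format are illustrative assumptions, not the actual ASLLRP DAI API.
import requests

BASE_URL = "https://example.org/dai/api"  # placeholder, not the real DAI address


def search_annotations(gloss, nonmanual=None):
    """Search for utterances containing a given gloss (hypothetical endpoint)."""
    params = {"gloss": gloss}
    if nonmanual:
        params["nonmanual"] = nonmanual  # e.g. "raised-brows"
    resp = requests.get(f"{BASE_URL}/search", params=params, timeout=30)
    resp.raise_for_status()
    # Assumed response shape: list of {utterance_id, video_url, annotation_url}
    return resp.json()


def download_annotation(annotation_url, out_path):
    """Fetch one annotation file and save it locally."""
    resp = requests.get(annotation_url, timeout=30)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)


if __name__ == "__main__":
    hits = search_annotations("BOOK", nonmanual="raised-brows")
    for hit in hits[:5]:
        download_annotation(hit["annotation_url"], f"{hit['utterance_id']}.xml")
```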

    The Web Magazine 1995, Summer

    The Web Magazine focuses on alumni news and campus events from Gardner-Webb College, now Gardner-Webb University. This issue highlights another tragic passing in the GWU community: Mr. E. Jerome Scott, who passed away suddenly at a faculty retreat. Mr. Bobby Lutz was named the new men's basketball coach. The first class of the new School of Divinity officially graduated. GWU hosted Olympic hopefuls, the US National Cycling Team. Rev. David Gomes (Gomez) received an honorary doctorate. GWU revealed its new School of Education, headed by Dr. Dee Hunt. The university also announced a new American Sign Language program set to debut in fall 1995.

    WEB APPLICATION SIGN LANGUAGE TRANSLATOR

    People with hearing impairments use different modes to communicate with one another; of the methods available for their communication, a common one is gesture (sign language). The translated text replaces the original text in the live camera stream, matching the background and foreground colours estimated from the source images. A number of observations have also been carried out to determine the best set of tuning adjustments for the system. The web-based sign language translator is a system that allows the user to show a few words of American Sign Language to a webcam; the system detects the words and translates them to text, which can help break the language barrier between people who are hearing impaired and those who are not.
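    As a rough illustration of the capture-recognize-display loop such a translator implies, the following Python sketch reads webcam frames, passes each frame to a stub recognizer, and overlays any recognized word on the live stream. The recognizer is a placeholder and the colour-matched text replacement described above is not reproduced; OpenCV is assumed to be available.

```python
# Minimal sketch of a webcam-based sign-to-text loop like the one the paper
# describes. The recognizer here is a stub; the paper's actual model and
# colour-matching overlay are not reproduced. Assumes OpenCV is installed.
import cv2


def recognize_sign(frame):
    """Placeholder: a real system would run a trained classifier on the
    hand region and return the matched ASL word, or None if nothing matched."""
    return None


def main():
    cap = cv2.VideoCapture(0)  # default webcam
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            word = recognize_sign(frame)
            if word:
                # Overlay the translated text on the live camera stream.
                cv2.putText(frame, word, (20, 40),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
            cv2.imshow("sign-to-text", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```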

    Optimizing Learner Accessibility: Adding American Sign Language (ASL) and Text-to-Speech to Online Trainings

    The Child and Adolescent Needs and Strengths (CANS) Training Program is located at the Eunice K. Shriver Center at the University of Massachusetts Medical School in Worcester, MA. The CANS Training Program provides training and certification services for the Executive Office of Health and Human Services (EOHHS), MassHealth, Children's Behavioral Health Initiative (CBHI). Massachusetts behavioral health providers are required to be CANS certified in order to see Medicaid-insured children and youth under the age of 21. The CANS Training Program has trained and certified over 26,000 behavioral health providers throughout Massachusetts in the use of the Child and Adolescent Needs and Strengths (CANS) tool. The Mass CANS online training and certification program is designed for clinicians who provide behavioral health services to Massachusetts children and youth under the age of 21. The abilities, learning styles, and primary languages spoken among providers are quite diverse. The CANS Training Program, committed to providing content accessible to people of all abilities, has added American Sign Language (ASL) and Text-to-Speech capabilities throughout the online training. These additions to the CANS accessibility toolbox help clinicians of all abilities get the most out of their online training and certification experience. Users may view American Sign Language (ASL) insets or closed captions while watching the training videos. We will discuss the recent addition of ASL interpretation and Text-to-Speech functionality to the web-based training; discuss important considerations when improving accessibility; demonstrate the features; and discuss our results.

    Computer-based tracking, analysis, and visualization of linguistically significant nonmanual events in American Sign Language (ASL)

    Our linguistically annotated American Sign Language (ASL) corpora have formed a basis for research to automate detection by computer of essential linguistic information conveyed through facial expressions and head movements. We have tracked head position and facial deformations, and used computational learning to discern specific grammatical markings. Our ability to detect, identify, and temporally localize the occurrence of such markings in ASL videos has recently been improved by incorporation of (1) new techniques for deformable model-based 3D tracking of head position and facial expressions, which provide significantly better tracking accuracy and recover quickly from temporary loss of track due to occlusion; and (2) a computational learning approach incorporating 2-level Conditional Random Fields (CRFs), suited to the multi-scale spatio-temporal characteristics of the data, which analyzes not only low-level appearance characteristics, but also the patterns that enable identification of significant gestural components, such as periodic head movements and raised or lowered eyebrows. Here we summarize our linguistically motivated computational approach and the results for detection and recognition of nonmanual grammatical markings; demonstrate our data visualizations, and discuss the relevance for linguistic research; and describe work underway to enable such visualizations to be produced over large corpora and shared publicly on the Web.
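    To give a sense of how frame-level tracking features can feed a CRF-based labeler of nonmanual markers, here is a minimal Python sketch using a single linear-chain CRF over made-up head-pitch and eyebrow-height measurements. It does not reproduce the authors' 2-level CRF or their 3D tracker; the feature names, label set, and toy data are assumptions for illustration, and sklearn-crfsuite is assumed to be installed.

```python
# Illustrative sketch only (not the authors' 2-level CRF): a single
# linear-chain CRF that labels video frames with a nonmanual marker, using
# hypothetical per-frame tracking features (head pitch, eyebrow height).
import sklearn_crfsuite


def frame_features(track, i):
    """Turn one frame of tracking output into a CRF feature dict.
    `track` is assumed to be a list of (head_pitch, brow_height) tuples."""
    pitch, brow = track[i]
    feats = {"pitch": pitch, "brow": brow}
    if i > 0:  # simple temporal context: deltas from the previous frame
        prev_pitch, prev_brow = track[i - 1]
        feats["d_pitch"] = pitch - prev_pitch
        feats["d_brow"] = brow - prev_brow
    return feats


def to_sequences(tracks):
    return [[frame_features(t, i) for i in range(len(t))] for t in tracks]


# Toy training data: two short "videos" with frame-level labels such as
# "raised-brows" vs "neutral" (values and labels are made up for illustration).
train_tracks = [
    [(0.0, 0.2), (0.1, 0.6), (0.1, 0.7), (0.0, 0.2)],
    [(-0.3, 0.1), (-0.4, 0.1), (0.0, 0.5)],
]
train_labels = [
    ["neutral", "raised-brows", "raised-brows", "neutral"],
    ["neutral", "neutral", "raised-brows"],
]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(to_sequences(train_tracks), train_labels)
print(crf.predict(to_sequences([[(0.0, 0.1), (0.1, 0.65), (0.0, 0.1)]])))
```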