
    Comprehensibility of Newly Introduced Water-sport Prohibitive Signs in Korea by Koreans and Westerners

    Objective: The goal of this study is to evaluate the comprehensibility of the water-sport prohibitive signs newly introduced by the Ministry of Knowledge Economy (MKE, later merged into the Ministry of Trade, Industry and Energy) among Koreans and Westerners, and to check whether comprehensibility is affected by cultural differences. Background: The Ministry of Knowledge Economy introduced fourteen water-sport prohibitive signs at the end of 2011 to alert people to potentially dangerous situations. However, no studies have so far reviewed or assessed their comprehensibility. Method: Comprehensibility tests of the fourteen water-sport prohibitive signs were conducted with forty Koreans and forty Westerners in two sequential sessions. In session I, participants were asked to guess the meaning of each sign verbally in an open-ended test. In session II, participants were encouraged to provide feedback on each sign after its intended meaning was given. Results: Only two of the fourteen signs reached the comprehension rate (67%) recommended by the ISO standard for both groups (Koreans and Westerners). Cultural differences between Koreans and Westerners significantly affected the comprehension rates of the investigated signs, with Westerners exhibiting better overall comprehension than Koreans. Five signs poorly comprehended by both groups were identified. Conclusion: The water-sport prohibitive warning signs recently introduced by the MKE still need substantial improvement before they can be implemented nationally or internationally. There were significant differences in the signs' comprehensibility between Koreans and Westerners. Application: The findings may serve as useful input for researchers and water-sport sign designers in creating easy-to-comprehend safety signs.
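    As an illustration of the pass/fail criterion described in the abstract, the following Python sketch computes per-sign comprehension rates for two participant groups and flags signs that fall below the 67% criterion the abstract cites. The sign names and answer counts are hypothetical placeholders, not the study's data.

    ```python
    # Minimal sketch (not the authors' analysis): per-sign comprehension
    # rates for two groups, checked against the 67% criterion cited in
    # the abstract. All counts below are hypothetical placeholders.

    ISO_THRESHOLD = 0.67  # recommended minimum comprehension rate
    N = 40                # participants per group, as in the study design

    # hypothetical: correct open-ended answers per group for each sign
    results = {
        "no_swimming": {"korean": 35, "western": 38},
        "no_diving":   {"korean": 20, "western": 29},
        "no_jet_ski":  {"korean": 12, "western": 18},
    }

    for sign, counts in results.items():
        rates = {group: correct / N for group, correct in counts.items()}
        passed = all(rate >= ISO_THRESHOLD for rate in rates.values())
        summary = ", ".join(f"{g}={r:.0%}" for g, r in rates.items())
        print(f"{sign}: {summary} -> {'PASS' if passed else 'FAIL'}")
    ```

    A sign passes only if both groups meet the threshold, which mirrors the abstract's finding that just two of the fourteen signs satisfied the criterion for Koreans and Westerners alike.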

    HAKA: HierArchical Knowledge Acquisition in a sign language tutor

    Communication between people from different communities can sometimes be hampered by a lack of knowledge of each other's language. Many people need to learn a language to ensure fluent communication, or want to do so out of intellectual curiosity. Tutor tools have been developed to assist language learners. In this paper we present a tutor for learning the 42 basic hand configurations of Spanish Sign Language, as well as more than one hundred common words. The tutor captures the user's image from an off-the-shelf webcam and challenges her to perform the hand configuration she chooses to practice. The system finds the configuration, out of the 42 in its database, closest to the one performed by the user and shows it to her, helping her improve through real-time knowledge of her errors. Similarities between configurations are computed using Procrustes analysis. A table of the most frequent mistakes is also recorded and made available to the user. The user may then advance to choose a word and practice the hand configurations needed for that word. Sign languages have historically been neglected, and deaf people still face important challenges in their daily activities. This research is a first step in the development of a Spanish Sign Language tutor, and the tool is available as open source. A multidimensional scaling analysis of the clustering of the 42 hand configurations induced by Procrustes similarity is also presented. This work has been partially funded by the Basque Government, Spain, under Grant number IT1427-22; the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), and the European Regional Development Fund (FEDER), under Grant number PID2021-122402OB-C21 (MCIU/AEI/FEDER, UE); and the Spanish Ministry of Science, Innovation and Universities, under Grant FPU18/04737. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
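    The following Python sketch illustrates the matching step the abstract describes: scoring a user's hand landmarks against a dictionary of reference configurations with Procrustes analysis and returning the closest one. It uses scipy.spatial.procrustes; the 21-keypoint hand layout and the random reference dictionary are assumptions for illustration, not the HAKA implementation.

    ```python
    # Minimal sketch (not the HAKA code): find the reference hand
    # configuration closest to the user's, using Procrustes disparity,
    # which compares shapes after removing translation, scale, and rotation.

    import numpy as np
    from scipy.spatial import procrustes

    rng = np.random.default_rng(0)

    # hypothetical: 42 reference configurations, each 21 (x, y) landmarks,
    # standing in for keypoints detected from a webcam hand tracker
    references = {f"config_{i:02d}": rng.random((21, 2)) for i in range(42)}

    def closest_configuration(user_landmarks, references):
        """Return the name and disparity of the nearest reference shape."""
        best_name, best_disparity = None, np.inf
        for name, ref in references.items():
            _, _, disparity = procrustes(ref, user_landmarks)
            if disparity < best_disparity:
                best_name, best_disparity = name, disparity
        return best_name, best_disparity

    user = rng.random((21, 2))  # stand-in for the user's detected landmarks
    name, d = closest_configuration(user, references)
    print(f"closest: {name} (disparity {d:.3f})")
    ```

    Because Procrustes disparity is invariant to position, scale, and rotation, the same pairwise distances could also feed the multidimensional scaling analysis of configuration clustering that the abstract mentions.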

    Multimedia information technology and the annotation of video

    The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital processing power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, data overload will outstrip annotation capacity; on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we attempt to describe the state of the art in technology. We sample the progress in text, sound, and image processing, as well as in machine learning.

    What’s your sign? Personal Name Signs in Mexican Sign Language

    The use of a name/nickname/sign within the deaf community plays an important role, since it is a sign that identifies not only the deaf members of the community but also the hearing people connected to it. This sign is given by one of the members of the deaf community and implies both acceptance and identity within it. The creation of a personal sign in Mexican Sign Language responds to different ways of forming the name/nickname/sign, which vary for each person. This research first presents the relevance of the proper name as a social element, then reviews the background on the study of personal name signs in other sign languages. Finally, based on interviews with members of the deaf community, it describes the three processes found for the formation of personal name signs in Mexican Sign Language: the assignment of signs based on salient physical or behavioral traits; signs that are inherited or share articulatory features within the family; and name sign change.

    Order of the Major Constituents in Sign Languages: Implications for All Language

    A survey of reports of sign order from 42 sign languages leads to a handful of generalizations. Two accounts emerge, one amodal and the other modal. We argue that universal pressures are at work with respect to some generalizations, while pressure from the visual modality is at work with respect to others. Together, these pressures conspire to make all sign languages order their major constituents SOV or SVO. This study leads us to conclude that the order of S with respect to the verb phrase (VP) may be driven by sensorimotor system concerns that feed universal grammar.

    Video Sign Language Recognition using Pose Extraction and Deep Learning Models

    Sign language recognition (SLR) has long been a research subject within the computer vision domain. Appearance-based and pose-based approaches are two ways to tackle SLR tasks. Models ranging from traditional to current state of the art, including HOG-based features, convolutional neural networks, recurrent neural networks, Transformers, and graph convolutional networks, have been applied to SLR. While classifying alphabet letters in sign language has shown high accuracy rates, recognizing words presents its own set of difficulties, including the large vocabulary size, the subtleties of body motions and hand orientations, and regional dialects and variations. The emergence of deep learning has created opportunities for improved word-level sign recognition, but challenges such as overfitting and limited training data remain. Techniques such as data augmentation, feature engineering, hyperparameter tuning, optimization, and ensemble methods have been used to overcome these challenges and improve the accuracy and generalization ability of ASL classification models. In this project we explore various methods to improve accuracy and performance. With this approach, we first reproduced a baseline accuracy of 43.02% on the WLASL dataset and then improved it to 55.96%. We also extended the work to a different dataset to gain a more comprehensive understanding of our approach.
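    As an illustration of the pose-based approach the abstract mentions, the following PyTorch sketch classifies a word-level sign clip from a sequence of per-frame keypoints with a small GRU. The joint count, 100-gloss vocabulary, and recurrent architecture are illustrative assumptions, not the models or WLASL configuration used in the project.

    ```python
    # Minimal sketch (not the project's code) of a pose-based word-level
    # SLR classifier: per-frame keypoints are flattened and summarized by
    # a GRU, whose final hidden state is mapped to gloss logits.

    import torch
    import torch.nn as nn

    class PoseGRUClassifier(nn.Module):
        def __init__(self, num_joints=55, num_classes=100, hidden=256):
            super().__init__()
            self.gru = nn.GRU(input_size=num_joints * 2,  # (x, y) per joint
                              hidden_size=hidden, num_layers=2,
                              batch_first=True, dropout=0.3)
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, poses):           # poses: (batch, frames, joints, 2)
            b, t, j, c = poses.shape
            x = poses.reshape(b, t, j * c)  # flatten keypoints per frame
            _, h = self.gru(x)              # final hidden state summarizes clip
            return self.head(h[-1])         # (batch, num_classes) logits

    model = PoseGRUClassifier()
    clips = torch.randn(4, 64, 55, 2)       # 4 clips of 64 pose frames each
    logits = model(clips)
    print(logits.shape)                      # torch.Size([4, 100])
    ```

    In practice the keypoints would come from a pose extractor run on each video frame, and regularization such as dropout and data augmentation would address the overfitting and limited-data challenges noted above.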