Improving Continuous Sign Language Recognition with Cross-Lingual Signs
This work is dedicated to continuous sign language recognition (CSLR), which is
a weakly supervised task dealing with the recognition of continuous signs from
videos, without any prior knowledge about the temporal boundaries between
consecutive signs. Data scarcity heavily impedes the progress of CSLR. Existing
approaches typically train CSLR models on a monolingual corpus, which is orders
of magnitude smaller than that of speech recognition. In this work, we explore
the feasibility of utilizing multilingual sign language corpora to facilitate
monolingual CSLR. Our work is built upon the observation of cross-lingual
signs, which originate from different sign languages but have similar visual
signals (e.g., hand shape and motion). The underlying idea of our approach is
to identify the cross-lingual signs in one sign language and properly leverage
them as auxiliary training data to improve the recognition capability of
another. To achieve this goal, we first build two sign language dictionaries
containing isolated signs that appear in two datasets. Then we identify the
sign-to-sign mappings between two sign languages via a well-optimized isolated
sign language recognition model. At last, we train a CSLR model on the
combination of the target data with original labels and the auxiliary data with
mapped labels. Experimentally, our approach achieves state-of-the-art
performance on two widely-used CSLR datasets: Phoenix-2014 and Phoenix-2014T.
Comment: Accepted by ICCV 2023
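The mapping step described above can be sketched as nearest-neighbour matching in the feature space of a pre-trained isolated sign language recognition (ISLR) model. This is a minimal illustration, not the authors' code; the function name `map_signs`, the similarity threshold, and the toy gloss vectors are all hypothetical.

```python
# Hedged sketch of the cross-lingual sign mapping step (assumed setup):
# each isolated sign is represented by a feature vector from an ISLR model,
# and auxiliary-language signs are mapped to their visually closest
# target-language signs when the match is confident enough.
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def map_signs(aux_dict, tgt_dict, threshold=0.8):
    """Map each auxiliary-language gloss to its visually closest
    target-language gloss; keep only confident matches."""
    mapping = {}
    for aux_gloss, aux_vec in aux_dict.items():
        best_gloss, best_sim = None, -1.0
        for tgt_gloss, tgt_vec in tgt_dict.items():
            sim = cosine(aux_vec, tgt_vec)
            if sim > best_sim:
                best_gloss, best_sim = tgt_gloss, sim
        if best_sim >= threshold:
            mapping[aux_gloss] = best_gloss
    return mapping

# Toy dictionaries: hypothetical ISLR embeddings for two sign languages.
aux = {"HOUSE-CSL": [1.0, 0.0, 0.1], "EAT-CSL": [0.0, 1.0, 0.0]}
tgt = {"HAUS-DGS": [0.9, 0.1, 0.1], "GEHEN-DGS": [0.1, 0.1, 1.0]}
print(map_signs(aux, tgt))  # only HOUSE-CSL clears the threshold
```

Auxiliary videos whose glosses survive this mapping would then be relabelled with the target-language glosses and mixed into CSLR training.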
BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues
Recent progress in fine-grained gesture and action classification, and in
machine translation, points to the possibility of automated sign language
recognition becoming a reality. A key stumbling block in making progress
towards this goal is a lack of appropriate training data, stemming from the
high complexity of sign annotation and a limited supply of qualified
annotators. In this work, we introduce a new scalable approach to data
collection for sign recognition in continuous videos. We make use of
weakly-aligned subtitles for broadcast footage together with a keyword spotting
method to automatically localise sign-instances for a vocabulary of 1,000 signs
in 1,000 hours of video. We make the following contributions: (1) We show how
to use mouthing cues from signers to obtain high-quality annotations from video
data - the result is the BSL-1K dataset, a collection of British Sign Language
(BSL) signs of unprecedented scale; (2) We show that we can use BSL-1K to train
strong sign recognition models for co-articulated signs in BSL and that these
models additionally form excellent pretraining for other sign languages and
benchmarks - we exceed the state of the art on both the MSASL and WLASL
benchmarks. Finally, (3) we propose new large-scale evaluation sets for the
tasks of sign recognition and sign spotting and provide baselines which we hope
will serve to stimulate research in this area.
Comment: Appears in: European Conference on Computer Vision 2020 (ECCV 2020). 28 pages
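The localisation idea in this abstract — use weakly aligned subtitles to pick a search window, then let a visual keyword spotter find the sign — can be sketched as follows. This is an assumed simplification, not the BSL-1K pipeline itself; the helper name `spot_sign`, the probability threshold, and the toy scores are hypothetical, standing in for per-frame mouthing-classifier outputs.

```python
# Hedged sketch: spot a sign inside a weakly aligned subtitle window using
# per-frame keyword (mouthing) probabilities from an assumed visual model.

def spot_sign(frame_probs, window, min_prob=0.5):
    """Return (frame_index, prob) of the most confident mouthing of the
    subtitle keyword inside the window, or None if nothing fires."""
    start, end = window
    best = None
    for i in range(start, min(end, len(frame_probs))):
        p = frame_probs[i]
        if p >= min_prob and (best is None or p > best[1]):
            best = (i, p)
    return best

# Subtitle keyword is weakly aligned to frames 10..30; the spotter fires
# most strongly around frame 18, giving an automatic sign annotation.
probs = [0.02] * 40
probs[18] = 0.93
probs[19] = 0.71
print(spot_sign(probs, (10, 30)))  # -> (18, 0.93)
```

Run over 1,000 hours of footage, this kind of spotting is what would turn weak subtitle alignment into frame-level sign annotations at scale.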
K-RSL: a Corpus for Linguistic Understanding, Visual Evaluation, and Recognition of Sign Languages
The paper presents the first dataset that aims to serve interdisciplinary purposes for the utility of the computer vision community and sign language linguistics. To date, a majority of Sign Language Recognition (SLR) approaches focus on recognising sign language as a manual gesture recognition problem. However, signers use other articulators: facial expressions, head and body position and movement to convey linguistic information. Given the important role of non-manual markers, this paper proposes a dataset and presents a use case to stress the importance of including non-manual features to improve the recognition accuracy of signs. To the best of our knowledge, no prior publicly available dataset exists that explicitly focuses on the non-manual components responsible for the grammar of sign languages. To this end, the proposed dataset contains 28,250 high-resolution, high-quality videos of signs, with annotation of manual and non-manual components. We conducted a series of evaluations in order to investigate whether non-manual components would improve sign recognition accuracy. We release the dataset to encourage SLR researchers and help advance current progress in this area toward real-time sign language interpretation. Our dataset will be made publicly available at https://krslproject.github.io/krsl-corpus.
Detection of major ASL sign types in continuous signing for ASL recognition
In American Sign Language (ASL), as well as other signed languages, different classes of signs (e.g., lexical signs, fingerspelled signs, and classifier constructions) have different internal structural properties. Continuous sign recognition accuracy can be improved through the use of distinct recognition strategies, as well as different training datasets, for each class of signs. For these strategies to be applied, continuous signing video needs to be segmented into parts corresponding to particular classes of signs. In this paper we present a multiple instance learning-based segmentation system that accurately labels 91.27% of the video frames of 500 continuous utterances (including 7 different subjects) from the publicly accessible NCSLGR corpus (Neidle and Vogler, 2012). The system uses novel feature descriptors derived from both motion and shape statistics of the regions of high local motion. The system does not require a hand tracker.
Dataglove Measurement of Joint Angles in Sign Language Handshapes
In sign language research, we understand little about articulatory factors involved in shaping phonemic boundaries or the amount (and articulatory nature) of acceptable phonetic variation between handshapes. To date, there exists no comprehensive analysis of handshape based on the quantitative measurement of joint angles during sign production. The purpose of our work is to develop a methodology for collecting and visualizing quantitative handshape data in an attempt to better understand how handshapes are produced at a phonetic level. In this pursuit, we seek to quantify the flexion and abduction angles of the finger joints using a commercial data glove (CyberGlove; Immersion Inc.). We present calibration procedures used to convert raw glove signals into joint angles. We then implement those procedures and evaluate their ability to accurately predict joint angle. Finally, we provide examples of how our recording techniques might inform current research questions.
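The calibration step described above can be illustrated with a deliberately simple model. Assuming each bend sensor responds roughly linearly to joint angle, two reference poses (e.g., a flat hand at 0 degrees and a template fist at a known angle) suffice to fix a per-sensor gain and offset. This is a hedged sketch under that linearity assumption, not the authors' calibration procedure; the function names and raw sensor values are hypothetical.

```python
# Hedged sketch of per-sensor dataglove calibration (assumed linear model):
# two reference poses determine the gain and offset that map a raw sensor
# reading to a joint angle in degrees.

def fit_sensor(raw_flat, raw_fist, angle_fist, angle_flat=0.0):
    """Return (gain, offset) from two calibration poses."""
    gain = (angle_fist - angle_flat) / (raw_fist - raw_flat)
    offset = angle_flat - gain * raw_flat
    return gain, offset

def to_degrees(raw, gain, offset):
    # Apply the fitted linear map to a raw sensor value.
    return gain * raw + offset

# Hypothetical readings: 30 with the hand flat (0 deg), 210 at a 90-deg fist.
gain, offset = fit_sensor(raw_flat=30, raw_fist=210, angle_fist=90.0)
print(to_degrees(120, gain, offset))  # -> 45.0
```

Real sensors are not perfectly linear, which is why the paper evaluates how accurately the fitted procedures predict joint angle rather than assuming the mapping.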
ELAN as flexible annotation framework for sound and image processing detectors
Annotation of digital recordings in humanities research still is, to a large extent, a process that is performed manually. This paper describes the first pattern recognition based software components developed in the AVATecH project and their integration in the annotation tool ELAN. AVATecH (Advancing Video/Audio Technology in Humanities Research) is a project that involves two Max Planck Institutes (Max Planck Institute for Psycholinguistics, Nijmegen; Max Planck Institute for Social Anthropology, Halle) and two Fraunhofer Institutes (Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS, Sankt Augustin; Fraunhofer Heinrich-Hertz-Institut, Berlin) and that aims to develop and implement audio and video technology for semi-automatic annotation of heterogeneous media collections as they occur in multimedia based research. The highly diverse nature of the digital recordings stored in the archives of both Max Planck Institutes poses a huge challenge to most of the existing pattern recognition solutions and is a motivation to make such technology available to researchers in the humanities.