37,088 research outputs found

    Read My Lips: Continuous Signer Independent Weakly Supervised Viseme Recognition

    This work presents a framework to recognise signer-independent mouthings in continuous sign language, with no manual annotations needed. Mouthings represent lip movements that correspond to pronunciations of words or parts of them during signing. Research on sign language recognition has focused extensively on the hands as features. But sign language is multi-modal, and a full understanding, particularly with respect to its lexical variety, language idioms and grammatical structures, is not possible without further exploring the remaining information channels. To our knowledge, no previous work has explored dedicated viseme recognition in the context of sign language recognition. The approach is trained on over 180,000 unlabelled frames and reaches 47.1% precision on the frame level. Generalisation across individuals and the influence of context-dependent visemes are analysed.
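
    A minimal sketch of the general idea only, not the paper's pipeline: frame-level viseme classification over pre-extracted mouth-region features with weak per-frame labels, evaluated by frame-level precision. The feature dimension, viseme inventory, and data below are placeholder assumptions.

    # Hypothetical frame-level viseme classification sketch (not the paper's method).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import precision_score

    rng = np.random.default_rng(0)
    NUM_VISEMES = 12   # assumed viseme inventory size
    FEAT_DIM = 64      # assumed mouth-region feature dimension per frame

    # Placeholder arrays standing in for mouth-crop features and weak frame labels.
    X_train = rng.normal(size=(5000, FEAT_DIM))
    y_train = rng.integers(0, NUM_VISEMES, size=5000)
    X_test = rng.normal(size=(1000, FEAT_DIM))
    y_test = rng.integers(0, NUM_VISEMES, size=1000)

    clf = LogisticRegression(max_iter=200).fit(X_train, y_train)
    pred = clf.predict(X_test)

    # Frame-level precision, macro-averaged over viseme classes.
    print("frame-level precision:", precision_score(y_test, pred, average="macro", zero_division=0))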

    A Multi-modal Machine Learning Approach and Toolkit to Automate Recognition of Early Stages of Dementia among British Sign Language Users

    The ageing population trend is correlated with an increased prevalence of acquired cognitive impairments such as dementia. Although there is no cure for dementia, a timely diagnosis helps in obtaining necessary support and appropriate medication. Researchers are working urgently to develop effective technological tools that can help doctors undertake early identification of cognitive disorder. In particular, screening for dementia in ageing Deaf signers of British Sign Language (BSL) poses additional challenges, as the diagnostic process is bound up with conditions such as the quality and availability of interpreters, as well as appropriate questionnaires and cognitive tests. On the other hand, deep learning-based approaches for image and video analysis and understanding are promising, particularly the adoption of Convolutional Neural Networks (CNNs), which require large amounts of training data. In this paper, however, we demonstrate novelty in the following ways: a) a multi-modal machine learning-based automatic recognition toolkit for early stages of dementia among BSL users, in which features from several parts of the body contributing to the sign envelope, e.g., hand-arm movements and facial expressions, are combined; b) universality, in that it is possible to apply our technique to users of any sign language, since it is language independent; c) given the trade-off between complexity and accuracy of machine learning (ML) prediction models, as well as the limited amount of training and testing data available, we show that our approach is not over-fitted and has the potential to scale up.
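
    As a rough illustration of the multi-modal idea only, not the authors' toolkit: features from two body channels (hand-arm motion and facial expression) are concatenated before a single classifier, with cross-validation as a basic guard against over-fitting in a small-data regime. All extractors, dimensions, and data here are assumptions.

    # Hypothetical early-fusion sketch: combine per-signer motion and facial features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_signers = 40                                 # small dataset, as in this kind of study
    hand_arm = rng.normal(size=(n_signers, 32))    # assumed hand-arm movement statistics
    facial = rng.normal(size=(n_signers, 16))      # assumed facial-expression statistics
    labels = np.array([0] * 20 + [1] * 20)         # 0 = control, 1 = early-stage dementia (dummy)

    fused = np.concatenate([hand_arm, facial], axis=1)   # simple feature-level fusion

    # Cross-validation keeps the accuracy estimate honest with so little data.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print("CV accuracy:", cross_val_score(clf, fused, labels, cv=5).mean())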

    Gesture and sign language recognition with temporal residual networks

    Towards automatic sign language corpus annotation using deep learning

    Sign classification in sign language corpora is a challenging problem that requires large datasets. Unfortunately, only a small portion of those corpora is labeled. To expedite the annotation process, we propose a gloss suggestion system based on deep learning. We improve upon previous research in three ways. Firstly, we use a proven feature extraction method called OpenPose, rather than learning end-to-end. Secondly, we propose a more suitable and powerful network architecture, based on GRU layers. Finally, we exploit domain and task knowledge to further increase the accuracy. We show that we greatly outperform the previous state of the art on the dataset used. Our method can suggest a top-5 list of annotations for a video fragment selected by the corpus annotator. We expect that it will expedite the annotation process to the benefit of sign language translation research.
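
    A minimal sketch in the spirit of the approach described above: a GRU-based classifier over sequences of pose keypoints that returns top-5 gloss suggestions for an annotator. The keypoint count, hidden size, and gloss vocabulary are placeholder assumptions, not the paper's settings.

    # Hypothetical GRU gloss-suggestion model over pose-keypoint sequences.
    import torch
    import torch.nn as nn

    NUM_KEYPOINT_COORDS = 2 * 137   # assumed (x, y) for OpenPose body + hands + face points
    HIDDEN = 256
    NUM_GLOSSES = 100               # assumed gloss vocabulary size

    class GlossSuggester(nn.Module):
        def __init__(self):
            super().__init__()
            self.gru = nn.GRU(NUM_KEYPOINT_COORDS, HIDDEN, num_layers=2, batch_first=True)
            self.head = nn.Linear(HIDDEN, NUM_GLOSSES)

        def forward(self, x):            # x: (batch, frames, NUM_KEYPOINT_COORDS)
            _, h = self.gru(x)           # h: (num_layers, batch, HIDDEN)
            return self.head(h[-1])      # logits over glosses

    model = GlossSuggester()
    clip = torch.randn(1, 60, NUM_KEYPOINT_COORDS)   # one 60-frame fragment (dummy data)
    top5 = model(clip).topk(5, dim=-1).indices       # top-5 gloss suggestions for the annotator
    print(top5)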

    ModDrop: adaptive multi-modal gesture recognition

    We present a method for gesture detection and localisation based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at three temporal scales. Key to our technique is a training strategy which exploits: i) careful initialization of individual modalities; and ii) gradual fusion involving random dropping of separate channels (dubbed ModDrop) for learning cross-modality correlations while preserving the uniqueness of each modality-specific representation. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams. Fusing multiple modalities at several spatial and temporal scales leads to a significant increase in recognition rates, allowing the model to compensate for errors of the individual classifiers as well as noise in the separate channels. Furthermore, the proposed ModDrop training technique ensures robustness of the classifier to missing signals in one or several channels, so that it produces meaningful predictions from any number of available modalities. In addition, we demonstrate the applicability of the proposed fusion scheme to modalities of arbitrary nature by experiments on the same dataset augmented with audio.
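
    A toy illustration of the ModDrop idea only, not the ChaLearn system: during training, each modality's feature vector is zeroed out with some probability before fusion, so the fused classifier learns cross-modal correlations without depending on any single channel. Network sizes, modalities, and the drop probability are assumptions.

    # Hypothetical modality-dropping fusion sketch.
    import torch
    import torch.nn as nn

    class ModDropFusion(nn.Module):
        def __init__(self, modality_dims, num_classes, p_drop=0.2):
            super().__init__()
            self.p_drop = p_drop
            self.encoders = nn.ModuleList([nn.Linear(d, 64) for d in modality_dims])
            self.classifier = nn.Linear(64 * len(modality_dims), num_classes)

        def forward(self, modalities):                 # list of (batch, dim) tensors
            feats = []
            for enc, x in zip(self.encoders, modalities):
                f = torch.relu(enc(x))
                if self.training:                      # drop whole channels only at training time
                    keep = (torch.rand(f.shape[0], 1) > self.p_drop).float()
                    f = f * keep
                feats.append(f)
            return self.classifier(torch.cat(feats, dim=-1))

    # Example with three assumed modalities (e.g. depth hands, RGB upper body, audio features).
    model = ModDropFusion(modality_dims=[128, 256, 40], num_classes=20)
    batch = [torch.randn(8, d) for d in (128, 256, 40)]
    print(model(batch).shape)   # torch.Size([8, 20])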