34 research outputs found

    Recognition of nonmanual markers in American Sign Language (ASL) using non-parametric adaptive 2D-3D face tracking

    Full text link
    This paper addresses the problem of automatically recognizing linguistically significant nonmanual expressions in American Sign Language from video. We develop a fully automatic system that is able to track facial expressions and head movements, and detect and recognize facial events continuously from video. The main contributions of the proposed framework are the following: (1) We have built a stochastic and adaptive ensemble of face trackers to address factors resulting in lost face track; (2) We combine 2D and 3D deformable face models to warp input frames, thus correcting for any variation in facial appearance resulting from changes in 3D head pose; (3) We use a combination of geometric features and texture features extracted from a canonical frontal representation. The proposed new framework makes it possible to detect grammatically significant nonmanual expressions from continuous signing and to differentiate successfully among linguistically significant expressions that involve subtle differences in appearance. We present results based on a dataset containing 330 sentences from videos collected and linguistically annotated at Boston University.
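    As a rough illustration of the feature combination in contribution (3), the sketch below pairs normalized landmark geometry with HOG texture computed on a frontally warped face crop. The 68-point landmark indexing, crop size, and use of scikit-image are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: combining geometric landmark features with HOG texture
# features from a frontally warped face crop, in the spirit of the
# geometric + texture combination described above. The landmark layout
# (68-point convention) and crop size are assumptions, not the paper's setup.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def frame_features(frontal_face, landmarks):
    """frontal_face: HxW grayscale face warped to a canonical frontal view.
    landmarks: (68, 2) array of 2D facial landmark positions in that view."""
    # Geometric features: landmark coordinates normalized by the inter-ocular
    # distance so the descriptor is scale-invariant (eye indices are assumed).
    left_eye, right_eye = landmarks[36:42].mean(0), landmarks[42:48].mean(0)
    iod = np.linalg.norm(right_eye - left_eye) + 1e-6
    geometric = ((landmarks - landmarks.mean(0)) / iod).ravel()

    # Texture features: HOG over the canonical frontal crop.
    crop = resize(frontal_face, (96, 96), anti_aliasing=True)
    texture = hog(crop, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return np.concatenate([geometric, texture])

# Toy usage with a random face crop and 68 random landmark positions.
feat = frame_features(np.random.rand(120, 100), np.random.rand(68, 2) * 100)
```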

    3D face tracking and multi-scale, spatio-temporal analysis of linguistically significant facial expressions and head positions in ASL

    Full text link
    Essential grammatical information is conveyed in signed languages by clusters of events involving facial expressions and movements of the head and upper body. This poses a significant challenge for computer-based sign language recognition. Here, we present new methods for the recognition of nonmanual grammatical markers in American Sign Language (ASL) based on: (1) new 3D tracking methods for the estimation of 3D head pose and facial expressions to determine the relevant low-level features; (2) methods for higher-level analysis of component events (raised/lowered eyebrows, periodic head nods and head shakes) used in grammatical markings, with differentiation of temporal phases (onset, core, offset, where appropriate), analysis of their characteristic properties, and extraction of corresponding features; (3) a 2-level learning framework to combine low- and high-level features of differing spatio-temporal scales. This new approach achieves significantly better tracking and recognition results than our previous methods.
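    The component-event analysis in (2) implies turning per-frame signals such as eyebrow height and head pitch into event-level features. The sketch below, under assumed signal names and thresholds, marks raised-eyebrow segments and estimates head-nod periodicity; it is illustrative only and not the authors' method.

```python
# Hedged sketch of the kind of component-event features described above:
# eyebrow-raise segments from a per-frame eyebrow-height signal and head-nod
# periodicity from the head pitch angle. Thresholds and the FFT-based
# periodicity estimate are illustrative assumptions, not the paper's method.
import numpy as np

def eyebrow_raise_segments(brow_height, baseline, thresh=0.15):
    """Frames where eyebrow height exceeds the neutral baseline by `thresh`
    (in baseline-normalized units) are marked as part of a raise event."""
    raised = (brow_height - baseline) / (abs(baseline) + 1e-6) > thresh
    # Split the boolean mask into contiguous (start, end) frame segments.
    edges = np.flatnonzero(np.diff(raised.astype(int)))
    bounds = np.r_[0, edges + 1, raised.size]
    return [(s, e) for s, e in zip(bounds[:-1], bounds[1:]) if raised[s]]

def head_nod_frequency(pitch, fps):
    """Dominant oscillation frequency (Hz) of the head pitch signal,
    a crude indicator of periodic head nods."""
    centered = pitch - pitch.mean()
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(centered.size, d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin

# Toy usage: a brief brow raise and a ~2.5 Hz synthetic head nod at 30 fps.
print(eyebrow_raise_segments(np.array([0.0, 0.0, 0.3, 0.35, 0.3, 0.0]), baseline=0.1))
print(head_nod_frequency(np.sin(2 * np.pi * 2.5 * np.arange(60) / 30), fps=30))
```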

    Computer-based tracking, analysis, and visualization of linguistically significant nonmanual events in American Sign Language (ASL)

    Full text link
    Our linguistically annotated American Sign Language (ASL) corpora have formed a basis for research to automate detection by computer of essential linguistic information conveyed through facial expressions and head movements. We have tracked head position and facial deformations, and used computational learning to discern specific grammatical markings. Our ability to detect, identify, and temporally localize the occurrence of such markings in ASL videos has recently been improved by incorporation of (1) new techniques for deformable model-based 3D tracking of head position and facial expressions, which provide significantly better tracking accuracy and recover quickly from temporary loss of track due to occlusion; and (2) a computational learning approach incorporating 2-level Conditional Random Fields (CRFs), suited to the multi-scale spatio-temporal characteristics of the data, which analyses not only low-level appearance characteristics but also the patterns that enable identification of significant gestural components, such as periodic head movements and raised or lowered eyebrows. Here we summarize our linguistically motivated computational approach and the results for detection and recognition of nonmanual grammatical markings; demonstrate our data visualizations and discuss their relevance for linguistic research; and describe work underway to enable such visualizations to be produced over large corpora and shared publicly on the Web.
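    A minimal sketch of the 2-level idea, assuming sklearn-crfsuite and toy labels: a first linear-chain CRF labels per-frame gestural components from low-level tracker features, and a second CRF labels grammatical markers from those predicted components plus the same features. The feature names, label sets, and per-frame marker labels are simplifying assumptions, not the authors' model.

```python
# Hedged sketch of a two-stage linear-chain CRF pipeline, loosely following the
# 2-level CRF idea described above. Feature names, labels, and the use of
# sklearn-crfsuite are illustrative assumptions, not the authors' code.
import sklearn_crfsuite

# Toy per-frame tracker output: two short sequences (assumed feature names).
train_sequences = [
    [{"brow_height": 0.9, "head_pitch": 0.0}, {"brow_height": 0.9, "head_pitch": 0.1}],
    [{"brow_height": 0.1, "head_pitch": 0.6}, {"brow_height": 0.1, "head_pitch": -0.6}],
]
train_component_labels = [["raised-brows", "raised-brows"], ["head-nod", "head-nod"]]
train_marker_labels = [["wh-question", "wh-question"], ["negation", "negation"]]

def frame_feats(frame):
    # Low-level per-frame features (assumed to come from the face tracker).
    return {"brow": frame["brow_height"], "pitch": frame["head_pitch"]}

# Level 1: per-frame gestural components from low-level features.
level1 = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
X1 = [[frame_feats(f) for f in seq] for seq in train_sequences]
level1.fit(X1, train_component_labels)

# Level 2: grammatical markers from predicted components plus the same features.
components = level1.predict(X1)
X2 = [[dict(frame_feats(f), component=c) for f, c in zip(seq, comps)]
      for seq, comps in zip(train_sequences, components)]
level2 = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
level2.fit(X2, train_marker_labels)
```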

    Sign Language Tutoring Tool

    Full text link
    In this project, we have developed a sign language tutor that lets users learn isolated signs by watching recorded videos and by trying the same signs. The system records the user's video and analyses it. If the sign is recognized, both verbal and animated feedback is given to the user. The system is able to recognize complex signs that involve both hand gestures and head movements and expressions. Our performance tests yield a 99% recognition rate on signs involving only manual gestures and an 85% recognition rate on signs that involve both manual and non-manual components, such as head movement and facial expressions. Comment: eNTERFACE'06 Summer Workshop on Multimodal Interfaces, Dubrovnik, Croatia (2007).
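    The tutoring loop described above can be summarized, purely as a hedged sketch, by the routine below; the capture, recognition, and feedback helpers are hypothetical placeholders passed in as parameters rather than the system's actual modules.

```python
# Hedged sketch of the tutoring loop: show a reference video, record the
# learner's attempt, run a recognizer, and give verbal/animated feedback.
# All helpers are hypothetical placeholders, not the project's components.
def practice_sign(target_sign, reference_video, recognizer,
                  record_attempt, show_video, give_feedback):
    show_video(reference_video)                  # learner watches the recorded sign
    attempt = record_attempt()                   # capture the learner's video
    predicted, confidence = recognizer(attempt)  # manual + non-manual recognition
    if predicted == target_sign:
        give_feedback(verbal="Correct!", animation="success")
    else:
        give_feedback(verbal=f"Recognized '{predicted}', try again.",
                      animation="retry")
    return predicted, confidence
```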

    Multi-Modality American Sign Language Recognition

    Get PDF
    American Sign Language (ASL) is a visual gestural language which is used by many people who are deaf or hard-of-hearing. In this paper, we design a visual recognition system based on action recognition techniques to recognize individual ASL signs. Specifically, we focus on recognition of words in videos of continuous ASL signing. The proposed framework combines multiple signal modalities because ASL includes gestures of both hands, body movements, and facial expressions. We have collected a corpus of RGB + depth videos of multi-sentence ASL performances, from both fluent signers and ASL students; this corpus has served as a source for training and testing sets for multiple evaluation experiments reported in this paper. Experimental results demonstrate that the proposed framework can automatically recognize ASL.
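    One simple way to combine the modalities mentioned above is late fusion of per-modality classifier scores. The sketch below shows a weighted-average fusion over hand, body, and face scores; the weighting scheme and per-modality classifiers are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of late score fusion across the modalities mentioned above
# (hand gestures, body movement, facial expressions). The per-modality
# classifiers and simple weighted averaging are illustrative assumptions.
import numpy as np

def fuse_modalities(scores_by_modality, weights=None):
    """scores_by_modality: dict mapping modality name -> (n_classes,) array of
    class probabilities from that modality's classifier."""
    names = sorted(scores_by_modality)
    stacked = np.stack([scores_by_modality[m] for m in names])  # (M, n_classes)
    if weights is None:
        weights = np.ones(len(names)) / len(names)
    fused = weights @ stacked                                   # weighted average
    return fused / fused.sum()                                  # renormalize

# Example: three modalities voting over a 4-sign vocabulary.
fused = fuse_modalities({
    "hands": np.array([0.70, 0.10, 0.10, 0.10]),
    "body":  np.array([0.40, 0.30, 0.20, 0.10]),
    "face":  np.array([0.50, 0.20, 0.20, 0.10]),
})
predicted_sign = int(np.argmax(fused))
```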

    Automatic Sign Language Recognition from Image Data

    Get PDF
    This thesis addresses several issues in automatic sign language recognition from image data: the creation of a vision-based recognition framework, sign language corpus creation, feature extraction from the hands and face using novel hand tracking with face-occlusion handling, data-driven creation of sub-units, and a "search by example" tool for searching sign language corpora using hand images as a query. The proposed recognition framework, based on a statistical approach using hidden Markov models (HMMs), consists of video analysis, sign modeling, and decoding modules, and is able to recognize both isolated signs and continuous utterances from video data. All experiments and evaluations were performed on two corpora of our own, UWB-06-SLR-A and UWB-07-SLR-P, the first containing 25 signs and the second 378. Low-level image features serve as baseline feature descriptors. Better performance is obtained with higher-level features that employ hand tracking and resolve occlusions of the hands and face. As a side effect, the occlusion-handling method interpolates the face area in frames during occlusion, which makes it possible to use face descriptors that would otherwise fail there, such as features from an active appearance model (AAM) tracker. Several state-of-the-art appearance-based descriptors were compared for the tracked hands, including local binary patterns (LBP), histogram of oriented gradients (HOG), high-level linguistic features, and the newly proposed hand shape radial distance function (hRDF), which improves the description of concave regions of the hand shape. The concept of sub-units, i.e., HMMs built on linguistic units smaller than a whole sign that capture the internal structure of signs, was investigated through a proposed iterative algorithm that creates such units automatically from existing data, a first required step toward their data-driven construction; the results show that the concept is suitable for sign modeling and recognition. In addition to the recognition experiments, a "search by example" tool was created and evaluated. This tool is a search engine for sign language videos and can be incorporated into an online sign language dictionary, where searching the sign language data is currently difficult or impossible. It employs several of the methods examined in the recognition task and lets the video corpora be searched with a user-given query consisting of one or more images of hands, supplied for instance via a webcam; the result is an ordered list of videos containing the same or similar hand configurations.
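    A minimal sketch of the HMM-based isolated-sign recognition idea, assuming hmmlearn and toy Gaussian features: one HMM is trained per sign and an unseen sequence is assigned to the sign whose model gives the highest log-likelihood. This is not the thesis implementation, only an illustration of the statistical approach it describes.

```python
# Hedged sketch of HMM-based isolated sign recognition: one Gaussian HMM per
# sign, trained on feature sequences, with classification by maximum
# log-likelihood. The use of hmmlearn and the toy feature dimensionality are
# assumptions, not the thesis implementation.
import numpy as np
from hmmlearn import hmm

def train_sign_models(train_data, n_states=4):
    """train_data: dict mapping sign label -> list of (T_i, D) feature arrays."""
    models = {}
    for sign, sequences in train_data.items():
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50)
        m.fit(X, lengths)
        models[sign] = m
    return models

def recognize(models, sequence):
    """Return the sign whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda sign: models[sign].score(sequence))

# Toy usage with random 8-dimensional features standing in for hand/face descriptors.
rng = np.random.default_rng(0)
data = {sign: [rng.normal(loc=i, size=(20, 8)) for _ in range(3)]
        for i, sign in enumerate(["HELLO", "THANK-YOU"])}
models = train_sign_models(data)
print(recognize(models, rng.normal(loc=1, size=(20, 8))))  # likely "THANK-YOU"
```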

    Recognition of facial expressions in Brazilian Sign Language using the Facial Action Coding System

    Get PDF
    Advisors: Paula Dornhofer Paro Costa, Kate Mamhy Oliveira Kumada. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Deaf people around the world use sign languages to communicate but, despite the wide dissemination of such languages, deaf or hard-of-hearing individuals still face difficulties in communicating with hearing individuals in the absence of an interpreter. Such difficulties negatively impact the access of deaf individuals to education, to the job market, and to public services in general. Assistive technologies, such as Automatic Sign Language Recognition (ASLR), aim to overcome these communication obstacles. However, the development of reliable ASLR systems poses numerous challenges due to the linguistic complexity of sign languages. Sign languages (SLs) are visuospatial linguistic systems that, like any other human language, present global and regional linguistic variation as well as a grammatical system. Moreover, sign languages do not rely only on manual gestures but also on non-manual markers, such as facial expressions. In SLs, facial expressions may differentiate lexical items, participate in syntactic construction, and contribute to intensification, among other grammatical and affective functions. Together with gesture recognition models, facial expression recognition (FER) is therefore an essential component of ASLR technology. In this work, we propose an automatic FER system for Brazilian Sign Language (Libras). Based on a literature survey, we present a study of the language and a taxonomy for Libras facial expressions grounded in the Facial Action Coding System (FACS). We also created a dataset of Libras facial expressions. The design of the framework, including the preprocessing stage and the recognition model, was decided experimentally. The features used to classify facial actions result from combining a facial region of interest with geometric information about the face, a choice supported both theoretically and by better performance than the other configurations tested. Among the classifiers evaluated, SqueezeNet achieved the best accuracy, and the proposed model reaches an average recognition accuracy of 77% for Libras facial expressions. This work contributes to the growing body of research combining computer vision with the recognition of the structure of sign language facial expressions, with a primary focus on the importance of automated facial action annotation.
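    As a hedged sketch of the classifier stage, the snippet below adapts a torchvision SqueezeNet to a facial-action label set and runs one training step on face region-of-interest crops. The class count, input size, and hyperparameters are illustrative assumptions, not the thesis configuration.

```python
# Hedged sketch: adapting a SqueezeNet classifier to facial action labels on
# face region-of-interest crops, in the spirit of the classifier comparison
# above. Class count, input size, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIONS = 10  # assumed number of facial action classes

model = models.squeezenet1_1(weights=None)
# Replace the final 1x1 convolution so the network predicts facial actions.
model.classifier[1] = nn.Conv2d(512, NUM_ACTIONS, kernel_size=1)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(face_rois, action_labels):
    """face_rois: (B, 3, 224, 224) cropped face regions; action_labels: (B,)."""
    optimizer.zero_grad()
    logits = model(face_rois)
    loss = criterion(logits, action_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch standing in for preprocessed face ROI crops.
loss = train_step(torch.randn(4, 3, 224, 224), torch.randint(0, NUM_ACTIONS, (4,)))
```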

    Facial signals and social actions in multimodal face-to-face interaction

    Get PDF
    In a conversation, recognising the speaker’s social action (e.g., a request) early may help potential next speakers understand the intended message quickly and plan a timely response. Human language is multimodal, and several studies have demonstrated the contribution of the body to communication. However, comparatively few studies have investigated (non-emotional) conversational facial signals, and very little is known about how they contribute to the communication of social actions. Therefore, we investigated how facial signals map onto the expressions of two fundamental social actions in conversations: asking questions and providing responses. We studied the distribution and timing of 12 facial signals across 6778 questions and 4553 responses, annotated holistically in a corpus of 34 dyadic face-to-face Dutch conversations. Moreover, we analysed facial signal clustering to find out whether there are specific combinations of facial signals within questions or responses. Results showed a high proportion of facial signals, with a qualitatively different distribution in questions versus responses. Additionally, clusters of facial signals were identified. Most facial signals occurred early in the utterance, and had earlier onsets in questions. Thus, facial signals may critically contribute to the communication of social actions in conversation by providing social action-specific visual information.
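    The timing analysis described above can be illustrated, under assumed annotation columns, by computing each signal's onset as a proportion of its utterance and comparing questions with responses; the toy table below is not the study's data.

```python
# Hedged sketch of the timing analysis: relative onset of each facial signal
# within its utterance, compared between questions and responses. The column
# names and annotation format are assumptions about the corpus, and the rows
# are toy values, not the study's annotations.
import pandas as pd

annotations = pd.DataFrame({
    "utterance_type":  ["question", "question", "response", "response"],
    "signal":          ["eyebrow raise", "squint", "nod", "eyebrow raise"],
    "signal_onset":    [0.10, 0.40, 0.55, 0.80],   # seconds from utterance start
    "utterance_start": [0.00, 0.00, 0.00, 0.00],
    "utterance_end":   [1.20, 1.20, 1.60, 1.60],
})

# Onset as a proportion of utterance duration (0 = start, 1 = end).
annotations["relative_onset"] = (
    (annotations["signal_onset"] - annotations["utterance_start"])
    / (annotations["utterance_end"] - annotations["utterance_start"])
)

# Compare onset timing and signal distributions across social actions.
print(annotations.groupby("utterance_type")["relative_onset"].median())
print(pd.crosstab(annotations["signal"], annotations["utterance_type"],
                  normalize="columns"))
```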
