
    Reconhecimento das configurações de mão da LIBRAS a partir de malhas 3D (Recognition of LIBRAS hand configurations from 3D meshes)

    Advisor: Prof. Dr. Daniel Weingaertner. Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Curso de Pós-Graduação em Informática. Defense: Curitiba, 13/03/2013. Bibliography: pp. 68-73.
    Abstract: Automatic recognition of Sign Language signs is an important process that enhances the quality of use of digital media by hearing-impaired people. Additionally, sign recognition enables communication between deaf and hearing people who do not understand Sign Language. The sign recognition approach used in this work is based on the global parameters of LIBRAS (Brazilian Sign Language): hand configuration, location or point of articulation, movement, palm orientation, and facial expression. These parameters are combined to form signs, much as phonemes are combined to form words in spoken (oral) language. This work presents a way to recognize one of the LIBRAS global parameters, the hand configuration, from 3D meshes. Brazilian Sign Language has 61 hand configurations [16]. This work used a database containing 610 videos of 5 different users signing each hand configuration twice at distinct times, totaling 10 captures per hand configuration. Two frames depicting the front and side views of the hand were manually extracted from each video; these frames were segmented and pre-processed, then used as input to the 3D reconstruction step. The 3D meshes were generated from the front and side views of the hand using the Shape from Silhouette technique [7]. The hand configurations were recognized from the 3D meshes using an SVM (Support Vector Machine) classifier. The features used to distinguish the meshes were obtained with the Spherical Harmonics method [25], a 3D mesh descriptor invariant to rotation, translation, and scale. Results achieved an average hit rate of 98.52% at Rank 5, demonstrating the efficiency of the method.
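
    A minimal sketch of the classification stage described above, assuming the rotation-, translation- and scale-invariant Spherical Harmonics descriptors have already been computed for each mesh; the feature matrix X, label vector y, and all dimensions below are illustrative placeholders, not the thesis's actual data:

        # Sketch: SVM over precomputed Spherical Harmonics mesh descriptors,
        # evaluated with rank-5 accuracy as in the abstract above.
        # X and y are random placeholders standing in for real descriptors/labels.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(610, 128))       # placeholder mesh descriptors
        y = rng.integers(0, 61, size=610)     # placeholder hand-configuration labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

        # Rank-5 accuracy: the true class is among the 5 most probable classes.
        proba = clf.predict_proba(X_te)
        top5 = clf.classes_[np.argsort(proba, axis=1)[:, -5:]]
        rank5 = np.mean([label in row for label, row in zip(y_te, top5)])
        print(f"rank-5 accuracy: {rank5:.4f}")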

    Review on Classification Methods used in Image based Sign Language Recognition System

    Sign language is the means of communication among deaf and speech-impaired people, expressed through signs. This paper presents a review of image-based sign language recognition systems, which aim to provide a communication channel for deaf and speech-impaired people. Signs take the form of hand gestures, and these gestures are identified from images as well as videos. Gestures are identified and classified according to features of the gesture image, such as shape, rotation, angle, pixel values, and hand movement. Features are extracted by various feature extraction methods and classified by various machine learning methods. The main purpose of this paper is to review the classification methods used in similar image-based hand gesture recognition systems. The paper also compares various systems on the basis of classification method and accuracy rate.
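
    As an illustration of the pipeline shared by the systems reviewed (feature extraction followed by classification), here is a minimal sketch using HOG features and an SVM; the digits dataset is a stand-in for a gesture-image dataset, and all parameters are illustrative:

        # Sketch of the generic image-based recognition pipeline:
        # extract features from images, then train a classifier on them.
        # The 8x8 digits dataset stands in for gesture images.
        from skimage.feature import hog
        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        digits = load_digits()                # 8x8 grayscale images
        features = [hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
                    for img in digits.images]

        X_tr, X_te, y_tr, y_te = train_test_split(features, digits.target,
                                                  random_state=0)
        clf = SVC().fit(X_tr, y_tr)
        print(f"test accuracy: {clf.score(X_te, y_te):.3f}")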

    Review of constraints on vision-based gesture recognition for human–computer interaction

    The ability of computers to recognise hand gestures visually is essential for progress in human-computer interaction. Gesture recognition has applications ranging from sign language to medical assistance to virtual reality. However, gesture recognition is extremely challenging, not only because of its diverse contexts, multiple interpretations, and spatio-temporal variations, but also because of the complex non-rigid properties of the hand. This study surveys major constraints on vision-based gesture recognition occurring in detection and pre-processing, representation and feature extraction, and recognition. Current challenges are explored in detail.

    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands, without any other peripheral equipment. As such, it has attracted intense interest in research and development on the Kinect technology. In this paper, we present a comprehensive survey on Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of the Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, as well as 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers to investigate better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection, and human pose estimation.

    Spatio-temporal centroid based sign language facial expressions for animation synthesis in virtual environment

    Advisor: Eduardo Todt. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 20/02/2019. Includes references: pp. 86-97. Area of concentration: Computer Science.
    Abstract: Formally recognized as the second official Brazilian language, BSL, or Libras, today has many computational applications that integrate the deaf community into daily activities, offering virtual interpreters represented by 3D avatars built using formal models that parameterize the specific characteristics of sign languages. These applications, however, still treat facial expressions as a background feature in a primarily gestural language, ignoring the weight that facial expressions and emotions carry in the context of the transmitted message. In this work, in order to define a parameterized facial model for use in sign languages, a system for synthesizing facial expressions through a 3D avatar is proposed and a prototype implemented. A model of facial landmarks separated by regions is defined, along with a model of base expressions built using the AKDEF and JAFFE facial databases as reference. With this system it is possible to represent complex expressions by interpolating intensity values in the geometric animation, in a simplified way, using centroid-based control and displacement of independent regions in the 3D model. A spatio-temporal model is also proposed for the facial landmarks, with the objective of characterizing the behavior and relation of the centroids in the synthesis of the base expressions and identifying which geometric landmarks are relevant in the interpolation and animation of the expressions. A system for exporting the facial data in the hierarchical format used by most 3D sign language interpreter avatars is developed, encouraging integration into formal computational models already existing in the literature and allowing the adaptation of values and intensities in the representation of emotions. Thus, the models and concepts presented propose the integration of a facial model for expression representation into sign synthesis, offering a simplified and optimized way to apply these resources in 3D avatars. Keywords: 3D Avatar, Spatio-Temporal Data, BSL, Sign Language, Facial Expression.
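
    A minimal sketch of the centroid-based region control with intensity interpolation described above; the landmark coordinates, regions, and offsets are invented placeholders, not the thesis's facial model:

        # Sketch: displace each landmark region by its centroid offset,
        # scaled by an expression intensity in [0, 1].
        # All coordinates/regions/offsets are invented for illustration.
        import numpy as np

        regions = {  # neutral (x, y, z) landmarks per region
            "left_eyebrow": np.array([[0.30, 0.70, 0.1], [0.35, 0.72, 0.1], [0.40, 0.71, 0.1]]),
            "mouth":        np.array([[0.45, 0.30, 0.1], [0.50, 0.28, 0.1], [0.55, 0.30, 0.1]]),
        }
        surprise = {  # per-region centroid displacement for one base expression
            "left_eyebrow": np.array([0.0, 0.05, 0.0]),   # raise eyebrows
            "mouth":        np.array([0.0, -0.03, 0.0]),  # drop the mouth
        }

        def centroid(points):
            return points.mean(axis=0)

        def apply_expression(regions, offsets, intensity):
            """Move each region independently, scaling its offset by intensity."""
            return {name: pts + intensity * offsets[name] for name, pts in regions.items()}

        # Interpolate the animation from neutral (0.0) to full expression (1.0).
        for t in np.linspace(0.0, 1.0, 5):
            frame = apply_expression(regions, surprise, t)
            print(f"t={t:.2f}  eyebrow centroid: {centroid(frame['left_eyebrow'])}")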

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might play in the understandability of sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community's requirement for a visual-gestural language as well as some linguistic attributes of ISL which we consider fundamental to this research. Unlike spoken language, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages as we consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from introducing methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is that augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999), delivered as underlying facial expressions, will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth, et al., 2008), the research compares an augmented set of avatar utterances against a baseline set with regard to two key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show very little difference between the comprehension scores of the baseline avatars and those augmented with emotional facial expressions (EFEs). However, after comparing the comprehension results for the synthetic human avatar "Anna" against the caricature-type avatar "Luna", the synthetic human avatar Anna was the clear winner. The qualitative feedback gave us insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation focused on sign language avatar technology in a more general manner. Significantly, participant feedback on these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.

    Machine learning methods for sign language recognition: a critical review and analysis.

    Sign language is an essential tool for bridging the communication gap between hearing and hearing-impaired people. However, the diversity of over 7000 present-day sign languages, with variability in motion position, hand shape, and position of body parts, makes automatic sign language recognition (ASLR) a complex problem. In order to overcome such complexity, researchers are investigating better ways of developing ASLR systems to seek intelligent solutions and have demonstrated remarkable success. This paper aims to analyse the research published on intelligent systems in sign language recognition over the past two decades. A total of 649 publications related to decision support and intelligent systems on sign language recognition (SLR) are extracted from the Scopus database and analysed. The extracted publications are analysed using the bibliometric software VOSviewer to (1) obtain the publications' temporal and regional distributions, and (2) create the cooperation networks between affiliations and authors and identify productive institutions in this context. Moreover, reviews of techniques for vision-based sign language recognition are presented. Various feature extraction and classification techniques used in SLR to achieve good results are discussed. The literature review presented in this paper shows the importance of incorporating intelligent solutions into sign language recognition systems and reveals that perfect intelligent systems for sign language recognition are still an open problem. Overall, it is expected that this study will facilitate knowledge accumulation and the creation of intelligent-based SLR and provide readers, researchers, and practitioners with a roadmap to guide future directions.

    Handshape recognition using principal component analysis and convolutional neural networks applied to sign language

    Handshape recognition is an important problem in computer vision with significant societal impact. However, it is not an easy task, since hands are naturally deformable objects. Handshape recognition presents open problems, such as low accuracy or low speed, and despite a large number of proposed approaches, no solution has yet closed these problems. In this thesis, a new image dataset for Irish Sign Language (ISL) recognition is introduced. A deeper study of Principal Component Analysis (PCA), applied in two stages and using only 2D images, is presented. A comparison is carried out between approaches that need no explicit features (known as end-to-end) and feature-based approaches. The dataset was collected by filming six human subjects performing ISL handshapes and movements, and frames were extracted from the videos. Redundant images were then filtered out with an iterative image selection process that keeps the dataset diverse. The accuracy of PCA can be improved using blurred images and interpolation; however, interpolation is only feasible with a small number of points. For this reason, two-stage PCA is proposed: PCA is applied to another PCA space. This makes the interpolation possible and improves the accuracy of recognising a shape at a translation and rotation unseen in the training stage. Finally, classification is done with two different approaches: (1) end-to-end approaches, where Convolutional Neural Networks (CNNs) and other classifiers are tested directly on raw pixels, and (2) feature-based approaches, where PCA is mostly used to extract features and, again, different algorithms are tested for classification. Results are presented showing accuracy and speed for (1) and (2), and how blurring affects the accuracy.
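
    A minimal sketch of the two-stage PCA idea described above (PCA applied within another PCA space) on blurred images; the data, blur sigma, and component counts are illustrative placeholders, not the thesis's dataset or settings:

        # Sketch of two-stage PCA: blur the images, project them into a
        # first PCA space, then apply PCA again inside that space so that
        # interpolation can operate on a small number of points.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        images = rng.random((200, 32, 32))    # placeholder handshape images

        blurred = np.stack([gaussian_filter(img, sigma=2.0) for img in images])
        X = blurred.reshape(len(blurred), -1)

        stage1 = PCA(n_components=50).fit(X)  # first PCA space
        Z1 = stage1.transform(X)
        stage2 = PCA(n_components=10).fit(Z1) # PCA applied to the PCA space
        Z2 = stage2.transform(Z1)
        print(Z2.shape)                       # (200, 10)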

    Intelligent Sensors for Human Motion Analysis

    The book, "Intelligent Sensors for Human Motion Analysis," contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects related to the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems