A Survey of Applications and Human Motion Recognition with Microsoft Kinect
Microsoft Kinect, a low-cost motion-sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands, without any other peripheral equipment. As such, it has attracted intense interest in research and development on Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and of the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review uses of Kinect technology in a variety of areas, including healthcare, education and the performing arts, robotics, sign language recognition, retail services, workplace safety training, and 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth-sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and for lower-level computer vision tasks such as segmentation, object detection, and human pose estimation.
Continuous sign recognition of brazilian sign language in a healthcare setting
Communication is the basis of human society. The majority of people communicate using spoken language in oral or written form. However, sign language is the primary mode of communication for deaf people. In general, understanding spoken information is a major challenge for the deaf and hard of hearing. Access to basic information and essential services is challenging for these individuals. For example, without translation support, carrying out simple tasks in a healthcare center such as asking for guidance or consulting with a doctor, can be hopelessly difficult. Computer-based sign language recognition technologies offer an alternative to mitigate the communication barrier faced by the deaf and
hard of hearing. Despite much effort, research in this field is still in its infancy, and automatic recognition of continuous signing remains a major challenge. This paper presents an ongoing research project designed to recognize continuous signing of Brazilian Sign Language (Libras) in healthcare settings. The vocabulary of signs and sentences we are using is inspired by health emergency situations and dialogues, as our contribution to the field.
Review on Classification Methods used in Image based Sign Language Recognition System
Sign language is the means of communication among Deaf people, who express themselves through signs. This paper presents a review of image-based sign language recognition systems, which aim to provide a communication channel for the Deaf. Signs take the form of hand gestures, and these gestures are identified from images as well as videos. Gestures are identified and classified according to features of the gesture image, such as shape, rotation, angle, pixel values, and hand movement. Features are extracted by various feature-extraction methods and classified by various machine learning methods. The main purpose of this paper is to review the classification methods used in similar image-based hand gesture recognition systems. The paper also compares various systems on the basis of their classification methods and accuracy rates.
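The generic pipeline such reviews compare (gesture image, then shape features, then a classifier) can be sketched as follows. This is a minimal illustrative stand-in, not any surveyed system: the toy masks, the particular features, and the nearest-neighbour classifier are all assumptions made for the example.

```python
import numpy as np

def shape_features(mask):
    """Area, aspect ratio, and normalised centroid of a binary hand mask."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return np.array([mask.sum() / mask.size,        # fill ratio
                     w / h,                          # bounding-box aspect ratio
                     xs.mean() / mask.shape[1],      # normalised centroid x
                     ys.mean() / mask.shape[0]])     # normalised centroid y

def make_mask(kind):
    """Two toy 'gestures': a vertical bar vs. a horizontal bar."""
    m = np.zeros((32, 32))
    if kind == 0:
        m[4:28, 14:18] = 1    # vertical bar
    else:
        m[14:18, 4:28] = 1    # horizontal bar
    return m

# 'Training set': one feature vector per gesture class.
train = [(shape_features(make_mask(k)), k) for k in (0, 1)]

def classify(mask):
    """1-nearest-neighbour in feature space."""
    f = shape_features(mask)
    return min(train, key=lambda t: np.linalg.norm(t[0] - f))[1]

print(classify(make_mask(0)), classify(make_mask(1)))  # → 0 1
```

Real systems replace the toy features with descriptors such as moments or contour signatures and the 1-NN rule with SVMs or neural networks, but the stage structure stays the same.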
Efficient Kinect Sensor-based Kurdish Sign Language Recognition Using Echo System Network
Sign language assists in building communication and bridging gaps in understanding. Automatic sign language recognition (ASLR) is a field that has recently been studied for various sign languages. However, Kurdish sign language (KuSL) is relatively new, and research and datasets for it are therefore limited. This paper proposes a model to translate KuSL into text and presents a dataset designed using the Kinect V2 sensor. The computational complexity of the feature extraction and classification steps, a serious problem for ASLR, is investigated. The paper proposes a feature engineering approach based on skeleton positions alone, providing a better representation of the features and avoiding the use of the full image information. In addition, the proposed model makes use of recurrent neural network (RNN)-based models. Training RNNs is inherently difficult, which motivates the investigation of alternatives. Besides the trainable long short-term memory (LSTM), this study proposes the untrained, low-complexity echo system network (ESN) classifier. The accuracy of both the LSTM and the ESN indicates that they can outperform state-of-the-art studies. In addition, the ESN, which has not previously been proposed for ASLR, exhibits accuracy comparable to the LSTM with a significantly lower training time.
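The low training cost the abstract highlights comes from the ESN design (usually called an echo state network): the recurrent reservoir weights are random and fixed, and only a linear readout is trained. A minimal sketch follows; all dimensions, data, and hyperparameters here are illustrative assumptions, not the paper's KuSL skeleton features or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES, N_CLASSES = 12, 200, 3   # e.g. a few skeleton coordinates per frame

# Fixed random input and reservoir weights -- never trained.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def reservoir_state(seq):
    """Run a (T, N_IN) sequence through the reservoir; return the final state."""
    x = np.zeros(N_RES)
    for u in seq:
        x = np.tanh(W_in @ u + W @ x)
    return x

# Toy dataset: random sequences offset by their class index (illustrative only).
X = [rng.normal(size=(30, N_IN)) + c for c in range(N_CLASSES) for _ in range(20)]
y = np.repeat(np.arange(N_CLASSES), 20)

# Only the linear readout is trained, via ridge regression --
# the cheap step that makes ESNs fast to train compared with LSTMs.
S = np.stack([reservoir_state(s) for s in X])
T = np.eye(N_CLASSES)[y]                          # one-hot targets
W_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(N_RES), S.T @ T)

pred = np.argmax(S @ W_out, axis=1)
print("training accuracy:", (pred == y).mean())
```

Training reduces to one linear solve over collected reservoir states, which is why ESN training time is far lower than backpropagation through time for an LSTM.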
Development of an approach for continuous recognition of Brazilian Sign Language using dynamic images and deep learning techniques.
Graduate Program in Computer Science, Department of Computer Science, Institute of Exact and Biological Sciences, Universidade Federal de Ouro Preto. In recent years, several approaches have been developed for continuous sign language recognition to improve the quality of life of hearing-impaired people and reduce the communication barrier between them and society. Similarly, the introduction of the Microsoft Kinect device sparked a revolution in computer vision, providing new multimodal information (RGB-D and skeleton data) that can be used to generate or learn new robust descriptors and improve recognition rates in several problems. Thus, in this doctoral research, we propose a methodology for the continuous recognition of Brazilian Sign Language (LIBRAS), using as input for each sign the information provided by the Kinect device. Unlike other works in the literature, which use more complex network architectures (such as 3DCNN and BLSTM), the proposed method uses sliding windows to search for candidate sign segments within a continuous video stream. Likewise, we propose the use of dynamic images to encode the spatio-temporal information provided by the Kinect, which allows us to reduce the complexity of the proposed CNN architecture for sign recognition. Finally, based on the concept of minimal pairs, a new Brazilian Sign Language dataset called LIBRAS-UFOP is proposed. The LIBRAS-UFOP dataset comprises both isolated signs (56 classes) and continuous signs (37 classes); we evaluate our method on this dataset and compare it with state-of-the-art methods. The experimental results on the LIBRAS-UFOP and LSA64 datasets demonstrate the feasibility of the proposed method, based on dynamic images, as an alternative for sign language recognition.
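The dynamic-image idea used above collapses a video clip into a single still image that summarises its temporal evolution, so a plain 2D CNN can replace a 3D one. A common closed-form approximation to rank pooling weights each frame linearly by time. The sketch below is illustrative only; the synthetic clip, its shape, and the linear-weight approximation are assumptions, not the thesis's exact construction.

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a (T, H, W) clip into one (H, W) dynamic image
    via a linear approximation of rank pooling."""
    T = len(frames)
    t = np.arange(1, T + 1)
    alpha = 2 * t - T - 1          # later frames get larger weight; weights sum to 0
    return np.tensordot(alpha, frames, axes=1)

clip = np.random.rand(10, 48, 64)  # 10 synthetic 48x64 frames
di = dynamic_image(clip)
print(di.shape)                    # → (48, 64)
```

Because the weights sum to zero, a static clip maps to an all-zero image: only motion, not constant appearance, survives, which is what lets a lightweight 2D CNN consume the result.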
Using games to make the process of learning sign language enjoyable and interactive
Conference held in Wellington, New Zealand, 8-10 December 2014. The work presented in this paper consists of the development of a game that makes the process of learning sign language enjoyable and interactive. In this game the player controls a character that interacts with various objects and non-player characters, with the aim of collecting several gestures from Portuguese Sign Language. This interaction is supported by data gloves and Kinect. The collected gestures can then be performed by the character, allowing the user to visualize and learn or practise the various existing gestures. To improve interactivity and make the game more interesting and motivating, several checkpoints were placed along the game levels. These give players a chance to test the knowledge they have acquired so far by performing the signs at the checkpoints using Kinect. A high-scores system and a history were also created to ensure that the game remains a continuously motivating process as well as a learning process.
Accessible options for deaf people in e-Learning platforms: technology solutions for sign language translation
This paper presents a study on potential technology solutions for enhancing the communication process for deaf people on e-learning platforms through translation of Sign Language (SL). Considering SL in its global scope as a spatial-visual language, not limited to gestures or hand/forearm movements but also including non-manual markers such as facial expressions, it is necessary to ascertain whether existing technology solutions can be effective options for integrating SL into e-learning platforms. Thus, we aim to present a list of potential technology options for the recognition, translation, and presentation of SL (and their potential problems) through an analysis of assistive technologies, methods, and techniques, and ultimately to contribute to the development of the state of the art and ensure the digital inclusion of deaf people in e-learning platforms. The analysis shows that some interesting technology solutions are under research and development for digital platforms in general, but some critical challenges must still be solved, and an effective integration of these technologies into e-learning platforms in particular is still missing.
Game design and the gamification of content : assessing a project for learning sign language
Paper presented at EDULEARN 2015, held in Barcelona, 6-8 July 2015. This paper discusses the concepts of game design and gamification of content, based on the development of a serious game aimed at making the process of learning sign language enjoyable and interactive. In this game the player controls a character that interacts with various objects and non-player characters, with the aim of collecting several gestures from the Portuguese Sign Language corpus. The learning model used pushes forward the concept of gamification as a learning process valued by students and teachers alike, and illustrates how it may be used as a personalized device for amplifying learning. Our goal is to provide a new methodology that involves students and the general public in learning specific subjects using a ludic, participatory, and interactive approach supported by ICT-based tools. Thus, in this paper we argue that some education processes could be improved by adding a gaming factor through technologies that engage students in a more physical way (e.g. using Kinect and sensor gloves), so that learning becomes more intense and memorable.