
    Real Time Bangladeshi Sign Language Detection using Faster R-CNN

    Bangladeshi Sign Language (BdSL) is a commonly used medium of communication for hearing-impaired people in Bangladesh. Developing a real-time system to detect these signs from images is a great challenge. In this paper, we present a technique to detect BdSL from images that performs in real time. Our method uses a Convolutional Neural Network based object detection technique to detect the presence of signs in the image region and to recognize their class. For this purpose, we adopted the Faster Region-based Convolutional Network (Faster R-CNN) approach and developed a dataset, BdSLImset, to train our system. Previous research on detecting BdSL generally depends on external devices, while most other vision-based techniques do not perform efficiently in real time. Our approach, however, is free from such limitations, and the experimental results demonstrate that the proposed method successfully identifies and recognizes Bangladeshi signs in real time.
    Comment: 6 pages, accepted at the International Conference on Innovation in Engineering and Technology (ICIET), 27-29 December 2018, Dhaka, Bangladesh
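    For readers who want to experiment with a comparable setup, the sketch below shows how a Faster R-CNN detector can be fine-tuned on a custom sign-image dataset with torchvision. It is an illustrative outline, not the authors' implementation: the number of classes, the data loader, and the training schedule are assumptions, and BdSLImset itself is not loaded here.

```python
# Hypothetical sketch: fine-tuning a torchvision Faster R-CNN detector on a
# custom sign-language image dataset. The class count and data loader are
# assumptions for illustration, not the authors' BdSLImset pipeline.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_SIGN_CLASSES = 10                 # assumed number of sign classes
num_classes = NUM_SIGN_CLASSES + 1    # +1 for the background class

# Start from a detector pre-trained on COCO and replace its box predictor.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_epoch(model, data_loader, device):
    """One training pass; data_loader yields (images, targets) where each
    target dict holds 'boxes' (N x 4) and 'labels' (N,) tensors."""
    model.train()
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # detector returns a dict of losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```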

    Automatic recognition of hand gestures in sign language (Reconnaissance automatique de gestes manuels en langue des signes)

    This paper deals with static hand gesture recognition for sign language applications. In a previous work, we proposed a pattern recognition method (classification and retrieval) based on robust registration and shape geodesics, designed to be robust to outliers and to inter-individual variability. This robustness can sometimes lead to confusion when dealing with shape classes of low dissimilarity, and therefore low separability, since the difference between such classes may be treated as aberrant data. In this paper, we revise and adjust our method so that it adapts to classes with low dissimilarity, considering in particular the problem of hand gesture recognition. Experimental results on the GESTURES benchmark show the advantage of our approach.
    Keywords: sign language, shape recognition, registration, robustness.
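    The abstract's core idea of registering shapes before comparing them can be made concrete with a much simpler stand-in. The sketch below is not the authors' geodesic-based method; it is a minimal 1-nearest-neighbour classifier that removes translation, scale, and rotation with an orthogonal Procrustes alignment before measuring a Euclidean distance between contours, purely to illustrate the register-then-compare pipeline.

```python
# Minimal illustrative sketch (not the paper's geodesic method): a 1-nearest-
# neighbour gesture classifier over 2-D hand contours that are rigidly
# registered (translation, scale, rotation removed) before comparison.
import numpy as np

def procrustes_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between two (N, 2) contours after Procrustes alignment."""
    a = a - a.mean(axis=0)            # remove translation
    b = b - b.mean(axis=0)
    a = a / np.linalg.norm(a)         # remove scale
    b = b / np.linalg.norm(b)
    # Optimal rotation via SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(a.T @ b)
    r = u @ vt
    return float(np.linalg.norm(a @ r - b))

def classify(query, templates, labels):
    """Return the label of the template contour closest to the query."""
    dists = [procrustes_distance(query, t) for t in templates]
    return labels[int(np.argmin(dists))]
```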

    Analysis of the application of computer vision and neural networks, combined with data augmentation techniques, to the automatic translation of Libras (Análise de aplicação de visão computacional e redes neurais, em conjunto com o uso de técnicas de aumento de dados, na tradução automática de Libras)

    Brazilian Sign Language (LIBRAS) is a gestural-visual language used by the deaf community in Brazil, which faces a daily communication barrier between deaf and hearing individuals. In this context, this study develops a computer vision system capable of identifying signs in order to assist in translating LIBRAS into Portuguese, aiming to increase the inclusion of the deaf community through communication. The research covers the fundamentals of LIBRAS, long short-term memory (LSTM) neural networks, and Mediapipe technology. The study comprised the training of ten distinct sets, applying modifications to the datasets to assess their impact on performance metrics. The data augmentation techniques applied to the image sequences, extracted from videos containing three distinct signs, were horizontal mirroring, translation, and brightness increase. The system was evaluated through performance metrics, including the gesture translation accuracy rate. The best results were obtained with Set 9, which used a dataset built from videos of five individuals, each performing each sign ten times, followed by the application of the three data augmentation techniques evaluated. This resulted in 100% accuracy in both training and validation, indicating the promising potential of the tools and methodology employed in this work for sign language translation.
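    A hedged sketch of the kind of pipeline the abstract describes follows: augmenting MediaPipe-style landmark sequences (horizontal mirroring and translation; brightness changes apply to raw frames and are omitted here) and classifying them with a small stacked LSTM in Keras. Sequence length, feature count, and hyperparameters are illustrative assumptions, not values from the study.

```python
# Illustrative sketch, not the study's implementation: landmark-sequence
# augmentation plus a small LSTM classifier. Assumes sequences of 30 frames,
# each with 126 features (2 hands x 21 landmarks x 3 coords), normalized to [0, 1].
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES, N_SIGNS = 30, 126, 3

def mirror(seq: np.ndarray) -> np.ndarray:
    """Horizontal mirroring: flip x coordinates (every 3rd feature) around 0.5."""
    out = seq.copy()
    out[:, 0::3] = 1.0 - out[:, 0::3]
    return out

def translate(seq: np.ndarray, dx: float = 0.05, dy: float = 0.05) -> np.ndarray:
    """Small constant offset applied to the x and y landmark coordinates."""
    out = seq.copy()
    out[:, 0::3] += dx
    out[:, 1::3] += dy
    return out

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(N_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50)
```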

    Dipterocarps protected by Jering local wisdom in Jering Menduyung Nature Recreational Park, Bangka Island, Indonesia

    Apart from the expansion of oil palm plantations, the Jering Menduyung Nature Recreational Park has relatively diverse plants. The 3,538 ha park is located in the north-west of Bangka Island, Indonesia. The minimum species-area curve was 0.82 ha, just below that of the Dalil conservation forest (1.2 ha) but much higher than measurements of several secondary forests on the island, which are around 0.2 ha. The plot is inhabited by more than 50 plant species. Of 22 tree species, there are 40 individual poles with an average diameter of 15.3 cm and 64 individual trees with an average diameter of 48.9 cm. The density of Dipterocarpus grandiflorus (Blanco) Blanco, or kruing, is 20.7 individuals/ha, with diameters ranging from 12.1 to 212.7 cm and an average diameter of 69.0 cm. The relatively intact park is supported by the local wisdom of the Jering tribe, one of the indigenous tribes on the island, whose people regulate tree cutting, especially in the cape. The conservation agency designates the park as one of the kruing propagule sources in the province. The growing oil palm plantations and the declining adoption of local wisdom among the youth are challenges to forest conservation in the province, where tin mining activities have been the economic driver for decades. More socialization efforts by the conservation agency and the involvement of university students in raising environmental awareness are important.