6 research outputs found

    Review on Classification Methods used in Image based Sign Language Recognition System

    Get PDF
    Sign language is the means of communication among deaf and mute people, who express themselves through signs. This paper presents a review of image-based sign language recognition systems, which aim to provide a communication channel for deaf and mute people. Signs take the form of hand gestures, and these gestures are identified from images as well as videos. Gestures are identified and classified according to features of the gesture image, such as shape, rotation, angle, pixel values, and hand movement. These features are obtained by various feature extraction methods and classified by various machine learning methods. The main purpose of this paper is to review the classification methods used in similar image-based hand gesture recognition systems. The paper also compares these systems on the basis of classification method and accuracy rate.
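    The pipeline the review describes (extract shape features from a gesture image, then classify) can be sketched minimally. The features and the nearest-centroid classifier below are illustrative choices, not taken from any of the reviewed systems:

    ```python
    import numpy as np

    def shape_features(mask):
        """Simple shape descriptors from a binary hand mask: filled-area
        fraction, bounding-box aspect ratio, and normalised centroid."""
        ys, xs = np.nonzero(mask)
        h, w = mask.shape
        area = len(xs) / (h * w)                  # fraction of filled pixels
        aspect = (xs.ptp() + 1) / (ys.ptp() + 1)  # width / height of the hand
        cx, cy = xs.mean() / w, ys.mean() / h     # centroid in [0, 1]
        return np.array([area, aspect, cx, cy])

    def nearest_centroid(train_feats, train_labels, feat):
        """Classify by the closest class-mean feature vector."""
        labels = sorted(set(train_labels))
        centroids = {
            l: np.mean([f for f, t in zip(train_feats, train_labels) if t == l], axis=0)
            for l in labels
        }
        return min(labels, key=lambda l: np.linalg.norm(feat - centroids[l]))
    ```

    Real systems replace these toy descriptors with richer features (moments, contours, trajectories) and stronger classifiers, which is exactly the design space the review surveys.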

    Mobile Application for Identification of Coffee Fruit Maturity using Digital Image Processing

    Get PDF
    Indonesia is an agricultural country that relies on the agricultural sector and is well known for producing various plantation commodities, one of which is coffee. Coffee is a leading export commodity developed in Indonesia. Community coffee plantations play an important role because most coffee production comes from them. However, community coffee plantations are still held back because the quality of the coffee remains relatively low. This is caused by coffee fruit sorting, which is still done conventionally: the process relies on operator knowledge, so the operator's level of knowledge strongly influences the sorting results. Sorting coffee by ripeness can be made easier with a mobile application that uses digital image processing. The techniques used are the HSV color space, to obtain color features of the coffee fruit, and the K-Nearest Neighbor (KNN) classification method, to classify coffee fruit ripeness. The identification result is one of three classes: ripe, half-ripe, or unripe. The mobile application has two main features, a training-data feature and a non-real-time identification feature. Testing resulted in an accuracy rate of 95.56% with the best value of k being 3.
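    The core of the method (a hue feature from the HSV color space, fed to a KNN vote with k=3) can be sketched as follows. The training hue values and class names are illustrative assumptions, not the paper's data:

    ```python
    import colorsys
    import numpy as np

    def hue_feature(rgb_pixels):
        """Mean HSV hue of the fruit pixels (RGB components and hue in [0, 1])."""
        hues = [colorsys.rgb_to_hsv(*p)[0] for p in rgb_pixels]
        return float(np.mean(hues))

    def knn_classify(train_x, train_y, x, k=3):
        """Plain k-nearest-neighbour majority vote on a 1-D hue feature
        (k = 3, the best value reported in the abstract)."""
        order = np.argsort(np.abs(np.asarray(train_x) - x))[:k]
        votes = [train_y[i] for i in order]
        return max(set(votes), key=votes.count)
    ```

    Note that hue wraps around at red (0 ≈ 1), so a production version would use a circular distance; the sketch assumes hues away from the wrap point.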

    The passive operating mode of the linear optical gesture sensor

    Full text link
    The study evaluates the influence of natural light conditions on the effectiveness of the linear optical gesture sensor when working in the presence of ambient light only (passive mode). The orientation of the device relative to the light source was varied in order to verify the sensitivity of the sensor. A criterion for differentiating between two states, "possible gesture" and "no gesture", is proposed. Additionally, different light conditions and candidate features were investigated, relevant to the decision of switching between the passive and active modes of the device. The criterion was evaluated through a specificity and sensitivity analysis of the binary ambient-light-condition classifier. The elaborated classifier predicts ambient light conditions with an accuracy of 85.15%. Once the light conditions are understood, the hand pose can be detected. The accuracy of the hand-pose classifier trained on data obtained in the passive mode under favorable light conditions was 98.76%. It was also shown that the passive operating mode of the linear gesture sensor reduces the total energy consumption by 93.34%, resulting in a current draw of 0.132 mA. It was concluded that the linear optical sensor can be used efficiently in various lighting conditions.
    Comment: 10 pages, 14 figures
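    The evaluation metrics named in the abstract (sensitivity, specificity, and accuracy of a binary classifier) can be computed as below. The label strings are assumptions borrowed from the abstract's "possible gesture" / "no gesture" states:

    ```python
    def binary_classifier_stats(y_true, y_pred, positive="possible gesture"):
        """Sensitivity, specificity, and accuracy of a binary classifier,
        e.g. an ambient-light or gesture-state classifier."""
        pairs = list(zip(y_true, y_pred))
        tp = sum(t == positive and p == positive for t, p in pairs)
        tn = sum(t != positive and p != positive for t, p in pairs)
        fp = sum(t != positive and p == positive for t, p in pairs)
        fn = sum(t == positive and p != positive for t, p in pairs)
        sensitivity = tp / (tp + fn)        # true-positive rate
        specificity = tn / (tn + fp)        # true-negative rate
        accuracy = (tp + tn) / len(pairs)
        return sensitivity, specificity, accuracy
    ```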

    GESTURE RECOGNITION FOR PENCAK SILAT TAPAK SUCI REAL-TIME ANIMATION

    Get PDF
    The main target of this research is the design of a virtual martial arts training system that runs in real time and serves as a tool for learning martial arts independently, using genetic algorithm methods and dynamic time warping. This paper covers the initial stage, which focuses on capturing data sets of martial arts practitioners using 3D animation and Kinect sensor cameras: 2 practitioners x 8 moves x 596 cases/gesture = 9,536 cases. Gesture recognition studies usually distinguish body gestures, hand and arm gestures, and head and face gestures; all three can be studied simultaneously in pencak silat, using stance detection with scoring methods. Silat movement data is recorded as .oni files using the OpenNI™ framework (OFW) and as BVH (BioVision Hierarchy) files, together with plug-in support software on the motion-capture devices. Responsiveness, a measure of the time taken to respond to interruptions, is critical because the system must be able to meet this demand in real time.
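    Dynamic time warping, one of the two methods named above, aligns two gesture trajectories of different lengths and speeds. A minimal 1-D implementation (the standard textbook recurrence, not the paper's code) looks like this:

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two 1-D gesture
        trajectories, using the classic cumulative-cost recurrence."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                # best of: insertion, deletion, or match of the previous step
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]
    ```

    Because the warping path can absorb repeated samples, a slowed-down copy of a move still matches its template with zero (or near-zero) cost, which is what makes DTW suitable for comparing silat moves performed at different speeds.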

    A portable system for translating Libras manual-alphabet postures into speech using a glove instrumented with IMU sensors

    Get PDF
    Undergraduate final project (Trabalho de Conclusão de Curso) — Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2019.
    Deaf people face difficulties daily due to communication problems, even though they are physically and intellectually able to perform virtually any task that a hearing person would perform. Wearable technologies are becoming increasingly common in devices such as smartwatches and smartbands, but few of these technologies emerge to help deaf people. Brazilian Sign Language (Libras) is the official language of the deaf community in Brazil and is as complete as any oral language. With this in mind, this work proposes a low-cost system for real-time translation of manual Libras postures, specifically the spelling of the manual alphabet, into Portuguese. The proposed system consists of a glove with five sensors containing an accelerometer and a gyroscope, one on each finger, and a microcontroller that communicates via Bluetooth with a smartphone, which processes the sensor data in a neural network and classifies the manual posture performed. The system was tested with three different classifiers: i) Multi-Layer Perceptron (MLP); ii) K-Nearest Neighbors (KNN); and iii) Radial Basis Function Networks (RBFN). The best real-time performance was obtained by the RBFN classifier, with 99.84% accuracy on the test dataset. In addition, accuracies of 99.83% with MLP and 99.69% with KNN were obtained. Despite its better accuracy, the MLP was not suitable for real-time use because it does not provide a very reliable threshold when an input from an unknown class is presented to the network. Thus, this prototype proved adequate for the problem of translating Libras signs, and it is suggested that it be adapted to new signs in the future.
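    The rejection problem described above (a classifier that cannot say "this posture is none of my classes") is commonly handled with a confidence threshold on the output probabilities. A minimal sketch of that idea, with made-up logits and an assumed threshold of 0.9, not the thesis's actual classifier:

    ```python
    import numpy as np

    def softmax(logits):
        """Numerically stable softmax over a vector of logits."""
        e = np.exp(logits - np.max(logits))
        return e / e.sum()

    def classify_with_rejection(logits, labels, threshold=0.9):
        """Return the predicted posture only when the network is confident
        enough; otherwise reject the input as an unknown class."""
        p = softmax(np.asarray(logits, dtype=float))
        i = int(np.argmax(p))
        return labels[i] if p[i] >= threshold else "unknown"
    ```

    The thesis's observation is that an MLP's raw confidence is not a reliable rejection signal for unseen classes, which is one reason the RBFN (whose activations decay with distance from the training data) behaved better in real time.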

    Machine learning methods for sign language recognition: a critical review and analysis.

    Get PDF
    Sign language is an essential tool to bridge the communication gap between hearing and hearing-impaired people. However, the diversity of over 7,000 present-day sign languages, with variability in motion, hand shape, and position of body parts, makes automatic sign language recognition (ASLR) a complex task. To overcome this complexity, researchers are investigating better ways of developing ASLR systems, seeking intelligent solutions, and have demonstrated remarkable success. This paper analyses the research published on intelligent systems in sign language recognition over the past two decades. A total of 649 publications related to decision support and intelligent systems for sign language recognition (SLR) were extracted from the Scopus database and analysed. The extracted publications are analysed with the bibliometric software VOSviewer to (1) obtain their temporal and regional distributions and (2) construct the cooperation networks between affiliations and authors and identify productive institutions in this context. Moreover, reviews of techniques for vision-based sign language recognition are presented, and the various feature extraction and classification techniques used in SLR to achieve good results are discussed. The literature review shows the importance of incorporating intelligent solutions into sign language recognition systems and reveals that a perfect intelligent system for sign language recognition is still an open problem. Overall, it is expected that this study will facilitate knowledge accumulation and the creation of intelligent SLR systems, and will provide readers, researchers, and practitioners with a roadmap to guide future directions.