
Recognition of sign language units from a video stream

By Pranciškus Ambrazas


This study analyzes the recognition of sign language units from a video stream. A dataset was created containing 22 sign units with 20 videos per class and 3 units with 50 videos per class. The classes that had 20 videos were augmented to 50 videos each. Based on this data, a convolutional neural network using the Inception v3 model was trained; a recurrent neural network was then trained on the convolutional network's outputs. Finally, three models were created that can detect 3, 10, and 25 Lithuanian Sign Language classes. The best accuracy was achieved by the 10-class model (87.34%), but the 25-sign model offers higher usability with 79.18% accuracy. These results make it possible to adapt Lithuanian Sign Language models to real-world use, and the usability of the models can be increased by further studies and training.
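The pipeline described above (per-frame CNN features fed into a recurrent classifier) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the frame count, hidden size, and the random stand-in for Inception v3 feature extraction are assumptions; only the 2048-dim feature size (Inception v3's final pooling output) and the 25-class output reflect the setup described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: Inception v3's final pooling layer yields a 2048-dim
# feature vector per frame; the paper's largest model covers 25 sign classes.
FRAMES, FEAT_DIM, HIDDEN, N_CLASSES = 40, 2048, 256, 25

def extract_frame_features(video_frames):
    # Stand-in for running each frame through a pretrained Inception v3;
    # here we just return random features of the right shape.
    return rng.standard_normal((len(video_frames), FEAT_DIM))

def rnn_classify(features, Wx, Wh, Wo):
    # Plain tanh RNN over the frame-feature sequence, followed by a
    # softmax over sign classes computed from the final hidden state.
    h = np.zeros(HIDDEN)
    for x in features:
        h = np.tanh(x @ Wx + h @ Wh)
    logits = h @ Wo
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Untrained (random) weights, just to show the data flow.
Wx = rng.standard_normal((FEAT_DIM, HIDDEN)) * 0.01
Wh = rng.standard_normal((HIDDEN, HIDDEN)) * 0.01
Wo = rng.standard_normal((HIDDEN, N_CLASSES)) * 0.01

video = [None] * FRAMES  # placeholder frames
probs = rnn_classify(extract_frame_features(video), Wx, Wh, Wo)
print(probs.shape)  # one probability per sign class
```

In practice the feature extractor would be a pretrained Inception v3 with its classification head removed, and the recurrent layer would typically be an LSTM or GRU trained with backpropagation rather than this fixed-weight forward pass.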

Publisher: Institutional Repository of Vilnius University
Year: 2018
OAI identifier: oai:elaba:29548677