    Continuous word level sign language recognition using an expert system based on machine learning

    No full text
    Sign language recognition has been studied for many years using a wide range of image processing and artificial intelligence techniques, yet the main challenge remains bridging the communication gap between specially-abled people and the general public. This paper proposes a Python-based system that classifies 80 words from sign language. Two models are proposed: You Only Look Once version 4 (YOLOv4) and a Support Vector Machine (SVM) with MediaPipe. The SVM uses linear, polynomial, and Radial Basis Function (RBF) kernels. The system requires no additional pre-processing or image-enhancement operations. The image dataset is self-created and consists of 80 static signs with a total of 676 images. The SVM with MediaPipe achieves 98.62% accuracy and YOLOv4 achieves 98.8%, which is higher than existing state-of-the-art methods. An expert system is also proposed that combines both models to predict hand gestures more accurately in real time.
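
    As a rough illustration of the SVM-with-MediaPipe approach described in the abstract (the authors' own code, feature set, and training details are not available here), the sketch below extracts 21 hand landmarks per image with MediaPipe Hands and trains an RBF-kernel SVM on the flattened coordinates. The sample list, file paths, and train/test split are placeholder assumptions, not the paper's actual setup.

import cv2
import mediapipe as mp
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

mp_hands = mp.solutions.hands

def landmarks_from_image(path, hands):
    # Return a 63-dim vector (21 landmarks x (x, y, z)), or None if no hand is detected.
    image = cv2.imread(path)
    result = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

def build_dataset(samples):
    # samples: list of (image_path, label) pairs covering the 80 static signs.
    X, y = [], []
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        for path, label in samples:
            vec = landmarks_from_image(path, hands)
            if vec is not None:
                X.append(vec)
                y.append(label)
    return np.array(X), np.array(y)

# Hypothetical usage:
# X, y = build_dataset(samples)
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
# clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # linear or polynomial kernels can be swapped in
# print("accuracy:", clf.score(X_te, y_te))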