
    Automatic Indian Sign Language Recognition for Continuous Video Sequence

    Sign language recognition has become an active area of research. This paper describes a novel approach to a system that automatically recognizes the different alphabets of Indian Sign Language in a video sequence. The proposed system comprises four major modules: data acquisition, pre-processing, feature extraction and classification. The pre-processing stage involves skin filtering and histogram matching, after which eigenvector-based feature extraction and an eigenvalue-weighted Euclidean distance classification technique are used. Twenty-four different alphabets were considered, and a recognition rate of 96% was obtained.
    Keywords: Eigenvalue, Eigenvector, Euclidean Distance (ED), Human Computer Interaction, Indian Sign Language (ISL), Skin Filtering.
    Cite as: Joyeeta Singh, Karen Das, "Automatic Indian Sign Language Recognition for Continuous Video Sequence", ADBU J.Engg.Tech., 2(1)(2015) 0021105 (5pp)
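    The eigenvalue-weighted Euclidean distance classification this abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the feature vectors, class templates and number of components are invented stand-ins for the paper's real data.

```python
import numpy as np

def eigen_features(X, k=2):
    """Project row vectors X onto the top-k eigenvectors of their
    covariance matrix; also return the matching eigenvalues."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)        # eigh returns ascending order
    idx = np.argsort(vals)[::-1][:k]        # pick the top-k eigenvalues
    return Xc @ vecs[:, idx], vals[idx]

def weighted_euclidean(q, templates, weights):
    """Distance from query q to each class template, with each feature
    dimension weighted by its eigenvalue, as the abstract describes."""
    d = templates - q
    return np.sqrt((weights * d ** 2).sum(axis=1))

# Toy stand-in data: 20 feature vectors, two hypothetical sign classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
feats, vals = eigen_features(X, k=2)
templates = np.stack([feats[:10].mean(axis=0), feats[10:].mean(axis=0)])
dists = weighted_euclidean(feats[0], templates, vals)
pred = int(np.argmin(dists))                # nearest weighted template wins
```

    In this scheme the eigenvalues act as per-dimension importance weights, so directions of higher variance in the training data contribute more to the distance.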

    Support Vector Machine based Image Classification for Deaf and Mute People

    A hand gesture recognition system provides a natural, innovative and modern means of nonverbal communication, with wide application in human-computer interaction and sign language. The whole system consists of three components: hand detection, gesture recognition and human-computer interaction (HCI) based on the recognized gesture. The existing technique used ANFIS (adaptive neuro-fuzzy inference system) to recognize gestures, making it possible to identify relatively complex gestures, but its complexity is high and its performance is low. To achieve high accuracy and high performance with less complexity, a gray illumination technique is introduced in the proposed hand gesture recognition system. Live video is converted into frames, each frame is resized, and a gray illumination algorithm is applied for color balancing in order to separate the skin region. A morphological feature extraction operation is then carried out, after which a support vector machine (SVM) is trained and tested for gesture recognition. Finally, the recognized character's sound is played as audio output
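    The SVM train/test stage at the end of that pipeline can be sketched as below. The features are random stand-ins for the paper's morphological features, and the classifier is a minimal linear SVM trained by hinge-loss sub-gradient descent, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 200, 8
X = rng.normal(size=(n, d))                       # stand-in feature vectors
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)    # toy two-gesture labels

# Linear SVM via sub-gradient descent on the regularised hinge loss.
w, lam, lr = np.zeros(d), 0.01, 0.1
for epoch in range(50):
    for i in rng.permutation(n // 2):             # train on the first half
        margin = y[i] * (X[i] @ w)
        grad = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
        w -= lr * grad

pred = np.sign(X[n // 2:] @ w)                    # test on the second half
acc = (pred == y[n // 2:]).mean()
```

    A practical system would of course extract real morphological features from the skin-segmented frames and typically use a kernel SVM from an established library rather than this hand-rolled linear one.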

    Implementation of Raspberry Pi Based Inteli Glove for Gesture to Voice Translation with Location Intimation for Deaf and Blind People

    Communication plays an important role for human beings and is treated as a life skill. This paper helps improve communication with deaf and mute people using flex sensor technology. A device is developed that can translate different signs, including Indian Sign Language, into text as well as voice. People communicating with the deaf and mute may not understand their signs and expressions; hence, an approach has been created to make gesture-based communication audible, which is very helpful for conveying their thoughts to others. In the proposed system, an RF module is used for transmitting and receiving information, a Raspberry Pi serves as the processor, and a GPS module is used so that blind people can identify their location. The entire framework has been implemented, programmed, enclosed and tested with good results
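    The glove's core gesture-to-text step can be sketched as a lookup from flex-sensor readings to a phrase. Everything here is a hypothetical illustration: the threshold, the gesture table and the phrases are invented, and the real device adds RF transmission, GPS and audio playback on the Raspberry Pi.

```python
FLEX_THRESHOLD = 512          # assumed mid-scale 10-bit ADC reading

# Hypothetical gesture table keyed by a (thumb..pinky) bent/straight pattern.
GESTURES = {
    (1, 0, 0, 0, 0): "HELLO",
    (0, 1, 1, 0, 0): "THANK YOU",
    (1, 1, 1, 1, 1): "STOP",
}

def read_pattern(adc_values):
    """Convert raw flex-sensor readings into a binary bend pattern."""
    return tuple(int(v > FLEX_THRESHOLD) for v in adc_values)

def translate(adc_values):
    """Look up the gesture's text; unknown patterns return None."""
    return GESTURES.get(read_pattern(adc_values))

text = translate([800, 100, 90, 120, 110])   # thumb bent, fingers straight
```

    On the real hardware the translated text would then be sent over the RF link and spoken aloud through a text-to-speech stage.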

    Classification of Sign-Language Using MobileNet - Deep Learning

    Abstract: Sign language recognition is one of the most rapidly expanding fields of study today, and many new technologies have been developed in recent years in artificial intelligence. Sign-language-based communication is valuable not only to the deaf and mute community but also to individuals with autism, Down syndrome or apraxia of speech. The biggest problem faced by people with hearing disabilities is others' lack of understanding of their requirements, and in this paper we try to fill this gap by translating sign language using artificial-intelligence algorithms. We focus on a transfer learning technique based on deep learning, utilizing a MobileNet model, and compare it with the results of a previous paper [10a]: MobileNet achieved an accuracy of 93.48%, whereas VGG16 achieved 100%, for the same number of images (43,500 in the dataset, at 64x64 pixels), the same data split (70% training, 15% validation, 15% testing) and 20 epochs
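    The transfer-learning recipe the abstract relies on can be shown framework-free: a pretrained backbone is frozen and only a small classifier head is trained on its features. The "backbone" below is a fixed random ReLU projection standing in for MobileNet's convolutional layers, and the data are toy stand-ins; this is a sketch of the idea, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_backbone(images, W):
    """Stand-in feature extractor: its weights W are never updated,
    mirroring a frozen pretrained network in transfer learning."""
    return np.maximum(images @ W, 0.0)       # ReLU features

# Toy data: 300 flattened 8x8 "images", two classes.
X = rng.normal(size=(300, 64))
y = (X[:, :4].sum(axis=1) > 0).astype(float)

W = rng.normal(size=(64, 32))                # "pretrained" and frozen
F = frozen_backbone(X, W)

# Train only the head: ridge-regularised least squares on the features.
head = np.linalg.solve(F.T @ F + 1e-3 * np.eye(32), F.T @ y)
pred = (F @ head > 0.5).astype(float)
acc = (pred == y).mean()
```

    In the paper's setting the frozen part is MobileNet pretrained on ImageNet and the head is a small dense classifier trained on the 43,500 sign images for 20 epochs.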

    Handshape recognition using principal component analysis and convolutional neural networks applied to sign language

    Handshape recognition is an important problem in computer vision with significant societal impact. However, it is not an easy task, since hands are naturally deformable objects. Handshape recognition contains open problems, such as low accuracy or low speed, and despite a large number of proposed approaches, no solution has been found to solve these open problems. In this thesis, a new image dataset for Irish Sign Language (ISL) recognition is introduced, and a deeper study of Principal Component Analysis (PCA) applied in two stages, using only 2D images, is presented. A comparison between approaches that do not need features (known as end-to-end) and feature-based approaches is carried out. The dataset was collected by filming six human subjects performing ISL handshapes and movements, and frames were extracted from the videos. Afterwards the redundant images were filtered with an iterative image selection process that retains the images which keep the dataset diverse. The accuracy of PCA can be improved using blurred images and interpolation, but interpolation is only feasible with a small number of points. For this reason two-stage PCA is proposed: PCA is applied to another PCA space. This makes the interpolation possible and improves the accuracy in recognising a shape at a translation and rotation unseen in the training stage. Finally, classification is done with two different approaches: (1) end-to-end approaches, where Convolutional Neural Networks (CNNs) and other classifiers are tested directly on raw pixels, and (2) feature-based approaches, where PCA is mostly used to extract features and, again, different algorithms are tested for classification. Results are presented showing accuracy and speed for (1) and (2) and how blurring affects the accuracy
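    The two-stage PCA idea can be sketched as below: PCA is applied a second time in the space produced by the first PCA, shrinking the representation enough to make interpolation between handshapes feasible. The dimensions and data are illustrative assumptions, not the thesis's actual dataset.

```python
import numpy as np

def pca(X, k):
    """Centre X and project it onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
images = rng.normal(size=(50, 400))     # 50 flattened 20x20 "frames"

stage1 = pca(images, 20)                # first PCA: 400 -> 20 dimensions
stage2 = pca(stage1, 3)                 # second PCA: 20 -> 3 dimensions

# Interpolating between two handshapes is now cheap in the tiny space.
mid = 0.5 * (stage2[0] + stage2[1])
```

    The low-dimensional second stage is what makes interpolation over translations and rotations practical, since interpolating directly in the 400-dimensional pixel space would need far too many sample points.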