New Method for Optimization of License Plate Recognition system with Use of Edge Detection and Connected Component
License plate recognition plays an important role in traffic monitoring and parking management systems. In this paper, a fast, real-time method is proposed that is well suited to locating tilted and poor-quality plates. In the proposed method, the image is first binarized using an adaptive threshold. Then, edge detection and morphological operations are used to locate the plate. Finally, if the plate is tilted, the tilt is corrected. The method was tested on a data set from another paper containing images with varying backgrounds, distances, and viewing angles; the correct plate-extraction rate reached 98.66%.
Comment: 3rd IEEE International Conference on Computer and Knowledge Engineering (ICCKE 2013), October 31 & November 1, 2013, Ferdowsi University of Mashhad
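The localization pipeline sketched in the abstract above, adaptive thresholding followed by connected-component analysis over the binary image, can be illustrated roughly as follows. This is an illustrative reimplementation, not the authors' code: the window size, offset `c`, and plate aspect-ratio threshold are assumptions, and the edge-detection/morphology and tilt-correction steps are omitted for brevity.

```python
import numpy as np
from collections import deque

def adaptive_threshold(gray, block=15, c=5):
    """Binarize by comparing each pixel to its local window mean minus c
    (a naive, loop-based version of mean adaptive thresholding)."""
    h, w = gray.shape
    r = block // 2
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            win = gray[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = 255 if gray[y, x] > win.mean() - c else 0
    return out

def connected_components(binary):
    """4-connected component labeling via BFS; returns bounding boxes
    as (x0, y0, x1, y1) tuples."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                q = deque([(y, x)])
                seen[y, x] = True
                ys, xs = [y], [x]
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            ys.append(ny); xs.append(nx)
                            q.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

def plate_candidates(boxes, min_aspect=2.0):
    """License plates are wide: keep components whose width/height ratio
    looks plate-like (the 2.0 cutoff is an assumed value)."""
    out = []
    for x0, y0, x1, y1 in boxes:
        bw, bh = x1 - x0 + 1, y1 - y0 + 1
        if bh > 0 and bw / bh >= min_aspect:
            out.append((x0, y0, x1, y1))
    return out
```

A real pipeline would add morphological closing between the threshold and the component pass so the plate characters merge into one region.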
Alphabet Sign Language Recognition Using Leap Motion Technology and Rule Based Backpropagation-genetic Algorithm Neural Network (Rbbpgann)
Sign language recognition is used to help people with normal hearing communicate effectively with the deaf and hearing-impaired. Based on a survey conducted by the Multi-Center Study in Southeast Asia, Indonesia ranked in the top four in number of patients with hearing disability (4.6%). Therefore, sign language recognition is important. Some research has been conducted in this field, and many types of neural networks have been used to recognize various sign languages; however, their performance still needs to be improved. This work focuses on the ASL (Alphabet Sign Language) in SIBI (Sign System of Indonesian Language), which uses one hand and 26 gestures. Here, thirty-four features were extracted using a Leap Motion controller. A new method, Rule Based-Backpropagation Genetic Algorithm Neural Network (RB-BPGANN), was then used to recognize these sign languages. This method combines rules with a Backpropagation Genetic Algorithm Neural Network (BPGANN). Experiments show that the proposed application can recognize sign language with up to 93.8% accuracy. It performs very well on large multiclass instances and can be a solution to the overfitting problem in neural network algorithms.
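One common way to hybridize a genetic algorithm with a neural network, as methods like RB-BPGANN do, is to let the GA search the network's weight space. The sketch below is a generic, minimal illustration of that idea (an elitist GA evolving the weights of a tiny tanh network), not the paper's RB-BPGANN; the population size, mutation scale, and network shape are arbitrary assumptions.

```python
import numpy as np

def net_loss(w, X, y, hidden=4):
    """MSE of a one-hidden-layer tanh network whose weights are packed in w."""
    d = X.shape[1]
    W1 = w[:d * hidden].reshape(d, hidden)
    W2 = w[d * hidden:].reshape(hidden, 1)
    pred = np.tanh(X @ W1) @ W2
    return float(np.mean((pred - y) ** 2))

def ga_train(X, y, hidden=4, pop_size=30, generations=100, seed=0):
    """Evolve network weights with an elitist genetic algorithm:
    selection keeps the best half, crossover averages parent pairs,
    and mutation adds small Gaussian noise. Because parents survive
    unchanged, the best loss per generation never increases."""
    dim = X.shape[1] * hidden + hidden
    rng = np.random.default_rng(seed)
    population = rng.standard_normal((pop_size, dim))
    history = []
    for _ in range(generations):
        fitness = np.array([net_loss(ind, X, y, hidden) for ind in population])
        history.append(float(fitness.min()))
        parents = population[np.argsort(fitness)[:pop_size // 2]]  # selection
        mates = parents[rng.permutation(len(parents))]
        children = 0.5 * (parents + mates)                         # crossover
        children += 0.1 * rng.standard_normal(children.shape)      # mutation
        population = np.vstack([parents, children])                # elitism
    fitness = np.array([net_loss(ind, X, y, hidden) for ind in population])
    best = population[int(np.argmin(fitness))]
    return best, history
```

In a full hybrid, backpropagation would then fine-tune the GA's best individual, which is the usual division of labor in BP-GA schemes.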
Towards Arabic Alphabet and Numbers Sign Language Recognition
This paper proposes a new Arabic sign language recognition system using Restricted Boltzmann Machines and the direct use of tiny images. Restricted Boltzmann Machines are able to code images as a superposition of a limited number of features taken from a larger alphabet. Repeating this process in a deep architecture (Deep Belief Networks) leads to an efficient sparse representation of the initial data in the feature space. A complex classification problem in the input space is thus transformed into an easier one in the feature space. After appropriate coding, a softmax regression in the feature space is sufficient to recognize a hand sign from the input image. To our knowledge, this is the first attempt to show that tiny-image feature extraction using a deep architecture is a simpler alternative approach for Arabic sign language recognition, one that deserves to be considered and investigated.
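The RBM-plus-softmax pipeline the abstract describes can be illustrated with a minimal binary RBM trained by one-step contrastive divergence (CD-1); a Deep Belief Network stacks such layers, feeding each layer's hidden probabilities to the next, with a softmax classifier on top. This is a textbook sketch under assumed hyperparameters, not the authors' implementation.

```python
import numpy as np

class RBM:
    """Minimal binary RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.rng = rng

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        """P(h=1 | v): the layer's feature code for input v."""
        return self._sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        """P(v=1 | h): reconstruction of the input from a hidden state."""
        return self._sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0, lr=0.1):
        """One CD-1 update on a batch; returns mean reconstruction error."""
        h0 = self.hidden_probs(v0)
        h_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample)
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))
```

To build the DBN, the first RBM's `hidden_probs` output becomes the training data for the next RBM, and the top-level codes feed a softmax regression.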
Dynamic Hand Gesture Recognition of Arabic Sign Language using Hand Motion Trajectory Features
In this paper we propose a system for dynamic hand gesture recognition of Arabic Sign Language. The proposed system takes a dynamic gesture video stream as input, extracts the hand area, and computes hand motion features, then uses these features to recognize the gesture. The system identifies the hand blob using the YCbCr color space to detect the skin color of the hand, and classifies the input pattern using a correlation-coefficient matching technique. The significance of the system is its simplicity and its ability to recognize gestures independent of the skin color and physical structure of the performers. Experimental results show a gesture recognition rate of 85.6% over 20 different signs performed by 8 different signers.
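The two core steps the abstract names, skin segmentation in YCbCr space and correlation-coefficient matching, can be sketched as follows. The BT.601 color conversion is standard; the Cb/Cr skin bounds used here are commonly cited illustrative values, not the paper's thresholds.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range conversion; rgb is float in [0, 255], shape (..., 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold Cb/Cr to keep skin-colored pixels (illustrative bounds)."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def correlation_coefficient(a, b):
    """Pearson correlation between two flattened feature vectors."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def classify(trajectory, templates):
    """Pick the gesture template whose motion trajectory correlates best
    with the input (templates maps gesture name -> feature vector)."""
    return max(templates, key=lambda k: correlation_coefficient(trajectory, templates[k]))
```

Because the Pearson coefficient normalizes out mean and scale, the match is insensitive to overall brightness of the trajectory features, which is part of why correlation matching tolerates different performers.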
Automatic recognition of Arabic alphabets sign language using deep learning
Technological advancements are helping people with special needs overcome many communication obstacles. Deep learning and computer vision models are enabling unprecedented tasks in human interaction, and the Arabic language remains a rich research area. In this paper, different deep learning models are applied to test the accuracy and efficiency of automatic Arabic sign language recognition. We provide a novel framework for the automatic recognition of Arabic sign language, based on transfer learning applied to popular deep learning models for image processing. Specifically, we train AlexNet, VGGNet, and GoogleNet/Inception models, and test the efficiency of shallow learning approaches based on support vector machines (SVM) and nearest-neighbor algorithms as baselines. As a result, we propose a novel approach for the automatic recognition of Arabic alphabets in sign language based on the VGGNet architecture, which outperformed the other trained models. The proposed model achieves promising results, recognizing Arabic sign language with an accuracy of 97%. The suggested models are tested against a recent fully labeled dataset of Arabic sign language images. The dataset contains 54,049 images and is, to the best of our knowledge, the first large and comprehensive real dataset of Arabic sign language.
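The transfer-learning recipe described above, keeping a pretrained backbone such as VGGNet frozen and training only a new classification head, reduces to multinomial logistic regression on the frozen features. Below is a minimal numpy sketch of that head-training step; the backbone is assumed to have already produced the feature vectors, and the learning rate and epoch count are arbitrary.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_head(features, labels, n_classes, lr=0.5, epochs=300):
    """Train only the classification head on frozen backbone features
    (multinomial logistic regression via full-batch gradient descent)."""
    n, d = features.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        p = softmax(features @ W + b)
        grad = (p - onehot) / n          # gradient of cross-entropy loss
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(features, W, b):
    """Class with the highest logit for each feature vector."""
    return np.argmax(features @ W + b, axis=1)
```

With a framework like PyTorch, the same idea is expressed by loading a pretrained VGG, freezing its parameters, and replacing the final fully connected layer; only that layer receives gradient updates.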
INTERACTIVE EMIRATE SIGN LANGUAGE E-DICTIONARY BASED ON DEEP LEARNING RECOGNITION MODELS
According to the Ministry of Community Development database in the United Arab Emirates (UAE), about 3,065 people with disabilities are hearing-impaired (Emirates News Agency - Ministry of Community Development). Hearing-impaired people find it difficult to communicate with the rest of society. They usually need Sign Language (SL) interpreters, but as the number of hearing-impaired individuals grows, interpreters remain almost non-existent. In addition, specialized schools lack a unified SL dictionary, which can be linked to the diglossic nature of the Arabic language, in which many dialects co-exist. Moreover, there is not sufficient research work on Arabic SL in general, which can be linked to the lack of unification in Arabic Sign Language. Hence, we present an Emirate Sign Language (ESL) electronic dictionary (e-Dictionary), consisting of four features, namely Dictation, Alpha Webcam, Vocabulary, and Spell, and two datasets (letters and vocabulary/sentences), to help the community explore and unify ESL. The vocabulary/sentences dataset was recorded with an Azure Kinect and includes 127 signs and 50 sentences, for a total of 708 clips, performed by 4 Emirati signers with hearing loss. All the signs were reviewed by the head of the Community Development Authority in the UAE for compliance. The ESL e-Dictionary integrates state-of-the-art methods, i.e., the Automatic Speech Recognition API by Google, a YOLOv8 model trained on our dataset, and an algorithm inspired by the bag-of-words model. Experimental results demonstrated the usability of the e-Dictionary in real time on laptops. The vocabulary/sentences dataset will be made publicly available in the near future for research purposes.
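The sentence-matching component ("an algorithm inspired by bag of words") can be illustrated by representing each known sentence as a bag of sign tokens and matching a recognized sign stream against the bank by cosine similarity. This is a minimal sketch with a hypothetical sentence bank, not the project's actual algorithm.

```python
from collections import Counter
import math

def bag(tokens):
    """Bag-of-words representation: token -> count, order discarded."""
    return Counter(tokens)

def cosine(a, b):
    """Cosine similarity between two Counter bags."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def best_sentence(recognized_signs, sentence_bank):
    """Map a stream of recognized signs to the closest known sentence.
    sentence_bank maps sentence text -> list of sign tokens."""
    q = bag(recognized_signs)
    return max(sentence_bank, key=lambda s: cosine(q, bag(sentence_bank[s])))
```

Discarding token order makes the matcher robust to recognition jitter (repeated or reordered signs), at the cost of conflating sentences that use the same signs in different orders.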