17 research outputs found

    Automatic Identity Recognition Using Speech Biometric

    Biometric technology refers to the automatic identification of a person using physical or behavioral traits. It is an excellent candidate for developing intelligent systems such as speaker identification, facial recognition, and signature verification, and it can be used to design and develop automatic identity recognition systems, which are in high demand in banking, employee identification, immigration, e-commerce, and similar applications. The first phase of this research emphasizes the development of an automatic identity recognizer using speech biometrics based on Artificial Intelligence (AI) techniques provided in MATLAB. For phase one, speech data was collected from 20 participants (10 male and 10 female) in order to develop the recognizer. The data consist of utterances of the English digits (0 to 9), where each participant recorded each digit 3 times, resulting in a total of 600 utterances. For phase two, speech data was collected from 100 participants (50 male and 50 female). This data is divided into text-independent and text-dependent parts: each participant recorded his/her full name 30 times, which makes up the text-independent data, while the text-dependent data consists of a short Arabic-language story of 16 sentences, with every sentence recorded 5 times by every participant. As a result, this new corpus contains 3000 (30 utterances * 100 speakers) sound files representing the text-independent data (full names) and 8000 (16 sentences * 5 utterances * 100 speakers) sound files representing the text-dependent data (the short story). For phase one, the 600 utterances underwent feature extraction and feature classification. Feature extraction uses the most widely adopted technique, the Mel-Frequency Cepstral Coefficients (MFCC), and feature classification uses the Vector Quantization (VQ) algorithm. Based on the experimental results, the highest accuracy achieved is 76%. The results show acceptable performance, which can be improved further in phase two using a larger speech data set and better-performing classification techniques such as the Hidden Markov Model (HMM).
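
    The abstract does not include the authors' MATLAB code, so the following is only a minimal Python sketch of the MFCC extraction and VQ enrollment steps it describes, using librosa and scikit-learn's KMeans as a stand-in codebook trainer; the file layout, sample rate, MFCC order, and codebook size are illustrative assumptions, not the authors' settings.

    # Sketch only: MFCC extraction per utterance, then one VQ codebook per enrolled speaker.
    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    def extract_mfcc(wav_path, sr=16000, n_mfcc=13):
        """Load one utterance and return its MFCC frames (frames x coefficients)."""
        signal, _ = librosa.load(wav_path, sr=sr)
        mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
        return mfcc.T  # one n_mfcc-dimensional feature vector per frame

    def train_codebook(wav_paths, codebook_size=32):
        """Train a VQ codebook (KMeans centroids) from all of a speaker's training utterances."""
        frames = np.vstack([extract_mfcc(p) for p in wav_paths])
        return KMeans(n_clusters=codebook_size, n_init=10).fit(frames)

    # Enrollment over the whole corpus (hypothetical file layout):
    # speaker_files = {"speaker_01": ["s01_digit0_take1.wav", ...], ...}
    # codebooks = {name: train_codebook(paths) for name, paths in speaker_files.items()}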

    Automatic identity recognition systems : a review

    Rapidly changing computer technology and the fast growth of communication channels make everyday work easier and more manageable. Technology is present everywhere: in business, education, markets, security, and so on. Communication between humans and these technologies has become the main concern of many research areas, especially the development of automatic identity recognition systems, and biometric technologies are among the most important technologies used in this area. Biometric technology refers to automatic identity recognition using physical or behavioral traits associated with a person. Using biometrics, it is possible to build physiological-based systems that depend on physiological characteristics such as fingerprint, face, and DNA; behavioral-based systems that depend on behavioral characteristics such as gait and voice; or systems that combine both. Therefore, biometric technologies are excellent candidates for developing intelligent systems such as speaker identification, facial recognition, and signature verification. In addition, biometric technologies are flexible enough to be combined with other tools to produce verification systems that are more secure and easier to use.

    Arabic automatic continuous speech recognition systems

    MSA (Modern Standard Arabic) is the current formal linguistic standard of the Arabic language; it is widely taught in schools and universities and often used in offices and the media. MSA is also considered the only acceptable form of Arabic for all native speakers [1]. As the research community has recently witnessed an improvement in the performance of ASR systems, this technology is increasingly being used for several languages of the world. Similarly, research interest in Arabic ASR has grown significantly in the past few years. It is noticeable that Arabic ASR research is conducted and investigated not only by researchers in the Arab world, but also by many others located in different parts of the world, especially in Western countries.

    Al-Quran learning using mobile speech recognition: an overview

    The use of mobile applications in various domains is widely accepted, and a variety of mobile applications have been developed to cater to users of different backgrounds. In this paper, a short survey that includes a questionnaire is distributed to gauge the interest of users in applications for learning the Quran and in the concept of mobile speech apps. The main aim of this survey is to determine user acceptance and to explain the proposed use of mobile speech recognition with learning-app features. Factors of mobile speech recognition and mobile learning are listed to support the results of the short survey.

    Speaker’s variabilities, technology and language issues that affect automatic speech and speaker recognition systems

    Automatic Speech Recognition (ASR) is gaining importance due to the vast growth in technology in general and computing in particular. From an industrial perspective, computers, laptops, and mobile devices nowadays have ASR support embedded into the operating system. In academia, on the other hand, many research efforts are being conducted on this technology in order to contribute to its state of the art. Speaker recognition systems are also growing in response to various threats; these systems are therefore mostly intended for security purposes.

    Acoustic echo cancellation using adaptive filtering algorithms for quranic accents (Qiraat) identification

    Echoed parts of Quranic recitation (Qiraat) signals suffer from reverberation, especially when they are listened to in a conference room or come from recordings found in different media such as the web. Identification of Quranic recitation rules (Tajweed) is prone to additive noise, which may reduce classification results. This research work aims to present our work towards Quranic accents (Qiraat) identification, with emphasis on acoustic echo cancellation (AEC) of all echoed Quranic signals during the preprocessing phase of the system development. In order to conduct the AEC, three adaptive algorithms, namely affine projection (AP), least mean square (LMS), and recursive least squares (RLS), are used during the preprocessing phase. Once clean Quranic signals are produced, they undergo the feature extraction and pattern classification phases. Mel Frequency Cepstral Coefficients (MFCC), the most widely used feature extraction technique, is adopted in this research work, whereas probabilistic principal component analysis (PPCA), K-nearest neighbor (KNN), and Gaussian mixture model (GMM) are used for pattern classification. In order to verify our methodology, audio files have been collected for Surat Ad-Duhaa in five different Quranic accents (Qiraat), namely: (1) Ad-Duri, (2) Al-Kisaie, (3) Hafs an A’asem, (4) Ibn Wardan, and (5) Warsh. Based on our experimental results, the AP algorithm achieved a 93.9% accuracy rate against all pattern classification techniques, including PPCA, KNN, and GMM. For LMS and RLS, the achieved accuracy rates differ across PPCA, KNN, and GMM: LMS with PPCA and GMM achieved the same accuracy rate of 96.9%, whereas LMS with KNN achieved 84.8%; RLS with PPCA and GMM achieved the same accuracy rate of 90.9%, whereas RLS with KNN achieved 78.8%. Therefore, the AP adaptive algorithm is able to reduce the echo of Quranic accent (Qiraat) signals in a consistent manner against all pattern classification techniques.
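
    The abstract names three adaptive algorithms (AP, LMS, RLS); purely as a hedged illustration of the simplest of them, the Python sketch below implements a normalized LMS echo canceller. The filter length, step size, and signal names are assumptions for illustration, not the authors' configuration.

    # Sketch only: normalized LMS adaptive filter for acoustic echo cancellation.
    import numpy as np

    def nlms_echo_cancel(far_end, mic, filter_len=128, mu=0.05, eps=1e-8):
        """Estimate the echo of `far_end` present in `mic` and subtract it, sample by sample."""
        far_end = np.asarray(far_end, dtype=float)
        mic = np.asarray(mic, dtype=float)
        w = np.zeros(filter_len)                  # adaptive filter weights (echo-path estimate)
        clean = mic.copy()                        # output: echo-reduced microphone signal
        for n in range(filter_len, len(mic)):
            x = far_end[n - filter_len:n][::-1]   # most recent reference samples, newest first
            echo_est = w @ x                      # estimated echo component at sample n
            e = mic[n] - echo_est                 # error signal = desired echo-free sample
            w += (mu / (eps + x @ x)) * e * x     # normalized LMS weight update
            clean[n] = e
        return clean

    # clean_qiraat = nlms_echo_cancel(reference_playback, microphone_recording)  # hypothetical signals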

    English digits speech recognition system based on Hidden Markov Models

    This paper aims to design and implement an English digits speech recognition system using a MATLAB graphical user interface (GUI). This work is based on the Hidden Markov Model (HMM), which provides a highly reliable way of recognizing speech. The system recognizes the speech waveform by translating it into a set of feature vectors using the Mel Frequency Cepstral Coefficients (MFCC) technique. This paper focuses on all English digits (zero through nine), based on an isolated-word structure. Two modules were developed, namely isolated-word speech recognition and continuous speech recognition. Both modules were tested in clean and noisy environments and showed successful recognition rates. In the clean environment, the isolated-word module achieved 99.5% in multi-speaker mode and 79.5% in speaker-independent mode, while the continuous module achieved 72.5% in multi-speaker mode and 56.25% in speaker-independent mode. In the noisy environment, the isolated-word module achieved 88% in multi-speaker mode and 67% in speaker-independent mode, while the continuous module achieved 82.5% in multi-speaker mode and 76.67% in speaker-independent mode. These recognition rates are relatively successful when compared to similar systems.
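
    The original system is a MATLAB GUI; purely as an illustration of the per-word HMM approach described, the following Python sketch trains one Gaussian HMM per digit on MFCC frame sequences and picks the highest-scoring model at test time. The use of hmmlearn, the number of states, and the data layout are assumptions, not the paper's implementation.

    # Sketch only: one HMM per digit, recognition by maximum log-likelihood.
    import numpy as np
    from hmmlearn import hmm

    def train_digit_hmm(mfcc_sequences, n_states=5):
        """Fit one Gaussian HMM on all training MFCC sequences (frames x coeffs) of a single digit."""
        X = np.vstack(mfcc_sequences)                    # stack frames of every training take
        lengths = [len(seq) for seq in mfcc_sequences]   # frame count of each take
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        model.fit(X, lengths)
        return model

    def recognize_digit(mfcc_frames, digit_models):
        """Return the digit whose HMM assigns the highest log-likelihood to the test frames."""
        return max(digit_models, key=lambda d: digit_models[d].score(mfcc_frames))

    # digit_models = {d: train_digit_hmm(training_mfcc[d]) for d in range(10)}  # hypothetical data dict
    # predicted = recognize_digit(test_mfcc_frames, digit_models)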

    Automatic person identification system using handwritten signatures

    This paper reports the design, implementation, and evaluation of a research work for developing an automatic person identification system using the handwritten signature biometric. The developed system mainly uses toolboxes provided by the MATLAB environment. In order to train and test the system, an in-house hand signature database is created, containing the hand signatures of 100 persons (50 males and 50 females), each repeated 30 times, for a total of 3000 hand signatures. The collected signatures go through pre-processing steps such as digitizing the signatures with a scanner, converting the input images to standard binary images, cropping, normalizing image size, and reshaping, in order to produce a ready-to-use hand signature database for training and testing. Global features such as signature height, image area, pure width, and pure height are then selected for use in the system; these reflect information about the structure of the hand signature image. For feature training and classification, the Multi-Layer Perceptron (MLP) architecture of an Artificial Neural Network (ANN) is used. This paper also investigates the effect of the persons’ gender on the overall performance of the system. For performance optimization, the effect of modifying the values of basic ANN parameters such as the number of hidden neurons and the number of epochs is investigated. The handwritten signature data collected from male persons outperformed those collected from female persons: the system obtained average recognition rates of 76.20% and 74.20% for male and female persons, respectively. Overall, the handwritten signature based system obtained an average recognition rate of 75.20% for all persons.
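
    The paper's MATLAB implementation is not shown in the abstract; the sketch below is a hedged Python approximation of the described pipeline, computing simple global features from binary signature images and training a scikit-learn MLP. The exact feature definitions (e.g., what counts as "pure width" and "pure height") and the network size are interpretations, not the authors' values.

    # Sketch only: global features of a binary signature image, classified with an MLP.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def global_features(binary_img):
        """Simple global features of a binary signature image (True/1 = ink pixels)."""
        rows_with_ink = np.any(binary_img, axis=1)
        cols_with_ink = np.any(binary_img, axis=0)
        pure_height = int(rows_with_ink.sum())   # number of rows containing ink
        pure_width = int(cols_with_ink.sum())    # number of columns containing ink
        area = int(binary_img.sum())             # total number of ink pixels
        height = binary_img.shape[0]             # height of the normalized image
        return np.array([height, area, pure_width, pure_height], dtype=float)

    # Hypothetical training data: a list of binary images and their person labels.
    # X = np.array([global_features(img) for img in signature_images])
    # y = np.array(person_labels)
    # clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500).fit(X, y)
    # print(clf.score(X_test, y_test))  # recognition rate on held-out signatures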

    Voice based automatic person identification system using vector quantization

    This paper presents the design, implementation, and evaluation of a research work for developing an automatic person identification system using the voice biometric. The developed system mainly uses toolboxes provided by the MATLAB environment. To extract features from the voice signals, the Mel-Frequency Cepstral Coefficients (MFCC) technique is applied, producing a set of feature vectors. Subsequently, the system uses Vector Quantization (VQ) for feature training and classification. In order to train and test the system, an in-house voice database is created, containing recordings of 100 persons’ usernames (50 males and 50 females), each repeated 30 times, for a total of 3000 utterances. This paper also investigates the effect of the persons’ gender on the overall performance of the system. The voice data collected from female persons outperformed those collected from male persons: the system obtained average recognition rates of 94.20% and 91.00% for female and male persons, respectively. Overall, the voice based system obtained an average recognition rate of 92.60% for all persons.
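
    Complementing the enrollment sketch given after the first abstract, the following hedged Python fragment shows the identification step of such an MFCC + VQ system: the test utterance is assigned to the enrolled speaker whose codebook yields the lowest average quantization distortion. The codebook objects are assumed to be the KMeans models trained in that earlier sketch, and the test file name is hypothetical.

    # Sketch only: identification by minimum average VQ distortion across speaker codebooks.
    def vq_distortion(mfcc_frames, codebook):
        """Mean distance from each test frame to its nearest codebook centroid (KMeans model)."""
        # KMeans.transform gives distances to every centroid; keep the minimum per frame.
        return codebook.transform(mfcc_frames).min(axis=1).mean()

    def identify_speaker(mfcc_frames, codebooks):
        """Return the enrolled speaker whose codebook quantizes the test frames with least distortion."""
        return min(codebooks, key=lambda spk: vq_distortion(mfcc_frames, codebooks[spk]))

    # speaker = identify_speaker(extract_mfcc("unknown_username.wav"), codebooks)  # hypothetical test file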

    Phonetically rich and balanced Arabic speech corpus: An overview

    The lack of spoken and written training data is one of the main issues encountered by Arabic automatic speech recognition (ASR) researchers. Almost no written or spoken corpora are readily available to the public, and many of them can only be obtained by purchase from the Linguistic Data Consortium (LDC) or the European Language Resources Association (ELRA). Spoken training data are scarcer than written training data, resulting in a great need for more speech corpora to serve different domains of Arabic ASR. The available spoken corpora were mainly collected from broadcast news (radio and television) and telephone conversations, which have certain technical and quality shortcomings. In order to produce a robust speaker-independent continuous automatic Arabic speech recognizer, a set of speech recordings that is rich and balanced is required: rich in the sense that it must contain all the phonemes of the Arabic language, and balanced in the sense that it must preserve the phonetic distribution of Arabic. This set of speech recordings must be based on a proper written set of sentences and phrases created by experts; it is therefore crucial to create a high-quality written (text) set of sentences and phrases before recording them. This work adds a new kind of possible speech data for Arabic text and speech applications, besides other kinds such as broadcast news and telephone conversations. It is therefore an invitation to all Arabic ASR developers and research groups to explore and capitalize on it.
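
    The abstract defines "rich" as covering all Arabic phonemes and "balanced" as preserving their natural distribution. Purely as an illustrative aid, the sketch below shows one way such criteria could be checked for a candidate sentence set; the grapheme-to-phoneme function, the phoneme inventory, and the reference distribution are hypothetical placeholders, not part of the corpus described above.

    # Sketch only: richness (coverage) and balance (distance to a reference distribution) of a sentence set.
    from collections import Counter

    def coverage_and_balance(sentences, to_phonemes, all_phonemes, ref_dist):
        """Return (phonemes never covered, L1 distance to the reference phoneme distribution)."""
        counts = Counter(p for s in sentences for p in to_phonemes(s))
        missing = set(all_phonemes) - set(counts)   # any missing phoneme means the set is not "rich"
        total = sum(counts.values())
        # 0.0 means the corpus distribution matches the reference exactly ("balanced")
        imbalance = sum(abs(counts[p] / total - ref_dist.get(p, 0.0)) for p in all_phonemes)
        return missing, imbalance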