7 research outputs found

    English digits speech recognition system based on hidden Markov Models

    The field of Automatic Speech Recognition (ASR) is about 60 years old. There have been many interesting advances and developments since the invention of the first speech recognizer at Bell Labs in the early 1950s. The development of ASR increased gradually until the invention of Hidden Markov Models (HMM) in the early 1970s. Researchers' contributions have applied ASR technology to produce the various advancements seen nowadays in fields such as multi-modal and multi-lingual/cross-lingual ASR, using statistical techniques such as HMM, SVM, neural networks, etc. [1]

    Speech recognition system using MATLAB : design, implementation, and samples codes

    Research in automatic speech recognition has been conducted for almost four decades, and over that time the development of speech recognition applications has made invaluable contributions. Speech has the potential to be a better interface than input devices such as the keyboard or mouse. This project aims to develop an automated English digits speech recognition system. The project relies heavily on the well-known and widely used statistical method for characterizing speech patterns, the Hidden Markov Model (HMM), which provides a highly reliable way of recognizing speech. This project discusses the theory of HMM and then extends the ideas to development and implementation by applying the method to computational speech recognition. Basically, the system recognizes spoken utterances by translating the speech waveform into a set of feature vectors using the Mel Frequency Cepstral Coefficients (MFCC) technique, then estimating the observation likelihood using the Forward algorithm. The HMM parameters are estimated by applying the Baum-Welch algorithm to previously trained samples. The most likely sequence is then decoded using the Viterbi algorithm, thus producing the recognized word. This project covers all English digits (zero through nine) in an isolated-words structure. Two modules were developed, namely isolated-words speech recognition and continuous speech recognition. Both modules were tested in clean and noisy environments and showed relatively successful recognition rates. In the clean environment, the isolated-words module achieved 99.5% in multi-speaker mode and 79.5% in speaker-independent mode, while the continuous module achieved 70% and 55%, respectively. However, in the noisy environment, the isolated-words module achieved 88% in multi-speaker mode and 67% in speaker-independent mode, while the continuous module achieved 92.5% and 75%, respectively. These recognition rates are relatively successful compared to similar systems.
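The Viterbi decoding step described in this abstract can be sketched briefly. This is an illustrative NumPy reimplementation, not the project's MATLAB code: it assumes each digit has its own trained HMM, works entirely in the log domain for numerical stability, and assigns an utterance to the digit whose model scores it highest.

```python
import numpy as np

def viterbi(log_A, log_B, log_pi):
    """Log-likelihood of the single best state path through one HMM.

    log_A  : (N, N) log transition-probability matrix
    log_B  : (T, N) log observation likelihoods per frame and state
    log_pi : (N,)   log initial-state probabilities
    """
    T, N = log_B.shape
    delta = log_pi + log_B[0]                 # best score ending in each state
    for t in range(1, T):
        # extend every path by one transition, keep the best per state
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[t]
    return delta.max()

def recognize(word_hmms):
    """word_hmms: word -> (log_A, log_B, log_pi) evaluated on one utterance.
    Returns the word whose HMM explains the utterance best."""
    return max(word_hmms, key=lambda w: viterbi(*word_hmms[w]))
```

In a digit recognizer, `log_B` would come from evaluating each state's MFCC observation model on every frame; here those likelihoods are taken as given.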

    Signature recognition using artificial neural network

    Nowadays, many applications require the user to confirm his or her identity. This may be done by asking a secret question that the user must answer, or by means of a password, a PIN code, or a face, eye, fingerprint, or signature. Automatic signature verification is an active field of research with many practical applications. Automatic handwritten signature verification is divided into two approaches: off-line and on-line. In the off-line approach, the signature data is obtained from a static image produced by a scanning device [1]. For our application, the off-line approach is utilized. Neural Networks (NN), also known as Artificial Neural Networks (ANN), belong to the artificial intelligence approaches that attempt to mechanize the recognition procedure according to the way a person applies intelligence in visualizing and analyzing [2]. The structure of neural networks is inspired by biological models of the nervous system, proposed as a model of the human brain's activities with the aim of mimicking certain processing capabilities of the human brain.

    English digits speech recognition system based on Hidden Markov Models

    This paper aims to design and implement an English digits speech recognition system using a MATLAB graphical user interface (GUI). The work is based on the Hidden Markov Model (HMM), which provides a highly reliable way of recognizing speech. The system recognizes speech by translating the waveform into a set of feature vectors using the Mel Frequency Cepstral Coefficients (MFCC) technique. The paper covers all English digits (zero through nine) in an isolated-words structure. Two modules were developed, namely isolated-words speech recognition and continuous speech recognition. Both modules were tested in clean and noisy environments and showed successful recognition rates. In the clean environment, the isolated-words module achieved 99.5% in multi-speaker mode and 79.5% in speaker-independent mode, while the continuous module achieved 72.5% and 56.25%, respectively. However, in the noisy environment, the isolated-words module achieved 88% in multi-speaker mode and 67% in speaker-independent mode, while the continuous module achieved 82.5% and 76.67%, respectively. These recognition rates are relatively successful compared to similar systems.
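The MFCC front end referred to in these abstracts follows a classic pipeline: framing, windowing, power spectrum, triangular mel filterbank, log compression, and a DCT. The sketch below is a bare-bones NumPy version of that textbook pipeline, not the authors' MATLAB implementation; the sample rate, FFT size, hop, and coefficient counts are illustrative defaults, not the paper's settings.

```python
import numpy as np

def mfcc(signal, sr=8000, n_fft=256, hop=128, n_mels=20, n_ceps=13):
    """Textbook MFCC extraction; returns (n_frames, n_ceps) features."""
    # Frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2

    # Triangular mel filterbank spanning 0 Hz to the Nyquist frequency
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    logmel = np.log(power @ fbank.T + 1e-10)

    # DCT-II decorrelates the log-mel energies; keep the first n_ceps
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T
```

Each row of the result is one feature vector of the kind the HMM observation models are trained on.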

    Automatic person identification system using handwritten signatures

    This paper reports the design, implementation, and evaluation of a research work on developing an automatic person identification system using the handwritten signature biometric. The developed system mainly used toolboxes provided by the MATLAB environment. In order to train and test the system, an in-house hand signatures database was created, containing the hand signatures of 100 persons (50 males and 50 females), each repeated 30 times, for a total of 3000 hand signatures. The collected signatures went through pre-processing steps such as producing a digitized version of the signatures using a scanner, converting the input images to a standard binary image type, cropping, normalizing the image size, and reshaping, in order to produce a ready-to-use hand signatures database for training and testing. Global features such as signature height, image area, pure width, and pure height, which reflect information about the structure of the hand signature image, were then selected for use in the system. For feature training and classification, the Multi-Layer Perceptron (MLP) architecture of Artificial Neural Networks (ANN) was used. This paper also investigates the effect of the persons' gender on the overall performance of the system. For performance optimization, the effect of modifying the values of basic ANN parameters, such as the number of hidden neurons and the number of epochs, was investigated. The handwritten signature data collected from male persons outperformed those collected from female persons: the system obtained average recognition rates of 76.20% and 74.20% for male and female persons, respectively. Overall, the handwritten signature based system obtained an average recognition rate of 75.20% for all persons.
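The global features named in this abstract can be computed cheaply from the pre-processed binary image. The sketch below assumes "image area" means the count of ink pixels and "pure width/height" means the dimensions of the tight bounding box around the ink; those readings are my interpretation, since the paper does not define the terms here, and "signature height" (which likely depends on the normalization step) is omitted.

```python
import numpy as np

def global_features(binary_img):
    """Global structural features of a binary signature image (ink = 1).

    Returns (image_area, pure_width, pure_height):
    image_area  - number of ink pixels
    pure_width  - width of the tight bounding box around the ink
    pure_height - height of the tight bounding box around the ink
    """
    ys, xs = np.nonzero(binary_img)          # coordinates of all ink pixels
    area = int(xs.size)
    pure_width = int(xs.max() - xs.min() + 1)
    pure_height = int(ys.max() - ys.min() + 1)
    return area, pure_width, pure_height
```

A feature vector like this, stacked per signature, is what would feed the MLP classifier.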

    Voice based automatic person identification system using vector quantization

    This paper presents the design, implementation, and evaluation of a research work on developing an automatic person identification system using the voice biometric. The developed system mainly used toolboxes provided by the MATLAB environment. To extract features from voice signals, the Mel-Frequency Cepstral Coefficients (MFCC) technique was applied, producing a set of feature vectors. The system then uses Vector Quantization (VQ) for feature training and classification. In order to train and test the system, an in-house voice database was created, containing recordings of the usernames of 100 persons (50 males and 50 females), each repeated 30 times, for a total of 3000 utterances. This paper also investigates the effect of the persons' gender on the overall performance of the system. The voice data collected from female persons outperformed those collected from male persons: the system obtained average recognition rates of 94.20% and 91.00% for female and male persons, respectively. Overall, the voice based system obtained an average recognition rate of 92.60% for all persons.
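VQ-based speaker identification of the kind described here is usually implemented as one codebook per speaker, trained by k-means over that speaker's MFCC frames, with an unknown utterance assigned to the speaker whose codebook quantizes it with the lowest average distortion. This is a generic sketch of that standard scheme, not the paper's MATLAB code; the codebook size and iteration count are illustrative.

```python
import numpy as np

def train_codebook(frames, k=8, iters=20, seed=0):
    """Build a VQ codebook via plain k-means over one speaker's MFCC frames."""
    rng = np.random.default_rng(seed)
    codebook = frames[rng.choice(len(frames), k, replace=False)]
    for _ in range(iters):
        # assign every frame to its nearest codeword, then recompute centroids
        d = np.linalg.norm(frames[:, None] - codebook[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = frames[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def identify(frames, codebooks):
    """Return the speaker whose codebook yields the lowest average distortion."""
    def distortion(cb):
        d = np.linalg.norm(frames[:, None] - cb[None], axis=2)
        return d.min(axis=1).mean()
    return min(codebooks, key=lambda spk: distortion(codebooks[spk]))
```

Enrollment builds one codebook per person from their 30 username recordings; identification scores a new utterance against all 100 codebooks.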

    Fusion of speech and handwritten signatures biometrics for person identification

    Automatic person identification (API) using human biometrics is essential and in high demand compared to traditional API methods; a person is automatically identified using his or her distinct characteristics, including speech, fingerprint, iris, handwritten signature, and others. Fusing more than one human biometric produces bimodal and multimodal API systems that normally outperform single-modality systems. This paper presents our work towards fusing speech and handwritten signatures to develop a bimodal API system, where fusion was conducted at the decision level due to the differences in the type and format of the extracted features. A data set was created containing recordings of the usernames and handwritten signatures of 100 persons (50 males and 50 females), where each person recorded his or her username 30 times and provided his or her handwritten signature 30 times, for a total of 3000 utterances and 3000 handwritten signatures. The speech API used the Mel-Frequency Cepstral Coefficients (MFCC) technique for feature extraction and Vector Quantization (VQ) for feature training and classification. The handwritten signature API, on the other hand, used global features reflecting the structure of the hand signature image, such as image area, pure height, pure width, and signature height, together with the Multi-Layer Perceptron (MLP) architecture of Artificial Neural Networks for feature training and classification. Once the best matches for both the speech and the handwritten signature APIs are produced, fusion takes place at the decision level: the system computes the difference between the two best matches for each modality and selects the modality with the maximum difference. Based on our experimental results, the bimodal API obtained an average recognition rate of 96.40%, whereas the speech API and the handwritten signature API obtained average recognition rates of 92.60% and 75.20%, respectively. Therefore, the bimodal API system outperforms the single-modality API systems.
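The decision-level fusion rule in this abstract reduces to a few lines: per modality, measure the gap between the best and second-best match scores, then trust the modality with the larger gap. The sketch below assumes both modalities' scores have already been normalized so that higher means a better match (for VQ, raw distortion would first need to be negated or inverted); that normalization step is my assumption, not stated in the abstract.

```python
def fuse(speech_scores, signature_scores):
    """Decision-level fusion: pick the identity from the more confident
    modality, where confidence is the gap between its top two match scores.

    Each argument maps candidate identity -> similarity score (higher = better).
    """
    def top_two(scores):
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        (best_id, best), (_, second) = ranked[0], ranked[1]
        return best_id, best - second

    speech_id, speech_gap = top_two(speech_scores)
    sig_id, sig_gap = top_two(signature_scores)
    return speech_id if speech_gap >= sig_gap else sig_id
```

For example, if the speech scores separate the top candidate clearly while the signature scores are nearly tied, the speech decision wins, which matches the reported dominance of the stronger modality.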