
    Military applications of automatic speech recognition and future requirements

    An updated summary of the state of the art of automatic speech recognition and its relevance to military applications is provided. A number of potential systems for military applications are under development, including: (1) digital narrowband communication systems; (2) automatic speech verification; (3) an on-line cartographic processing unit; (4) word recognition for a militarized tactical data system; and (5) voice recognition and synthesis for the aircraft cockpit.

    Voice data entry in air traffic control

    Several of the keyboard data-entry languages were tabulated and analyzed. The language chosen as a test vehicle was that used by the nonradar (flight data) controllers. This application was selected because it could be pursued cost-effectively and with comparatively little research and development.

    A Machine of Few Words -- Interactive Speaker Recognition with Reinforcement Learning

    Speaker recognition is a well-known and well-studied task in the speech processing domain. It has many applications, whether for security or for speaker adaptation of personal devices. In this paper, we present a new paradigm for automatic speaker recognition that we call Interactive Speaker Recognition (ISR). In this paradigm, the recognition system aims to incrementally build a representation of the speakers by requesting personalized utterances to be spoken, in contrast to the standard text-dependent or text-independent schemes. To do so, we cast the speaker recognition task as a sequential decision-making problem that we solve with Reinforcement Learning. Using a standard dataset, we show that our method achieves excellent performance while using only small amounts of speech signal. The method could also be applied as an utterance selection mechanism for building speech synthesis systems.
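    The abstract's core idea, choosing which utterances to request as a sequential decision rather than using a fixed prompt, can be illustrated with a toy sketch. Everything below is assumed for illustration: the random per-word speaker embeddings, the similarity-based `identify` scorer, and `greedy_word_policy`, a hand-crafted stand-in for the learned RL policy that simply requests the most speaker-discriminative words.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each of 5 enrolled speakers has a fixed embedding per word.
# In ISR the system *chooses* which words to request next, instead of a fixed
# text-dependent or text-independent protocol.
n_speakers, n_words, dim = 5, 10, 32
word_embeddings = rng.normal(size=(n_speakers, n_words, dim))

def identify(target, requested_words):
    """Score each enrolled speaker by inner-product similarity over the
    requested words only, and return the best-matching speaker index."""
    obs = word_embeddings[target, requested_words]   # utterances the speaker produced
    scores = [np.sum(word_embeddings[s, requested_words] * obs)
              for s in range(n_speakers)]
    return int(np.argmax(scores))

def greedy_word_policy(budget=3):
    """Toy stand-in for the learned RL policy: request the words whose
    embeddings vary most across speakers (most discriminative)."""
    spread = word_embeddings.var(axis=0).sum(axis=1)  # per-word variance across speakers
    return list(np.argsort(spread)[::-1][:budget])

words = greedy_word_policy(budget=3)
correct = sum(identify(t, words) == t for t in range(n_speakers))
print(f"identified {correct}/{n_speakers} speakers from {len(words)} requested words")
```

    The point of the sketch is the interaction loop: only a small, adaptively chosen subset of words is ever requested, mirroring the paper's claim of strong performance from little speech signal.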

    Evaluation of preprocessors for neural network speaker verification


    On adaptive decision rules and decision parameter adaptation for automatic speech recognition

    Recent advances in automatic speech recognition are accomplished by designing a plug-in maximum a posteriori decision rule such that the forms of the acoustic and language model distributions are specified and the parameters of the assumed distributions are estimated from a collection of speech and language training corpora. Maximum-likelihood point estimation is by far the most prevalent training method. However, due to the problems of unknown speech distributions, sparse training data, high spectral and temporal variabilities in speech, and possible mismatch between training and testing conditions, a dynamic training strategy is needed. To cope with changing speakers and speaking conditions in real operational conditions for high-performance speech recognition, such adaptive paradigms incorporate a small amount of speaker- and environment-specific adaptation data into the training process. Bayesian adaptive learning is an optimal way to combine prior knowledge in an existing collection of general models with a new set of condition-specific adaptation data. In this paper, the mathematical framework for Bayesian adaptation of acoustic and language model parameters is first described. Maximum a posteriori point estimation is then developed for hidden Markov models and a number of useful parameter densities commonly used in automatic speech recognition and natural language processing.
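    The Bayesian combination of a general prior model with a small amount of adaptation data can be made concrete with the standard conjugate-prior MAP estimate for a Gaussian mean, the basic building block of MAP-adapted HMM observation densities. This is a minimal sketch: the function name and the relevance factor `tau` are illustrative choices, not from the paper.

```python
import numpy as np

def map_adapt_mean(mu0, adaptation_frames, tau=10.0):
    """MAP estimate of a Gaussian mean under a conjugate Gaussian prior.

    mu0               -- prior mean from the speaker-independent model
    adaptation_frames -- (n, d) array of condition-specific feature frames
    tau               -- assumed relevance factor: how many 'virtual frames'
                         of weight the prior carries against the new data
    """
    n = len(adaptation_frames)
    x_bar = np.mean(adaptation_frames, axis=0)
    # Data-count-weighted interpolation between prior mean and sample mean:
    # with little data the estimate stays near mu0; as n grows it moves
    # toward the adaptation data's sample mean.
    return (tau * mu0 + n * x_bar) / (tau + n)

mu0 = np.zeros(3)                  # speaker-independent mean
frames = np.full((5, 3), 2.0)      # 5 adaptation frames, all at value 2.0
mu_map = map_adapt_mean(mu0, frames, tau=10.0)
# arithmetic check: (10*0 + 5*2) / (10 + 5) = 2/3 per dimension
print(mu_map)
```

    The interpolation behavior is exactly the "dynamic training strategy" the abstract motivates: the prior dominates when adaptation data is sparse, and the data dominates as more condition-specific frames arrive.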

    Feature extraction and feature reduction for spoken letter recognition

    The complexity of finding the relevant features for the classification of spoken letters is due to the phonetic similarities between letters and their high dimensionality. Spoken-letter classification in the machine learning literature has often relied on very convoluted algorithms to achieve successful classification. The success of this work lies in the high classification rate as well as the relatively small amount of computation required from signal retrieval through feature selection. The relevant features spring from an analysis of the sequential properties of the vectors produced by a Fourier transform. The study mainly focuses on the classification of the letters f and s, m and n, and the e-set (b, c, d, e, g, p, t, v, z), which are highly confusable, especially when transmitted over modern VoIP digital devices. Another feature of this research is that the dataset was produced without noise-reducing signal processing, which is shown to yield equivalent and sometimes better results; all pops and static noise were kept as part of the sound files. This is in contrast to other research whose datasets were recorded with high-grade equipment and noise-reduction algorithms. The audio files were classified with the random forest algorithm, which was successful because the features produced were largely separable in relatively few dimensions. Classification accuracies were in the 92% to 97% range, depending on the dataset.
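    The described pipeline, Fourier-transform features fed to a random forest, can be sketched end to end on synthetic data. The noisy sine waves below are stand-ins for spoken-letter recordings (the paper's actual data is not reproduced), and all names and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

def make_signal(freq, n=256):
    """Noisy sine wave standing in for one spoken-letter recording."""
    t = np.arange(n) / n
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=n)

# Two synthetic "letters", distinguished only by dominant frequency; note the
# noise is left in, echoing the paper's choice to skip noise reduction.
X = np.array([np.abs(np.fft.rfft(make_signal(f)))   # magnitude-spectrum features
              for f in [20] * 50 + [35] * 50])
y = np.array([0] * 50 + [1] * 50)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[::2], y[::2])             # train on even-indexed examples
acc = clf.score(X[1::2], y[1::2])   # evaluate on held-out odd-indexed examples
print(f"held-out accuracy: {acc:.2f}")
```

    Because the spectral peaks land in different FFT bins, the feature vectors are separable in very few dimensions, the same property the abstract credits for the random forest's success on real spoken letters.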