12 research outputs found

    The observation likelihood of silence: analysis and prospects for VAD applications

    This paper presents a study of the behaviour of the observation likelihoods generated by the central state of a silence HMM (Hidden Markov Model) trained for Automatic Speech Recognition (ASR) using cepstral mean and variance normalization (CMVN). We have seen that the observation likelihood shows a stable behaviour under different recording conditions, and this characteristic can be used to discriminate between speech and silence frames. We present several experiments which prove that the mere use of a decision threshold produces robust results for very different recording channels and noise conditions. The results have also been compared with those obtained by two standard VAD systems, showing promising prospects. All in all, observation likelihood scores could be useful as the basis for the development of future VAD systems, with further research and analysis to refine the results. This work has been partially supported by the EU (FEDER) under grant TEC2015-67163-C2-1-R (RESTORE) (MINECO/FEDER, UE) and by the Basque Government under grant KK-2017/00043 (BerbaOla).
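The threshold-based decision described in this abstract can be sketched as follows. This is a minimal illustration, assuming a single diagonal-covariance Gaussian as the silence model and an arbitrary threshold value; the paper's actual HMM state and threshold are not specified here.

```python
import math

def gaussian_log_likelihood(frame, mean, var):
    """Log-likelihood of a feature frame under a diagonal-covariance Gaussian."""
    ll = 0.0
    for x, m, v in zip(frame, mean, var):
        ll += -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
    return ll

def vad_decision(frame, silence_mean, silence_var, threshold=-5.0):
    """Label a frame as silence if its likelihood under the silence model
    exceeds the decision threshold; otherwise label it as speech."""
    ll = gaussian_log_likelihood(frame, silence_mean, silence_var)
    return "silence" if ll >= threshold else "speech"

# Toy 2-dimensional cepstral features; after CMVN, silence sits near zero
silence_mean, silence_var = [0.0, 0.0], [1.0, 1.0]
print(vad_decision([0.1, -0.2], silence_mean, silence_var))  # close to the silence model
print(vad_decision([4.0, 3.5], silence_mean, silence_var))   # far from the silence model
```

The appeal of this scheme, as the abstract argues, is that under CMVN the silence-state likelihood stays stable across channels, so a single fixed threshold can work in very different recording conditions.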

    Exploration and Optimization of Noise Reduction Algorithms for Speech Recognition in Embedded Devices

    Environmental noise present in real-life applications substantially degrades the performance of speech recognition systems. An example is an in-car scenario where a speech recognition system has to support the man-machine interface. Several sources of noise, coming from the engine, wipers, wheels, etc., interact with speech. A special challenge arises in an open-window scenario, where traffic noise, parking noise, etc., must also be considered. The main goal of this thesis is to improve the performance of a speech recognition system based on a state-of-the-art hidden Markov model (HMM) using noise reduction methods. The performance is measured with respect to word error rate and with the method of mutual information. The noise reduction methods are based on weighting rules. Least-squares weighting rules in the frequency domain have been developed to enable a continuous development based on the existing system and also to guarantee its low complexity and footprint for applications in embedded devices. The weighting rule parameters are optimized using a multidimensional optimization procedure: Monte Carlo sampling followed by a compass search method. Root compression and cepstral smoothing methods have also been implemented to boost the recognition performance. The additional complexity and memory requirements of the proposed system are minimal. The performance of the proposed system was compared to the European Telecommunications Standards Institute (ETSI) standardized system. The proposed system outperforms the ETSI system by up to 8.6 % relative increase in word accuracy and achieves up to 35.1 % relative increase in word accuracy compared to the existing baseline system on the ETSI Aurora 3 German task. A relative increase of up to 18 % in word accuracy over the existing baseline system is also obtained from the proposed weighting rules on large vocabulary databases.
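A frequency-domain weighting rule of the general kind described above can be sketched as a per-bin gain applied to the noisy power spectrum. This is a generic Wiener-type gain with a spectral floor, shown only as an illustration; the thesis's actual least-squares weighting rules and their optimized parameters are not reproduced here.

```python
def wiener_gain(noisy_power, noise_power, floor=0.1):
    """Per-bin weighting: attenuate frequency bins dominated by noise.
    The spectral floor limits over-attenuation, which otherwise produces
    musical-noise artifacts."""
    snr_est = max(noisy_power - noise_power, 0.0) / noise_power  # crude SNR estimate
    gain = snr_est / (1.0 + snr_est)
    return max(gain, floor)

def apply_weighting(noisy_spectrum_power, noise_estimate_power):
    """Apply the gain to each bin of a noisy power spectrum."""
    return [wiener_gain(p, n) * p
            for p, n in zip(noisy_spectrum_power, noise_estimate_power)]

# Three bins: speech-dominated, marginal, and noise-only
cleaned = apply_weighting([10.0, 1.2, 0.5], [1.0, 1.0, 1.0])
```

In an embedded setting, the attraction of such rules is that they reduce to a handful of multiplications per bin, which is why the thesis can keep the added complexity and memory footprint minimal.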
An entropy-based feature vector analysis method has also been developed to assess the quality of feature vectors. The entropy estimation is based on the histogram approach. The method has the advantage of objectively assessing the feature vector quality regardless of the acoustic modeling assumptions used in the speech recognition system.
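The histogram approach to entropy estimation mentioned above can be sketched in a few lines; the bin count and the interpretation of the score are illustrative assumptions, not the thesis's actual configuration.

```python
import math

def histogram_entropy(values, num_bins=10):
    """Estimate the entropy of a feature's distribution from a histogram.
    The value range is split into equal-width bins, bin frequencies serve
    as probability estimates, and Shannon entropy is computed over them."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins or 1.0  # guard against a degenerate range
    counts = [0] * num_bins
    for v in values:
        idx = min(int((v - lo) / width), num_bins - 1)
        counts[idx] += 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

# A widely spread feature carries more entropy than a concentrated one
spread = histogram_entropy([i / 10 for i in range(100)])
peaked = histogram_entropy([0.0] * 95 + [1.0] * 5)
```

Because the score depends only on the empirical feature distribution, it can compare front-ends without reference to any particular acoustic model, which is the property the abstract highlights.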

    On the potential of channel selection for recognition of reverberated speech with multiple microphones

    The performance of ASR systems in a room environment with distant microphones is strongly affected by reverberation. As the degree of signal distortion varies among acoustic channels (i.e. microphones), recognition accuracy can benefit from a proper channel selection. In this paper, we experimentally show that there exists a large margin for WER reduction by channel selection, and discuss several possible methods which do not require any a priori classification. Moreover, using an LVCSR task, a significant WER reduction is shown with a simple technique which uses a measure computed from the sub-band time envelopes of the various microphone signals.
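The intuition behind envelope-based channel selection is that reverberation smears the temporal modulations of speech, flattening the signal envelope. A minimal sketch of that idea follows; it uses a single full-band magnitude envelope rather than the sub-band filterbank envelopes of the actual measure, and the toy signals are invented for illustration.

```python
def envelope_variance(signal, win=4):
    """Variance of the smoothed magnitude envelope of one channel.
    Reverberation fills the gaps between speech events, lowering this variance."""
    env = [sum(abs(x) for x in signal[i:i + win]) / win
           for i in range(len(signal) - win + 1)]
    mean = sum(env) / len(env)
    return sum((e - mean) ** 2 for e in env) / len(env)

def select_channel(channels):
    """Rank channels by envelope variance and pick the least reverberant one."""
    return max(range(len(channels)), key=lambda i: envelope_variance(channels[i]))

dry = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0] * 4  # strong on/off modulation
smeared = [0.5] * 32                                 # reverberation-like smoothing
best = select_channel([smeared, dry])                # picks the modulated channel
```

A selection measure of this kind needs no a priori classification and no access to the recognizer, which is what makes it cheap enough to run per utterance across many microphones.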

    The synergy between bounded-distance HMM and spectral subtraction for robust speech recognition

    Additive noise causes significant performance losses in automatic speech recognition systems. In this paper, we show that one of the causes contributing to these losses is that conventional recognisers take into consideration feature values that are outliers. The method that we call bounded-distance HMM is a suitable way to prevent outliers from contributing to the recogniser's decision. However, this method deals only with outliers, leaving the remaining features unaltered. In contrast, spectral subtraction is able to correct all the features, at the expense of introducing some artifacts that, as shown in the paper, cause a larger number of outliers. As a result, we find that bounded-distance HMM and spectral subtraction complement each other well. A comprehensive experimental evaluation was conducted, considering several well-known ASR tasks (of different complexities) and numerous noise types and SNRs. The results show that the suggested combination generally outperforms both bounded-distance HMM and spectral subtraction individually. Furthermore, the obtained improvements, especially for low and medium SNRs, are larger than the sum of the improvements obtained by each method individually.
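The core of the bounded-distance idea can be sketched for a single diagonal-covariance Gaussian: cap each dimension's normalised distance so that an outlier feature cannot dominate the state score. The bound value and the toy features below are illustrative assumptions, not the paper's settings.

```python
import math

def bounded_log_likelihood(frame, mean, var, max_dist=4.0):
    """Diagonal-Gaussian log-likelihood where each dimension's squared
    normalised distance is capped at max_dist**2, so a single outlier
    feature contributes at most a bounded penalty."""
    ll = 0.0
    for x, m, v in zip(frame, mean, var):
        d2 = min((x - m) ** 2 / v, max_dist ** 2)  # bound the distance
        ll += -0.5 * (math.log(2 * math.pi * v) + d2)
    return ll

# An extreme outlier in one dimension barely changes the bounded score,
# whereas an unbounded Gaussian would assign it a huge penalty.
clean_score = bounded_log_likelihood([0.1, 0.2], [0.0, 0.0], [1.0, 1.0])
outlier_score = bounded_log_likelihood([0.1, 50.0], [0.0, 0.0], [1.0, 1.0])
```

This also makes the paper's finding plausible: spectral subtraction repairs the bulk of the features but creates extra outliers, and the bound then neutralises exactly those, so the two methods compose well.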

    Essential Speech and Language Technology for Dutch: Results by the STEVIN-programme

    Computational Linguistics; Germanic Languages; Artificial Intelligence (incl. Robotics); Computing Methodologies

    Channel selection and reverberation-robust automatic speech recognition

    If speech is acquired by a close-talking microphone in a controlled and noise-free environment, current state-of-the-art recognition systems often show an acceptable error rate. The use of close-talking microphones, however, may be too restrictive in many applications. Alternatively, distant-talking microphones, often placed several metres from the speaker, may be used. Such a setup is less intrusive, since the speaker does not have to wear any microphone, but the Automatic Speech Recognition (ASR) performance is strongly affected by noise and reverberation. This thesis focuses on ASR applications in a room environment, where reverberation is the dominant source of distortion, and considers both single- and multi-microphone setups. If speech is recorded in parallel by several microphones arbitrarily located in the room, the degree of distortion may vary from one channel to another. The differences in signal quality among the recordings may be even more evident if those microphones have different characteristics: some hanging on the walls, others standing on the table, or others built into the personal communication devices of the people present in the room. In such a scenario, the ASR system may benefit strongly if the signal with the highest quality is used for recognition. Finding that signal is commonly referred to as Channel Selection (CS), and several techniques have been proposed for it, which are discussed in detail in this thesis. In fact, CS aims to rank the signals according to their quality from the ASR perspective. Creating such a ranking requires a measure that either estimates the intrinsic quality of a given signal or evaluates how well it fits the acoustic models of the recognition system. In this thesis we provide an overview of the CS measures presented in the literature so far, and compare them experimentally.
Several new techniques are introduced that surpass the former ones in terms of recognition accuracy and/or computational efficiency. A combination of different CS measures is also proposed to further increase the recognition accuracy, or to reduce the computational load without any significant performance loss. Besides, we show that CS may be used together with other robust ASR techniques, such as matched-condition training or mean and variance normalization, and that the recognition improvements are cumulative up to some extent. An online real-time version of the channel selection method based on the variance of the speech sub-band envelopes, which was developed in this thesis, was designed and implemented in a smart-room environment. When evaluated in experiments with real distant-talking microphone recordings and with moving speakers, a significant recognition performance improvement was observed. Another contribution of this thesis, which does not require multiple microphones, was developed in cooperation with colleagues from the Chair of Multimedia Communications and Signal Processing at the University of Erlangen-Nuremberg, Erlangen, Germany. It deals with the problem of feature extraction within REMOS (REverberation MOdeling for Speech recognition), a generic framework for robust distant-talking speech recognition. In this framework, the use of conventional methods to obtain decorrelated feature vector coefficients, such as the discrete cosine transform, is constrained by the inner optimization problem of REMOS, which may become unsolvable in a reasonable time. A new feature extraction method based on frequency filtering was proposed to avoid this problem.

    Suomenkielinen puheentunnistus hammashuollon sovelluksissa (Finnish-language speech recognition in dental care applications)

    A significant portion of the work time of dentists and nursing staff is spent writing reports and notes. This thesis studies how automatic speech recognition could ease that workload. The primary objective was to develop and evaluate an automatic speech recognition system for dental health care that records the status of a patient's dentition, as dictated by a dentist. The system accepts a restricted set of spoken commands that identify a tooth or teeth and describe their condition. The status of the teeth is stored in a database. In addition to dentition status dictation, it was surveyed how well automatic speech recognition would be suited for dictating patient treatment reports. Instead of typing reports with a keyboard, a dentist could dictate them to speech recognition software that automatically transcribes them into text. The vocabulary and grammar in such a system are, in principle, unlimited, which makes it significantly harder to obtain an accurate transcription. The status commands and the report-dictation language model are in Finnish. Aalto University has developed an unlimited-vocabulary speech recognizer that is particularly well suited for Finnish free speech recognition, but it has previously been used mainly for research purposes. In this project we experimented with adapting the recognizer to grammar-based dictation and to real end-user environments. Nearly perfect recognition accuracy was obtained for dentition status dictation. Letter error rates for the report transcription task varied between 1.3 % and 17 % depending on the speaker, with no obvious explanation for such radical inter-speaker variability. The language model for report transcription was estimated using a collection of dental reports; including a corpus of literary Finnish did not improve the results.
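A restricted command grammar of the kind used for the dentition-status task can be sketched as a simple token-level acceptor. The command shape, the FDI-style tooth codes, and the condition vocabulary below are hypothetical illustrations; the thesis's actual Finnish grammar is not reproduced.

```python
# Hypothetical two-token command grammar: "<tooth code> <condition>".
# FDI permanent-tooth codes: quadrant digit 1-4 followed by tooth digit 1-8.
TEETH = {f"{q}{t}" for q in "1234" for t in "12345678"}
CONDITIONS = {"intact", "caries", "filling", "missing", "crown"}

def parse_command(tokens):
    """Accept a command identifying a tooth and its condition; return a
    record for the database, or None if the grammar rejects the input."""
    if len(tokens) == 2 and tokens[0] in TEETH and tokens[1] in CONDITIONS:
        return {"tooth": tokens[0], "condition": tokens[1]}
    return None

print(parse_command(["16", "caries"]))  # accepted
print(parse_command(["16", "purple"]))  # rejected -> None
```

Constraining the recognizer's search space to such a grammar is what makes the near-perfect accuracy reported for status dictation plausible, in contrast to the open-vocabulary report transcription task.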

    Speech Recognition

    Chapters in the first part of the book cover all the essential speech processing techniques for building robust automatic speech recognition systems: the representation of speech signals and methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the information from automatic speech recognition for speaker identification and tracking, for prosody modeling in emotion-detection systems, and in other speech processing applications able to operate in real-world environments, such as mobile communication services and smart homes.

    XXVI Fonetiikan päivät 2010 (the 26th Finnish Phonetics Days)
