6 research outputs found

    Psychoacoustic improvement of Wiener filtering: some recent approaches and a new method

    Keywords: musical noise, distortion, Wiener filter, psychoacoustics, speech signal

    Speech enhancement for speech recognition

    Master's dissertation in Electronics and Telecommunications Engineering. The use of speech recognizers in industrial and domestic environments has grown significantly in recent years. One of the issues we face is the presence of noise, which severely degrades performance. The main goal of this work is to develop Speech Enhancement methodologies based on SVD that are capable of addressing this issue. The test signals are pre-processed with a Speech Enhancement block before being passed to the previously trained recognizers. Two speaker-dependent speech recognizers were created for two distinct usage scenarios: control of a wheelchair and control of a home cinema. In the results presented, the performance of the classifiers was evaluated under different conditions, such as the addition of noise and the application of the Speech Enhancement block, by comparing recognition rates, which represent the number of recognized words for the tasks to be performed.
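
    The abstract does not detail the SVD procedure, but a common form of SVD-based speech enhancement builds a Hankel matrix from each noisy frame, truncates the smallest singular values, and reconstructs the frame from the rank-reduced matrix. The sketch below is a minimal, generic illustration of that idea in Python; the frame length, rank choice, and function name are assumptions, not the dissertation's actual pipeline.

        import numpy as np

        def svd_enhance_frame(frame, rank=10):
            """Generic SVD subspace enhancement of one noisy speech frame (illustrative).

            Builds a Hankel matrix from the frame, keeps the `rank` largest singular
            values (assumed signal subspace), and reconstructs the frame by averaging
            the anti-diagonals of the rank-reduced matrix.
            """
            frame = np.asarray(frame, dtype=float)
            n = len(frame)
            m = n // 2                                  # Hankel matrix: m rows, n - m + 1 columns
            H = np.array([frame[i:i + n - m + 1] for i in range(m)])
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            s[rank:] = 0.0                              # discard the (assumed) noise subspace
            H_low = (U * s) @ Vt
            # Average anti-diagonals to restore the Hankel structure and recover a 1-D frame.
            out = np.zeros(n)
            counts = np.zeros(n)
            for i in range(H_low.shape[0]):
                for j in range(H_low.shape[1]):
                    out[i + j] += H_low[i, j]
                    counts[i + j] += 1
            return out / counts

    In a full enhancement front end this kind of routine would be applied frame by frame (for example 20-30 ms windows with overlap-add) before feature extraction for the recognizer.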

    Algorithm and architecture for simultaneous diagonalization of matrices applied to subspace-based speech enhancement

    This thesis presents an algorithm and architecture for the simultaneous diagonalization of matrices. As an example, a subspace-based speech enhancement problem is considered, in which the covariance matrices of the speech and noise are diagonalized simultaneously. To assess the performance of the proposed algorithm, objective speech enhancement measures are reported in terms of signal-to-noise ratio and mean Bark spectral distortion at various noise levels. In addition, an innovative subband analysis technique for subspace-based, time-domain-constrained speech enhancement is proposed. The proposed technique analyses the signal in subbands to build accurate estimates of the covariance matrices of speech and noise, exploiting the inherently slowly varying characteristics of speech and noise signals in narrow bands. The subband approach also decreases computation time by reducing the order of the matrices to be simultaneously diagonalized. Simulation results indicate that the proposed technique performs well under extremely low signal-to-noise-ratio conditions. Further, an architecture is proposed to implement the simultaneous diagonalization scheme. The architecture is implemented on an FPGA, primarily to compare the performance measures in hardware and to assess the feasibility of the speech enhancement algorithm in terms of resource utilization, throughput, etc. A Xilinx FPGA is targeted for implementation, and the resource utilization reinforces the practicability of the design. A projection of the design's feasibility for an ASIC implementation, in terms of transistor count only, is also included.
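
    For reference, simultaneous diagonalization of two symmetric covariance matrices is commonly carried out by solving a generalized symmetric eigenvalue problem: if the noise covariance is positive definite, the generalized eigenvector matrix V satisfies V.T @ R_noise @ V = I and V.T @ R_speech @ V = diag(lambda). The snippet below is a small numerical illustration of that identity using SciPy's solver, not the FPGA architecture proposed in the thesis; the matrix sizes and names are arbitrary.

        import numpy as np
        from scipy.linalg import eigh

        def simultaneous_diagonalize(R_speech, R_noise):
            """Jointly diagonalize two symmetric matrices (R_noise positive definite)
            via the generalized eigenproblem R_speech v = lam * R_noise v."""
            lam, V = eigh(R_speech, R_noise)   # SciPy's symmetric-definite generalized solver
            return lam, V

        # Toy check with random symmetric positive-definite "covariance" matrices.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((8, 8))
        B = rng.standard_normal((8, 8))
        R_speech = A @ A.T + 8 * np.eye(8)
        R_noise = B @ B.T + 8 * np.eye(8)
        lam, V = simultaneous_diagonalize(R_speech, R_noise)
        assert np.allclose(V.T @ R_noise @ V, np.eye(8), atol=1e-8)      # noise whitened
        assert np.allclose(V.T @ R_speech @ V, np.diag(lam), atol=1e-8)  # speech diagonalized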

    Robust speaker recognition in presence of non-trivial environmental noise (toward greater biometric security)

    The aim of this thesis is to investigate speaker recognition in the presence of environmental noise and to develop a robust speaker recognition method. Speaker recognition has recently been the object of considerable research due to its wide use in various areas. Despite major developments in this field, there are still many limitations and challenges. Environmental noise and its variations are high on the list of challenges, since it is impossible to provide a noise-free environment. A novel approach is proposed to address the performance degradation caused by environmental noise. It is based on estimating the signal-to-noise ratio (SNR) and detecting the ambient noise in the recognition signal, then re-training the reference model for the claimed speaker to generate a new, adapted noisy model that reduces the noise mismatch with the recognition utterances. This approach is termed "training on the fly" for robust speaker recognition in noisy environments. Two different techniques are proposed to detect the noise in the recognition signal. The first generates an emulated noise from the estimated power spectrum of the original noise, using a 1/3-octave-band filter bank and a white noise signal; this emulated noise becomes close enough to the original noise contained in the input (recognition) signal. The second extracts the noise from the input signal using a speech enhancement algorithm based on spectral subtraction. The training-on-the-fly approach, with both techniques, has been examined using two feature approaches and two different kinds of artificial clean and noisy speech databases collected in different environments; the speech samples were text independent. Training on the fly yields a significant improvement in performance compared with conventional speaker recognition based on clean reference models. Moreover, training on the fly based on noise extraction showed the best results for all types of noisy data.
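
    The second technique, noise extraction via spectral subtraction, can be illustrated with a standard short-time spectral subtraction pass: subtract an estimate of the noise magnitude spectrum from each frame, reconstruct the enhanced signal, and take the difference with the input as the extracted noise. The sketch below is a minimal, generic Python version; the frame size, noise-only interval, and spectral floor are assumed values, not the parameters used in the thesis.

        import numpy as np
        from scipy.signal import stft, istft

        def extract_noise_by_spectral_subtraction(noisy, fs, noise_seconds=0.3, floor=0.01):
            """Illustrative spectral subtraction: the first `noise_seconds` of the
            recording are assumed noise-only; their average magnitude spectrum is
            subtracted from every frame. Returns the enhanced speech and the
            residual, i.e. the extracted noise estimate."""
            noisy = np.asarray(noisy, dtype=float)
            nperseg = 512
            hop = nperseg // 2                                      # SciPy's default STFT hop
            f, t, Z = stft(noisy, fs=fs, nperseg=nperseg)
            mag, phase = np.abs(Z), np.angle(Z)
            noise_frames = max(1, int(noise_seconds * fs / hop))
            noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
            clean_mag = np.maximum(mag - noise_mag, floor * mag)    # spectral floor limits musical noise
            _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
            length = min(len(noisy), len(enhanced))
            noise_estimate = noisy[:length] - enhanced[:length]
            return enhanced[:length], noise_estimate

    In the approach described above, such an extracted noise estimate would then be used to adapt the claimed speaker's clean reference model into a matched noisy model before scoring.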