10 research outputs found

    Noise Estimation and Noise Removal Techniques for Speech Recognition in Adverse Environment


    Speech Enhancement by Noise Cancellation: A Review

    It is observed that the speech recognition rate decreases as background noise increases; background noise tends to degrade the system's robustness. This paper gives a brief survey of speech enhancement methods using various noise cancellation techniques at different SNRs in noisy environments.

    Digital filtering optimization on noisy human speech in assistive listening system

    This paper presents an acoustic noise cancellation system based on adaptive filter algorithms: least mean squares (LMS), normalized least mean squares (NLMS), and recursive least squares (RLS). It investigates the performance of the LMS, NLMS, and RLS algorithms on acoustic noise by running the model continuously on audio signals. The focus is on using the NLMS and RLS algorithms to reduce unwanted noise and thereby improve the quality of the desired audio signal; MATLAB Simulink is used for simulation. An adaptive filter is commonly used to cancel the noise component mixed with the desired audio signal. LMS is widely used for its simplicity and robustness, but it can fail to meet the convergence criteria, so here LMS is improved upon by NLMS, a variant of the LMS algorithm; RLS is also tested and shows significant improvement. RLS performs best, with a faster convergence rate than the LMS and NLMS algorithms, better stability in changing environments, and better tracking ability, recovering a nearly clean speech signal.
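The adaptive noise cancellation setup this abstract describes can be sketched with a minimal NLMS filter in NumPy. The filter order, step size, and synthetic signals below are illustrative assumptions, not the paper's Simulink configuration:

```python
import numpy as np

def nlms_cancel(noisy, reference, order=16, mu=0.1, eps=1e-8):
    """Normalized LMS adaptive noise canceller (illustrative sketch).

    noisy     -- primary input: desired speech plus correlated noise
    reference -- noise reference picked up away from the speech source
    Returns the error signal, which converges to the cleaned speech.
    """
    w = np.zeros(order)                            # adaptive filter weights
    out = np.zeros(len(noisy))
    for i in range(order - 1, len(noisy)):
        x = reference[i - order + 1:i + 1][::-1]   # tap vector, newest first
        y = w @ x                                  # estimate of the noise
        e = noisy[i] - y                           # error = speech estimate
        w += mu * e * x / (x @ x + eps)            # normalized weight update
        out[i] = e
    return out

# Synthetic demo: a 440 Hz "speech" tone corrupted by a filtered copy
# of a white-noise reference (the filter plays the unknown noise path).
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
speech = np.sin(2 * np.pi * 440 * t)
reference = rng.standard_normal(t.size)
noise = np.convolve(reference, [0.6, 0.3, 0.1])[:t.size]
noisy = speech + noise
cleaned = nlms_cancel(noisy, reference)
```

The normalization by `x @ x` is what distinguishes NLMS from plain LMS: the effective step size shrinks when the reference is loud, which is why NLMS converges where LMS with a fixed step can diverge.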

    Deep Learning Based Sound Event Detection and Classification

    The sense of hearing has an important role in our daily lives. In recent years, there have been many studies on transferring this capability to computers. In this dissertation, we design and implement deep learning based algorithms to improve the ability of computers to recognize different sound events. In the first topic, we investigate sound event detection, which identifies the time boundaries of sound events in addition to the type of the events. For sound event detection, we propose a new method, AudioMask, to benefit from object-detection techniques in computer vision. In this method, we convert the problem of identifying time boundaries for sound events into the problem of identifying objects in images, by treating the spectrograms of the sound as images. AudioMask first applies Mask R-CNN, an algorithm for detecting objects in images, to the log-scaled mel-spectrograms of the sound files. Then we use a frame-based sound event classifier, trained independently from Mask R-CNN, to analyze each individual frame in the candidate segments. Our experiments show that this approach has promising results and can successfully identify the exact time boundaries of sound events. The code for this study is available at https://github.com/alireza-nasiri/AudioMask. In the second topic, we present SoundCLR, a supervised contrastive learning based method for effective environmental sound classification with state-of-the-art performance, which works by learning representations that disentangle the samples of each class from those of other classes. We also exploit transfer learning and strong data augmentation to improve the results. Our extensive benchmark experiments show that our hybrid deep network models, trained with a combined contrastive and cross-entropy loss, achieved state-of-the-art performance on the three benchmark datasets ESC-10, ESC-50, and US8K, with validation accuracies of 99.75%, 93.4%, and 86.49% respectively. The ensemble version of our models also outperforms other top ensemble methods. Finally, we analyze the acoustic emissions generated during the degradation process of SiC composites. The aim here is to identify the state of degradation in the material by classifying its emitted acoustic signals. As our baseline, we use a random forest method on expert-defined features. We also propose a deep neural network of convolutional layers to identify patterns in the raw sound signals. Our experiments show that both of our methods reliably identify the degradation state of the composite, and on average the convolutional model significantly outperforms the random forest technique.
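The spectrogram-as-image idea behind AudioMask can be illustrated with a minimal NumPy sketch. A plain log-magnitude STFT stands in here for the paper's log-scaled mel-spectrogram, and the frame/hop sizes and the test tone are illustrative assumptions, not the dissertation's settings:

```python
import numpy as np

def log_spectrogram(signal, frame_len=512, hop=256):
    """Log-magnitude STFT, returned as a 2-D array that can be treated
    like a single-channel image (frequency bins x time frames)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop:i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    spectrum = np.abs(np.fft.rfft(frames, axis=1))   # (time, freq)
    return np.log(spectrum + 1e-10).T                # (freq, time) "image"

# A tone burst between 0.5 s and 1.5 s shows up as a bright horizontal
# band in the "image"; its horizontal extent is exactly the kind of
# region an object detector can localize as the event's time boundaries.
sr = 8000
t = np.arange(2 * sr) / sr
x = np.where((t > 0.5) & (t < 1.5), np.sin(2 * np.pi * 1000 * t), 0.0)
img = log_spectrogram(x)
```

An object detector such as Mask R-CNN can then consume this 2-D array as it would a single-channel image, so a sound event's onset and offset fall out of the horizontal extent of a detected region.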