    Deep Learning for Audio Signal Processing

    Given the recent surge in developments of deep learning, this article provides a review of the state-of-the-art deep learning techniques for audio signal processing. Speech, music, and environmental sound processing are considered side by side in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and potential for cross-fertilization between areas. The dominant feature representations (in particular, log-mel spectra and raw waveform) and deep learning models are reviewed, including convolutional neural networks, variants of the long short-term memory architecture, and more audio-specific neural network models. Subsequently, prominent deep learning application areas are covered, i.e., audio recognition (automatic speech recognition, music information retrieval, environmental sound detection, localization and tracking) and synthesis and transformation (source separation, audio enhancement, and generative models for speech, sound, and music synthesis). Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified. (Comment: 15 pages, 2 PDF figures.)
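    Since log-mel spectra are named as the dominant feature representation, a minimal sketch of how they are commonly computed may help; this uses librosa, and the sampling rate, window, and mel-band settings below are illustrative defaults, not values taken from the article.

```python
import numpy as np
import librosa

def log_mel_spectrogram(path, sr=16000, n_fft=400, hop_length=160, n_mels=64):
    """Return a (n_mels, frames) log-mel spectrogram for an audio file."""
    y, sr = librosa.load(path, sr=sr)            # load and resample to 16 kHz
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft,                 # 25 ms analysis windows at 16 kHz
        hop_length=hop_length, n_mels=n_mels     # 10 ms hop, 64 mel bands
    )
    return librosa.power_to_db(mel, ref=np.max)  # log compression (dB scale)

# features = log_mel_spectrogram("speech.wav")   # hypothetical input file
```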

    Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections

    We present a new algorithm to generate minimal, stable, and symbolic corrections to an input that will cause a neural network with ReLU activations to change its output. We argue that such a correction is a useful way to provide feedback to a user when the network's output differs from a desired output. Our algorithm generates such a correction by solving a series of linear constraint satisfaction problems. The technique is evaluated on three neural network models: one predicting whether an applicant will pay a mortgage, one predicting whether a first-order theorem can be proved efficiently by a solver using certain heuristics, and a final one judging whether a drawing is an accurate rendition of a canonical drawing of a cat. (Comment: 24 pages.)
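    The structural fact this abstract relies on is that a ReLU network is affine once its activation pattern is fixed, so finding a minimal correction within one such region reduces to a linear program. Below is a hedged sketch of that single-region step only, not the authors' full algorithm (which additionally enforces stability and returns symbolic, region-level corrections); the weights w and b, input x, and margin eps are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def minimal_correction(w, b, x, eps=1e-3):
    """Minimize ||delta||_1 subject to w @ (x + delta) + b >= eps."""
    n = len(x)
    # Variables z = [delta (n), t (n)]; minimize sum(t) with |delta_i| <= t_i.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    I = np.eye(n)
    A_ub = np.block([
        [I, -I],                                 #  delta - t <= 0
        [-I, -I],                                # -delta - t <= 0
        [-w.reshape(1, -1), np.zeros((1, n))],   # -w @ delta <= w @ x + b - eps
    ])
    b_ub = np.concatenate([np.zeros(2 * n), [w @ x + b - eps]])
    bounds = [(None, None)] * n + [(0, None)] * n  # delta free, t nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n] if res.success else None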

    Fuzzy Transfer Learning Using an Infinite Gaussian Mixture Model and Active Learning

    Transfer learning is gaining considerable attention due to its ability to leverage previously acquired knowledge to assist in completing a prediction task in a related domain. Fuzzy transfer learning, which is based on fuzzy systems (especially fuzzy rule-based models), has been developed because of its capability to deal with the uncertainty in transfer learning. However, two issues with fuzzy transfer learning have not yet been resolved: choosing an appropriate source domain and efficiently selecting labeled data for the target domain. This paper proposes an innovative method based on fuzzy rules that combines an infinite Gaussian mixture model (IGMM) with active learning to enhance the performance and generalizability of the constructed model. An IGMM is used to identify the data structures in the source and target domains, providing a promising solution to the domain selection dilemma. Further, we exploit the interactive query strategy in active learning to correct imbalances in the knowledge, thereby improving the generalizability of fuzzy learning models. Through experiments on synthetic datasets, we demonstrate the rationality of employing an IGMM and the effectiveness of applying an active learning technique. Additional experiments on real-world datasets further support the capabilities of the proposed method in practical situations.
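    As a hedged illustration of the two ingredients combined here, the sketch below uses scikit-learn stand-ins: a truncated Dirichlet-process BayesianGaussianMixture, which approximates an infinite Gaussian mixture model, plus a simple least-confidence heuristic over the mixture responsibilities as the active-learning query. All names and parameters are our own, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def fit_igmm(X, max_components=20, seed=0):
    """Approximate an IGMM with a truncated Dirichlet-process mixture."""
    return BayesianGaussianMixture(
        n_components=max_components,  # truncation level, not the final count
        weight_concentration_prior_type="dirichlet_process",
        random_state=seed,
    ).fit(X)

def query_least_confident(model, X_pool, k=10):
    """Active learning: pick the k pool points with the least decisive assignment."""
    resp = model.predict_proba(X_pool)  # per-component responsibilities
    confidence = resp.max(axis=1)       # how decisively each point is assigned
    return np.argsort(confidence)[:k]   # indices to send to the oracle for labels
```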

    Improving interpretability and regularization in deep learning

    Deep learning approaches yield state-of-the-art performance in a range of tasks, including automatic speech recognition. However, the highly distributed representations in a deep neural network (DNN) or its variants are difficult to analyze, making parameter interpretation and regularization challenging. This paper presents a regularization scheme acting on the activation function outputs to improve network interpretability and regularization. The proposed approach, referred to as activation regularization, encourages activation function outputs to satisfy a target pattern. By defining appropriate target patterns, different learning concepts can be imposed on the network. This method can aid network interpretability and also has the potential to reduce overfitting. The scheme is evaluated on several continuous speech recognition tasks: the Wall Street Journal continuous speech recognition task, eight conversational telephone speech tasks from the IARPA Babel program, and a U.S. English broadcast news task. On all the tasks, activation regularization achieved consistent performance gains over the standard DNN baselines.
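    A minimal PyTorch sketch of the general idea, assuming a simple feed-forward acoustic model: an extra penalty pulls one hidden layer's activations toward a chosen target pattern, added to the usual cross-entropy task loss. The layer sizes, the target, and the weight lambda_ar are illustrative; the paper defines its own target patterns.

```python
import torch
import torch.nn as nn

class DNN(nn.Module):
    def __init__(self, d_in=440, d_hid=1024, d_out=4000):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(d_in, d_hid), nn.Sigmoid())
        self.out = nn.Linear(d_hid, d_out)

    def forward(self, x):
        h = self.hidden(x)   # activations the regularizer acts on
        return self.out(h), h

def loss_with_activation_reg(logits, h, labels, target, lambda_ar=0.1):
    ce = nn.functional.cross_entropy(logits, labels)  # standard task loss
    ar = ((h - target) ** 2).mean()                   # pull activations toward target
    return ce + lambda_ar * ar
```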