1,129 research outputs found

    Deep Learning for Audio Signal Processing

    Given the recent surge in developments of deep learning, this article provides a review of the state-of-the-art deep learning techniques for audio signal processing. Speech, music, and environmental sound processing are considered side by side, in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and potential for cross-fertilization between areas. The dominant feature representations (in particular, log-mel spectra and raw waveform) and deep learning models are reviewed, including convolutional neural networks, variants of the long short-term memory architecture, as well as more audio-specific neural network models. Subsequently, prominent deep learning application areas are covered, i.e., audio recognition (automatic speech recognition, music information retrieval, environmental sound detection, localization and tracking) and synthesis and transformation (source separation, audio enhancement, generative models for speech, sound, and music synthesis). Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified. Comment: 15 pages, 2 PDF figures
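    The abstract's two central ingredients, log-mel spectrogram features and a convolutional classifier, can be illustrated with a short PyTorch/torchaudio sketch. This is not code from the article; the sample rate, mel-band count, layer sizes, and the ten output classes are arbitrary placeholder choices.

```python
# Illustrative sketch (not from the article): log-mel front end + small CNN.
import torch
import torch.nn as nn
import torchaudio

# Log-mel front end: 16 kHz audio -> 64 mel bands (placeholder settings).
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=1024, hop_length=256, n_mels=64
)
to_db = torchaudio.transforms.AmplitudeToDB()

# Small CNN operating on the (1, n_mels, frames) spectrogram "image".
classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),   # global pooling copes with variable-length clips
    nn.Flatten(),
    nn.Linear(32, 10),         # 10 output classes, chosen arbitrarily
)

waveform = torch.randn(1, 16000)              # one second of dummy audio
features = to_db(mel(waveform)).unsqueeze(0)  # shape: (1, 1, 64, frames)
logits = classifier(features)
print(logits.shape)                           # torch.Size([1, 10])
```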

    Conditional Dilated Attention Tracking Model - C-DATM

    Current commercial tracking systems do not process images fast enough to perform target tracking in real time. State-of-the-art methods locate objects frame by frame using entire scenes and are commonly computationally expensive because they use image convolutions. Alternatively, attention mechanisms track more efficiently by mimicking human visual attention, processing only small portions of an image. Thus, in this work we use an attention-based approach to create a model called C-DATM (Conditional Dilated Attention Tracking Model) that learns to compare target features across a sequence of image frames using dilated convolutions. C-DATM is tested on the Modified National Institute of Standards and Technology (MNIST) handwritten digits. We also compare the results achieved by C-DATM with those of other attention-based networks in the literature, such as the Deep Recurrent Attentive Writer and the Recurrent Attention Tracking Model. C-DATM builds on previous attention principles to achieve generic, efficient, and recurrence-free object tracking. The GOTURN (Generic Object Tracking Using Regression Networks) model, which won the VOT 2014 dataset challenge, shares operating principles with C-DATM and is used as an exemplar to explore the advantages and disadvantages of C-DATM. The results of this comparison demonstrate that C-DATM has a number of significant advantages over GOTURN, including faster processing of image sequences and the ability to generalize to tracking new targets without retraining the system.
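    The dilated-convolution idea this abstract relies on, growing the receptive field without pooling or recurrence, can be sketched briefly. The code below is not the authors' C-DATM implementation; the layer sizes, the MNIST-sized 28x28 inputs, and the cosine-similarity comparison between template and frame features are illustrative assumptions.

```python
# Hypothetical sketch of dilated-convolution feature comparison (not C-DATM itself).
import torch
import torch.nn as nn

class DilatedFeatureExtractor(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # Dilation doubles at each layer, so the receptive field grows
        # exponentially with depth while the spatial resolution is preserved.
        self.layers = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=4, dilation=4), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

extractor = DilatedFeatureExtractor()
template = torch.randn(1, 1, 28, 28)  # target crop (MNIST-sized, as in the evaluation)
frame = torch.randn(1, 1, 28, 28)     # current frame
# One simple way to compare target and frame features: a cosine-similarity map.
f_t, f_f = extractor(template), extractor(frame)
similarity = torch.cosine_similarity(f_t, f_f, dim=1)  # shape: (1, 28, 28)
print(similarity.shape)
```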