
Report from Dagstuhl Seminar 13451: Computational Audio Analysis

Compared to traditional speech, music, or sound processing, the computational analysis of general audio data has a relatively young research history. In particular, the extraction of affective information (i.e., information that does not deal with the ‘immediate’ nature of the content, such as the spoken words or note events) from audio signals has become an important research strand, attracting rapidly growing interest in academia and industry. At an early stage of this novel research direction, many analysis techniques and representations were simply transferred from the speech domain to other audio domains. However, general audio signals (including their affective aspects) typically possess acoustic and structural characteristics that distinguish them from spoken language or isolated ‘controlled’ music or sound events. In the Dagstuhl Seminar 13451, titled “Computational Audio Analysis”, we discussed the development of novel machine learning and signal processing techniques that are applicable to a wide range of audio signals and analysis tasks. In particular, we looked at a variety of sounds besides speech, such as music recordings, animal sounds, environmental sounds, and mixtures thereof. In this report, we give an overview of the various contributions and results of the seminar.