    Deep audio effects for snare drum recording transformations

    The ability to perceptually modify drum recording parameters in a post-recording process would be of great benefit to engineers limited by time or equipment. In this work, a data-driven approach to post-recording modification of the dampening and microphone positioning parameters commonly associated with snare drum capture is proposed. The system consists of a deep encoder that analyzes the input audio and predicts optimal parameters for one or more third-party audio effects, which are then used to process the audio and produce the desired transformed output. Furthermore, two novel audio effects are specifically developed to take advantage of the system's ability to learn multiple parameters. Perceptual quality of the transformations is assessed through a subjective listening test, and an objective evaluation is used to measure system performance. Results demonstrate a capacity to emulate snare dampening; however, attempts to emulate microphone position changes were not successful.
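
    As a concrete illustration of the architecture the abstract outlines (an encoder that predicts effect parameters, which then drive the audio processing), here is a minimal PyTorch sketch. The network shape, the sigmoid parameter range, and the toy one-pole dampening effect are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the encoder -> effect-parameters -> processed-audio chain.
# All layer sizes and the toy effect are illustrative assumptions.
import torch
import torch.nn as nn

class EffectParameterEncoder(nn.Module):
    """Predicts normalised effect parameters (0..1) from an audio frame."""
    def __init__(self, frame_len: int = 1024, n_params: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_len, 256),
            nn.ReLU(),
            nn.Linear(256, n_params),
            nn.Sigmoid(),  # keep parameters in a valid normalised range
        )

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        return self.net(audio)

def toy_dampening_effect(audio: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
    """Stand-in for a third-party effect: a one-pole low-pass whose
    coefficient comes from the first predicted parameter."""
    a = params[..., 0:1]                    # per-example smoothing coefficient
    out = torch.zeros_like(audio)
    prev = torch.zeros(audio.shape[0], 1)
    for n in range(audio.shape[1]):
        prev = a * prev + (1 - a) * audio[:, n:n + 1]
        out[:, n:n + 1] = prev
    return out

encoder = EffectParameterEncoder()
x = torch.randn(2, 1024)                    # batch of two snare frames (toy data)
y = toy_dampening_effect(x, encoder(x))
print(y.shape)                              # torch.Size([2, 1024])
```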

    SAFE: A System for the Extraction and Retrieval of Semantic Audio Descriptors

    We present an overview of the Semantic Audio Feature Extraction (SAFE) Project, a novel data collection architecture for the extraction and retrieval of semantic descriptions of musical timbre, deployed within the digital audio workstation. By embedding the data capture system into the music production workflow, we are able to maximise the return of semantically annotated music production data, whilst mitigating issues such as musical and environmental bias. Users of the plug-ins are able to submit semantic descriptions of their own music, whilst utilising the continually growing collaborative dataset of musical descriptors. In order to provide more contextually representative timbral transformations, the dataset is partitioned using metadata captured within the application.
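
    To make the data model concrete, the sketch below shows the kind of record such a system might capture and how metadata could partition the dataset. All field names and the partition helper are illustrative assumptions, not the SAFE schema.

```python
# Illustrative record for a semantic descriptor captured inside the DAW,
# together with the parameter state and contextual metadata used to
# partition the dataset. Not the SAFE codebase; field names are assumed.
from dataclasses import dataclass, field

@dataclass
class SafeEntry:
    descriptor: str                  # e.g. "warm", "bright"
    plugin: str                      # which effect produced the transform
    parameters: dict[str, float]     # the parameter settings being described
    metadata: dict[str, str] = field(default_factory=dict)  # genre, instrument, ...

def partition(entries: list[SafeEntry], key: str) -> dict[str, list[SafeEntry]]:
    """Group entries by a metadata field, e.g. instrument, so timbral
    transformations can be retrieved in a contextually relevant way."""
    groups: dict[str, list[SafeEntry]] = {}
    for e in entries:
        groups.setdefault(e.metadata.get(key, "unknown"), []).append(e)
    return groups

dataset = [
    SafeEntry("warm", "eq", {"low_gain": 3.0}, {"instrument": "guitar"}),
    SafeEntry("warm", "eq", {"low_gain": 4.5}, {"instrument": "piano"}),
]
print({k: len(v) for k, v in partition(dataset, "instrument").items()})
```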

    Adaptive metronome: a MIDI plug-in for modelling cooperative timing in music ensembles

    We present a plug-in for music production software (i.e., digital audio workstations) that simulates musicians synchronising with other musicians, either virtual or controlled by users. Notes of the parts controlled by users are played according to MIDI input (e.g., a drum pad). Notes associated with virtual musicians are played according to a linear phase correction model, where the time of the next note of each part is produced in weighted proportion to the asynchrony between the previous note and the notes of each of the other parts. Each virtual musician’s performance is controlled by: two noise parameters defining the variability of the central timer and motor implementation processes (Wing and Kristofferson, 1973); a delay parameter, defining the variability in the lag to play a note; and a set of alpha parameters, defining the correction applied to the asynchrony with each of the other players (both human and machine). These parameters can differ between musicians and can be adjusted in real time. The number of musicians can be configured, allowing studies involving any mixture of virtual and human players. The plug-in has been tested with the homophonic part of a Haydn piece with three virtual musicians and one user. Event times are logged to study ensemble synchronisation. The plug-in will be used as part of an interactive augmented reality ensemble (https://arme-project.ac.uk). Wing, A.M., Endo, S., Bradbury, A. and Vorberg, D., 2014. Optimal feedback correction in string quartet synchronization. Journal of The Royal Society Interface, 11(93), p.20131125.
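
    The phase correction model is concrete enough to simulate directly. Below is a minimal Python sketch combining the per-partner alpha correction with Wing and Kristofferson (1973) timekeeper and motor noise; the parameter values and the equal-alpha simplification are illustrative assumptions, not the plug-in's defaults.

```python
# Toy simulation of linear phase correction with Wing-Kristofferson noise.
# Each player's next onset = previous onset + noisy timer interval
# - alpha * mean asynchrony to the other players + motor-noise differencing.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_players=4, n_notes=32, period=0.5,
             timer_sd=0.01, motor_sd=0.005, alpha=0.25):
    """Return onset times in seconds, shape (n_notes, n_players)."""
    onsets = np.zeros((n_notes, n_players))
    motor = rng.normal(0.0, motor_sd, size=(n_notes, n_players))
    for n in range(1, n_notes):
        # Asynchrony of each player's previous note to every other player
        # (row i, column j holds t_i - t_j; the zero self-term is harmless
        # for a sketch).
        async_prev = onsets[n - 1][:, None] - onsets[n - 1][None, :]
        correction = alpha * async_prev.mean(axis=1)  # equal alpha to all partners
        timer = rng.normal(period, timer_sd, size=n_players)
        onsets[n] = (onsets[n - 1] + timer - correction
                     + motor[n] - motor[n - 1])       # W&K motor differencing
    return onsets

times = simulate()
print(np.std(np.diff(times, axis=0)))  # variability of inter-onset intervals
```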

    Real-time Excitation Based Binaural Loudness Meters

    The measurement of perceived loudness is a difficult yet important task with a multitude of applications, such as loudness alignment of complex stimuli and loudness restoration for the hearing impaired. Although computational hearing models exist, few are able to accurately predict the binaural loudness of everyday sounds. Such models demand excessive processing power, making real-time loudness metering problematic. In this work, the dynamic auditory loudness models of Glasberg and Moore (J. Audio Eng. Soc., 2002) and Chen and Hu (IEEE ICASSP, 2012) are presented, extended and realised as binaural loudness meters. The performance bottlenecks are identified and alleviated by reducing the complexity of the excitation transformation stages. The effects of three parameters (hop size, spectral compression and filter spacing) on model predictions are analysed and discussed within the context of features used by scientists and engineers to quantify and monitor the perceived loudness of music and speech. Parameter values are presented and perceptual implications are described.
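
    To illustrate how two of the three parameters (hop size and filter spacing) trade accuracy for speed, here is a small Python sketch using the standard Glasberg and Moore ERB-rate formulas; the cost estimate is an illustrative assumption rather than the paper's profiling figures.

```python
# Coarser filter spacing means fewer auditory filters per frame; a larger
# hop size means fewer frames per second. Both shrink the dominant cost of
# the excitation transformation stage.
import numpy as np

def erb_space(f_lo=50.0, f_hi=15000.0, step_cams=0.25):
    """Centre frequencies spaced `step_cams` apart on the ERB-rate scale
    (Glasberg & Moore: Cams = 21.4 * log10(0.00437 * f + 1))."""
    cam = lambda f: 21.4 * np.log10(4.37e-3 * f + 1.0)
    inv = lambda c: (10.0 ** (c / 21.4) - 1.0) / 4.37e-3
    return inv(np.arange(cam(f_lo), cam(f_hi), step_cams))

for hop_ms, step in [(1, 0.1), (4, 0.25), (16, 0.5)]:
    n_filters = len(erb_space(step_cams=step))
    frames_per_s = 1000 // hop_ms
    # Per-channel filter evaluations per second: a rough proxy for cost.
    print(f"hop={hop_ms:>2} ms, spacing={step} Cams -> "
          f"{n_filters} filters, ~{n_filters * frames_per_s:,} eval/s")
```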