
    AUTOMATIC SUBGROUPING OF MULTITRACK AUDIO

    Subgrouping is a mixing technique in which the outputs of a subset of audio tracks in a multitrack are summed to a single audio bus. This allows the mix engineer to apply signal processing to an entire subgroup, speed up the mixing workflow and manipulate a number of audio tracks at once. In this work, we investigate which audio features, from a set of 159, can be used to automatically subgroup multitrack audio. We determine a subset of the original 159 audio features to use for automatic subgrouping by performing feature selection with a Random Forest classifier on a dataset of 54 individual multitracks. We show that, using agglomerative clustering on 5 test multitracks, the entire set of audio features incorrectly clusters 35.08% of the audio tracks, while the selected subset incorrectly clusters only 7.89%. Furthermore, the entire feature set produces ten incorrect subgroups, whereas the subset produces only five. This indicates that our reduced set of audio features provides a significant increase in classification accuracy for the automatic creation of subgroups.
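
    The pipeline described in this abstract (Random Forest feature selection followed by agglomerative clustering of tracks) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the feature matrix, labels, number of clusters and the importance cut-off are all placeholder assumptions.

```python
# Sketch: rank audio features with a Random Forest, keep a reduced subset,
# then cluster the tracks of a multitrack into subgroups.
# All data and thresholds here are placeholders, not values from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Hypothetical training data: 200 labelled tracks x 159 extracted features,
# with instrument-class labels standing in for the subgroup ground truth.
X_train = rng.normal(size=(200, 159))
y_train = rng.integers(0, 4, size=200)

# Step 1: feature selection via Random Forest importances.
forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)
keep = forest.feature_importances_ > np.median(forest.feature_importances_)  # assumed cut-off

# Step 2: agglomerative clustering of a new multitrack, using only the selected features.
X_test = rng.normal(size=(12, 159))           # 12 tracks of one test multitrack
clusterer = AgglomerativeClustering(n_clusters=4)
subgroups = clusterer.fit_predict(X_test[:, keep])
print(subgroups)                              # subgroup index per track
```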

    Variation in multitrack mixes: analysis of low-level audio signal features

    To further the development of intelligent music production tools, towards generating mixes that could realistically be created by a human mix engineer, it is important to understand what kinds of mixes can be, and typically are, created by human mix engineers. This paper presents an analysis of 1501 mixes, over 10 different songs, created by mix engineers. The primary dimensions of variation in the full dataset of mixes were ‘amplitude’, ‘brightness’, ‘bass’ and ‘width’, as determined by feature extraction and subsequent principal component analysis. The distribution of representative features approximated a normal distribution, which was then used to obtain general trends and tolerance bounds for these features. The results presented here are useful as parametric guidance for intelligent music production systems.
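
    The analysis summarised above (feature extraction, principal component analysis, then fitting a normal distribution to derive tolerance bounds) could look roughly like the sketch below. The feature matrix is random placeholder data, and the 95% band is an assumed choice of tolerance level, not a figure from the paper.

```python
# Sketch: primary dimensions of variation across mixes via PCA, plus tolerance
# bounds from a fitted normal distribution. Placeholder data, not the 1501-mix dataset.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import norm

rng = np.random.default_rng(0)
features = rng.normal(size=(1501, 20))        # mixes x low-level features (placeholder)

# Primary dimensions of variation via principal component analysis.
pca = PCA(n_components=4)
scores = pca.fit_transform(features)
print(pca.explained_variance_ratio_)

# Fit a normal distribution to a representative component and take a 95% band
# as a tolerance bound (the 95% level is an assumption for illustration).
mu, sigma = norm.fit(scores[:, 0])
low, high = norm.ppf([0.025, 0.975], loc=mu, scale=sigma)
print(f"tolerance bound: [{low:.2f}, {high:.2f}]")
```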

    Gaussian Framework for Interference Reduction in Live Recordings

    This work considers typical live, full-length music recordings. In this scenario, some instrumental voices are captured by microphones intended for other voices, leading to so-called “interference”. Reducing this phenomenon is desirable because it opens new possibilities for sound engineers, and it has been shown to improve the performance of music analysis and processing tools (e.g. pitch tracking). In this work we propose a fast NMF-based algorithm to solve this problem.
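
    A rough single-channel sketch of NMF-based interference reduction is given below. Note that the paper proposes a multichannel Gaussian framework; the single-channel Wiener-style masking here, the file name, and the assignment of NMF components to the intended voice are simplifying assumptions for illustration only.

```python
# Sketch: factorise a spot microphone's spectrogram with NMF and apply a
# Wiener-like soft mask built from the components assumed to belong to the
# intended voice. Component assignment and file name are illustrative only.
import numpy as np
import librosa
from sklearn.decomposition import NMF

y, sr = librosa.load("spot_mic.wav", sr=None)      # hypothetical spot-mic recording
S = librosa.stft(y)
V = np.abs(S)

# Factorise the magnitude spectrogram into spectral templates W and activations H.
model = NMF(n_components=16, init="random", max_iter=400, random_state=0)
W = model.fit_transform(V)
H = model.components_

# Assume (for illustration) the first 8 components model the intended voice
# and the remainder model interference from other instruments.
target = W[:, :8] @ H[:8, :]
total = W @ H + 1e-12

mask = target / total                              # Wiener-like soft mask
y_clean = librosa.istft(mask * S, length=len(y))   # reduced-interference signal
```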

    User-guided rendering of audio objects using an interactive genetic algorithm

    Object-based audio allows for the personalisation of content, for example to improve accessibility or to increase quality of experience more generally. This paper describes the design and evaluation of an interactive audio renderer that optimises an audio mix based on the feedback of the listener. A panel of 14 trained participants was recruited to trial the system. The range of audio mixes produced using the proposed system was comparable to the range of mixes achieved using a traditional fader-based mixing interface. Evaluation using the System Usability Scale showed a low level of physical and mental burden, making this a suitable interface for users with impairments such as reduced vision and/or mobility.
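
    The interactive loop at the heart of such a renderer can be sketched as below: candidate mixes (reduced here to per-object gains) are rated by the listener, and a genetic algorithm breeds the better-rated candidates. The population size, mutation rate and the rate_mix() feedback hook are assumptions for illustration, not details of the paper's system.

```python
# Sketch of an interactive genetic algorithm over per-object gains.
# Listener feedback is requested via rate_mix(); all GA parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
N_OBJECTS, POP, GENERATIONS = 8, 10, 5

def rate_mix(gains):
    """Stand-in for listener feedback: in the real system the gains would be
    auditioned and the user would return a preference rating in [0, 1]."""
    return float(input(f"Rate mix {np.round(gains, 2)} (0-1): "))

population = rng.uniform(0.0, 1.0, size=(POP, N_OBJECTS))      # candidate gain vectors

for gen in range(GENERATIONS):
    fitness = np.array([rate_mix(g) for g in population])
    best = population[np.argmax(fitness)]                      # current preferred mix
    parents = population[np.argsort(fitness)[-POP // 2:]]      # keep the better half
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.choice(len(parents), 2, replace=False)]
        child = np.where(rng.random(N_OBJECTS) < 0.5, a, b)    # uniform crossover
        child = np.clip(child + rng.normal(0, 0.05, N_OBJECTS), 0.0, 1.0)  # mutation
        children.append(child)
    population = np.vstack([parents, np.array(children)])

print("preferred gains:", np.round(best, 2))
```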

    Analysis of Peer Reviews in Music Production

