Connected Learning Journeys in Music Production Education
The field of music production education is a challenging one, spanning multiple creative, technical and entrepreneurial disciplines, including music composition, performance, electronics, acoustics, musicology, project management and psychology. As a result, students take multiple ‘learning journeys’ on their pathway towards becoming autonomous learners. This paper uniquely evaluates the journey of climbing Bloom’s cognitive domain in the field of music production and gives specific examples that validate teaching music production in higher education through multiple, connected ascents of the framework. Owing to the practical nature of music production, Kolb’s Experiential Learning Model is also considered as a recurring function that is necessary for climbing Bloom’s domain, in order to ensure that learners are equipped for employability and entrepreneurship on graduation. The authors’ own experiences of higher education course delivery, design and development are also reflected upon, with reference to Music Production pathways at both the University of Westminster (London, UK) and York St John University (York, UK).
The Resonant Tuning Factor: A New Measure for Quantifying the Setup and Tuning of Cylindrical Drums
A single circular drumhead produces complex and inharmonic vibration characteristics. However, with cylindrical drums, which have two drumheads coupled by a mass of air, it is possible to manipulate the harmonic relationships by changing the tension of the resonant drumhead. The modal ratio between the fundamental and the batter head overtone therefore provides a unique and quantified characteristic of the drum tuning setup, which has been termed the Resonant Tuning Factor (RTF). It may be valuable, for example, for percussionists to manipulate the RTF value to a perfect musical fifth, or simply to enable a repeatable tuning setup. This research therefore considers a number of user interfaces for analyzing the RTF and providing a tool for quantitative drum tuning.
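To illustrate how such a ratio might be obtained in practice, the following minimal Python sketch estimates an RTF from a single recorded drum hit, assuming the RTF is taken as the ratio of the batter head overtone frequency to the fundamental. The file name, analysis band and peak-picking threshold are illustrative assumptions rather than details drawn from the research.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import find_peaks

    def estimate_rtf(path, f_max=400.0):
        # Read the recording, fold to mono and normalise to +/-1.0
        rate, audio = wavfile.read(path)
        audio = audio.astype(np.float64)
        if audio.ndim > 1:
            audio = audio.mean(axis=1)
        audio /= np.max(np.abs(audio)) + 1e-12

        # Magnitude spectrum of the windowed hit
        spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
        freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)

        # Pick prominent modes in the low-frequency band where drum modes sit
        band = freqs < f_max
        peaks, _ = find_peaks(spectrum[band], height=0.1 * spectrum[band].max())
        modes = np.sort(freqs[band][peaks])
        if len(modes) < 2:
            raise ValueError("could not identify two modal peaks")

        fundamental, overtone = modes[0], modes[1]
        return overtone / fundamental

    # An RTF of ~1.5 (a 3:2 modal ratio) would correspond to the perfect-fifth
    # setup mentioned above, e.g. print(estimate_rtf("tom_hit.wav"))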
The Dreaded Mix Sign-Off: handing over to mastering
The final stage of mixing, and indeed the final responsibility of the mix engineer, is usually the handover to mastering, which brings a number of creative and technical considerations. In the traditional approach to music production, mastering is conducted by a specialist audio engineer once the final mixes have been consolidated to a stereo format. In the early days, mixes would be recorded to a physical two-track analog tape that would then be shipped to the mastering engineer. Nowadays it is more common for the stereo mixes to be sent as lossless audio files through an Internet file transfer. The final signing-off of the mix is a daunting point in the process, requiring the artist, music producer and mix engineer to agree that they have completed the mixes, which can reveal any uncertainties or insecurities that they may bear in relation to the project. The mix sign-off and handover to mastering is therefore seen as a critical point in the music production process.
Approaches to, and technologies for, mixing and mastering have evolved, as have all aspects of music production. New methods and approaches bring opportunities to simplify and reduce the cost of production, although with the potential for practitioners to inadvertently cut corners and underperform in both creative and technical contexts. Modern processing tools enable mix engineers to also master their own music, and there are a number of arguments for and against the use of mastering techniques at the mixing stage. For example, it can be argued that mix engineers need to take greater responsibility for technical attributes such as dynamics and noise cancellation. In contrast, the use of mix-bus limiting when generating draft listening copies can confuse and falsify the sign-off process. Furthermore, some mastering engineers prefer, or are requested, to work from mix stems (i.e., a number of consolidated audio tracks that collectively make up the mix), but does that mean they are effectively mixing as well as mastering the songs?
This chapter discusses the critical point of completing the mix and moving towards mastering, that is, it considers the crucial process of ‘signing off’ a mix and reaching agreement between stakeholders that a song is ready for mastering. The discussion draws on the experience and expertise of a number of award-winning mix and mastering engineers through direct discussion and interview, particularly with respect to methods and contemporary practices that are common at the mix-completion stage. The mix and mastering engineers contributing to this chapter are George Massenburg, Mandy Parnell, Ronald Prent, Darcy Proper and Michael Romanowski, whose professional insights give a first-hand reflection on best practice for finalizing the mix and handing over to mastering.
Evaluating analog reconstruction performance of transient digital audio workstation signals at high- and standard-resolution sample frequencies
Given that audio signals in many applications are neither predictable nor guaranteed to be repeated (and hence do not deliver an infinite history of data), they do not completely adhere to the ideal sampling theorems presented by Nyquist and Shannon. Digital-to-analog converter (DAC) reconstruction theory is hence used to investigate how accurately digital audio workstation signals are actually reconstructed, and to explore whether high-resolution sample frequencies (i.e. frequencies above 44.1 kHz) provide a performance advantage. Ideal reconstruction profiles are then evaluated against the actual reconstruction data observed from three pro-audio DACs at multiple sample frequencies. The test signal is chosen to evaluate the performance of DACs when presented with transient data that approaches the Nyquist frequency. This approach is used because it has the potential to yield information on the suggested benefits of higher-than-Nyquist sample and reconstruction approaches in a real-world music production context.
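As a rough illustration of the underlying comparison, the sketch below applies ideal (sinc) reconstruction to a near-Nyquist transient sampled at 44.1 kHz and 96 kHz and reports the error against a densely sampled reference. The burst parameters and error metric are assumptions made for illustration only; they are not the test signal or DAC measurements reported in the paper.

    import numpy as np

    def reference(t, f0=18_000.0, t0=1e-3, width=0.4e-3):
        # A Gaussian-windowed tone burst approaching the 44.1 kHz Nyquist limit
        return np.exp(-0.5 * ((t - t0) / width) ** 2) * np.sin(2 * np.pi * f0 * t)

    def sinc_reconstruct(samples, fs, t_eval):
        # x(t) = sum_n x[n] * sinc(fs*t - n): ideal reconstruction from a finite history
        n = np.arange(len(samples))
        return np.array([np.sum(samples * np.sinc(fs * t - n)) for t in t_eval])

    t_dense = np.linspace(0, 2e-3, 20_000)          # stands in for continuous time
    x_ref = reference(t_dense)

    for fs in (44_100.0, 96_000.0):
        t_n = np.arange(0, 2e-3, 1.0 / fs)
        x_hat = sinc_reconstruct(reference(t_n), fs, t_dense)
        err = np.sqrt(np.mean((x_hat - x_ref) ** 2) / np.mean(x_ref ** 2))
        print(f"fs = {fs / 1000:.1f} kHz, relative RMS reconstruction error = {err:.3e}")

Because the sampled history is finite, neither rate reconstructs the transient perfectly, which is the practical departure from the ideal sampling theorem that motivates the measurements described above.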
The effect of dynamic range compression on the psychoacoustic quality and loudness of commercial music
It is common practice for music productions to be mastered with the aim of increasing the perceived loudness for the listener, allowing one record to stand out from another by delivering an immediate impact and intensity. Since the advent of the Compact Disc in 1982, music has increased in RMS level by up to 20 dB. This results in many commercial releases being compressed to a dynamic range of 2–3 dB. Initial findings of this study have determined that amplitude compression adversely affects the audio signal with the introduction of audible artifacts such as sudden gain changes, modulation of the noise floor and signal distortion, all of which appear to be related to the onset of listener fatigue.
In this paper, the history of, and changes in trends with respect to, dynamic range are discussed, and findings are presented and evaluated. Initial experimentation is described, along with the roadmap and challenges for further and wider research. The key aim of this research is to quantify the effects (both positive and negative) of dynamic range manipulation on the audio signal and the subsequent listener experience. A future goal of this study is to ultimately define recommended standards for the dynamic range levels of mastered music, in a similar manner to those associated with the film industry.
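As a simple companion to the measurements described, the sketch below computes a track's RMS level and crest factor (peak-to-RMS ratio in dB), a common first-order proxy for the dynamic range figures quoted above. The file handling details are assumptions, and the function is not the analysis method used in the study.

    import numpy as np
    from scipy.io import wavfile

    def level_stats(path):
        # Read PCM audio and scale integer formats to +/-1.0 full scale
        rate, audio = wavfile.read(path)
        if audio.dtype.kind == "i":
            audio = audio / np.iinfo(audio.dtype).max
        if audio.ndim > 1:
            audio = audio.mean(axis=1)

        rms = np.sqrt(np.mean(audio ** 2))
        peak = np.max(np.abs(audio))
        rms_dbfs = 20 * np.log10(rms + 1e-12)
        crest_db = 20 * np.log10(peak / (rms + 1e-12))  # small values suggest heavy limiting
        return rms_dbfs, crest_db

    # rms_dbfs, crest_db = level_stats("mastered_track.wav")
    # print(f"RMS: {rms_dbfs:.1f} dBFS, crest factor: {crest_db:.1f} dB")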
Interactive Recorded Music: Past, Present and Future
This Engineering Brief charts the story of user interactivity with recorded music. Audio technologies and creative compositional techniques are discussed with particular regard to scenarios where creativity has driven the demand for technological advance and, vice versa, where technical advance has enabled new creative-practice approaches. This is contextualized through discussion of relevant implementations in legacy systems, mobile applications, video games, artificial intelligence, and extended realities. In identifying seminal applications of music interactivity from the past and linking them to present capabilities and practices, future trajectories for interactive recorded music are extrapolated.
Singer-Songwriter meets Music Production and Studio Technology
Almost every singer-songwriter who aspires to make a living from their craft will be involved in the music recording and production process at some point in their career. Recorded music allows a musician to promote their material remotely, opening up the opportunity of reaching a huge global audience. Recorded music also serves the reflective songwriting process itself and allows an artist to seek professional opportunities and showcase their capabilities to labels, managers and publishers. For professional artists, record production is a gateway to income and success that immediately adds the possibility of new revenue streams.
This chapter focuses on the music production process and a number of related aspects that a professional singer-songwriter can expect to encounter during their career. In particular, core studio production technologies are discussed, as well as opportunities to use music production techniques as an expanded toolset for songwriting itself. The concept of working with a specialist music producer is introduced, alongside common challenges of the recording process, such as critical appraisal and a quest for sonic perfection. Finally, the avenues for using recorded music as a core revenue stream for singer-songwriters are considered, in order to provide a framework for achieving sustainable success as a songwriter and recording artist.
Quantitative analysis of streaming protocols for enabling Internet of Things (IoT) audio hardware
Given that traditional music production techniques often incorporate analog audio hardware, the Internet of Things (IoT) presents a unique opportunity to maintain past production workflows. For example, it is possible to enable remote digital connectivity to rare, expensive and bespoke audio systems, as well as unique spaces for use as echo chambers. In the presented research, quantitative testing is conducted to verify the performance of audio streaming platforms. Results show that, using a high-speed internet connection, it is possible to stream lossless audio with low distortion, no dropouts and around 30 ms round-trip latency. Therefore, with future integration of audio streaming and IoT control protocols, a new paradigm for remote analog hardware processing in music production could be enabled.
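A rough outline of how such a round-trip figure might be obtained is sketched below: a known test burst is cross-correlated with the audio captured on its return path, and the lag of the correlation peak gives the delay. The streaming link is simulated here with a fixed 30 ms offset plus light noise, purely for illustration; this is not the measurement setup used in the presented research.

    import numpy as np

    def round_trip_latency_ms(sent, received, rate):
        # Full cross-correlation; the lag of the peak gives the delay in samples
        corr = np.correlate(received, sent, mode="full")
        lag = np.argmax(corr) - (len(sent) - 1)
        return 1000.0 * lag / rate

    # Synthetic example: a 100 ms noise burst delayed by 30 ms stands in for the link
    rate = 48_000
    burst = np.random.randn(rate // 10)
    delay = int(0.030 * rate)
    received = np.concatenate([np.zeros(delay), burst])
    received = received + 0.01 * np.random.randn(len(received))
    print(f"Estimated round-trip latency: {round_trip_latency_ms(burst, received, rate):.1f} ms")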