236 research outputs found
Perceptual Mixing for Musical Production
PhD
A general model of music mixing is developed, which enables a mix to be evaluated as a set
of acoustic signals. A second model describes the mixing process as an optimisation problem,
in which the errors are evaluated by comparing sound features of a mix with those of a reference
mix, and the parameters are the controls on the mixing console. Initial focus is placed on
live mixing, where the practical issues of live acoustic sources, multiple listeners, and acoustic
feedback increase the technical burden on the mixing engineer. Using the two models, a system
is demonstrated that takes as input reference mixes, and automatically sets the controls on the
mixing console to recreate their objective, acoustic sound features for all listeners, taking into
account the practical issues outlined above. This reduces the complexity of mixing live music to
that of recorded music, and unifies future mixing research.
Sound features evaluated from audio signals are shown to be unsuitable for describing a mix,
because they do not incorporate the effects of listening conditions, or masking interactions between
sounds. Psychophysical test methods are employed to develop a new perceptual sound
feature, termed the loudness balance, which is the first loudness feature to be validated for musical
sounds. A novel, perceptual mixing system is designed, which allows users to directly
control the loudness balance of the sounds they are mixing, for both live and recorded music,
and which can be extended to incorporate other perceptual features. The perceptual mixer is also
employed as an analytical tool, to allow direct measurement of mixing best practice, to provide
fully-automatic mixing functionality, and is shown to be an improvement over current heuristic
models. Based on the conclusions of the work, a framework for future automatic mixing is
provided, centred on perceptual sound features that are validated using psychophysical methods.
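The optimisation view of mixing described in this abstract can be sketched as a toy loop. The "sound feature" here is simply each track's RMS contribution to the mix, and the gradient step is a simplification; the real system matches richer acoustic features under live constraints:

```python
import numpy as np

def track_rms(t):
    return float(np.sqrt(np.mean(t ** 2)))

def mix_features(tracks, gains):
    # Toy "sound feature" vector: the RMS level each gained track
    # contributes to the mix (a stand-in for richer acoustic features).
    return np.array([g * track_rms(t) for g, t in zip(gains, tracks)])

def match_reference(tracks, ref_feats, steps=500, lr=0.1):
    # Mixing as optimisation: the parameters are the fader gains, and the
    # error is the distance between the mix's features and the reference's.
    gains = np.ones(len(tracks))
    rms = np.array([track_rms(t) for t in tracks])
    for _ in range(steps):
        err = gains * rms - ref_feats   # per-track feature error
        gains -= lr * err * rms         # gradient step on 0.5 * err**2
    return gains

rng = np.random.default_rng(0)
tracks = [rng.standard_normal(4096), 0.5 * rng.standard_normal(4096)]
ref = np.array([0.25, 0.1])  # reference per-track RMS levels in the mix
gains = match_reference(tracks, ref)
```

With a linear gain model the loop converges to the gains that reproduce the reference features exactly; the thesis's contribution is doing this for real consoles, listeners, and acoustics.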
Spatial auditory display for acoustics and music collections
PhD
This thesis explores how audio can be better incorporated into how people access
information and does so by developing approaches for creating three-dimensional audio
environments with low processing demands. This is done by investigating three research
questions.
Mobile applications have processor and memory requirements that restrict the
number of concurrent static or moving sound sources that can be rendered with binaural
audio. Is there a more efficient approach that is as perceptually accurate as the traditional
method? This thesis concludes that virtual Ambisonics is an efficient and accurate means
to render a binaural auditory display consisting of noise signals placed on the horizontal
plane without head tracking. Virtual Ambisonics is then more efficient than convolution
of HRTFs if more than two sound sources are concurrently rendered or if movement of
the sources or head tracking is implemented.
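The efficiency argument can be illustrated with a back-of-the-envelope operation count. The channel and virtual-loudspeaker counts below are illustrative assumptions (first-order B-format, four virtual loudspeakers), not the thesis's measured configuration:

```python
def direct_binaural_ops(n_sources, fir_len):
    # Direct approach: two HRTF FIR convolutions (left/right ear)
    # per source, so cost grows linearly with the number of sources.
    return n_sources * 2 * fir_len

def virtual_ambisonics_ops(n_sources, fir_len, n_bformat=4, n_speakers=4):
    # Virtual Ambisonics: each source is encoded with a few gain multiplies
    # into B-format channels; a fixed bank of HRTF convolutions then decodes
    # the virtual loudspeakers, so that cost is independent of source count.
    encode = n_sources * n_bformat
    decode = n_speakers * 2 * fir_len
    return encode + decode
```

The crossover point at which virtual Ambisonics wins depends on the HRTF filter length and the virtual loudspeaker layout, which is why the thesis states its result in terms of source count and head tracking rather than a single number.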
Complex acoustics models require significant amounts of memory and processing. If
the memory and processor loads for a model are too large for a particular device, that
model cannot be interactive in real-time. What steps can be taken to allow a complex
room model to be interactive by using less memory and decreasing the computational
load? This thesis presents a new reverberation model based on hybrid reverberation
which uses a collection of B-format IRs. A new metric for determining the mixing
time of a room is developed, and interpolation between early reflections is investigated.
Though hybrid reverberation typically uses a recursive filter such as an FDN for the late
reverberation, an average late reverberation tail is instead synthesised for convolution
reverberation.
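A minimal sketch of the hybrid idea, assuming a single-channel IR and exponentially decaying noise as the synthetic late tail (the thesis's B-format handling, mixing-time metric, and average-tail synthesis are more involved):

```python
import numpy as np

def hybrid_ir(measured_ir, mixing_time, rt60, rng=None):
    # Keep the measured impulse response up to the mixing time (direct sound
    # and early reflections), then splice on a synthetic late tail of
    # exponentially decaying noise for use in convolution reverb.
    # All times are in samples; the IR is assumed single-channel.
    rng = rng or np.random.default_rng(0)
    early = measured_ir[:mixing_time]
    n_late = len(measured_ir) - mixing_time
    t = np.arange(n_late)
    envelope = np.exp(-6.91 * t / rt60)  # -60 dB decay over rt60 samples
    late = envelope * rng.standard_normal(n_late)
    return np.concatenate([early, late])
```

Because the late tail is a fixed synthetic signal rather than a recursive filter, only the (short) early section needs to be stored and interpolated per listener position, which is where the memory saving comes from.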
Commercial interfaces for music search and discovery use little aural information
even though the information being sought is audio. How can audio be used in
interfaces for music search and discovery? This thesis looks at 20 interfaces and
determines that several themes emerge from past interfaces. These include using a two-
or three-dimensional space to explore a music collection, allowing concurrent playback of
multiple sources, and tools such as auras to control how much information is presented. A
new interface, the amblr, is developed because virtual two-dimensional spaces populated
by music have been a common approach, but not yet a perfected one. The amblr is also
interpreted as an art installation which was visited by approximately 1000 people over 5
days. The installation maps the virtual space created by the amblr to a physical space.
Real-time sound spatialization, software design and implementation.
'Real-time Sound Spatialization, Software Design and Implementation' explores real-time spatialization signal processing for the sound artist. The thesis is based around the production of two prototype software projects, both of which are examined in design and implementation.
The first project examines a conceptual method for performance-based spatialization mixing which aims to expand on existing analogue designs. 'Super Diffuse', proven performance-grade software, and the encompassing M2 system are submitted for model evaluation and as an example.
The second project focuses on Physical Modelling Synthesis and introduces 'Source Ray Pickup Interactions' as a tool for packaging real-time spatialization digital signal processing. Submitted with the theoretical model is the 'Ricochet' software, an implementation of 'Source Ray Pickup Interactions'. 'Ricochet' serves as a model evaluation tool and an example of implementation.
Object-based audio for interactive football broadcast
An end-to-end AV broadcast system providing an immersive, interactive experience for live events is the development aim of the EU FP7 funded project FascinatE. The project has developed real-time audio object event detection and localisation, scene modelling, and processing methods for multimedia data including 3D audio, which will allow users to navigate the event by creating their own unique user-defined scene. As part of the first implementation of the system, a test shoot was carried out capturing a live Premier League football game, and methods have been developed to detect, analyse, extract, and localise salient audio events from a range of sensors and represent them within an audio scene, in order to allow free navigation within the scene.
The creation of a binaural spatialization tool
The main focus of the research presented within this thesis is, as the title suggests, binaural spatialization.
Binaural technology and, especially, the binaural recording technique are not particularly recent. Nevertheless, the interest in this technology has lately become substantial due to the increase in the calculation power of personal computers, which started to allow the complete and accurate real-time simulation of three-dimensional sound-fields over headphones.
The goals of this body of research have been determined in order to provide elements of novelty and of contribution to the state of the art in the field of binaural spatialization. A brief summary of these is found in the following list:
• The development and implementation of a binaural spatialization technique with Distance Simulation, based on the individual simulation of the distance cues and Binaural Reverb, in turn based on the weighted mix between the signals convolved with the different HRIR and BRIR sets;
• The development and implementation of a characterization process for modifying a BRIR set in order to simulate different environments with different characteristics in terms of frequency response and reverb time;
• The creation of a real-time and offline binaural spatialization application, implementing the techniques cited in the previous points, and including a set of multichannel (and Ambisonics)-to-binaural conversion tools;
• The performance of a perceptual evaluation stage to verify the effectiveness, realism, and quality of the techniques developed, and
• The application and use of the developed tools within both scientific and artistic “case studies”.
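The first bullet's weighted mix of dry and reverberant convolutions can be sketched as a mono crossfade. The 1/r direct-path weighting and the function names here are illustrative assumptions, not the thesis's exact formulation (which mixes across multiple HRIR and BRIR sets per ear):

```python
import numpy as np

def spatialize_with_distance(signal, hrir, brir, distance, ref_distance=1.0):
    # Distance-cue sketch (single channel for brevity; real binaural
    # processing convolves separate left/right HRIR and BRIR filters).
    # The dry (HRIR) path is attenuated roughly as 1/distance while the
    # reverberant (BRIR) path stays constant, so the direct-to-reverberant
    # ratio falls as the source moves away -- a key distance cue.
    dry = np.convolve(signal, hrir)
    wet = np.convolve(signal, brir)
    n = max(len(dry), len(wet))
    dry = np.pad(dry, (0, n - len(dry)))
    wet = np.pad(wet, (0, n - len(wet)))
    direct_gain = ref_distance / max(distance, ref_distance)
    return direct_gain * dry + wet
```

Moving the source from the reference distance to four times that distance quarters the direct path while leaving the reverberant energy unchanged.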
In the following chapters, sections, and subsections, the research performed between January 2006 and March 2010 will be described, outlining the different stages before, during, and after the development of the software platform, analysing the results of the perceptual evaluations and drawing conclusions that could, in the future, be considered the starting point for new and innovative research projects.
Effects of errorless learning on the acquisition of velopharyngeal movement control
Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session)
The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). Nasality level of the participants' speech was measured by nasometer and reflected by nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners but in reversed order. Errors were defined by the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners, errorless learners displayed fewer errors (errorful 50.7% vs. errorless 17.7%) and a higher mean nasalance score (errorful 31.3% vs. errorless 46.7%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America
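The threshold manipulation lends itself to a small sketch (function and variable names are illustrative): the errorless schedule rises from 10% to 50% nasalance, the errorful schedule presents the same targets reversed, and an error is any production below the current threshold.

```python
import numpy as np

def threshold_schedule(n_trials, start=10.0, end=50.0):
    # Errorless condition: the nasalance target rises gradually from 10%
    # to 50%; the errorful condition uses the same targets in reverse.
    return np.linspace(start, end, n_trials)

def error_proportion(nasalance_scores, thresholds):
    # An "error" is a production whose nasalance falls below the threshold.
    return float(np.mean(np.asarray(nasalance_scores) < thresholds))
```

A learner who produces a constant 15% nasalance makes few errors early in the errorless schedule but many once the target has risen, which is exactly the asymmetry the study exploits.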
Towards a better understanding of mix engineering
PhD
This thesis explores how the study of realistic mixes can expand current knowledge about multitrack music mixing. An essential component of music production, mixing remains an esoteric matter with few established best practices. Research on the topic is challenged by a lack of suitable datasets, and consists primarily of controlled studies focusing on a single type of signal processing. However, considering one of these processes in isolation neglects the multidimensional nature of mixing. For this reason, this work presents an analysis and evaluation of real-life mixes, demonstrating that it is a viable and even necessary approach to learn more about how mixes are created and perceived.
Addressing the need for appropriate data, a database of 600 multitrack audio recordings is introduced, and mixes are produced by skilled engineers for a selection of songs. This corpus is subjectively evaluated by 33 expert listeners, using a new framework tailored to the requirements of comparison of musical signal processing.
By studying the relationship between these assessments and objective audio features, previous results are confirmed or revised, new rules are unearthed, and descriptive terms can be defined. In particular, it is shown that examples of inadequate processing, combined with subjective evaluation, are essential in revealing the impact of mix processes on perception. As a case study, the percept 'reverberation amount' is expressed as a function of two objective measures, and a range of acceptable values can be delineated.
To establish the generality of these findings, the experiments are repeated with an expanded set of 180 mixes, assessed by 150 subjects with varying levels of experience from seven different locations in five countries. This largely confirms the initial findings, showing few distinguishable trends between groups. Increasing experience of the listener results in a larger proportion of critical and specific statements, and in greater agreement with other experts. Acknowledgments: Yamaha Corporation, the Audio Engineering Society, Harman International Industries, the Engineering and Physical Sciences Research Council, the Association of British Turkish Academics, and Queen Mary University of London's School of Electronic Engineering and Computer Science.
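Expressing a percept as a function of two objective measures amounts to a regression problem. The sketch below uses synthetic data and made-up feature names (the thesis's actual measures, ratings, and model form differ):

```python
import numpy as np

# Synthetic stand-ins for two objective mix features and listener ratings.
rng = np.random.default_rng(2)
wet_dry_ratio = rng.uniform(0.0, 1.0, 50)
decay_time = rng.uniform(0.2, 2.0, 50)
# Hypothetical ground truth: ratings depend linearly on both features,
# plus a little listener noise.
ratings = 2.0 * wet_dry_ratio + 1.0 * decay_time + rng.normal(0.0, 0.05, 50)

# A least-squares fit recovers the percept as a function of the measures.
X = np.column_stack([wet_dry_ratio, decay_time, np.ones(50)])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
```

Once such a fit is in hand, an acceptable range of the percept translates directly into a region of the objective feature space, which is what makes the case study useful for automatic mixing.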
Audio for Virtual, Augmented and Mixed Realities: Proceedings of ICSA 2019 ; 5th International Conference on Spatial Audio ; September 26th to 28th, 2019, Ilmenau, Germany
The ICSA 2019 focuses on bringing together, across disciplines, developers, scientists, users, and content creators of and for spatial audio systems and services. A special focus is on audio for so-called virtual, augmented, and mixed realities.
The fields of ICSA 2019 are:
- Development and scientific investigation of technical systems and services for spatial audio recording, processing and reproduction
- Creation of content for reproduction via spatial audio systems and services
- Use and application of spatial audio systems and content presentation services
- Media impact of content and spatial audio systems and services from the point of view of media science.
The ICSA 2019 is organized by the VDT and TU Ilmenau with support of the Fraunhofer Institute for Digital Media Technology IDMT.
- …