General Process in the Creation of Estruendos and Principal Structural Elements of the Composition
My composition, Estruendos, is a work for large symphonic orchestra, guitar, and computer-generated and processed sounds on CD. The work lasts 23 minutes and 45 seconds. My dissertation is composed of two parts: Part One comprises the analysis and Part Two comprises the score. Part One gives a brief background of my compositional dialect and aesthetics. It also includes a discussion of the compositional process and a general overview of Estruendos. In addition, it illustrates the primary role that the placement of sonic events in time and the timbral structure play in the pathos of Estruendos.
Deep Learning for Audio Effects Modeling
PhD thesis. Audio effects modeling is the process of emulating an audio effect unit and seeks to recreate the sound, behaviour and main perceptual features of an analog reference device. Audio effect units are analog or digital signal processing systems that transform certain characteristics of the sound source. These transformations can be linear or nonlinear, time-invariant or time-varying, and with short-term or long-term memory. The most typical audio effect transformations are based on dynamics, such as compression; tone, such as distortion; frequency, such as equalization; and time, such as artificial reverberation or modulation-based audio effects.

The digital simulation of these audio processors is normally done by designing mathematical models of these systems. This is often difficult because it seeks to accurately model all components within the effect unit, which usually contains mechanical elements together with nonlinear and time-varying analog electronics. Most existing methods for audio effects modeling are either simplified or optimized for a very specific circuit or type of audio effect and cannot be efficiently translated to other types of audio effects.

This thesis explores deep learning architectures for music signal processing in the context of audio effects modeling. We investigate deep neural networks as black-box modeling strategies for this task, i.e. using only input-output measurements. We propose different DSP-informed deep learning models to emulate each type of audio effect transformation.

Through objective perceptual-based metrics and subjective listening tests, we evaluate the performance of these models when modeling various analog audio effects. We also analyze how the given tasks are accomplished and what the models are actually learning. We present virtual analog models of nonlinear effects, such as a tube preamplifier; nonlinear effects with memory, such as a transistor-based limiter; and electromechanical nonlinear time-varying effects, such as a Leslie speaker cabinet and plate and spring reverberators.

We report that the proposed deep learning architectures improve on the state of the art in black-box modeling of audio effects, and we outline directions for future work.
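The black-box approach described in the abstract can be illustrated with a deliberately tiny sketch: a hypothetical "analog" distortion unit is observed only through input-output measurements, and a parametric model is fitted to those pairs by gradient descent. The `tanh` waveshaper, the `drive` parameter, and all numbers below are assumptions for illustration only, not the thesis's actual models or data.

```python
import math
import random

# Hypothetical "analog" reference unit: we pretend its internals are unknown
# and only observe input/output pairs (the black-box setting).
def reference_effect(x, drive=3.0):
    return math.tanh(drive * x)

# Black-box model: same functional family with a learnable 'drive' parameter,
# fitted purely from measured input/output pairs via gradient descent on
# the squared error.
def fit_drive(samples, lr=1.0, epochs=300):
    drive = 1.0  # initial guess, far from the true value
    for _ in range(epochs):
        grad = 0.0
        for x, y in samples:
            pred = math.tanh(drive * x)
            # d/d_drive of (pred - y)^2 = 2*(pred - y)*(1 - pred^2)*x
            grad += 2.0 * (pred - y) * (1.0 - pred * pred) * x
        drive -= lr * grad / len(samples)
    return drive

random.seed(0)
inputs = [random.uniform(-1.0, 1.0) for _ in range(256)]
samples = [(x, reference_effect(x)) for x in inputs]
estimated = fit_drive(samples)
print(round(estimated, 2))  # converges toward the true drive of 3.0
```

The thesis's models replace this one-parameter waveshaper with deep neural networks, which lets the same input-output fitting strategy cover effects with memory and time variation, but the training principle is the same.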
Spectral analysis and sonification of simulation data generated in a frequency domain experiment
In this thesis, we evaluate the frequency domain approach for data farming and assess the possibility of analyzing complex data sets using data sonification. Data farming applies agent-based models and simulation, computing power, and data analysis and visualization technologies to help answer complex questions in military operations. Sonification is the use of data to generate sound for analysis.

We apply a frequency domain experiment (FDE) to a combat simulation and analyze the output data set using spectral analysis. We compare the results from our FDE with those obtained using another experimental design on the same combat scenario. Our results confirm and complement the earlier findings. We then develop an auditory display that uses data sonification to represent the simulation output data set with sound. We consider the simulation results from the FDE as a waveshaping function and generate sounds using sonification software. We characterize the sonified data by their noise, signal, and volume. Qualitatively, the sonified data match the corresponding spectra from the FDE. Therefore, we demonstrate the feasibility of representing simulation data from the FDE with our sonification. Finally, we offer suggestions for future development of a multimodal display that can be used for analyzing complex data sets.
http://archive.org/details/spectralnalysisn109459805
Lieutenant, United States Navy
Approved for public release; distribution is unlimited
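The idea of treating simulation output as a waveshaping function can be sketched minimally: rescale a data series to [-1, 1], use it as a transfer function, and drive it with a sine oscillator so the shape of the data becomes the timbre of the tone. The data values, sample rate, and frequency below are made-up stand-ins, not the thesis's actual FDE results or sonification software.

```python
import math

# Hypothetical simulation output: a short series of model responses standing
# in for FDE results, rescaled to [-1, 1] for use as a transfer function.
sim_output = [0.2, 1.4, 3.1, 2.7, 1.9, 0.8, -0.3, -1.1]
lo, hi = min(sim_output), max(sim_output)
transfer = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in sim_output]

def waveshape(x, table):
    """Map x in [-1, 1] through the table with linear interpolation."""
    pos = (x + 1.0) / 2.0 * (len(table) - 1)
    i = min(int(pos), len(table) - 2)
    frac = pos - i
    return table[i] * (1.0 - frac) + table[i + 1] * frac

# Drive the data-derived transfer function with a sine oscillator: the
# data's shape distorts the sine and so determines the resulting timbre.
rate, freq = 8000, 220.0
tone = [waveshape(math.sin(2.0 * math.pi * freq * n / rate), transfer)
        for n in range(rate)]  # one second of audio samples in [-1, 1]
```

The resulting `tone` list could then be quantized to 16-bit PCM and written out with the standard-library `wave` module for listening.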
Interpretive electronic music systems: a portfolio of compositions
A portfolio of electronic music compositions employing adaptable controllers, graphic
notation, and custom software performance environments.
The portfolio is comprised of scores, recordings, and supporting software and audio files for
the following: Short Circuit; Sample & Hold; Mute | Solo; NCTRN; Radio | Silence; and Please
use the tramps provided.
Supplementary files include alternative audio and video recordings of some of the works
listed above, additional software documentation, and a video recording of a structured
improvisation featuring the controllers and software used in this portfolio.
Faculty Senate Chronicle March 18, 1984
Minutes for the regular meeting of The University of Akron Faculty Senate on March 18, 1984