
    Audibility of the timbral effects of inharmonicity in stringed instrument tones

    Abstract: Listening tests were conducted to find the audibility of inharmonicity in musical sounds produced by stringed instruments, such as the piano or the guitar. The audibility threshold of inharmonicity was measured at five fundamental frequencies. Results show that the detection of inharmonicity is strongly dependent on the fundamental frequency f0. A simple model is presented for estimating the threshold as a function of f0. The need to implement inharmonicity in digital sound synthesis is discussed.
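
    The abstract does not reproduce the threshold model itself, but the stiff-string relation commonly used in this literature gives the k-th partial frequency as f_k = k * f0 * sqrt(1 + B * k^2), where B is the inharmonicity coefficient. A minimal Python sketch of that relation (the function name and the example value of B are illustrative, not taken from the paper):

        import numpy as np

        def inharmonic_partials(f0, B, n_partials=20):
            # Stiff-string partial frequencies: f_k = k * f0 * sqrt(1 + B * k^2),
            # where B is the inharmonicity coefficient (roughly 1e-4 for
            # mid-range piano strings; B = 0 gives a perfectly harmonic series).
            k = np.arange(1, n_partials + 1)
            return k * f0 * np.sqrt(1.0 + B * k ** 2)

        # Example: harmonic reference vs. slightly stretched partials at 440 Hz.
        print(inharmonic_partials(440.0, 0.0, 5))
        print(inharmonic_partials(440.0, 1e-4, 5))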

    Physically Informed Subtraction of a String's Resonances from Monophonic, Discretely Attacked Tones: A Phase Vocoder Approach

    A method for the subtraction of a string's oscillations from monophonic, plucked- or hit-string tones is presented. The remainder of the subtraction is the response of the instrument's body to the excitation, and potentially other sources, such as faint vibrations of other strings, background noises or recording artifacts. In some respects, this method is similar to a stochastic-deterministic decomposition based on Sinusoidal Modeling Synthesis [MQ86, IS87]. However, our method targets string partials expressly, according to a physical model of the string's vibrations described in this thesis. Also, the method is built on a Phase Vocoder scheme. This approach has the essential advantage that the subtraction of the partials can take place "instantly", on a frame-by-frame basis, avoiding the need to track the partials and thereby opening the possibility of a real-time implementation. The subtraction takes place in the frequency domain, and a method is presented whereby the computational cost of this process can be reduced by restricting a partial's frequency-domain data to its main lobe. In each frame of the Phase Vocoder, the string is encoded as a set of partials, each completely described by four constants: frequency, phase, magnitude and exponential decay. These parameters are obtained with a novel method, the Complex Exponential Phase Magnitude Evolution (CSPME), which is a generalisation of the CSPE [SG06] to signals with exponential envelopes and which surpasses the finite resolution of the Discrete Fourier Transform. The encoding obtained is an intuitive representation of the string, suitable for musical processing.
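
    As a reading aid, the sketch below resynthesises one such partial over a single frame from the four per-frame constants the thesis describes (frequency, phase, magnitude, exponential decay). It is an illustration only; the thesis performs the actual subtraction in the frequency domain, frame by frame, and the function and parameter names here are assumptions:

        import numpy as np

        def partial_frame(freq_hz, phase_rad, magnitude, decay_per_s,
                          frame_len=1024, sr=44100):
            # One exponentially decaying sinusoidal partial over an analysis frame,
            # built from the four per-frame constants. A frequency-domain subtraction
            # would window/FFT this with the analysis parameters and subtract the
            # resulting spectrum (or just its main lobe) from the frame's spectrum.
            t = np.arange(frame_len) / sr
            envelope = magnitude * np.exp(-decay_per_s * t)
            return envelope * np.cos(2 * np.pi * freq_hz * t + phase_rad)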

    Application of wavelets to analysis of piano tones

    Ph.D. (Doctor of Philosophy).

    Empty Spaces: Temporal Structures and Timbral Transformations in Gérard Grisey's Modulations and Release for 12 Musicians, an original composition

    Gérard Grisey’s Modulations (1976-77) is the fourth installment of Les espaces acoustiques, a six-piece cycle inspired by the composer’s analysis of brass instruments’ E-based harmonic spectrum. This dissertation concentrates on Grisey’s approach to the temporal evolution of Modulations, and how his temporal structuring affects perception of the piece’s continuum. The analysis discerns and examines eight temporal structures spread over three larger parts. A temporal structure creates and transforms synthetic timbres, from fashioning their individual transients to designing their overall dynamic evolution. In Modulations, Grisey uses processes of timbral transformation to essentially ‘compose sound.’ These transformations define each of the piece’s temporal structures as well as the overarching structure of the piece. Whereas the three major sections of the work are clearly delineated by elisions that serve as pointers between them, the eight temporal structures generally overlap, in a manner that Grisey describes as structural polyphony. The analysis shows how this structural polyphony emulates the behavior of partials within a harmonic spectrum, each possessing its own discrete amplitude envelope. The notion of the individual amplitude envelope is then traced on every structural level, from small-scale temporal structures where single partials are represented by single instruments—Modulations’ sound objects—to the larger-scale temporal structure. Grisey’s timbral transformations prompt expectations, then satisfy or defy them. Temporal structures define what Grisey calls the “skeleton of time” (objective time), aiming to affect the piece’s “flesh of time”—its becoming. The title of my original piece Release for 12 musicians refers to the gesture of setting free from a state of utmost constriction, be it physical or mental. At the start, this idea is conveyed at the micro level—one instrument playing one note over a very short time unit. It then expands at various levels, the largest being the entire structure of the piece, where two defined sections are noticeable. The first is a gradual buildup of tension, the second a release of that tension in multiple stages.

    Pitch-Informed Solo and Accompaniment Separation

    This thesis addresses the development of a system for pitch-informed solo and accompaniment separation capable of separating main instruments from the music accompaniment regardless of the musical genre of the track or the type of accompaniment. For the solo instrument, only pitched monophonic instruments were considered, in a single-channel scenario where no panning or spatial location information is available. In the proposed method, pitch information is used as an initial stage of a sinusoidal modeling approach that attempts to estimate the spectral information of the solo instrument from a given audio mixture. Instead of estimating the solo instrument on a frame-by-frame basis, the proposed method gathers information of tone objects to perform separation. Tone-based processing allowed the inclusion of novel processing stages for attack refinement, transient interference reduction, common amplitude modulation (CAM) of tone objects, and better estimation of the non-harmonic elements that can occur in musical instrument tones. The proposed solo and accompaniment algorithm is an efficient method suitable for real-world applications. A study was conducted to better model the magnitude, frequency, and phase of isolated musical instrument tones. As a result of this study, temporal envelope smoothness, inharmonicity of musical instruments, and phase expectation were exploited in the proposed separation method. Additionally, an algorithm for harmonic/percussive separation based on phase expectation was proposed. The algorithm shows improved perceptual quality with respect to state-of-the-art methods for harmonic/percussive separation. The proposed solo and accompaniment method obtained perceptual quality scores comparable to other state-of-the-art algorithms in the SiSEC 2011 and SiSEC 2013 campaigns, and outperformed the comparison algorithm on the instrumental dataset described in this thesis. As a use case of solo and accompaniment separation, a listening test procedure was conducted to assess separation quality requirements in the context of music education. Results from the listening test showed that the solo and accompaniment tracks should be optimized differently to suit the quality requirements of music education. The Songs2See application was presented as commercial music learning software which includes the proposed solo and accompaniment separation method.
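
    A heavily simplified sketch of the pitch-informed idea, reduced to per-frame harmonic masking of an STFT frame (the thesis method works on tone objects and is far more refined; all names, tolerances, and parameters below are illustrative):

        import numpy as np

        def harmonic_mask(mix_frame_rfft, f0_hz, sr, n_harmonics=25, tol_hz=30.0):
            # Keep rFFT bins lying close to integer multiples of the given pitch f0
            # as the solo estimate; the residual spectrum is the accompaniment.
            n_bins = len(mix_frame_rfft)
            bin_freqs = np.arange(n_bins) * sr / (2 * (n_bins - 1))
            mask = np.zeros(n_bins)
            for k in range(1, n_harmonics + 1):
                mask[np.abs(bin_freqs - k * f0_hz) < tol_hz] = 1.0
            return mask * mix_frame_rfft, (1.0 - mask) * mix_frame_rfft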

    Physical modelling meets machine learning: performing music with a virtual string ensemble

    This dissertation describes a new method of computer performance of bowed string instruments (violin, viola, cello) using physical simulations and intelligent feedback control. Computer synthesis of music performed by bowed string instruments is a challenging problem. Unlike instruments whose notes originate with a single discrete excitation (e.g., piano, guitar, drum), bowed string instruments are controlled with a continuous stream of excitations (i.e., the bow scraping against the string). Most existing synthesis methods utilize recorded audio samples, which perform quite well for single-excitation instruments but not for continuous-excitation instruments. This work improves the realism of synthesis of violin, viola, and cello sound by generating audio through modelling the physical behaviour of the instruments. A string's wave equation is decomposed into 40 modes of vibration, which can be acted upon by three forms of external force: a bow scraping against the string, a left-hand finger pressing down, and/or a right-hand finger plucking. The vibration of each string exerts force against the instrument bridge; these forces are summed and convolved with the instrument body impulse response to create the final audio output. In addition, right-hand haptic output is created from the force of the bow against the string. Physical constants from ten real instruments (five violins, two violas, and three cellos) were measured and used in these simulations. The physical modelling was implemented in a high-performance library capable of simulating audio on a desktop computer one hundred times faster than real-time. The program also generates animated video of the instruments being performed. To perform music with the physical models, a virtual musician interprets the musical score and generates actions which are then fed into the physical model. The resulting audio and haptic signals are examined with a support vector machine, which adjusts the bow force in order to establish and maintain a good timbre. This intelligent feedback control is trained with human input, but after the initial training is completed the virtual musician performs autonomously. A PID controller is used to adjust the position of the left-hand finger to correct any flaws in the pitch. Some performance parameters (initial bow force, force correction, and lifting factors) require an initial value for each string and musical dynamic; these are calibrated automatically using the previously-trained support vector machines. The timbre judgements are retained after each performance and are used to pre-emptively adjust bowing parameters to avoid or mitigate problematic timbre for future performances of the same music. The system is capable of playing sheet music with approximately the same ability level as a human music student after two years of training. Due to the number of instruments measured and the generality of the machine learning, music can be performed with ensembles of up to ten stringed instruments, each with a distinct timbre. This provides a baseline for future work in computer control and expressive music performance of virtual bowed string instruments.
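
    A toy version of the modal idea described above, with the string reduced to a sum of damped near-harmonic modes whose summed bridge force would then be convolved with a measured body impulse response (mode amplitudes and decay rates are illustrative, not the measured instrument constants from the thesis):

        import numpy as np

        def plucked_modal_string(f0, duration, sr=44100, n_modes=40, decay=3.0):
            # Sum n_modes exponentially damped sinusoids as a stand-in for the
            # bridge force of a plucked string; the caller convolves the result
            # with a body impulse response to obtain the final audio.
            t = np.arange(int(duration * sr)) / sr
            force = np.zeros_like(t)
            for n in range(1, n_modes + 1):
                amp = 1.0 / n  # simple 1/n spectral rolloff
                force += amp * np.exp(-decay * n * t) * np.sin(2 * np.pi * n * f0 * t)
            return force

        # audio = np.convolve(plucked_modal_string(196.0, 2.0), body_impulse_response)
        # (body_impulse_response is a placeholder for a measured body IR.)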

    Separation of musical sources and structure from single-channel polyphonic recordings

    EThOS - Electronic Theses Online Service, United Kingdom.

    Instantaneous Harmonic Analysis and its Applications in Automatic Music Transcription

    This thesis presents a novel short-time frequency analysis algorithm, namely Instantaneous Harmonic Analysis (IHA), using a decomposition scheme based on sinusoids. An estimate of the instantaneous amplitude and phase elements of the constituent components of real-valued signals with respect to a set of reference frequencies is provided. In the context of musical audio analysis, the instantaneous amplitude is interpreted as the presence of the pitch in time. The thesis examines the potential of improving the automated music analysis process by utilizing the proposed algorithm. For that reason, it targets the following two areas: Multiple Fundamental Frequency Estimation (MFFE) and note onset/offset detection. The IHA algorithm uses constant-Q filtering by employing Windowed Sinc Filters (WSFs) and a novel phasor construct. An implementation of WSFs in the continuous model is used. A new relation between the Constant-Q Transform (CQT) and WSFs is presented. It is demonstrated that the CQT can alternatively be implemented by applying a series of logarithmically scaled WSFs while its window function is adjusted accordingly. The relation between the window functions is provided as well. A comparison of the proposed IHA algorithm with WSFs and the CQT demonstrates that the IHA phasor construct delivers better estimates of the instantaneous amplitude and phase lags of the signal components. The thesis also extends the IHA algorithm by employing a generalized kernel function, which by nature yields a non-orthonormal basis. The kernel function represents the timbral information and is used in the MFFE process. An effective algorithm is proposed to overcome the non-orthonormality issue of the decomposition scheme. To examine the performance improvement of the note onset/offset detection process, the proposed algorithm is used in the context of Automatic Music Transcription (AMT). A prototype of an audio-to-MIDI system is developed and applied to synthetic and real music signals. The results of the experiments on real and synthetic music signals are reported. Additionally, a multi-dimensional generalization of the IHA algorithm is presented. The IHA phasor construct is extended into the hyper-complex space in order to deliver the instantaneous amplitude and multiple phase elements for each dimension.
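
    The following sketch is not the IHA algorithm itself, only the generic idea it refines: heterodyne the signal to each reference frequency and low-pass it with a windowed sinc filter, so that the magnitude and angle of the filtered component approximate the instantaneous amplitude and phase at that frequency (all names and parameters are illustrative assumptions):

        import numpy as np

        def instantaneous_amp_phase(x, ref_freqs, sr, cutoff_hz=20.0, taps=1025):
            # Hann-windowed sinc low-pass applied after complex demodulation.
            n = np.arange(taps) - (taps - 1) / 2
            lp = np.sinc(2.0 * cutoff_hz / sr * n) * np.hanning(taps)
            lp /= lp.sum()
            t = np.arange(len(x)) / sr
            estimates = {}
            for f in ref_freqs:
                shifted = x * np.exp(-2j * np.pi * f * t)   # move component at f to DC
                baseband = np.convolve(shifted, lp, mode="same")
                estimates[f] = (2.0 * np.abs(baseband), np.angle(baseband))
            return estimates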