209 research outputs found

    Physically Informed Subtraction of a String's Resonances from Monophonic, Discretely Attacked Tones: A Phase Vocoder Approach

    A method for the subtraction of a string's oscillations from monophonic, plucked- or struck-string tones is presented. The remainder of the subtraction is the response of the instrument's body to the excitation, plus potentially other sources such as faint vibrations of other strings, background noise or recording artifacts. In some respects, this method resembles a stochastic-deterministic decomposition based on Sinusoidal Modeling Synthesis [MQ86, IS87]. However, our method targets string partials expressly, according to a physical model of the string's vibrations described in this thesis, and it is built on a Phase Vocoder scheme. This approach has the essential advantage that the subtraction of the partials can take place "instantly", on a frame-by-frame basis, avoiding the need to track partials over time and thus opening the possibility of a real-time implementation. The subtraction takes place in the frequency domain, and a method is presented whereby the computational cost of this process can be reduced by restricting a partial's frequency-domain data to its main lobe. In each frame of the Phase Vocoder, the string is encoded as a set of partials, each completely described by four constants: frequency, phase, magnitude and exponential decay. These parameters are obtained with a novel method, the Complex Exponential Phase Magnitude Evolution (CSPME), a generalisation of the CSPE [SG06] to signals with exponential envelopes, which surpasses the finite resolution of the Discrete Fourier Transform. The resulting encoding is an intuitive representation of the string, suitable for musical processing.
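    The frame-by-frame, main-lobe-restricted subtraction described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the thesis's CSPME implementation: the partial's four constants are assumed known, and `subtract_partial` is a hypothetical helper name.

```python
import numpy as np

def subtract_partial(frame_spectrum, freq, phase, mag, decay, sr, win, lobe_bins=4):
    """Subtract one modeled string partial from a single analysis frame,
    restricted to the bins around its main lobe (toy sketch)."""
    n = len(win)
    t = np.arange(n) / sr
    # Model the partial inside this frame: an exponentially decaying sinusoid
    partial = mag * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase)
    partial_spec = np.fft.rfft(partial * win)
    k0 = int(round(freq * n / sr))                 # bin nearest the partial
    lo = max(0, k0 - lobe_bins)
    hi = min(len(frame_spectrum), k0 + lobe_bins + 1)
    out = frame_spectrum.copy()
    out[lo:hi] -= partial_spec[lo:hi]              # subtract only the main lobe
    return out
```

    When the model matches the signal, the residual around the partial's bin vanishes while all bins outside the lobe are left untouched, which is what makes the per-frame cost low.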

    A hybrid keyboard-guitar interface using capacitive touch sensing and physical modeling

    This paper was presented at the 9th Sound and Music Computing Conference, Copenhagen, Denmark. It presents a hybrid interface based on a touch-sensing keyboard which gives detailed expressive control over a physically modeled guitar. Physical modeling allows realistic guitar synthesis incorporating many expressive dimensions commonly employed by guitarists, including pluck strength and location, plectrum type, hand damping and string bending. Often, when a physical model is used in performance, most control dimensions go unused because the interface fails to provide a way to control them intuitively. Techniques as foundational as strumming lack a natural analog on the MIDI keyboard, and few digital controllers provide the independent control of pitch, volume and timbre that even novice guitarists achieve. Our interface combines gestural aspects of keyboard and guitar playing. Most dimensions of guitar technique are controllable polyphonically, some of them continuously within each note. Mappings are evaluated in a user study of keyboardists and guitarists, and the results demonstrate the interface's playability by performers of both instruments.
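    One of the control dimensions mentioned above, pluck location, is commonly modeled in plucked-string synthesis with a feedforward comb filter: plucking at a fraction p of the string length cancels the harmonics that have a node at that point. A minimal sketch of that standard technique (not necessarily the paper's exact implementation; the function name is hypothetical):

```python
import numpy as np

def pluck_position_filter(excitation, pluck_pos, period):
    """Comb-filter the excitation to model pluck position:
    y[n] = x[n] - x[n - d], with d = pluck_pos * period samples."""
    d = max(1, int(round(pluck_pos * period)))
    out = np.copy(excitation)
    out[d:] -= excitation[:-d]   # notch harmonics with a node at the pluck point
    return out
```

    Feeding the filtered excitation into a string model then reproduces the timbral difference between plucking near the bridge and near the fingerboard.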

    Vibroacoustic characterization and sound synthesis of the viola caipira

    Advisors: José Maria Campos dos Santos, François Gautier, Frédéric Ablitzer. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica, and Le Mans Université.
    The viola caipira is a type of Brazilian guitar widely used in popular music. It consists of ten metallic strings arranged in five pairs, tuned in unison or in octaves. This thesis focuses on the analysis of the specific features of the musical sounds produced by this instrument, which has been little studied in the literature. The analysis of the motions of plucked strings using a high-speed camera shows the existence of sympathetic string vibrations, which result in a sound halo, an important perceptive feature. These measurements also reveal the existence of shocks between strings, which lead to clearly audible consequences. Modal analysis of the body vibrations, carried out with a scanning laser vibrometer and an automatic impact hammer, reveals some differences from and similarities with the classical guitar. Bridge mobilities are also measured using the wire-breaking method, which is simple to use and inexpensive since it does not require a force sensor. Combined with a high-resolution modal analysis (ESPRIT method), these measurements make it possible to determine the modal shapes at the string/body coupling points and thus to characterize the instrument. Physical modelling based on a hybrid modal approach is carried out for sound synthesis purposes. It takes into account the string motions in two orthogonal polarizations, the couplings with the body and the collisions between strings. The model is called hybrid because it combines an analytical description of the string vibrations with experimental data describing the body. Time-domain simulations reveal the main characteristics of the viola caipira.
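    The analytical half of such a hybrid model describes the string as a sum of damped modes. A minimal string-only sketch follows; the amplitude and damping laws are illustrative assumptions, and the body coupling, second polarization, and string collisions of the thesis are deliberately omitted.

```python
import numpy as np

def modal_string(f0, n_modes, dur, sr, pluck_pos=0.25, damping=1.5):
    """Sum of exponentially damped modes for an ideal plucked string.
    amp ~ sin(m*pi*p)/m^2 is the ideal-pluck spectrum; damping grows with m."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for m in range(1, n_modes + 1):
        amp = np.sin(m * np.pi * pluck_pos) / m**2   # ideal pluck excitation
        sigma = damping * m                          # frequency-dependent decay
        out += amp * np.exp(-sigma * t) * np.sin(2 * np.pi * m * f0 * t)
    return out / np.max(np.abs(out))
```

    In the hybrid approach, the modal parameters of the body measured at the bridge would be coupled to these string modes instead of the made-up damping law used here.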

    Automatic Transcription of Bass Guitar Tracks applied for Music Genre Classification and Sound Synthesis

    Music recordings most often consist of multiple instrument signals which overlap in time and frequency. In the field of Music Information Retrieval (MIR), existing algorithms for the automatic transcription and analysis of music recordings aim to extract semantic information from these mixed audio signals. In recent years it has frequently been observed that algorithm performance is limited by the signal interference and the resulting loss of information. One common approach is to first apply source separation algorithms to isolate the individual instrument signals before analyzing them; however, the performance of source separation strongly depends on the number of instruments and the amount of spectral overlap. In this thesis, isolated instrumental tracks are therefore analyzed in order to circumvent the challenges of source separation, and the focus is on instrument-centered signal processing algorithms for music transcription, musical analysis, and sound synthesis. The electric bass guitar is chosen as an example instrument; its sound production principles are closely investigated and reflected in the algorithmic design.
    In the first part of this thesis, an automatic music transcription algorithm for electric bass guitar recordings is presented. The audio signal is interpreted as a sequence of sound events described by various parameters. In addition to the conventional score-level parameters of note onset, duration, loudness, and pitch, instrument-specific parameters such as the applied playing techniques and the geometric position on the instrument fretboard are extracted. Evaluation experiments on two newly created audio data sets confirmed that the proposed transcription algorithm outperforms three state-of-the-art bass transcription algorithms on realistic bass guitar recordings.
    The estimation of the instrument-level parameters works with high accuracy, in particular for isolated note samples. In the second part of the thesis, it is investigated whether the analysis of the bassline of a music piece alone allows its music genre to be classified automatically. Score-based audio features are proposed that quantify tonal, rhythmic, and structural properties of basslines. Based on a novel data set of 520 bassline transcriptions from 13 music genres, three approaches for automatic genre classification were compared; a rule-based classification system achieved a mean class accuracy of 64.8 % using only features extracted from the bassline of a music piece.
    The re-synthesis of bass guitar recordings from the previously extracted note parameters is studied in the third part of this thesis. Based on the physical modeling of string instruments, a novel sound synthesis algorithm tailored to the electric bass guitar is presented. The algorithm mimics different aspects of the instrument's sound production mechanism, such as string excitation, string damping, string-fret collisions, and the influence of the electromagnetic pickup. Furthermore, a parametric audio coding approach is discussed that allows bass guitar tracks to be encoded and transmitted at a significantly smaller bit rate than conventional audio coding algorithms require. The results of several listening tests confirmed that a higher perceptual quality is achieved when the original bass guitar recordings are encoded and re-synthesized with the proposed parametric audio codec than when they are encoded with conventional audio codecs at very low bit rate settings.
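    For intuition about the physical-modeling synthesis stage, a classic Karplus-Strong loop captures the basic string-excitation-plus-damping idea in a few lines. This is a deliberately simplified stand-in, not the thesis's model: it has no playing techniques, fret collisions, or pickup simulation.

```python
import numpy as np

def karplus_strong(f0, dur, sr, damping=0.996, seed=0):
    """Plucked-string sketch: a noise burst circulates in a delay line,
    passing through an averaging lowpass that models string losses."""
    rng = np.random.default_rng(seed)
    period = int(sr / f0)
    buf = rng.uniform(-1, 1, period)              # noise-burst excitation
    out = np.empty(int(dur * sr))
    for i in range(len(out)):
        out[i] = buf[i % period]
        # two-point average in the feedback loop: high partials decay fastest
        buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out
```

    The thesis's synthesis algorithm refines this kind of loop with explicit excitation, damping, string-fret collision, and pickup models driven by the transcribed note parameters.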

    Music in Health and Diseases

    It is well recognized that music is a unique and cost-effective aid in the rehabilitation of patients with cognitive deficits. However, music can also be used as a non-invasive, non-pharmacological intervention, not only for the management of various disease conditions but also for maintaining good health overall. Music-based therapeutic strategies can complement existing diagnostic and treatment approaches to manage cognitive deficits as well as clinical and physiological abnormalities in individuals in need. This book focuses on various aspects of music and its role in enhancing health and recovering from disease. Chapters explore music as a healing method across civilizations and measure the effect of music on human physiology and function.

    Designing and Composing for Interdependent Collaborative Performance with Physics-Based Virtual Instruments

    Interdependent collaboration is a system of live musical performance in which performers can directly manipulate each other's musical outcomes. While most collaborative musical systems implement electronic communication channels between players that allow for parameter mappings, remote transmission of actions and intentions, or exchanges of musical fragments, they interrupt the energy continuum between gesture and sound, breaking our cognitive representation of gesture-to-sound dynamics. Physics-based virtual instruments allow for acoustically and physically plausible behaviors that are related to (and can be extended beyond) our experience of the physical world, and they inherently maintain and respect a representation of the gesture-to-sound energy continuum. This research explores the design and implementation of custom physics-based virtual instruments for real-time interdependent collaborative performance. It leverages the inherently physically plausible behaviors of physics-based models to create dynamic, nuanced, and expressive interconnections between performers. Design considerations, criteria, and frameworks are distilled from the literature in order to develop three new physics-based virtual instruments and associated compositions intended for dissemination and live performance by the electronic and instrumental music communities. Conceptual, technical, and artistic details and challenges are described, and reflections and evaluations by the composer-designer and performers are documented.
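    The elementary unit behind many physics-based virtual instruments is a mass-spring-damper, integrated sample by sample so that an input force (a gesture) produces a physically plausible displacement (a sound). A minimal sketch with illustrative parameter values, not taken from any of the instruments described above:

```python
import numpy as np

def mass_spring(force, k=1e5, m=0.001, r=0.5, sr=44100):
    """Semi-implicit (symplectic) Euler integration of m*x'' + r*x' + k*x = f.
    Returns the displacement signal driven by the input force signal."""
    x, v = 0.0, 0.0
    dt = 1.0 / sr
    out = np.empty(len(force))
    for i, f in enumerate(force):
        a = (f - k * x - r * v) / m   # Newton's second law
        v += a * dt                   # update velocity first (symplectic)
        x += v * dt                   # then position
        out[i] = x
    return out
```

    Networks of such elements, connected by springs and nonlinear links, give the interconnection behaviors the research exploits: energy injected by one performer propagates physically to the elements another performer is touching.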

    Re-Sonification of Objects, Events, and Environments

    Digital sound synthesis allows the creation of a great variety of sounds, and focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes narrows the otherwise vast digital audio palette. Tools for creating such sounds range from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties, and the ability to simulate interactive events with such objects. To create soundscapes that re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds classified by acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes for selected geographic areas; drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects: plucking, striking, rubbing, and any interaction that imparts energy into a system and affects the resultant sound. A method of estimating a linear system's input, constrained to a signal subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models; under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improved attack characteristics in the synthesized sounds. (Ph.D. dissertation, Electrical Engineering.)
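    A banded waveguide, as discussed above, runs one delay-loop-plus-bandpass band per resonant mode. The sketch below shows that structure in its simplest form; the filter normalization and parameter choices are illustrative assumptions, not the dissertation's proposed implementations, which tune delay length and filter jointly.

```python
import numpy as np

def banded_waveguide(excitation, mode_freqs, sr, feedback=0.98, bw=20.0):
    """One band per mode: a delay line of ~one period with a two-pole
    resonator at the mode frequency inside the feedback loop."""
    out = np.zeros(len(excitation))
    for f in mode_freqs:
        delay = max(1, int(round(sr / f)))            # loop delay ~ one period
        w0 = 2 * np.pi * f / sr
        r = np.exp(-np.pi * bw / sr)                  # pole radius from bandwidth
        a1, a2 = -2 * r * np.cos(w0), r * r
        b0 = (1 - r) * abs(1 - r * np.exp(-2j * w0))  # ~unity gain at f, so
        buf = np.zeros(delay)                         # loop gain = feedback < 1
        y1 = y2 = 0.0
        for n in range(len(excitation)):
            x = excitation[n] + feedback * buf[n % delay]
            y = b0 * x - a1 * y1 - a2 * y2            # resonant bandpass
            y2, y1 = y1, y
            buf[n % delay] = y
            out[n] += y
    return out
```

    With the resonator normalized to roughly unity gain at its center frequency, the per-band loop gain is simply the feedback coefficient, which keeps each band stable and decaying.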

    Spatially distributed computational modeling of a nonlinear vibrating string

    Nonlinearities in string instruments are responsible for several interesting acoustical features, resulting in characteristic and easily recognizable tones. For this reason, modern synthesis models have to be able to capture this nonlinear behavior when high-quality results are desired. This thesis presents two novel physical modeling algorithms that simulate the tension-modulation nonlinearity of plucked strings in a spatially distributed manner. The first method uses distributed fractional delay filters within a digital waveguide structure, allowing the length of the string's delay loop to be modulated at run time. The second method uses a nonlinear finite-difference approach in which the string state is approximated between sampling instants, also using fractional delay filters, thus allowing run-time modulation of the temporal sampling location. In both cases, the magnitude of the tension modulation is estimated at every time step from the elongation of the string. Simulation results of the two models are presented, compared with each other, and compared with measured data. Real-time sound synthesis of the kantele, a traditional Finnish plucked-string instrument with a strong tension-modulation effect, has been implemented using the nonlinear digital waveguide algorithm.
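    The waveguide variant can be illustrated with a toy loop whose delay shrinks as estimated tension rises, read through a first-order (linear-interpolation) fractional delay. This is a lumped caricature under stated assumptions, not the thesis's spatially distributed algorithm: loop energy stands in for the true elongation estimate, and all names are hypothetical.

```python
import numpy as np

def tension_modulated_string(f0, dur, sr, depth=0.5, damping=0.99, seed=0):
    """Plucked delay-loop whose length is modulated each sample:
    larger elongation -> higher tension -> shorter delay -> sharper pitch."""
    rng = np.random.default_rng(seed)
    nominal = sr / f0                      # nominal loop delay in samples
    size = int(nominal) + 2
    buf = rng.uniform(-1, 1, size)         # pluck: noise-filled delay line
    out = np.empty(int(dur * sr))
    w = 0                                  # write pointer
    for n in range(len(out)):
        elong = np.mean(buf ** 2)          # crude stand-in for elongation
        delay = nominal - depth * elong    # tension modulation of the delay
        rd = (w - delay) % size            # fractional read position
        i0 = int(rd)
        frac = rd - i0
        y = (1 - frac) * buf[i0] + frac * buf[(i0 + 1) % size]
        out[n] = y
        buf[w] = damping * y
        w = (w + 1) % size
    return out
```

    As the loop energy decays, the delay glides back to its nominal value, reproducing the characteristic initial pitch glide of tension-modulated strings such as the kantele's.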