Estimation of Guitar Fingering and Plucking Controls based on Multimodal Analysis of Motion, Audio and Musical Score
This work presents a method for the extraction of instrumental controls during guitar performances. The method is based on the analysis of multimodal data combining motion capture, audio analysis and the musical score. High-speed cameras with marker identification are used to track the position of finger bones and articulations, and audio is recorded with a transducer measuring vibration on the guitar body. The extracted parameters are divided into left-hand controls, i.e. fingering (which string and fret is pressed with which left-hand finger), and right-hand controls, i.e. the plucked string, the plucking finger and the characteristics of the pluck (position, velocity and angles with respect to the string). Controls are estimated from probability functions of low-level features, namely the plucking instants (i.e. note onsets), the pitch and the distances of the fingers of both hands to the strings and frets. Note onsets are detected via audio analysis, the pitch is extracted from the score, and distances are computed using 3D Euclidean geometry. Results show that by combining multimodal information it is possible to estimate such a comprehensive set of control features, with especially high performance for fingering and plucked-string estimation. The accuracy for the plucking finger and the pluck characteristics is lower, but improvements are foreseen, including a hand model and the use of high-speed cameras for calibration and evaluation. A. Perez-Carrillo was supported by a Beatriu de Pinos grant 2010 BP-A 00209 from the Catalan Research Agency (AGAUR), and J. Ll. Arcos was supported by projects ICT-2011-8-318770 and 2009-SGR-1434. Peer reviewed
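As a rough illustration of the distance-based estimation step described above, the sketch below computes 3D point-to-segment distances between fingertip markers and string endpoints and picks the closest finger/string pair. The function names and the toy geometry are hypothetical; the actual method additionally combines such distances with onset and pitch probability functions.

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Shortest 3D distance from point p to the segment from a to b."""
    ab = b - a
    # project p onto the segment, clamped to its endpoints
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def estimate_fingering(fingertips, strings):
    """Return the (finger, string, distance) triple with the smallest
    fingertip-to-string distance.

    fingertips: dict name -> 3D marker position
    strings:    dict name -> (nut_point, bridge_point) segment endpoints
    """
    best = None
    for fname, p in fingertips.items():
        for sname, (a, b) in strings.items():
            d = point_to_segment_distance(p, a, b)
            if best is None or d < best[2]:
                best = (fname, sname, d)
    return best

# toy geometry: two strings running along the x axis, 1 cm apart in y
strings = {
    "E": (np.array([0.0, 0.00, 0.0]), np.array([0.65, 0.00, 0.0])),
    "A": (np.array([0.0, 0.01, 0.0]), np.array([0.65, 0.01, 0.0])),
}
fingertips = {"index": np.array([0.10, 0.001, 0.002])}
finger, string, dist = estimate_fingering(fingertips, strings)
```

In the real setting such a hard argmin would be replaced by a probability over candidates, gated by the detected note onsets and the pitch from the score.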
Automatic Transcription of Bass Guitar Tracks applied for Music Genre Classification and Sound Synthesis
Music recordings most often consist of multiple instrument signals, which
overlap in time and frequency. In the field of Music Information Retrieval
(MIR), existing algorithms for the automatic transcription and analysis of
music recordings aim to extract semantic information from mixed audio
signals. In recent years, it has frequently been observed that algorithm performance is limited by signal interference and the resulting loss of information. One common approach to solve this problem is to first
apply source separation algorithms to isolate the present musical
instrument signals before analyzing them individually. The performance of
source separation algorithms strongly depends on the number of instruments
as well as on the amount of spectral overlap. In this thesis, isolated
instrumental tracks are analyzed in order to circumvent the challenges of
source separation. Instead, the focus is on the development of
instrument-centered signal processing algorithms for music transcription,
musical analysis, as well as sound synthesis. The electric bass guitar is
chosen as an example instrument. Its sound production principles are
closely investigated and considered in the algorithmic design. In the first
part of this thesis, an automatic music transcription algorithm for
electric bass guitar recordings will be presented. The audio signal is
interpreted as a sequence of sound events, which are described by various
parameters. In addition to the conventionally used score-level parameters
note onset, duration, loudness, and pitch, instrument-specific parameters
such as the applied instrument playing techniques and the geometric
position on the instrument fretboard will be extracted. Different
evaluation experiments confirmed that the proposed transcription algorithm
outperformed three state-of-the-art bass transcription algorithms for the
transcription of realistic bass guitar recordings. The estimation of the
instrument-level parameters works with high accuracy, in particular for isolated note samples. In the second part of the thesis, it will be investigated whether the sole analysis of the bassline of a music piece allows its music genre to be classified automatically. Different score-based audio features will be proposed that quantify tonal, rhythmic, and
structural properties of basslines. Based on a novel data set of 520
bassline transcriptions from 13 different music genres, three approaches
for music genre classification were compared. A rule-based classification
system could achieve a mean class accuracy of 64.8 % by only taking
features into account that were extracted from the bassline of a music
piece. The re-synthesis of bass guitar recordings using the previously
extracted note parameters will be studied in the third part of this thesis.
Based on the physical modeling of string instruments, a novel sound
synthesis algorithm tailored to the electric bass guitar will be presented.
The algorithm mimics different aspects of the instrument’s sound
production mechanism such as string excitation, string damping, string-fret
collision, and the influence of the electro-magnetic pickup. Furthermore, a
parametric audio coding approach will be discussed that allows bass guitar tracks to be encoded and transmitted at a significantly smaller bit rate than
conventional audio coding algorithms do. The results of different listening
tests confirmed that a higher perceptual quality can be achieved if the
original bass guitar recordings are encoded and re-synthesized using the
proposed parametric audio codec instead of being encoded using conventional
audio codecs at very low bit rate settings.
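The physical-modeling synthesis idea underlying the third part can be illustrated with a much simpler classic: the Karplus-Strong plucked-string algorithm, a delay line whose output is fed back through a damping low-pass filter. This is a minimal sketch of the general principle only, not the thesis's model, which additionally handles string-fret collisions and the electro-magnetic pickup.

```python
import random
from collections import deque

def karplus_strong(freq_hz, sample_rate=44100, duration_s=0.5, damping=0.996):
    """Minimal Karplus-Strong plucked-string model.

    A delay line of length sample_rate/freq_hz is filled with noise (the
    "pluck"); each output sample is recirculated through a two-point
    average scaled by `damping`, which models string losses.
    """
    period = int(sample_rate / freq_hz)            # delay-line length sets pitch
    line = deque(random.uniform(-1.0, 1.0) for _ in range(period))
    out = []
    for _ in range(int(sample_rate * duration_s)):
        first = line.popleft()
        out.append(first)
        # averaging the two oldest samples acts as a gentle low-pass,
        # so high partials decay faster, as on a real string
        line.append(damping * 0.5 * (first + line[0]))
    return out

random.seed(0)                    # fixed seed for a reproducible excitation
samples = karplus_strong(41.2)    # low E of a bass guitar is roughly 41.2 Hz
```

The signal starts as broadband noise and decays toward a damped near-periodic tone at the chosen pitch; a real bass-guitar model would replace the noise burst with a physically motivated excitation and add the pickup response.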
Music in Health and Diseases
It is well recognized that music is a unique and cost-effective aid in the rehabilitation of patients with cognitive deficits. However, music can also be used as a non-invasive, non-pharmacological intervention, not only for the management of various disease conditions but also for maintaining good health overall. Music-based therapeutic strategies can be used as complementary methods alongside existing clinical approaches to manage cognitive deficits as well as clinical and physiological abnormalities in individuals in need. This book focuses on various aspects of music and its role in enhancing health and recovery from disease. Chapters explore music as a healing method across civilizations and measure the effect of music on human physiology and function.
Player–Instrument Interaction Models for Digital Waveguide Synthesis of Guitar: Touch and Collisions
Fretless Architecture: Towards the Development of Original Techniques and Musical Notation Specific to the Fretless Electric Guitar
This article discusses the development of original performance techniques specific to the fretless electric guitar through diverse musical practice(s) and proposes a standardised system of musical notation. An autoethnographic account of personal performance experience is framed with reference to theoretical constructs of performative practice and collaborative creativity. The article focuses on the process behind an evolving practice: combining practical and theoretical aspects of contemporary music performance, and demonstrating that the collation, archiving and subsequent dissemination of both established and emerging techniques into the wider musical community is essential to promote the fretless electric guitar as an independent musical force.
Caracterização vibroacústica e síntese sonora da viola caipira (Vibroacoustic characterization and sound synthesis of the viola caipira)
Advisors: José Maria Campos dos Santos, François Gautier, Frédéric Ablitzer. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica, and Le Mans Université. The viola caipira is a type of Brazilian guitar widely used in popular music. It consists of ten metallic strings arranged in five pairs, tuned in unison or in octaves. This thesis focuses on the specificities of the musical sounds produced by this instrument, which has been little studied in the literature. Analysis of the motions of plucked strings using a high-speed camera shows the existence of sympathetic string vibrations, which result in a sound halo, an important perceptive feature. These measurements also reveal the existence of shocks between strings, which lead to clearly audible consequences. Modal analysis of the body vibrations, carried out with a scanning laser vibrometer and an automatic impact hammer, reveals some differences and similarities with respect to the classical guitar. Bridge mobilities are also measured using the wire-breaking method, which is simple to use and inexpensive since it does not require a force sensor. Combined with a high-resolution modal analysis (ESPRIT method), these measurements make it possible to determine the modal shapes at the string/body coupling points and thus to characterize the instrument. A physical model based on a modal approach is developed for sound synthesis purposes. It takes into account the string motions in two orthogonal polarizations, the couplings with the body and the collisions between strings; it is called a hybrid model because it combines an analytical description of the string vibrations with experimental data describing the body. Time-domain simulations reveal the main characteristics of the viola caipira. Doctorate in Mechanical Engineering (Solid Mechanics and Mechanical Design). Grants 141214/2013-9 and 99999.010073/2014-00 (CNPq, CAPES).
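The sympathetic string vibrations described above can be illustrated with a toy model: two string modes, each a lightly damped oscillator, coupled through a shared bridge stiffness. This is only a hedged sketch with illustrative parameter values, not the thesis's hybrid modal model; it shows that plucking one string makes a second, initially silent string ring, and that the effect weakens when the strings are detuned.

```python
import math

def sympathetic_transfer(f1=220.0, f2=220.0, coupling=0.01,
                         duration=1.0, sr=44100):
    """Two string modes coupled through the bridge (semi-implicit Euler).

    String 1 starts displaced (plucked); string 2 starts at rest.
    Returns the peak displacement reached by string 2, i.e. how much
    motion it picks up sympathetically through the coupling.
    """
    w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2
    k = coupling * w1 * w1          # bridge coupling stiffness (illustrative)
    zeta = 0.001                    # light modal damping ratio
    dt = 1.0 / sr
    x1, v1 = 1.0, 0.0               # string 1 is plucked
    x2, v2 = 0.0, 0.0               # string 2 starts silent
    peak2 = 0.0
    for _ in range(int(duration * sr)):
        a1 = -w1 * w1 * x1 - 2 * zeta * w1 * v1 + k * (x2 - x1)
        a2 = -w2 * w2 * x2 - 2 * zeta * w2 * v2 + k * (x1 - x2)
        v1 += dt * a1
        x1 += dt * v1
        v2 += dt * a2
        x2 += dt * v2
        peak2 = max(peak2, abs(x2))
    return peak2

tuned = sympathetic_transfer()            # both modes in unison
detuned = sympathetic_transfer(f2=246.9)  # second string detuned
```

With the strings in unison the motion beats back and forth between them, so string 2 reaches a large amplitude; detuning breaks the resonance condition and the transfer is much weaker, which is why the sound halo depends on the instrument's tuning.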