95 research outputs found
Automatic Transcription of Bass Guitar Tracks applied for Music Genre Classification and Sound Synthesis
Music recordings most often consist of multiple instrument signals, which
overlap in time and frequency. In the field of Music Information Retrieval
(MIR), existing algorithms for the automatic transcription and analysis of
music recordings aim to extract semantic information from mixed audio
signals. In recent years, it has frequently been observed that algorithm
performance is limited by signal interference and the resulting loss of
information. One common approach to this problem is to first apply source
separation algorithms that isolate the individual instrument signals before
analyzing them. However, the performance of source separation algorithms
depends strongly on the number of instruments as well as on the amount of
spectral overlap.

In this thesis, isolated
instrumental tracks are analyzed in order to circumvent the challenges of
source separation. Instead, the focus is on the development of
instrument-centered signal processing algorithms for music transcription,
musical analysis, as well as sound synthesis. The electric bass guitar is
chosen as an example instrument. Its sound production principles are
closely investigated and considered in the algorithmic design.

In the first part of this thesis, an automatic music transcription algorithm
for electric bass guitar recordings will be presented. The audio signal is
interpreted as a sequence of sound events, which are described by various
parameters. In addition to the conventionally used score-level parameters
note onset, duration, loudness, and pitch, instrument-specific parameters
such as the applied instrument playing techniques and the geometric
position on the instrument fretboard will be extracted. Different
evaluation experiments confirmed that the proposed transcription algorithm
outperformed three state-of-the-art bass transcription algorithms for the
transcription of realistic bass guitar recordings. The estimation of the
instrument-level parameters works with high accuracy, in particular for
isolated note samples.

In the second part of the thesis, it will be investigated whether the sole
analysis of the bassline of a music piece allows its music genre to be
classified automatically. Different score-based audio features will be
proposed that quantify tonal, rhythmic, and
structural properties of basslines. Based on a novel data set of 520
bassline transcriptions from 13 different music genres, three approaches
for music genre classification were compared. A rule-based classification
system could achieve a mean class accuracy of 64.8 % by only taking
features into account that were extracted from the bassline of a music
piece.

The re-synthesis of bass guitar recordings using the previously extracted
note parameters will be studied in the third part of this thesis.
Based on the physical modeling of string instruments, a novel sound
synthesis algorithm tailored to the electric bass guitar will be presented.
The algorithm mimics different aspects of the instrument’s sound
production mechanism such as string excitation, string damping, string-fret
collision, and the influence of the electro-magnetic pickup. Furthermore, a
parametric audio coding approach will be discussed that allows bass guitar
tracks to be encoded and transmitted at a significantly smaller bit rate than
conventional audio coding algorithms. The results of different listening
tests confirmed that a higher perceptual quality can be achieved if the
original bass guitar recordings are encoded and re-synthesized using the
proposed parametric audio codec instead of being encoded using conventional
audio codecs at very low bit rate settings.
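The thesis's physical-modeling synthesis algorithm is not specified in the abstract. As a loose illustration of the digital-waveguide family it belongs to, the following is a minimal Karplus-Strong pluck sketch; the function name, damping value, and all other parameters are illustrative choices, not taken from the thesis:

```python
import numpy as np

def karplus_strong(f0, duration, sr=44100, damping=0.996, seed=0):
    """Minimal Karplus-Strong pluck: a noise-filled delay line with
    a two-point averaging (low-pass) loop filter."""
    rng = np.random.default_rng(seed)
    period = int(round(sr / f0))          # delay-line length in samples
    buf = rng.uniform(-1.0, 1.0, period)  # initial excitation: noise burst
    n = int(duration * sr)
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % period]
        # loop filter: average two adjacent samples, scaled by damping
        buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

tone = karplus_strong(f0=41.2, duration=0.5)  # low E of a bass guitar
print(len(tone))  # 22050 samples
```

A full bass guitar model, as the abstract notes, additionally requires excitation shaping, string-fret collision handling, and a pickup model on top of such a basic string loop.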
Estimating Performance Parameters from Electric Guitar Recordings
PhD
The main motivation of this thesis is to explore several techniques for estimating electric
guitar synthesis parameters to replicate the sound of popular guitarists. Many famous guitar
players are recognisable by their distinctive electric guitar tone, and guitar enthusiasts would
like to play or obtain their favourite guitarist’s sound on their own guitars.
This thesis starts by exploring the possibilities of replicating a target guitar sound, given
an input guitar signal, using a digital filter. A preliminary step is taken where a technique is
proposed to transform the sound of a pickup into another on the same electric guitar. A least
squares estimator is used to obtain the coefficients of a finite impulse response (FIR) filter to
transform the sound. The technique yields good results which are supported by a listening
test and a spectral distance measure showing that up to 99% of the difference between input
and target signals is reduced. The robustness of the filters to changes in repetitions,
plucking positions, dynamics, and fret positions is also discussed. A small increase in error
was observed for different repetitions; moderate errors arose when the plucking position and
dynamic were varied; and there were large errors when the training and test data comprised
different notes (fret positions).
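The least squares FIR estimation described above can be sketched generically as follows; the helper name and tap count are illustrative, and this is the textbook estimator, not the thesis's exact training procedure:

```python
import numpy as np

def fit_fir(x, y, n_taps):
    """Estimate FIR coefficients h so that (x * h) approximates y in the
    least squares sense, via a convolution (Toeplitz-style) matrix."""
    N = len(x)
    # Row k holds x[k], x[k-1], ..., x[k-n_taps+1], zero-padded at the start
    X = np.zeros((N, n_taps))
    for j in range(n_taps):
        X[j:, j] = x[:N - j]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

# Sanity check: recover a known 3-tap filter from noise-free data
rng = np.random.default_rng(1)
x = rng.standard_normal(2048)
h_true = np.array([0.5, -0.3, 0.1])
y = np.convolve(x, h_true)[:len(x)]
h_est = fit_fir(x, y, n_taps=3)
print(np.allclose(h_est, h_true, atol=1e-8))  # True
```

In practice the input would be one pickup's recording and the target another pickup's recording of the same performance, with far more taps than the three used in this toy check.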
Secondly, this thesis explores another way to replicate the sound of popular
guitarists, in order to overcome the limitations of the first approach. Instead of directly
morphing one sound into another, replicating the sound with electric guitar synthesis
provides greater flexibility, but requires several synthesis parameters to be estimated.
Three approaches to estimating the pickup and plucking positions of an electric guitar
are discussed: the Spectral Peaks (SP), Autocorrelation of Spectral Peaks (AC-SP),
and Log-correlation of Spectral Peaks (LC-SP) methods. LC-SP produces the best results with faster computation, where
the median absolute errors for pickup and plucking position estimates are 1.97 mm and 2.73
mm respectively using single pickup data and the errors increased slightly for mixed pickup
data. LC-SP is also shown to be robust towards changes in plucking dynamics and fret positions,
where the median absolute errors for pickup and plucking position estimates are less
than 4 mm. The Polynomial Regression Spectral Flattening (PRSF) method is introduced
to compensate for the effects of guitar effects units, amplifiers, loudspeakers, and microphones. The
accuracy of the estimates is then tested on several guitar signal chains, where the median
absolute errors for pickup and plucking position estimates range from 2.04 mm to 7.83 mm
and 2.98 mm to 27.81 mm, respectively.
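The SP/AC-SP/LC-SP estimators themselves are not reproduced in the abstract. They build on the well-known fact that plucking an ideal string at relative position R imposes a |sin(kπR)|/k² envelope on harmonic k, with nulls where k·R is an integer. A hedged sketch of that underlying idea as a brute-force grid search, not the thesis's actual methods:

```python
import numpy as np

def estimate_pluck_position(harmonic_amps, grid=np.linspace(0.02, 0.5, 4801)):
    """Grid search for the relative plucking position R whose ideal-string
    harmonic envelope |sin(k*pi*R)| / k**2 best matches the measured
    harmonic amplitudes (least squares after peak normalization)."""
    k = np.arange(1, len(harmonic_amps) + 1)
    amps = harmonic_amps / np.max(harmonic_amps)
    best_R, best_err = None, np.inf
    for R in grid:
        model = np.abs(np.sin(k * np.pi * R)) / k**2
        model /= model.max()
        err = np.sum((model - amps) ** 2)
        if err < best_err:
            best_R, best_err = R, err
    return best_R

# Synthetic check: a string plucked at 1/5 of its length
k = np.arange(1, 13)
amps = np.abs(np.sin(k * np.pi * 0.2)) / k**2
print(round(estimate_pluck_position(amps), 3))
```

On real recordings the pickup position contributes a second comb factor and the signal chain colors the spectrum, which is precisely what the thesis's PRSF flattening step is meant to compensate for.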
Low-Latency f0 Estimation for the Finger Plucked Electric Bass Guitar Using the Absolute Difference Function
Audio-to-MIDI conversion can be used to allow digital musical control through an analog instrument. Audio-to-MIDI converters rely on fundamental frequency estimators that are usually restricted to a minimum delay of two fundamental periods. This delay is perceptible in the case of bass notes. In this dissertation, we propose a low-latency fundamental frequency estimation method that relies on specific characteristics of the electric bass guitar. By means of physical modeling and signal acquisition, we show that the method's assumptions generalize across electric basses. We evaluated our method on a dataset of musical notes played by diverse bassists. Results show that our method outperforms the Yin method in low-latency settings, which indicates its suitability for low-latency audio-to-MIDI conversion of the electric bass sound.
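The dissertation's bass-specific estimator is not detailed in the abstract; for orientation, here is a plain absolute-difference-function (AMDF) baseline of the kind it improves upon. The function name and search ranges are illustrative assumptions:

```python
import numpy as np

def amdf_f0(x, sr, fmin=30.0, fmax=400.0):
    """Plain absolute-difference-function pitch estimate: pick the lag tau
    that minimizes sum(|x[n] - x[n + tau]|) over the candidate lag range."""
    tau_min = int(sr / fmax)   # shortest candidate period
    tau_max = int(sr / fmin)   # longest candidate period
    n = len(x) - tau_max
    d = np.array([np.abs(x[:n] - x[tau:tau + n]).sum()
                  for tau in range(tau_min, tau_max + 1)])
    return sr / (tau_min + int(np.argmin(d)))

# Synthetic check: a 55 Hz sine (open A string of a bass guitar)
sr = 8000
t = np.arange(int(0.2 * sr)) / sr
x = np.sin(2 * np.pi * 55.0 * t)
print(round(amdf_f0(x, sr), 1))
```

Note the latency issue the dissertation targets: this baseline needs at least two periods of signal (here tau_max plus the comparison window), which for a 41 Hz low E already exceeds 45 ms before any processing time.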
Real-time software electric guitar audio transcription
Guitar audio transcription is the process of generating a human-interpretable musical score from guitar audio. The musical score is presented as guitar tablature, which indicates not only what notes are played, but where they are played on the guitar fretboard. Automatic transcription remains a challenge when dealing with polyphonic sounds. The guitar adds further ambiguity to the transcription problem because the same note can often be played in many ways. In this thesis work, a portable software architecture is presented for processing guitar audio in real time and providing a set of highly probable transcription solutions. Novel algorithms for performing polyphonic pitch detection and generating confidence values for transcription solutions (by which they are ranked) are also presented. Transcription solutions are generated for individual signal windows based on the output of the polyphonic pitch detection algorithm. Confidence values are generated for solutions by analyzing signal properties, fingering difficulty, and proximity to previous highest-confidence solutions. The rules used for generating confidence values are based on expert knowledge of the instrument. Performance is measured in terms of algorithm accuracy, latency, and throughput. The correct result is ranked 2.08 (with the top rank being 0) for chords. The general case of various notes over time presents results that require qualitative analysis; the system in general is very susceptible to noise and has a difficult time distinguishing harmonics from actual fundamentals. By allowing the user to seed the system with a ground truth, correct recognition of future states is improved significantly in some cases. The sampling time is 250 ms with an average processing time of 110 ms, giving an average total latency of 360 ms. Throughput is 62.5 sample windows per second. Performance is not processor-bound, enabling high performance on a wide variety of personal computers.
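The fingering ambiguity mentioned above can be made concrete: a single detected pitch usually maps to several string/fret candidates that the transcriber must rank. A small sketch (standard six-string tuning and a 22-fret neck are assumptions, not taken from the thesis):

```python
# Standard-tuned 6-string guitar: open-string MIDI pitches, low E to high E
OPEN_STRINGS = [40, 45, 50, 55, 59, 64]
N_FRETS = 22

def fret_positions(midi_pitch):
    """Enumerate every (string, fret) pair producing the given MIDI pitch --
    the set of candidates a tablature transcriber must rank."""
    return [(s, midi_pitch - open_p)
            for s, open_p in enumerate(OPEN_STRINGS)
            if 0 <= midi_pitch - open_p <= N_FRETS]

print(fret_positions(52))  # E3: [(0, 12), (1, 7), (2, 2)]
```

Ranking among such candidates is where the thesis's confidence rules (signal properties, fingering difficulty, proximity to previous solutions) come into play.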