Good vibrations: Guiding body movements with vibrotactile feedback
We describe the ongoing development of a system to support the teaching of good posture and bowing technique to novice violin players. Using an inertial motion capture system, we can track in real time a player's bowing action and how it deviates from a target trajectory set by their music teacher. The system provides real-time vibrotactile feedback on the correctness of the student's posture and bowing action. We present the findings of an initial study showing that vibrotactile feedback can guide arm movements in one- and two-dimensional pointing tasks. The advantages of vibrotactile feedback for teaching basic bowing technique to novice violin players are that it does not place demands on the students' visual and auditory systems, which are already heavily involved in the activity of music making, and that it is understood with little training.
Towards a real-time system for teaching novices good violin bowing technique
We describe the ongoing development of a system to support the teaching of good posture and bowing technique to novice violin players. Using an inertial motion capture system, we can track in real time: i) a player's bowing action (and measure how it deviates from a target trajectory); ii) whether the player is holding their violin correctly. We detail some initial experiments showing that vibrotactile feedback can guide arm movements in one and two dimensions. We then present preliminary findings from integrating the motion capture and feedback components into a prototype real-time training system. The advantages of vibrotactile feedback are that: i) it does not use the students' visual and auditory systems, which are already involved in the activity of music making; ii) it is an intuitive way to guide body movements.
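The deviation-to-feedback mapping these two abstracts describe can be sketched roughly as follows. This is a minimal illustration, not the system from the papers: the four-actuator layout, dead zone, and intensity scaling are all assumptions introduced here.

```python
import math

def feedback_command(position, target, dead_zone=0.02, max_dev=0.20):
    """Map deviation from a target trajectory point to a vibrotactile cue.

    `position` and `target` are (x, y) arm coordinates in metres.
    Returns None when the player is on track, otherwise a tuple of
    (actuator name, vibration intensity in [0, 1]).
    """
    dx, dy = target[0] - position[0], target[1] - position[1]
    dev = math.hypot(dx, dy)
    if dev < dead_zone:                  # close enough: no vibration
        return None
    # Choose one of four hypothetical actuators pointing toward the target.
    if abs(dx) >= abs(dy):
        actuator = "right" if dx > 0 else "left"
    else:
        actuator = "up" if dy > 0 else "down"
    # Intensity grows linearly with deviation, clipped at max_dev.
    intensity = min(dev / max_dev, 1.0)
    return actuator, intensity
```

A real-time loop would call this at the motion-capture frame rate and forward the command to the actuator driver; the dead zone prevents the motors from buzzing constantly over negligible errors.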
SMA Technical Report
Technical report from pilot studies in the Sensing Music-related Actions group. The report presents simple motion sensor technology and issues regarding pre-processing of music-related motion data.
In cognitive music research, one's main focus is the relationship between music and human beings. This involves emotions, moods, perception, expression, interaction with other people, and interaction with musical instruments and other interfaces, among many other things. Because music is a subjective experience, verbal utterances on these aspects tend to be coloured by the person who makes them. Such utterances are limited by the vocabulary of the person and by the process of consciously transforming inner feelings and experiences into words (Leman 2007: 5f). Thus, gesture research has become increasingly popular among researchers wanting a deeper understanding of how people interact with music. This kind of research draws on several different methods, for example infrared-sensitive cameras (Wiesendanger et al. 2006) or video recordings in combination with MIDI (Jabusch 2006).
This paper presents methods used in a pilot study for the Sensing Music-related Actions project at the Department of Musicology and the Department of Informatics at the University of Oslo. I will discuss the methods for capturing and analysing gestural data in this project, looking especially at the use of sensors for measuring movement and tracking absolute position. An overarching goal of the project is to develop methods for studying gestures in musical performance. Broadly, this involves gathering data, analysing it, and organizing it so that we and others can easily find and understand it.
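The report mentions pre-processing of music-related motion data; one common first step is smoothing noisy sensor samples. The sketch below is purely illustrative (the filter and window length are assumptions, not the report's actual pipeline):

```python
def moving_average(samples, window=5):
    """Simple moving-average filter for a stream of noisy sensor samples.

    Returns len(samples) - window + 1 smoothed values; a typical first
    pre-processing step before gesture analysis. The window length here
    is an arbitrary illustrative choice.
    """
    if window < 1 or window > len(samples):
        raise ValueError("window must be in [1, len(samples)]")
    out = []
    for i in range(len(samples) - window + 1):
        out.append(sum(samples[i:i + window]) / window)
    return out
```

For three-axis accelerometer data, each axis would be filtered independently before further analysis such as segmentation or position tracking.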
The digitally 'Hand Made' object
This article outlines the author's investigations of types of computer interfaces in practical three-dimensional design practice. The paper describes two main projects in glass and ceramic tableware design, using a Microscribe G2L digitising arm as an interface to record three-dimensional spatial design input.

The article provides critical reflections on the results of the investigations and argues that new approaches in digital design interfaces could have relevance in developing design methods which incorporate more physical 'human' expressions in a three-dimensional design practice. The research builds on concepts identified in traditional craft practice as foundations for constructing new types of creative practices based on the use of digital technologies, as outlined by McCullough (1996).
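A digitising arm such as the Microscribe streams three-dimensional coordinate samples as the designer traces a form. Purely as an illustration (none of this code is from the article), two small helpers for summarising a captured stylus path might look like:

```python
import math

def path_length(points):
    """Total length of a polyline of 3-D stylus samples (x, y, z)."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def bounding_box(points):
    """Axis-aligned bounding box of the captured point cloud,
    returned as (min corner, max corner)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```

Measures like these give a quick sanity check that a traced gesture was captured at a plausible scale before the points are handed to CAD software.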
AXMEDIS 2007 Conference Proceedings
The AXMEDIS International Conference series has been established since 2005 and focuses on research, development and applications in the cross-media domain, exploring innovative technologies to meet the challenges of the sector. AXMEDIS 2007 deals with all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, interoperability, protection and rights management. It addresses the latest developments and future trends of these technologies and their applications, and their impact and exploitation within academic, business and industrial communities.
Automatic Transcription of Bass Guitar Tracks applied for Music Genre Classification and Sound Synthesis
Music recordings most often consist of multiple instrument signals which overlap in time and frequency. In the field of Music Information Retrieval (MIR), existing algorithms for the automatic transcription and analysis of music recordings aim to extract semantic information from mixed audio signals. In recent years it has frequently been observed that algorithm performance is limited by this signal interference and the resulting loss of information. One common approach to the problem is to first apply source separation algorithms to isolate the individual instrument signals before analyzing them; however, the performance of source separation algorithms strongly depends on the number of instruments as well as on the amount of spectral overlap.

In this thesis, isolated instrumental tracks are analyzed in order to circumvent the challenges of source separation. The focus is on the development of instrument-centered signal processing algorithms for music transcription, musical analysis, and sound synthesis. The electric bass guitar is chosen as an example instrument; its sound production principles are closely investigated and reflected in the algorithmic design.

In the first part of this thesis, an automatic music transcription algorithm for electric bass guitar recordings is presented. The audio signal is interpreted as a sequence of sound events, each described by various parameters. In addition to the conventional score-level parameters note onset, duration, loudness, and pitch, instrument-specific parameters such as the applied playing techniques and the geometric position on the instrument fretboard are extracted. Evaluation experiments on two newly created audio data sets confirm that the proposed transcription algorithm outperforms three state-of-the-art bass transcription algorithms on realistic bass guitar recordings. The estimation of the instrument-level parameters works with high accuracy, in particular for isolated note samples.

In the second part of the thesis, it is investigated whether analysis of the bassline of a music piece alone allows its music genre to be classified automatically. Score-based audio features are proposed that quantify tonal, rhythmic, and structural properties of basslines. Based on a novel data set of 520 bassline transcriptions from 13 different music genres, three approaches to music genre classification were compared. A rule-based classification system achieved a mean class accuracy of 64.8 % using only features extracted from the bassline of a music piece.

The re-synthesis of bass guitar recordings from the previously extracted note parameters is studied in the third part of the thesis. Based on the physical modeling of string instruments, a novel sound synthesis algorithm tailored to the electric bass guitar is presented. The algorithm mimics different aspects of the instrument's sound production mechanism such as string excitation, string damping, string-fret collision, and the influence of the electro-magnetic pickup. Furthermore, a parametric audio coding approach is discussed that allows bass guitar tracks to be encoded and transmitted at a significantly smaller bit rate than conventional audio coding algorithms require. The results of several listening tests confirm that a higher perceptual quality is achieved when the original bass guitar recordings are encoded and re-synthesized with the proposed parametric audio codec than when they are encoded with conventional audio codecs at very low bit rate settings.
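The physical-modeling idea behind the synthesis part can be illustrated with a classic Karplus-Strong plucked-string sketch: a noise burst excites a delay line whose length sets the pitch, and an averaging filter with a damping factor models string loss. This is a far cruder model than the thesis's (it omits string-fret collision and the pickup entirely) and is shown only to convey the principle.

```python
import random

def pluck(frequency, sample_rate=44100, duration=0.5, damping=0.996):
    """Karplus-Strong plucked string, a minimal physical-modeling sketch.

    A noise burst fills a delay line of length sample_rate/frequency
    (string excitation); each pass averages adjacent samples and scales
    by `damping` (low-pass filtering + energy loss), so the tone decays
    like a plucked string.
    """
    n = int(sample_rate / frequency)                      # delay line ~ pitch
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]  # excitation burst
    out = []
    for _ in range(int(sample_rate * duration)):
        s = line.pop(0)
        line.append(damping * 0.5 * (s + line[0]))        # average + damp
        out.append(s)
    return out
```

Calling `pluck(110.0)` yields half a second of a decaying A-string tone as raw samples; a real bass model, as in the thesis, would replace the noise burst and averaging filter with excitation, damping, collision, and pickup stages fitted to the instrument.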