
    Kompleksitas Teknik Harmonik dalam Gitar Klasik

    This study was motivated by the author's field observation that guitarists at intermediate to advanced skill levels still frequently struggle with, and even fail at, playing harmonic techniques. The study therefore aims to describe the complexity of harmonic technique, identify its causes, and offer solutions to the difficulties involved. Data were collected through triangulation (observation, semi-structured interviews, and documentation) and analyzed using data reduction, data presentation, and conclusion drawing. The results show that harmonic technique on the classical guitar has a complexity of its own. Based on this complexity, the author proposes two kinds of formulations: one for mastering harmonic technique itself and one for mastering harmonic technique within pieces. These formulations consist of the steps guitarists need to take to overcome the problems they face when playing harmonics.
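The physics underlying the technique can be sketched in a few lines. This is an illustrative calculation only, not the thesis's formulation: touching a string lightly at 1/n of its vibrating length suppresses all partials except multiples of n, sounding a pitch n times the open-string frequency. The string and scale-length values below are ordinary classical-guitar figures, assumed for the example.

```python
# Illustrative sketch (not the thesis's formulation): natural harmonics
# on a guitar string. Touching the string at 1/n of its length sounds
# the n-th partial, i.e. n times the open-string frequency.

def harmonic_nodes(open_freq_hz, scale_length_mm, max_partial=5):
    """Return (partial, sounding frequency in Hz, node distance from the nut in mm)."""
    rows = []
    for n in range(2, max_partial + 1):
        rows.append((n, open_freq_hz * n, scale_length_mm / n))
    return rows

# Open low E string (E2 ~ 82.41 Hz) on an assumed 650 mm classical scale.
for n, f, pos in harmonic_nodes(82.41, 650):
    print(f"partial {n}: {f:.1f} Hz, touch at {pos:.1f} mm from the nut")
```

The 1/2-length node coincides with the 12th fret and the 1/4-length node with the 5th fret, which is why those harmonics are the easiest to produce cleanly.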

    Exploring Pitch and Timbre through 3D Spaces: Embodied Models in Virtual Reality as a Basis for Performance Systems Design

    Our paper builds on an ongoing collaboration between theorists and practitioners within the computer music community, with a specific focus on three-dimensional environments as an incubator for performance systems design. In particular, we are concerned with how to provide accessible means of controlling spatialization and timbral shaping in an integrated manner, through the collection of multimodal performance data from an electric guitar with a multichannel audio output. This paper focuses specifically on the combination of pitch data treated within tonal models and the detection of physical performance gestures using timbral feature extraction algorithms. We discuss how these tracked gestures may be connected to concepts and dynamic relationships from embodied cognition, expanding on performative models for pitch and timbre spaces. Finally, we explore how these ideas support connections between sonic, formal, and performative dimensions, including instrumental technique detection scenes and mapping strategies aimed at bridging music performance gestures across physical and conceptual planes.
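One common timbral feature used for this kind of gesture detection is the spectral centroid, a rough "brightness" measure that shifts, for example, when a string is plucked near the bridge rather than near the neck. The sketch below is a generic illustration, not the authors' algorithm; the frame length and window choice are assumptions.

```python
import numpy as np

# Illustrative timbral feature extraction (not the paper's implementation):
# the spectral centroid is the magnitude-weighted mean frequency of a frame,
# often used as a "brightness" cue for detecting changes in playing gesture.

def spectral_centroid(frame, sample_rate):
    """Spectral centroid (Hz) of one audio frame, Hann-windowed."""
    windowed = frame * np.hanning(len(frame))
    mags = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = mags.sum()
    if total == 0:
        return 0.0
    return float((freqs * mags).sum() / total)

# A pure 440 Hz tone should yield a centroid near 440 Hz.
sr = 44100
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(spectral_centroid(tone, sr))
```

In a real system this feature would be computed per frame over a sliding window, and its trajectory (together with pitch data) fed into the mapping layer.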

    Automatic Transcription of Bass Guitar Tracks applied for Music Genre Classification and Sound Synthesis

    Music recordings most often consist of multiple instrument signals, which overlap in time and frequency.
In the field of Music Information Retrieval (MIR), existing algorithms for the automatic transcription and analysis of music recordings aim to extract semantic information from mixed audio signals. In recent years, it has frequently been observed that algorithm performance is limited by signal interference and the resulting loss of information. One common approach to this problem is to first apply source separation algorithms to isolate the individual instrument signals before analyzing them; however, the performance of source separation strongly depends on the number of instruments as well as on the amount of spectral overlap.

    In this thesis, isolated instrumental tracks are analyzed in order to circumvent the challenges of source separation. The focus is on the development of instrument-centered signal processing algorithms for music transcription, musical analysis, and sound synthesis. The electric bass guitar is chosen as the example instrument; its sound production principles are closely investigated and reflected in the algorithmic design.

    In the first part of this thesis, an automatic music transcription algorithm for electric bass guitar recordings is presented. The audio signal is interpreted as a sequence of sound events described by various parameters. In addition to the conventional score-level parameters of note onset, duration, loudness, and pitch, instrument-specific parameters such as the applied playing techniques and the geometric position on the fretboard are extracted. Evaluation experiments confirmed that the proposed transcription algorithm outperforms three state-of-the-art bass transcription algorithms on realistic bass guitar recordings. The estimation of the instrument-level parameters achieves high accuracy, particularly for isolated note samples.

    In the second part of the thesis, it is investigated whether analyzing only the bassline of a music piece allows its genre to be classified automatically. Score-based audio features are proposed that quantify tonal, rhythmic, and structural properties of basslines. Based on a novel data set of 520 bassline transcriptions from 13 different music genres, three approaches to automatic genre classification were compared. A rule-based classification system achieved a mean class accuracy of 64.8 % using only features extracted from the bassline of a music piece.

    The re-synthesis of bass guitar recordings from the previously extracted note parameters is studied in the third part of this thesis. Based on the physical modeling of string instruments, a novel sound synthesis algorithm tailored to the electric bass guitar is presented. The algorithm mimics different aspects of the instrument's sound production mechanism, such as string excitation, string damping, string-fret collision, and the influence of the electromagnetic pickup. Furthermore, a parametric audio coding approach is discussed that allows bass guitar tracks to be encoded and transmitted at a significantly smaller bit rate than conventional audio coding algorithms require. The results of several listening tests confirmed that higher perceptual quality is achieved when the original bass guitar recordings are encoded and re-synthesized with the proposed parametric audio codec rather than encoded with conventional audio codecs at very low bit rate settings.
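The thesis's synthesis model is considerably more detailed than anything shown here. As a rough illustration of the physical-modeling family it belongs to, the classic Karplus-Strong plucked-string algorithm can be sketched in a few lines; note that this simplified variant models none of the thesis-specific aspects (string-fret collision, pickup behavior), and all parameter values are assumptions.

```python
import numpy as np

# Much-simplified physical-modeling sketch (classic Karplus-Strong),
# NOT the thesis's algorithm: a noise burst circulates in a delay line
# whose length sets the pitch; a damped two-point average acts as the
# energy loss of the vibrating string.

def karplus_strong(freq_hz, duration_s, sample_rate=44100, damping=0.996):
    """Synthesize a plucked-string tone at freq_hz lasting duration_s seconds."""
    period = int(sample_rate / freq_hz)           # delay-line length ~ pitch period
    delay = np.random.uniform(-1, 1, period)      # pluck: broadband excitation
    out = np.empty(int(duration_s * sample_rate))
    for i in range(len(out)):
        out[i] = delay[i % period]
        # low-pass feedback: average two successive samples, scale down slightly
        delay[i % period] = damping * 0.5 * (delay[i % period] + delay[(i + 1) % period])
    return out

note = karplus_strong(41.2, 1.0)  # low E of a four-string bass (E1 ~ 41.2 Hz)
```

The parametric-coding idea in the thesis follows naturally from such a model: instead of transmitting audio samples, only the per-note parameters (pitch, onset, duration, playing technique) are sent, and the decoder re-runs the synthesis.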

    Interactive Sound in Performance Ecologies: Studying Connections among Actors and Artifacts

    This thesis's primary goal is to investigate performance ecologies, that is, the compound of humans, artifacts, and environmental elements that contribute to the result of a performance. In particular, this thesis focuses on designing new interactive technologies for sound and music. This goal leads to the following Research Questions (RQs):
    • RQ1: How can the design of interactive sonic artifacts support joint expression across different actors (composers, choreographers, performers, musicians, and dancers) in a given performance ecology?
    • RQ2: How does each actor influence the design of different artifacts, and what impact does this have on the overall artwork?
    • RQ3: How do the different actors in the same ecology interact with, and appropriate, an interactive artifact?
    To address these questions, a new framework named ARCAA has been created, in which all the Actors of a given ecology are connected to all the Artifacts through three layers: Role, Context, and Activity. This framework is then applied to one systematic literature review, two case studies on music performance, and one case study on dance performance. The studies help to better understand the shaded roles of composers, performers, instrumentalists, dancers, and choreographers, which is relevant to the better design of interactive technologies for performances. Finally, this thesis proposes a new reflection on the blurred distinction between composing and designing a new instrument in a context that involves a multitude of actors.
    Overall, this work introduces the following contributions to the field of interaction design applied to music technology: 1) ARCAA, a framework for analysing the set of interconnected relationships in interactive (music) performances, validated through two music studies, one dance study, and one systematic literature analysis; 2) recommendations for designing interactive music systems for performance (music or dance), accounting for the needs of the various actors and for the overlap between music composition and the design of interactive technology; 3) a taxonomy of how scores have shaped performance ecologies in NIME, based on a systematic analysis of the literature on scores in the NIME proceedings; 4) a proposal for a methodological approach combining autobiographical and idiographical design approaches in interactive performances.