    Liquid Tab

    Guitar transcription is a complex task requiring significant time, skill, and musical knowledge to achieve accurate results. Since most music today is recorded and processed digitally, one might expect many tools for digitally analyzing and transcribing audio to be available. However, automatic transcription presents far more difficulties than are initially evident: there are multiple ways to play a guitar, many diverse playing styles, and every guitar sounds different. The problem becomes even harder given varying recording quality and levels of background noise. Machine learning has proven to be a flexible tool capable of producing accurate results in a variety of situations, but harnessing these benefits requires quality data and a model well suited to the task. The most promising models for automatic guitar transcription so far have been convolutional neural networks. These models are adequate, but they lack temporal context. A Liquid Time-constant Network is a type of recurrent neural network and therefore retains a temporal state. By combining the two approaches, the resulting model should prove to be a flexible tool suited to many situations and playing styles.
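To illustrate the temporal state the abstract refers to, here is a minimal sketch of the fused-step update used in Liquid Time-constant Networks (Hasani et al.). All sizes, weights, and the "CNN frame embedding" inputs below are hypothetical illustrations, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

class LTCCell:
    """A minimal Liquid Time-constant cell (sketch only).

    Uses the fused explicit/implicit Euler step from the LTC formulation:
        x' = (x + dt * f(x, u) * A) / (1 + dt * (1/tau + f(x, u)))
    where f is a learned sigmoid gate, tau are base time constants,
    and A are reversal biases. The gate f modulates the effective
    time constant, which is what gives the cell its "liquid" dynamics.
    """

    def __init__(self, n_in, n_hidden, dt=0.05):
        self.dt = dt
        self.tau = np.ones(n_hidden)                      # base time constants
        self.A = rng.normal(size=n_hidden)                # reversal biases
        self.W = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.U = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.b = np.zeros(n_hidden)

    def step(self, x, u):
        # Sigmoid gate driven by the input and the current hidden state.
        f = 1.0 / (1.0 + np.exp(-(self.W @ u + self.U @ x + self.b)))
        # Fused ODE-solver update: state decays and is pulled toward A.
        return (x + self.dt * f * self.A) / (1.0 + self.dt * (1.0 / self.tau + f))

# Feed a sequence of hypothetical per-frame CNN embeddings through the cell;
# the hidden state x carries context across frames.
cell = LTCCell(n_in=16, n_hidden=8)
x = np.zeros(8)
for _ in range(100):
    x = cell.step(x, rng.normal(size=16))
```

In a transcription pipeline of the kind the abstract proposes, the per-frame input `u` would come from a convolutional front end and `x` would feed a tablature prediction head; both of those stages are omitted here.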

    Wavelet Transformation and Spectral Subtraction Method in Performing Automated Rindik Song Transcription

    Rindik is traditional Balinese music played on bamboo rods arranged horizontally and struck with a mallet-like tool called a "panggul". In this study, Rindik songs were transcribed automatically using two methods: Wavelet transformation and spectral subtraction. The spectral subtraction method uses iterative estimation and separation, while the Wavelet transformation method matches the Wavelet result of each segment against reference Wavelet results in the dataset. The transcriptions were also resynthesized using the concatenative synthesis method. The data consist of strikes on a single Rindik rod and on combinations of two rods struck simultaneously, each recorded three times; four Rindik songs are used to test the system. Several parameters govern the two methods: the frame length for the Wavelet transformation method and the tolerance interval for frequency differences in the spectral subtraction method. Testing measures the transcription accuracy of the system over all Rindik song data. The Wavelet transformation method achieves an average accuracy of 83.42% and the spectral subtraction method an average accuracy of 78.51% in transcribing Rindik songs.
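For context on the second method, below is a generic magnitude spectral subtraction sketch. The paper's iterative estimation/separation variant and its frequency-tolerance parameter are more involved; the frame length, signal, and noise recording here are illustrative assumptions:

```python
import numpy as np

def spectral_subtract(signal, noise, frame_len=512):
    """Frame-wise magnitude spectral subtraction (generic sketch).

    Estimates an average noise magnitude spectrum from a noise-only
    recording, subtracts it from each frame of the signal, floors the
    result at zero, and resynthesizes using the noisy phase.
    """
    # Average noise magnitude spectrum over noise-only frames.
    n_noise = len(noise) // frame_len
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(noise[i * frame_len:(i + 1) * frame_len]))
         for i in range(n_noise)], axis=0)

    out = np.zeros_like(signal, dtype=float)
    for i in range(len(signal) // frame_len):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        spec = np.fft.rfft(frame)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # subtract, floor at 0
        phase = np.angle(spec)                           # keep the noisy phase
        out[i * frame_len:(i + 1) * frame_len] = np.fft.irfft(
            mag * np.exp(1j * phase), frame_len)
    return out

# Toy usage: a 440 Hz tone in white noise, with a separate noise-only take.
sr = 8000
t = np.arange(sr) / sr
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * rng.normal(size=sr)
noise_only = 0.3 * rng.normal(size=sr)
denoised = spectral_subtract(noisy, noise_only)
```

A real transcription front end would additionally window and overlap the frames to avoid blocking artifacts; whole non-overlapping frames are used here only to keep the sketch short.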