
    Joint multi-pitch detection and score transcription for polyphonic piano music

    Research on automatic music transcription has largely focused on multi-pitch detection; there is limited discussion of how to obtain a machine- or human-readable score transcription. In this paper, we propose a method for joint multi-pitch detection and score transcription for polyphonic piano music. Our system outputs both a piano-roll representation (a descriptive transcription) and symbolic musical notation (a prescriptive transcription). Unlike traditional methods, which convert MIDI transcriptions into musical scores in a separate step, we use a multi-task model that combines a convolutional recurrent neural network (CRNN) with sequence-to-sequence models using attention mechanisms. We propose a Reshaped score representation that outperforms a LilyPond representation in both prediction accuracy and time/memory consumption, and we compare different input audio spectrograms. We also create a new synthesized dataset for score transcription research. Experimental results show that the joint model outperforms a single-task model in score transcription.
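    The abstract above does not include code; as a rough, hedged sketch of the multi-task idea it describes (a shared CRNN encoder feeding both a frame-level multi-pitch head and an attention-based sequence-to-sequence decoder for score tokens), the following PyTorch code illustrates one possible shape of such a model. All layer sizes, names, and the token vocabulary are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class JointTranscriber(nn.Module):
        def __init__(self, n_bins=229, n_pitches=88, vocab_size=128, hidden=256):
            super().__init__()
            # Shared convolutional front end over the input spectrogram
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d((1, 2)),              # pool frequency, keep time
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d((1, 2)),
            )
            self.encoder = nn.GRU(64 * (n_bins // 4), hidden,
                                  batch_first=True, bidirectional=True)
            # Task 1: descriptive output, a frame-level piano roll
            self.pitch_head = nn.Linear(2 * hidden, n_pitches)
            # Task 2: prescriptive output, score tokens via an attention decoder
            self.embed = nn.Embedding(vocab_size, 2 * hidden)
            self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                              batch_first=True)
            self.decoder = nn.GRU(4 * hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, spec, tokens):
            # spec: (B, T, n_bins) spectrogram; tokens: (B, L) score token ids
            x = self.conv(spec.unsqueeze(1))       # (B, 64, T, n_bins // 4)
            b, c, t, f = x.shape
            enc, _ = self.encoder(x.permute(0, 2, 1, 3).reshape(b, t, c * f))
            pianoroll = torch.sigmoid(self.pitch_head(enc))  # (B, T, n_pitches)
            emb = self.embed(tokens)               # (B, L, 2 * hidden)
            ctx, _ = self.attn(emb, enc, enc)      # attend over audio frames
            dec, _ = self.decoder(torch.cat([emb, ctx], dim=-1))
            return pianoroll, self.out(dec)        # joint multi-task outputs

    model = JointTranscriber()
    spec, tokens = torch.randn(2, 100, 229), torch.randint(0, 128, (2, 40))
    roll, logits = model(spec, tokens)             # (2, 100, 88), (2, 40, 128)

    Joint training would then combine a binary cross-entropy loss on the piano roll with a cross-entropy loss on the score-token logits.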

    Performance MIDI-to-score conversion by neural beat tracking

    Rhythm quantisation is an essential part of converting performance MIDI recordings into musical scores. Previous work on rhythm quantisation has been limited to probabilistic or statistical methods. In this paper, we propose a MIDI-to-score quantisation method using a convolutional recurrent neural network (CRNN) trained on MIDI note sequences to predict whether notes fall on beats. We then expand the CRNN model to predict quantised times for all beat and non-beat notes, and further enable it to predict the key signatures, time signatures, and hand parts of all notes. Our proposed performance MIDI-to-score system significantly outperforms commercial software when evaluated with the MV2H metric. We release the toolbox for converting performance MIDI into MIDI scores at: https://github.com/cheriell/PM2
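    For the actual implementation, consult the released toolbox; purely as an illustration of the first step described above, a minimal CRNN that reads per-note features from a performance MIDI file and emits a per-note on-beat probability could look like the PyTorch sketch below. The feature set and all sizes are assumptions, not the toolbox's code.

    import torch
    import torch.nn as nn

    class NoteBeatCRNN(nn.Module):
        def __init__(self, n_features=4, hidden=128):
            super().__init__()
            # 1-D convolutions over the note sequence capture local context
            self.conv = nn.Sequential(
                nn.Conv1d(n_features, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            )
            self.rnn = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, 1)   # one on-beat logit per note

        def forward(self, notes):
            # notes: (B, N, n_features), e.g. pitch, onset, duration, velocity
            x = self.conv(notes.transpose(1, 2)).transpose(1, 2)  # (B, N, 64)
            x, _ = self.rnn(x)                     # (B, N, 2 * hidden)
            return self.head(x).squeeze(-1)        # (B, N) on-beat logits

    model = NoteBeatCRNN()
    notes = torch.randn(1, 200, 4)                 # 200 notes, 4 features each
    on_beat_prob = torch.sigmoid(model(notes))     # (1, 200) probabilities

    Widening the head to multiple outputs (quantised note times, key and time signature classes, hand part) would give the multi-task variant the abstract describes.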

    Social networks and member participation in cooperative governance

    This study explored the relationship between farmer-members' social networks and their interest in cooperative governance, specifically their willingness to stand as elected representatives. Several researchers assert that member interest in cooperative governance is related to social factors. The empirical basis consists of surveys of random samples of Swedish farmers conducted in 1993, 2003, and 2013. The results indicate a strong relationship between social networks and farmers' propensity to participate in cooperative governance. This relationship has persisted even though the 20-year period investigated was very turbulent for Swedish agriculture. Over time, members have become more willing to be elected when they receive backing from their social networks, with personal networks being more important than professional networks. Professional networks relate only to the level of aspiration, not to actual participation in governance. [EconLit Citations: D73, P13, Q13]

    Few-shot bioacoustic event detection at the DCASE 2022 challenge

    Few-shot sound event detection is the task of detecting sound events despite having only a few labelled examples of the class of interest. This framework is particularly useful in bioacoustics, where very long recordings often need to be annotated but expert annotator time is limited. This paper presents an overview of the second edition of the few-shot bioacoustic sound event detection task included in the DCASE 2022 challenge. A detailed description of the task objectives, dataset, and baselines is presented, together with the main results obtained and the characteristics of the submitted systems. The task received submissions from 15 different teams, of which 13 scored higher than the baselines. The highest F-score was 60.2% on the evaluation set, a substantial improvement over last year's edition. High-performing methods made use of prototypical networks and transductive learning, and addressed the variable length of events across all target classes. Furthermore, by analysing results on each of the subsets, we can identify the main difficulties the systems face and conclude that few-shot bioacoustic sound event detection remains an open challenge.
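    As a hedged sketch of the prototypical-network idea the top systems used (not any particular submission), the code below builds one prototype per class from the mean embedding of the few labelled support examples and scores query frames by distance to those prototypes; the embedding network and feature shapes are placeholder assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def prototypical_scores(embed, support, support_labels, query, n_classes):
        """Score query examples by distance to class prototypes, where each
        prototype is the mean embedding of that class's support examples."""
        z_s = embed(support)                       # (S, D) support embeddings
        z_q = embed(query)                         # (Q, D) query embeddings
        protos = torch.stack([z_s[support_labels == c].mean(dim=0)
                              for c in range(n_classes)])     # (n_classes, D)
        dists = torch.cdist(z_q, protos)           # Euclidean distances
        return F.softmax(-dists, dim=-1)           # nearer prototype => higher

    # Toy usage: a trivial embedding net, 5 positive + 5 negative shots
    embed = nn.Sequential(nn.Flatten(), nn.Linear(40, 16))
    support = torch.randn(10, 40)
    labels = torch.tensor([0] * 5 + [1] * 5)       # 0 = background, 1 = event
    query = torch.randn(100, 40)                   # e.g. 100 frames to score
    probs = prototypical_scores(embed, support, labels, query, n_classes=2)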