    Deep watershed detector for music object recognition

    Optical Music Recognition (OMR) is an important and challenging area within music information retrieval; the accurate detection of music symbols in digital images is a core functionality of any OMR pipeline. In this paper, we introduce a novel object detection method, based on synthetic energy maps and the watershed transform, called the Deep Watershed Detector (DWD). Our method is specifically tailored to high-resolution images that contain a large number of very small objects and is therefore able to process full pages of written music. We present state-of-the-art detection results for common music symbols and show that DWD works equally well on synthetic scores and on handwritten music.
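    To make the idea concrete, the sketch below shows how a watershed-based post-processing step could turn a predicted energy map into symbol bounding boxes. It is a minimal illustration using scikit-image, assuming a network that already outputs a per-pixel energy map peaking at symbol centers; the function name and thresholds are illustrative, not the authors' implementation.

```python
# Hypothetical post-processing for a Deep-Watershed-style detector: the
# network is assumed to output a per-pixel "energy" map that peaks at
# symbol centers; markers are placed at high-energy maxima and the
# watershed transform grows them into per-symbol regions.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed


def detect_symbols(energy: np.ndarray, energy_threshold: float = 0.5):
    """Turn a predicted energy map (H, W) with values in [0, 1] into boxes."""
    mask = energy > energy_threshold                 # pixels likely belonging to a symbol
    peaks = peak_local_max(energy, min_distance=3,   # one marker per symbol center
                           labels=mask.astype(int))
    markers = np.zeros_like(energy, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Watershed on the inverted energy: basins grow outward from each marker.
    labels = watershed(-energy, markers, mask=mask)
    boxes = []
    for region in ndi.find_objects(labels):
        if region is not None:
            ys, xs = region
            boxes.append((xs.start, ys.start, xs.stop, ys.stop))  # (x1, y1, x2, y2)
    return boxes
```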

    Building a Comprehensive Sheet Music Library Application

    Digital symbolic music scores offer many benefits compared to paper-based scores, such as a flexible dynamic layout that allows adjustments of size and style, intelligent navigation features, automatic page-turning, on-the-fly modifications of the score including transposition into a different key, and rule-based annotations that can save hours of manual work by automatically highlighting relevant aspects in the score. However, most musicians still rely on paper because they do not have access to a digital version of their sheet music, or because their digital solution does not provide a satisfying experience. To bring digital scores to millions of musicians, we at Enote are building a mobile application that offers a comprehensive digital library of sheet music. These scores are obtained by a large-scale Optical Music Recognition process, combined with metadata collection and curation. Our material is stored in the MEI format, and we rely on Verovio as a central component of our app to present scores and parts dynamically on mobile devices. This combination of the expressiveness of MEI with the beautiful engraving of Verovio allows us to create a flexible, mobile solution that we believe to be a powerful and true alternative to paper scores, with practical features like smart annotations or instant transpositions. We also invest heavily in the open-source development of Verovio to make it the gold standard for rendering beautiful digital sheet music.
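    As an illustration of the MEI-plus-Verovio rendering step described above, the following is a minimal sketch using Verovio's Python toolkit; the file name and layout options are placeholders, not Enote's production configuration.

```python
# Minimal sketch: render an MEI score to SVG with the Verovio toolkit
# (pip install verovio). The file path and layout options are placeholders.
import verovio

tk = verovio.toolkit()
tk.setOptions({
    "pageWidth": 1200,       # layout is recomputed for the target "device" size
    "pageHeight": 2000,
    "scale": 40,
})
tk.loadFile("score.mei")             # MEI produced by an OMR/curation pipeline
svg_first_page = tk.renderToSVG(1)   # one SVG string per page
with open("score_page1.svg", "w", encoding="utf-8") as f:
    f.write(svg_first_page)
```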

    Deep learning in the wild

    Invited paper. Deep learning with neural networks is applied by an increasing number of people outside of classic research environments, due to the vast success of the methodology on a wide range of machine perception tasks. While this interest is fueled by beautiful success stories, practical work in deep learning on novel tasks without existing baselines remains challenging. This paper explores the specific challenges arising in the realm of real-world tasks, based on case studies from research & development in conjunction with industry, and extracts lessons learned from them. It thus fills a gap between the publication of the latest algorithmic and methodical developments and the usually omitted nitty-gritty of how to make them work. Specifically, we give insight into deep learning projects on face matching, print media monitoring, industrial quality control, music scanning, strategy game playing, and automated machine learning, thereby providing best practices for deep learning in practice.

    The DeepScoresV2 dataset and benchmark for music object detection

    The dataset, code, and pre-trained models, as well as user instructions, are publicly available at https://zenodo.org/record/4012193. In this paper, we present DeepScoresV2, an extended version of the DeepScores dataset for optical music recognition (OMR). We improve upon the original DeepScores dataset by providing much more detailed annotations, namely (a) annotations for 135 classes, including fundamental symbols of non-fixed size and shape, increasing the number of annotated symbols by 23%; (b) oriented bounding boxes; (c) higher-level rhythm and pitch information (onset beat for all symbols and line position for noteheads); and (d) a compatibility mode for easy use in conjunction with the MUSCIMA++ dataset for OMR on handwritten documents. These additions open up the potential for future advances in OMR research. Additionally, we release two state-of-the-art baselines for DeepScoresV2, based on Faster R-CNN and the Deep Watershed Detector. An analysis of the baselines shows that regular orthogonal bounding boxes are unsuitable for objects that are long, small, and potentially rotated, such as ties and beams, which demonstrates the need for detection algorithms that naturally incorporate object angles.
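    A small numeric sketch of why axis-aligned boxes fit such objects poorly: for a thin, rotated, beam-like object, the tight oriented box covers only a fraction of its axis-aligned envelope, so most of an orthogonal box is background. The dimensions and angle below are illustrative, not taken from the dataset.

```python
# Illustrative comparison of an oriented bounding box vs. its axis-aligned
# envelope for a thin, rotated object (e.g. a beam). Numbers are made up.
import numpy as np

def axis_aligned_envelope_area(length, thickness, angle_deg):
    """Area of the smallest axis-aligned box around a length x thickness
    rectangle rotated by angle_deg."""
    theta = np.deg2rad(angle_deg)
    # Corners of the oriented rectangle, centered at the origin.
    corners = np.array([[ length / 2,  thickness / 2],
                        [ length / 2, -thickness / 2],
                        [-length / 2,  thickness / 2],
                        [-length / 2, -thickness / 2]])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    rotated = corners @ rot.T
    width, height = rotated.max(axis=0) - rotated.min(axis=0)
    return width * height

oriented_area = 200 * 10                                  # a 200 x 10 px beam
envelope_area = axis_aligned_envelope_area(200, 10, 30)   # same beam at 30 degrees
print(f"oriented box: {oriented_area} px^2, "
      f"axis-aligned envelope: {envelope_area:.0f} px^2")
# With these numbers roughly 90% of the axis-aligned box is background,
# which is the motivation for the oriented boxes provided in DeepScoresV2.
```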

    Sistema de reconocimiento de partituras musicales y generación de archivos sonoros [Music score recognition and audio file generation system]

    Optical Music Recognition (OMR) is a technology used to recognize musical scores from images, process them, and produce an output file in text format. The objective of this work is to take a model already used in other fields and adapt it so that the final output is a playable audio file, with no intermediate steps. To this end, the project uses a Sequence to Sequence model to generate, from an image of a score, the corresponding audio file, which can later be processed or edited. Sequence to Sequence models are a type of deep learning architecture that has produced very good results in applications such as speech recognition, machine translation, and video description, among many others.
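    To illustrate the kind of encoder-decoder setup such a system could build on, below is a minimal image-to-token sequence-to-sequence sketch in PyTorch; the dimensions, vocabulary, and layer choices are illustrative and do not reproduce the thesis model or its audio synthesis step.

```python
# Minimal sequence-to-sequence sketch for score-image-to-symbol transcription
# (PyTorch). Dimensions, vocabulary, and layers are illustrative only.
import torch
import torch.nn as nn

class ScoreEncoder(nn.Module):
    """CNN that turns a grayscale staff image into a sequence of column features."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.proj = nn.LazyLinear(feat_dim)    # collapse the height dimension

    def forward(self, images):                 # images: (B, 1, H, W)
        f = self.conv(images)                  # (B, C, H/4, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)   # (B, W/4, C * H/4): one step per column
        return self.proj(f)                    # (B, W/4, feat_dim)

class TokenDecoder(nn.Module):
    """GRU decoder that emits music-symbol tokens (e.g. pitch/duration pairs)."""
    def __init__(self, vocab_size, feat_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, feat_dim)
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.out = nn.Linear(feat_dim, vocab_size)

    def forward(self, tokens, context):        # context: (1, B, feat_dim) initial hidden state
        h, _ = self.gru(self.embed(tokens), context)
        return self.out(h)                     # (B, T, vocab_size) logits

# Toy forward pass: one 128 x 1024 staff image, a 100-token vocabulary.
enc, dec = ScoreEncoder(), TokenDecoder(vocab_size=100)
features = enc(torch.rand(1, 1, 128, 1024))
context = features[:, -1, :].unsqueeze(0)      # simplest possible image "summary"
logits = dec(torch.randint(0, 100, (1, 20)), context)
# The predicted token sequence would then be converted to MIDI/audio downstream.
```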