
    A holistic approach for image-to-graph: application to optical music recognition

    A number of applications would benefit from neural approaches capable of generating graphs from images in an end-to-end fashion. One of these fields is optical music recognition (OMR), which focuses on the computational reading of music notation from document images. Given that music notation can be expressed as a graph, the aforementioned approach represents a promising solution for OMR. In this work, we propose a new neural architecture that retrieves a certain representation of a graph, identified by a specific order of its vertices, in an end-to-end manner. This architecture works by means of a double output: it sequentially predicts the category of each vertex, along with the edges between each pair of vertices. The experiments carried out demonstrate the effectiveness of our proposal at retrieving graph structures from excerpts of handwritten music notation. Our results also show that certain design decisions, such as the choice of graph representation, play a fundamental role in the performance of this approach.

    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Work produced with the support of a 2021 Leonardo Grant for Researchers and Cultural Creators, BBVA Foundation. The Foundation takes no responsibility for the opinions, statements and contents of this project, which are entirely the responsibility of its authors. The second author is supported by grant ACIF/2021/356 from the “Programa I+D+i de la Generalitat Valenciana”.
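    The double-output formulation described above can be illustrated with a short sketch: under a fixed vertex order, a labelled undirected graph reduces to a sequence of vertex categories plus a binary edge decision for every vertex pair. The function and label names below are illustrative, not taken from the paper.

    ```python
    def graph_to_targets(vertex_labels, edges):
        """Serialise a labelled, undirected graph under a fixed vertex order.

        vertex_labels: list of category strings, in the chosen vertex order.
        edges: set of (i, j) index pairs.
        Returns the label sequence plus one 0/1 decision per unordered pair.
        """
        n = len(vertex_labels)
        pair_targets = {
            (i, j): 1 if (i, j) in edges or (j, i) in edges else 0
            for i in range(n) for j in range(i + 1, n)
        }
        return vertex_labels, pair_targets

    # Toy example: a notehead attached to a stem, the stem attached to a beam.
    labels, pairs = graph_to_targets(["notehead", "stem", "beam"], {(0, 1), (1, 2)})
    ```

    Because the targets depend on the vertex order, different orderings yield different training sequences, which is one reason the choice of graph representation matters for performance.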

    End-to-end optical music recognition for pianoform sheet music

    End-to-end solutions have brought about significant advances in the field of Optical Music Recognition. These approaches directly provide the symbolic representation of a given image of a musical score. Despite this, several types of document, such as pianoform musical scores, cannot yet benefit from these solutions, since their structural complexity does not allow their effective transcription. This paper presents a neural method whose objective is to transcribe such musical scores in an end-to-end fashion. We also introduce the GrandStaff dataset, which contains 53,882 single-system piano scores in common western modern notation. The sources are encoded in both a standard digital music representation and its adaptation for current transcription technologies. The method proposed in this paper is trained and evaluated using this dataset. The results show that the approach presented is, for the first time, able to effectively transcribe pianoform notation in an end-to-end manner.

    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This paper is part of the MultiScore project (PID2020-118447RA-I00), funded by MCIN/AEI/10.13039/501100011033. The first author is supported by Grant ACIF/2021/356 from the “Programa I+D+i de la Generalitat Valenciana”.

    End-to-End Page-Level Assessment of Handwritten Text Recognition

    The evaluation of Handwritten Text Recognition (HTR) systems has traditionally used metrics based on the edit distance between HTR and ground truth (GT) transcripts, at both the character and word levels. This is adequate when the experimental protocol assumes that the GT and HTR text lines are the same, which allows edit distances to be computed independently for each line. Driven by recent advances in pattern recognition, HTR systems increasingly face the end-to-end page-level transcription of a document, where the precision with which the different text lines are located, and their corresponding reading order (RO), play a key role. In such a case, the standard metrics do not take into account the inconsistencies that might appear. In this paper, the problem of evaluating HTR systems at the page level is introduced in detail. We analyse the convenience of a two-fold evaluation, in which the transcription accuracy and the RO goodness are considered separately. Different alternatives are proposed, analysed and empirically compared, both through partially simulated and through real, full end-to-end experiments. The results support the validity of the proposed two-fold evaluation approach. An important conclusion is that such an evaluation can be adequately achieved with just two simple and well-known metrics: the Word Error Rate (WER), which takes transcription sequentiality into account, and the here re-formulated Bag of Words Word Error Rate (bWER), which ignores order. While the latter directly and very accurately assesses intrinsic word recognition errors, the difference between the two metrics (ΔWER) correlates well with the Normalised Spearman’s Foot Rule Distance (NSFD), a metric which explicitly measures the RO errors associated with layout analysis flaws. To arrive at these conclusions, we have introduced another metric, the Hungarian Word Word Error Rate (hWER), based on a regularised version of the Hungarian Algorithm proposed here. This metric is shown to be almost identical to bWER in all cases, and both bWER and hWER are also almost identical to WER whenever the HTR transcripts and GT references are guaranteed to be in the same RO.

    This paper is part of the I+D+i projects PID2020-118447RA-I00 (MultiScore) and PID2020-116813RB-I00a (SimancasSearch), funded by MCIN/AEI/10.13039/501100011033. The first author’s research was developed in part within the Valencian Graduate School and Research Network of Artificial Intelligence (valgrAI), co-funded by the Generalitat Valenciana and the European Union. The second author is supported by a María Zambrano grant from the Spanish Ministerio de Universidades and the European Union NextGenerationEU/PRTR. The third author is supported by grant ACIF/2021/356 from the “Programa I+D+i de la Generalitat Valenciana”.
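    The two headline metrics can be sketched in a few lines. The WER below is the standard word-level Levenshtein distance normalised by reference length; the bag-of-words variant is one reasonable order-ignoring formulation (pairing surplus hypothesis words with missing reference words as substitutions), and may differ in detail from the paper's exact definition of bWER.

    ```python
    from collections import Counter

    def wer(ref, hyp):
        """Word Error Rate: word-level edit distance / reference length."""
        r, h = ref.split(), hyp.split()
        prev = list(range(len(h) + 1))
        for i, rw in enumerate(r, 1):
            cur = [i]
            for j, hw in enumerate(h, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (rw != hw)))   # substitution
            prev = cur
        return prev[-1] / len(r)

    def bwer(ref, hyp):
        """Order-ignoring WER over word multisets (illustrative formulation)."""
        r, h = Counter(ref.split()), Counter(hyp.split())
        missing = sum((r - h).values())  # reference words absent from hypothesis
        extra = sum((h - r).values())    # hypothesis words absent from reference
        return max(missing, extra) / sum(r.values())

    # Swapped word order is penalised by WER but not by bWER; the gap (ΔWER)
    # is the part attributable to reading-order errors.
    ref, hyp = "the quick brown fox", "quick the brown fox"
    delta = wer(ref, hyp) - bwer(ref, hyp)
    ```

    On this toy pair, both words of the swap count as substitutions for WER, while the bag-of-words view sees identical multisets, so the entire error mass shows up in ΔWER.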

    Region-based layout analysis of music score images

    The Layout Analysis (LA) stage is of vital importance to the correct performance of an Optical Music Recognition (OMR) system. It identifies the regions of interest, such as staves or lyrics, which must then be processed in order to transcribe their content. Despite the existence of modern approaches based on deep learning, no exhaustive study of LA in OMR has yet been carried out with regard to the performance of different models, their generalization to different domains or, more importantly, their impact on subsequent stages of the pipeline. This work focuses on filling this gap in the literature by means of an experimental study of different neural architectures, music document types, and evaluation scenarios. The need for training data has also led us to propose a new semi-synthetic data-generation technique that enables LA approaches to be applied efficiently in real scenarios. Our results show that: (i) the choice of model and its performance are crucial for the entire transcription process; (ii) the metrics commonly used to evaluate the LA stage do not always correlate with the final performance of the OMR system; and (iii) the proposed data-generation technique enables state-of-the-art results to be achieved with a limited set of labeled data.

    This paper is part of the I+D+i project PID2020-118447RA-I00 (MultiScore), funded by MCIN/AEI/10.13039/501100011033, and of project GV/2020/030, funded by the Generalitat Valenciana. The first and third authors acknowledge support from the “Programa I+D+i de la Generalitat Valenciana” through grants ACIF/2019/042 and ACIF/2021/356, respectively.

    Few-Shot Symbol Classification via Self-Supervised Learning and Nearest Neighbor

    The recognition of symbols within document images is one of the most relevant steps in the Document Analysis field. While current state-of-the-art methods based on Deep Learning are capable of adequately performing this task, they generally require a vast amount of data that has to be manually labeled. In this paper, we propose a self-supervised learning-based method that addresses this task by training a neural feature extractor with a set of unlabeled documents and performing the recognition task using just a few reference samples. Experiments on different corpora comprising music, text, and symbol documents show that the proposal is capable of adequately tackling the task, with accuracy rates of up to 95% in few-shot settings. Moreover, the results show that the presented strategy outperforms base supervised learning approaches trained with the same amount of data, which in some cases even fail to converge. This approach hence stands as a lightweight alternative for dealing with symbol classification with few annotated data.

    This paper is part of the I+D+i project PID2020-118447RA-I00 (MultiScore), funded by MCIN/AEI/10.13039/501100011033. The first author is supported by grant FPU19/04957 from the Spanish Ministerio de Universidades. The second and third authors are supported by grants ACIF/2021/356 and APOSTD/2020/256, respectively, from the “Programa I+D+i de la Generalitat Valenciana”.
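    With the self-supervised extractor already trained and frozen, the classification stage described above amounts to a nearest-neighbour search in embedding space. A minimal sketch follows; the function name is hypothetical, and cosine similarity is one common choice of metric (the paper's exact distance may differ).

    ```python
    import numpy as np

    def nn_classify(query_feats, ref_feats, ref_labels):
        """Label each query with the class of its nearest reference embedding.

        query_feats, ref_feats: 2-D arrays of embeddings (one row per sample),
        as produced by a frozen, self-supervised feature extractor.
        ref_labels: one label per reference row (the few labelled examples).
        """
        q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
        r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
        sims = q @ r.T  # cosine similarity of every query against every reference
        return [ref_labels[i] for i in sims.argmax(axis=1)]

    # Toy 2-D embeddings: two reference symbols and two queries near them.
    refs = np.array([[1.0, 0.0], [0.0, 1.0]])
    queries = np.array([[0.9, 0.1], [0.2, 0.8]])
    predicted = nn_classify(queries, refs, ["clef", "rest"])
    ```

    Only the handful of reference samples needs labels, which is what makes the approach lightweight compared with training a supervised classifier head.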

    Retrieving Music Semantics from Optical Music Recognition by Machine Translation

    In this paper, we apply machine translation techniques to solve one of the central problems in the field of optical music recognition: extracting the semantics of a sequence of music characters. So far, this problem has been approached through heuristics and grammars, which are not generalizable solutions. We borrow the seq2seq model and the attention mechanism from machine translation to address this issue. Given its example-based learning, the proposed model is meant to apply to different notations, provided there is enough training data. The model was tested on the PrIMuS dataset of common Western music notation incipits. Its performance was satisfactory for the vast majority of examples, flawlessly extracting the musical meaning of 85% of the incipits in the test set: correctly mapping series of accidentals into key signatures, pairs of digits into time signatures, combinations of digits and rests into multi-measure rests, detecting implicit accidentals, and so on.

    This work is supported by the Spanish Ministry HISPAMUS project TIN2017-86576-R, partially funded by the EU, and by CIRMMT’s Inter-Centre Research Exchange Funding and McGill’s Graduate Mobility Award.

    Decoupling music notation to improve end-to-end Optical Music Recognition

    Inspired by the Text Recognition field, end-to-end schemes based on Convolutional Recurrent Neural Networks (CRNN) trained with the Connectionist Temporal Classification (CTC) loss function are considered one of the current state-of-the-art techniques for staff-level Optical Music Recognition (OMR). Unlike text symbols, music-notation elements may be defined as a combination of (i) a shape primitive located at (ii) a certain position on a staff. However, this double nature is generally neglected in the learning process, as each combination is treated as a single token. In this work, we study whether exploiting this particularity of music notation actually benefits recognition performance and, if so, which approach is the most appropriate. To that end, we thoroughly review existing approaches that explore this premise and propose different combinations of them. Furthermore, considering the limitations observed in such approaches, we propose a novel decoding strategy specifically designed for OMR. The results obtained on four different corpora of historical manuscripts show the relevance of leveraging the double nature of music notation, since doing so outperforms the standard approaches in which it is ignored. In addition, the proposed decoding leads to significant reductions in the error rates with respect to the other cases.

    This paper is part of the I+D+i project PID2020-118447RA-I00 (MultiScore), funded by MCIN/AEI/10.13039/501100011033. The first author is supported by grant FPU19/04957 from the Spanish Ministerio de Universidades. The second author is supported by grant ACIF/2021/356 and the third author by grant APOSTD/2020/256, both from the “Programa I+D+i de la Generalitat Valenciana”.
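    As a point of reference, the standard decoding that these CRNN+CTC schemes build on is CTC best-path (greedy) decoding: take the argmax label per frame, merge consecutive repeats, and drop blanks. A minimal sketch is below; the paper's decoupled strategies, which decode shape and position separately, are not shown.

    ```python
    BLANK = 0  # index conventionally reserved for the CTC blank label

    def ctc_greedy_decode(frame_ids):
        """Collapse a sequence of per-frame argmax label ids into tokens:
        merge consecutive repeats, then drop blanks."""
        out, prev = [], None
        for t in frame_ids:
            if t != prev and t != BLANK:
                out.append(t)
            prev = t
        return out

    # A blank between two identical labels keeps them as separate tokens.
    decoded = ctc_greedy_decode([0, 1, 1, 0, 1, 2, 2, 0])
    ```

    Treating every shape-position combination as one token inflates the label vocabulary this decoder operates over, which is precisely the issue the decoupled formulations address.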

    Applying Automatic Translation for Optical Music Recognition’s Encoding Step

    Optical music recognition is a research field whose efforts have been mainly focused, owing to the difficulties involved, on document and image recognition. However, there is a final step after the recognition phase that has not been properly addressed or discussed, and which is relevant to obtaining a standard digital score from the recognition process: encoding the data into a standard file format. In this paper, we address this task by proposing and evaluating the feasibility of using machine translation techniques, both statistical approaches and neural systems, to automatically convert the results of graphical encoding recognition into a standard semantic format that can be exported as a digital score. We also discuss the implications, challenges and details to be taken into account when applying machine translation techniques to music languages, which are very different from natural human languages; this needs to be addressed prior to performing experiments and has not been reported in previous works. We describe the experimental results in detail and conclude that machine translation techniques are a suitable solution for this task, as they have proven to obtain robust results.

    This work was supported by the Spanish Ministry HISPAMUS project TIN2017-86576-R, partially funded by the EU, and by the Generalitat Valenciana through project GV/2020/030.

    Does assessment evolve? A co-evaluation experience in PBL (¿La evaluación evoluciona? Una experiencia de coevaluación en ABP)

    For the last 8 years, the Multimedia Engineering degree has taught its 4th year using Project-Based Learning (PBL), integrating all of the course’s subjects into it. The programme has been very successful, but it has always suffered from one problem: assessment. As the work is highly collaborative group work, integrating all the subjects of the course and devoted to a single large project across both semesters, it is very difficult to discern the real work done by each team member, which leads to imbalances or even malpractice. For years, measures have been adopted to mitigate this situation, but some dysfunctions persist. To solve this problem, we have designed and implemented a co-evaluation methodology, aligned with the management of the PBL programme, which pursues two objectives: to ensure a distribution of marks associated with the individual effort made, based on objective and quantifiable information, together with a fair formative and summative evaluation; and to develop the soft skills that are essential today in collaborative work environments. In this article we present the co-evaluation tool developed, the assessment of the tool made by the participants in the experience and by alumni from previous years, and the results obtained to date.

    This work was supported by a grant from the Programa de Redes de investigación en docencia universitaria of the Instituto de Ciencias de la Educación of the Universidad de Alicante (2021-22 call). Ref.: 5490, “Diseño y desarrollo de una metodología y plataforma TIC para coevaluación en ABP”.