
    Encoding Polyphony from Medieval Manuscripts Notated in Mensural Notation

    This panel submission for the 2021 Music Encoding Conference brings together five short papers that focus on the making of computer-readable encodings of polyphony in the notational style – mensural notation – in which it was originally copied. Mensural notation was used in the medieval West to encode polyphony from the late thirteenth to the sixteenth centuries. The Measuring Polyphony (MP) Online Editor, funded by an NEH Digital Humanities Advancement Grant, is a software tool that enables non-technical users to make Humdrum and MEI encodings of mensural notation, and links these encodings to digital images of the manuscripts in which the compositions were first notated. Topics explored by the authors include: the processes of, and the goals informing, the linking of manuscript images to music encodings; the choices and compromises made in the development of the MP Editor in order to facilitate its rapid deployment; and the implications of capturing dual encodings – a parts-based encoding that reflects the layout of the original source, and a score-based encoding. Having two encodings of the music data is useful for a variety of activities, including performance and analysis, but also within the editorial process and for sharing data with other applications. The authors present two case studies that document the possibilities and potential in the interchange of music data between the MP Editor and other applications, specifically MuRET, an optical music recognition (OMR) tool, and Humdrum analysis tools.
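    As a rough illustration of why holding both encodings at once is useful, the sketch below collates a parts-based encoding, where each voice carries links to regions of the manuscript image, into a single score-based event list ordered by onset. It is a minimal sketch in plain Python; the class names, fields and zone format are hypothetical, not the MP Editor's actual data model.

```python
# A minimal sketch (hypothetical data model, not the MP Editor's) of
# collating parts-based encodings, with image-zone links, into a score.
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: str        # e.g. "g3"
    duration: float   # length in semibreves
    image_zone: str   # region of the source image, e.g. "x,y,w,h"

@dataclass
class Part:
    name: str     # e.g. "tenor"
    events: list  # NoteEvents in the order they appear in the source

def to_score(parts):
    """Collate parts-based encodings into one score-based event list,
    ordered by onset so that all voices can be read together."""
    score = []
    for part in parts:
        onset = 0.0
        for ev in part.events:
            score.append((onset, part.name, ev))
            onset += ev.duration
    return sorted(score, key=lambda item: item[0])

# Two voices copied separately in the source, merged into a score view.
triplum = Part("triplum", [NoteEvent("d5", 1.0, "120,88,40,40"),
                           NoteEvent("c5", 1.0, "165,90,40,40")])
tenor = Part("tenor", [NoteEvent("g3", 2.0, "130,310,44,40")])
for onset, voice, ev in to_score([triplum, tenor]):
    print(f"{onset:>4}  {voice:<8} {ev.pitch}  zone={ev.image_zone}")
```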

    White Mensural Manual Encoding: from Humdrum to MEI

    The recovery of musical heritage today necessarily involves its digitization: not only the scanning of images, but also the encoding, in computer-readable formats, of the musical content described in the original manuscripts. In general, this encoding can be done with automated tools based on what is known as Optical Music Recognition (OMR), or by writing the corresponding computer code manually. OMR technology is not yet mature enough to extract the musical content of sheet-music images with sufficient quality, still less from handwritten sources, so in many cases it is more efficient to encode the works manually. However, although MEI (Music Encoding Initiative) is currently the most appropriate format in which to store the encoding, it is extremely tedious to write by hand. We therefore propose a new format, named **mens, that allows quick manual encoding and from which both MEI itself and other common representations, such as LilyPond or a MusicXML transcription, can be generated. Using this approach, the antiphon Salve Regina for eight-voice choir, written by Jerónimo de la Torre (1607–1673), has been successfully encoded and transcribed.
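    As a rough idea of the shorthand-to-MEI pipeline described here, the following Python sketch converts tokens in a simplified **mens-like scheme (a duration letter plus a Humdrum-style pitch) into MEI mensural <note> elements. The token grammar is an illustrative stand-in, not the published **mens specification.

```python
# Duration letters follow Humdrum-style conventions; this grammar is a
# simplified stand-in for **mens, not its published specification.
DURATIONS = {
    "X": "maxima", "L": "longa", "S": "brevis",
    "s": "semibrevis", "M": "minima", "m": "semiminima",
}

def token_to_mei(token):
    """Turn a token like 'Sc' (a brevis on c) into an MEI mensural <note>."""
    dur, pitch = DURATIONS[token[0]], token[1:]
    # Humdrum pitch convention: 'c' = C4, 'cc' = C5, 'C' = C3, 'CC' = C2.
    octave = 3 + len(pitch) if pitch[0].islower() else 4 - len(pitch)
    return f'<note pname="{pitch[0].lower()}" oct="{octave}" dur="{dur}"/>'

voice = ["Sc", "Md", "Me", "sf", "Lc"]  # a short made-up passage
print("\n".join(token_to_mei(t) for t in voice))
```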

    Manual Mensural Encoding: from Humdrum to MEI

    The recovery of musical heritage today necessarily involves its digitization: not only the scanning of images, but also the encoding, in computer-readable formats, of the musical content described in the original manuscripts. In general, this encoding can be done with automated tools based on what is known as Optical Music Recognition (OMR), or by writing the corresponding computer code manually. OMR technology is not yet mature enough to extract the musical content of sheet-music images with sufficient quality, still less from handwritten sources, so in many cases it is more efficient to encode the works manually. However, although MEI (Music Encoding Initiative) is currently the most appropriate format in which to store the encoding, it is extremely tedious to write by hand. We therefore propose a new format, named **mens, that allows quick manual encoding and from which both MEI itself and other common representations, such as LilyPond or a MusicXML transcription, can be generated. Using this approach, the antiphon Salve Regina for eight-voice choir, written by Jerónimo de la Torre (1607–1673), has been successfully encoded and transcribed. This work was supported by the Spanish Ministerio de Economía y Competitividad through Project HISPAMUS, Ref. No. TIN2017-86576-R (supported by EU FEDER funds), and partially by the ISEA.CV 2017/2018 research grants.

    The Relevance of Digital Humanities to the Analysis of Late Medieval/Early Renaissance Music

    In a seminal publication on computational and comparative musicology, Nicholas Cook argued more than a decade ago that recent developments in computational musicology presented a significant opportunity for disciplinary renewal. Musicology, he said, was on the brink of a new phase wherein “objective representations of music” could be rapidly and accurately compared and analysed using computers. Cook’s largely retrospective conspectus of what I and others now call digital musicology, following the vogue of digital humanities, might seem prophetic, yet in other ways it cannot be faulted for missing its mark when it came to developments in the following decade. While Cook laid the blame for its delayed advent on the cultural turn in musicology, digital musicology today, which is more a way of enhancing musicological research than a particular approach in its own right, is on the brink of another revolution of sorts that promises to bring diverse disciplinary branches closer together. In addition to the extension of the types of computer-assisted analysis already familiar to Cook, new generic models of data capable of linking music, image (including digitisations of music notation), sound and documentation are poised to leverage musicology into the age of the semantic World Wide Web. At the same time, advanced forms of computer modelling are being developed that simulate historical modes of listening and improvisation, thereby beginning to address research questions relevant to current debates in music cognition, music psychology and cultural studies, and musical creativity in the Middle Ages, Renaissance and beyond.
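    To make the idea of “generic models of data” linking music, image, sound and documentation concrete, here is an illustrative linked-data sketch in Python using rdflib. Every URI and property name in it is invented for illustration, not drawn from any published vocabulary.

```python
# An invented-vocabulary sketch of linked data tying a composition to a
# notation image, an encoding, and a recorded performance.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/music/")
g = Graph()

work = EX["machaut_messe_kyrie"]
g.add((work, EX.title, Literal("Messe de Nostre Dame: Kyrie")))
# Link the work to a digitised page of its manuscript source...
g.add((work, EX.notatedIn, URIRef("http://example.org/iiif/ms-a/f438v")))
# ...to a machine-readable encoding of its notation...
g.add((work, EX.encodedAs, EX["machaut_kyrie.mei"]))
# ...and to a recording of the same passage in performance.
g.add((work, EX.performedIn, EX["recordings/kyrie_take3.flac"]))

print(g.serialize(format="turtle"))
```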

    Music Encoding Conference Proceedings

    Funding: UIDB/00693/2020, UIDP/00693/2020.

    Music Encoding Conference Proceedings 2021, 19–22 July 2021, University of Alicante (Spain): Onsite & Online

    This document includes the papers and posters presented at the Music Encoding Conference 2021, held in Alicante from 19 to 22 July 2021. Funded by project Multiscore, MCIN/AEI/10.13039/50110001103.

    09051 Abstracts Collection – Knowledge representation for intelligent music processing

    From the twenty-fifth to the thirtieth of January, 2009, the Dagstuhl Seminar 09051 on “Knowledge representation for intelligent music processing” was held in Schloss Dagstuhl – Leibniz Centre for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations and demos given during the seminar, as well as plenary presentations, reports of workshop discussions, results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general, followed by plenary ‘stimulus’ papers, then reports and abstracts arranged by workshop, and finally some concluding materials providing views both of the seminar itself and forward to the longer-term goals of the discipline. Links to extended abstracts, full papers and supporting materials are provided where available. The organisers thank David Lewis for editing these proceedings.

    Automatic score-to-score music generation

    Music generation is the task of generating music using a model or algorithm. There are multiple ways of achieving this task, just as there are multiple types of data used to represent music. Music generation can be audio-based or symbolic, working with formats such as MIDI. Symbolic approaches have been successful, especially those using note-level representations such as the MIDI format. However, there is an absence of a baseline dataset tailored specifically to music score generation using notation-level representations. In this thesis, we first construct a dataset specifically for the training and evaluation of music generation models, and then build an automatic score-to-score generation model. This research not only expands the horizons of music score generation but also establishes a solid foundation for future innovations in the field, with a dataset made for score-to-score music generation.
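    A toy sketch of the dataset side of such work: each example pairs an input score with a target score as parallel token sequences, the usual setup for sequence-to-sequence training. The token scheme and the pairing below are invented for illustration and are not the thesis’s actual format.

```python
# A toy score-to-score dataset: (input, target) pairs of invented
# notation-level tokens, encoded into integer ids for a seq2seq model.
def build_vocab(sequences):
    """Map every distinct token to an integer id, reserving ids for
    padding and end-of-sequence markers."""
    vocab = {"<pad>": 0, "<eos>": 1}
    for seq in sequences:
        for tok in seq:
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(seq, vocab):
    return [vocab[tok] for tok in seq] + [vocab["<eos>"]]

# One (input, target) pair: here, a melody and a reduced version of it.
pairs = [
    (["clef_G2", "note_C4_q", "note_E4_q", "note_G4_h"],
     ["clef_G2", "note_C4_h", "note_G4_h"]),
]
vocab = build_vocab(seq for pair in pairs for seq in pair)
for src, tgt in pairs:
    print(encode(src, vocab), "->", encode(tgt, vocab))
```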

    Applying Automatic Translation for Optical Music Recognition’s Encoding Step

    Optical music recognition is a research field whose efforts have been focused mainly, owing to the difficulties involved, on document and image recognition. However, there is a final step after the recognition phase that has not been properly addressed or discussed, and which is relevant to obtaining a standard digital score from the recognition process: the step of encoding the data in a standard file format. In this paper, we address this task by proposing and evaluating the feasibility of using machine translation techniques, both statistical approaches and neural systems, to automatically convert the results of graphical-encoding recognition into a standard semantic format that can be exported as a digital score. We also discuss the implications, challenges and details to be taken into account when applying machine translation techniques to music languages, which are very different from natural human languages; this needs to be addressed prior to performing experiments and has not been reported in previous works. We also describe experimental results in detail, and conclude that machine translation techniques are a suitable solution for this task, as they have proven to obtain robust results. This work was supported by the Spanish Ministry’s HISPAMUS project TIN2017-86576-R, partially funded by the EU, and by the Generalitat Valenciana through project GV/2020/030.
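    To make the “encoding step” concrete, the sketch below translates a recognised sequence of graphical (agnostic) symbols, which know only shapes and staff positions, into semantic tokens that know actual pitches. The paper evaluates statistical and neural machine translation for this; the dictionary baseline and both token vocabularies below are invented stand-ins for illustration.

```python
# Staff-position-to-pitch table for a G2 (treble) clef: the bottom line L1
# is E4, the first space S1 is F4, and so on. For simplicity this sketch
# assumes a G2 clef throughout rather than tracking clef changes.
G2_POSITIONS = {"L1": "E4", "S1": "F4", "L2": "G4", "S2": "A4", "L3": "B4"}

def translate(agnostic):
    """Translate agnostic tokens like 'note.quarter@L2' (shape + staff
    position) into semantic tokens like 'note-G4_quarter' (actual pitch)."""
    semantic = []
    for tok in agnostic:
        if tok.startswith("clef."):
            semantic.append("clef-" + tok.split(".")[1])
        else:
            shape_dur, pos = tok.split("@")
            dur = shape_dur.split(".")[1]
            semantic.append(f"note-{G2_POSITIONS[pos]}_{dur}")
    return semantic

print(translate(["clef.G2", "note.quarter@L2", "note.half@S2"]))
# -> ['clef-G2', 'note-G4_quarter', 'note-A4_half']
```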