
    Codificación y publicación de música: Un taller sobre Music Encoding Initiative (MEI)

    This is a set of slides presented during the Semana de Humanidades Digitales (Week of Digital Humanities) organized by Digital Humanities associations in Mexico (Red HD), Colombia (Red Colombiana de Humanidades Digitales), and Argentina (Asociación Argentina de Humanidades Digitales). An MEI template (template_more_elements.xml) for use with the presentation can be found at https://doi.org/10.17613/dmtt-8n57

    Encoding Mensural Notation with MEI

    This set of slides was used both in a workshop, Digital Humanities in Early Music Research I, Session II – Early Music Databases and Encoding, and in a paleography course at the University of Freiburg. The slides cover the following topics: (1) a short introduction to MEI, (2) the basic structure of an MEI file, (3) examples, (4) MEI technologies – editors and viewers, (5) mensural notation in MEI, and (6) a hands-on encoding example. An MEI template file for use with the slides can be accessed at https://doi.org/10.17613/m5yr-xb87
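    The "basic structure of an MEI file" covered in the slides can be illustrated with a minimal sketch. The skeleton below is hypothetical (it is not the workshop's template file) and only shows the characteristic nesting of an MEI document: an mei root, an meiHead for metadata, and a music/body/mdiv subtree for the notation itself.

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal MEI skeleton (not the workshop template):
# the <mei> root holds metadata (<meiHead>) and notation (<music>).
mei = ET.Element("mei", {"xmlns": "http://www.music-encoding.org/ns/mei"})
head = ET.SubElement(mei, "meiHead")
title_stmt = ET.SubElement(ET.SubElement(head, "fileDesc"), "titleStmt")
ET.SubElement(title_stmt, "title").text = "Example incipit"

# The encoded notation lives under music/body/mdiv; in a mensural
# encoding the <mdiv> would carry a <score> or parts subtree.
music = ET.SubElement(mei, "music")
body = ET.SubElement(music, "body")
mdiv = ET.SubElement(body, "mdiv")

xml_text = ET.tostring(mei, encoding="unicode")
print(xml_text)
```

    Serializing the tree makes the nesting explicit, which is the point the slides' "basic structure" section walks through before adding real note content.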

    Retrieving Music Semantics from Optical Music Recognition by Machine Translation

    In this paper, we apply machine translation techniques to solve one of the central problems in the field of optical music recognition: extracting the semantics of a sequence of music characters. So far, this problem has been approached through heuristics and grammars, which are not generalizable solutions. We borrowed the seq2seq model and the attention mechanism from machine translation to address this issue. Given its example-based learning, the proposed model is meant to apply to different notations provided there is enough training data. The model was tested on the PrIMuS dataset of common Western music notation incipits. Its performance was satisfactory for the vast majority of examples, flawlessly extracting the musical meaning of 85% of the incipits in the test set: correctly mapping series of accidentals into key signatures, pairs of digits into time signatures, and combinations of digits and rests into multi-measure rests, detecting implicit accidentals, etc. This work was supported by the Spanish Ministry HISPAMUS project TIN2017-86576-R, partially funded by the EU, and by CIRMMT's Inter-Centre Research Exchange Funding and McGill's Graduate Mobility Award.
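    The attention mechanism the paper borrows from machine translation can be sketched in a few lines. The toy, pure-Python function below is a hypothetical illustration (not the paper's model): it scores each encoder state against a decoder query, softmaxes the scores into weights, and returns the weighted context vector the decoder would condition on.

```python
import math

def attention(query, encoder_states):
    """Toy dot-product attention (hypothetical sketch, not the
    paper's implementation): weight each encoder state by its
    similarity to the decoder query, then take the weighted sum."""
    # Dot-product similarity of the query with every encoder state.
    scores = [sum(q * s for q, s in zip(query, state))
              for state in encoder_states]
    # Softmax turns raw scores into weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: weighted sum of the encoder states.
    context = [sum(w * state[i] for w, state in zip(weights, encoder_states))
               for i in range(len(query))]
    return weights, context

# A query aligned with the second encoder state attends mostly to it.
weights, context = attention([0.0, 1.0],
                             [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
```

    In the OMR setting described in the abstract, the encoder states would summarize the recognized music-character sequence and the decoder would emit semantic tokens (key signatures, time signatures, etc.), attending to the relevant characters at each step.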

    Encoding Polyphony from Medieval Manuscripts Notated in Mensural Notation

    This panel submission for the 2021 Music Encoding Conference brings together five short papers that focus on the making of computer-readable encodings of polyphony in the notational style – mensural notation – in which it was originally copied. Mensural notation was used in the medieval West to encode polyphony from the late thirteenth to the sixteenth century. The Measuring Polyphony (MP) Online Editor, funded by an NEH Digital Humanities Advancement Grant, is a software tool that enables non-technical users to make Humdrum and MEI encodings of mensural notation, and links these encodings to digital images of the manuscripts in which these compositions were first notated. Topics explored by the authors include: the processes of, and the goals informing, the linking of manuscript images to music encodings; choices and compromises made in the development process of the MP Editor in order to facilitate its rapid deployment; and the implications of capturing dual encodings – a parts-based encoding that reflects the layout of the original source, and a score-based encoding. Having two encodings of the music data is useful for a variety of activities, including performance and analysis, but also within the editorial process, and for sharing data with other applications. The authors present two case studies that document the possibilities and potential in the interchange of music data between the MP Editor and other applications, specifically MuRET, an optical music recognition (OMR) tool, and Humdrum analysis tools.

    Mensural MEI Template

    A mensural MEI template for use with the Encoding Mensural Notation with MEI slides. The slides are available at https://doi.org/10.17613/zqwr-xm82

    MEI Template for Codificación y publicación de música

    An MEI template for use with the presentation Codificación y publicación de música: Un taller sobre Music Encoding Initiative (MEI). The presentation slides can be accessed at https://doi.org/10.17613/fqbk-nz31

    Taking Digital Humanities to Guatemala, a Case Study in the Preservation of Colonial Musical Heritage

    Abstract of paper 0664, presented at the Digital Humanities Conference 2019 (DH2019), Utrecht, the Netherlands, 9–12 July 2019.
