4 research outputs found

    Sound to scale to sound, a setup for microtonal exploration and composition

    This paper elaborates on a setup for microtonal exploration, experimentation and composition. While the initial design of the Tarsos software aimed at scale analysis of ethnic music recordings, it turned out to provide a flexible platform for pitch exploration of any kind of music. Scales from ethnic music, as well as theoretically designed scales and scales from musical practice, can be analyzed in great detail and adapted through a flexible interface with auditory feedback. The resulting scales are written in the standardized Scala format and can be used in a MIDI-to-WAV converter that renders a MIDI file into audio tuned to the scale. This setup creates an environment for tone scale exploration that can be used for microtonal composition.
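    As a concrete illustration of the Scala output format mentioned above, the minimal Python sketch below writes a scale, given as cents above the tonic, to a .scl file. The write_scl function and the five cent values are hypothetical examples for illustration, not code or data from Tarsos itself.

        def write_scl(path, description, cents):
            """Write a scale, given in cents above the tonic, as a Scala .scl file.

            The 1/1 tonic is implicit in the format and is not listed; the final
            entry (here 1200.0, the octave) closes the scale.
            """
            with open(path, "w") as f:
                f.write("! {}\n".format(path))       # lines starting with ! are comments
                f.write("{}\n".format(description))  # free-text description line
                f.write("{}\n".format(len(cents)))   # number of pitch lines that follow
                for c in cents:
                    f.write("{:.5f}\n".format(c))    # a decimal point marks a cents value

        # A hypothetical five-tone scale extracted from a recording:
        write_scl("example.scl", "Example 5-tone scale",
                  [182.0, 385.0, 678.0, 885.0, 1200.0])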

    A novel chroma representation of polyphonic music based on multiple pitch tracking techniques

    It is common practice to map the frequency content of music onto a chroma representation, but many different schemes exist for constructing such a representation. In this paper, a new scheme is proposed. It comprises the detection of salient frequencies, the conversion of salient frequencies to notes, a psychophysically motivated weighting of the harmonics supporting a note, a restriction on the harmonic relations between different notes, and a restriction on deviations from a predefined pitch scale (e.g. the equally tempered Western scale). A large-scale experimental evaluation has confirmed that the novel chroma representation matches manual chord labels more closely than the representations generated by six other tested schemes. The new chroma representation is therefore expected to improve applications such as song similarity matching and chord detection and labeling.
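    At the core of any such scheme is the folding of detected frequencies into twelve pitch classes. The Python sketch below shows this basic frequency-to-chroma mapping only; the salience detection, harmonic weighting and scale restrictions of the proposed scheme are not reproduced, and the input peaks are made-up example values.

        import math

        def chroma_vector(freqs, mags, ref=440.0):
            """Accumulate spectral magnitudes into 12 pitch-class bins (0 = A)."""
            chroma = [0.0] * 12
            for f, m in zip(freqs, mags):
                if f <= 0:
                    continue
                # fractional semitone distance from the A4 reference,
                # rounded to the nearest note and folded into one octave
                semitone = 12.0 * math.log2(f / ref)
                chroma[round(semitone) % 12] += m
            return chroma

        # E.g. three salient peaks near A4, C#5 and E5 (an A major triad):
        print(chroma_vector([440.0, 554.4, 659.3], [1.0, 0.8, 0.6]))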

    Recent Improvements of an Auditory Model Based Front-End for the Transcription of Vocal Queries

    In this paper, recent improvements to an existing acoustic front-end for the transcription of vocal (hummed, sung) musical queries are presented. Thanks to the addition of a second pitch extractor and the introduction of a novel multi-stage segmentation algorithm, the application domain of the front-end could be extended to whistled queries, and the performance on the other two query types could be improved as well. Experiments have shown that the new system can transcribe vocal queries with an accuracy ranging from 76% (whistling) to 85% (humming), and that it clearly outperforms other state-of-the-art systems on all three query types.
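    To make the transcription step concrete, the Python sketch below quantizes a frame-wise pitch track (in Hz) to MIDI note numbers and splits it into note events wherever the quantized pitch changes. This is a single-pass simplification under assumed conventions (0 Hz marking unvoiced frames); the paper's multi-stage segmentation algorithm is considerably more elaborate.

        import math

        def hz_to_midi(f):
            # MIDI note number of the nearest equal-tempered pitch (A4 = 69)
            return round(69 + 12 * math.log2(f / 440.0))

        def segment(pitch_track, frame_dur=0.01):
            """Turn a per-frame pitch track into (midi_note, onset, duration) events."""
            events, current, onset = [], None, 0.0
            for i, f in enumerate(pitch_track):
                note = hz_to_midi(f) if f > 0 else None  # None = unvoiced frame
                if note != current:
                    if current is not None:
                        events.append((current, onset, i * frame_dur - onset))
                    current, onset = note, i * frame_dur
            if current is not None:
                events.append((current, onset, len(pitch_track) * frame_dur - onset))
            return events

        # A hummed A4 followed by a B4, separated by a short unvoiced gap:
        print(segment([440.0] * 20 + [0.0] * 5 + [493.9] * 20))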