
    Automatic recognition of Persian musical modes in audio musical signals

    This research proposes new approaches for the computational identification of Persian musical modes. This involves constructing a database of audio musical files and developing computer algorithms to perform a musical analysis of the samples. Essential features (the spectral average, chroma, and pitch histograms) and the use of symbolic data are discussed and compared. A tonic detection algorithm is developed to align the feature vectors and to make the mode recognition methods independent of changes in tonality. Subsequently, the similarity between a signal and a set of templates constructed in the training phase, in which data-driven patterns are made for each dastgāh (Persian mode), is gauged using a geometric distance measure such as the Manhattan distance (which is preferred), cross-correlation, or a machine learning method (Gaussian mixture models). The effects of the following parameters are considered and assessed: the amount of training data; the parts of the frequency range used for training; downsampling; tone resolution (12-TET, 24-TET, 48-TET, and 53-TET); the use of overlapping or non-overlapping frames; and silence and high-energy suppression in pre-processing. The santur (a hammered string instrument), which is used extensively in the musical database samples, is described and its physical properties are characterised; its characteristic pitch and harmonic deviations are measured; and the inharmonicity factor of the instrument is calculated for the first time. The results are applicable to Persian music and to other closely related musical traditions of the Mediterranean and the Near East. This approach enables content-based analyses of, and content-based searches of, musical archives.
    Potential applications of this research include: music information retrieval, audio thumbnailing (snippet generation), music archiving and access to archival content, audio compression and coding, association of images with audio content, music transcription, music synthesis, music editors, music instruction, automatic music accompaniment, and the setting of new standards and symbols for musical notation.
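    The template-matching step described in the abstract (tonic-aligned feature vectors compared to per-mode templates by Manhattan distance) can be sketched as follows; the circular-shift alignment and L1 comparison follow the description above, while the template values, vector lengths, and dastgāh names are purely illustrative:

    ```python
    import numpy as np

    def align_to_tonic(chroma, tonic_bin):
        """Circularly shift a chroma vector so the detected tonic sits at index 0."""
        return np.roll(chroma, -tonic_bin)

    def classify_mode(chroma, tonic_bin, templates):
        """Return the mode whose template has the smallest Manhattan (L1)
        distance to the tonic-aligned input vector."""
        aligned = align_to_tonic(chroma, tonic_bin)
        distances = {name: np.abs(aligned - t).sum() for name, t in templates.items()}
        return min(distances, key=distances.get)

    # Toy 4-bin templates for two dastgāhs (values invented for illustration)
    templates = {
        "shur": np.array([1.0, 0.0, 0.0, 0.0]),
        "mahur": np.array([0.0, 0.0, 1.0, 0.0]),
    }
    chroma = np.array([0.0, 0.9, 0.1, 0.0])  # observed vector, tonic detected at bin 1
    print(classify_mode(chroma, tonic_bin=1, templates=templates))  # → shur
    ```

    In practice the vectors would have 12, 24, 48, or 53 bins per octave, matching the tone resolutions assessed in the thesis, and the templates would be learned from training data rather than hand-written.
    
    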

    The KW-boundary hybrid digital waveguide mesh for room acoustics applications

    The digital waveguide mesh is a discrete-time simulation used to model acoustic wave propagation through a bounded medium. It can be applied to the simulation of room acoustics through the generation of impulse responses suitable for auralization purposes. However, large-scale three-dimensional mesh structures are required for high-quality results. These structures must therefore be efficient and also capable of flexible boundary implementation, in terms of both geometrical layout and the possibility of improved mesh termination algorithms. The general one-dimensional N-port boundary termination is investigated, where N depends on the geometry of the modeled domain and the mesh topology used. The equivalence between physical-variable Kirchhoff-model and scattering-based wave-model boundary formulations is proved. This leads to the KW-hybrid one-dimensional N-port boundary-node termination, which is shown to be equivalent to the Kirchhoff- and wave-model cases. The KW-hybrid boundary node is implemented as part of a new hybrid two-dimensional triangular digital waveguide mesh. This is shown to offer the possibility of large-scale, computationally efficient mesh structures for more complex shapes. It proves more accurate than a similar rectilinear mesh in terms of geometrical fit, and offers significant savings in processing time and memory use over a standard wave-based model. The new hybrid mesh also has the potential for improved real-world room boundary simulations through the inclusion of additional mixed modeling algorithms.
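    The simplest relative of the mesh structures discussed above is a one-dimensional digital waveguide: two counter-propagating wave rails with reflective boundary terminations. The following minimal sketch illustrates that idea only; the node count, reflection coefficients, and impulse excitation are illustrative assumptions, not the thesis's KW-hybrid N-port formulation:

    ```python
    import numpy as np

    def simulate_waveguide(n_nodes, n_steps, excite_at, r_left=-1.0, r_right=-1.0):
        """1-D digital waveguide: right- and left-going travelling-wave rails.
        Boundaries reflect with coefficients r_left/r_right (-1 = perfectly
        reflecting, phase-inverting). Returns the signal at the excitation node."""
        right = np.zeros(n_nodes)  # right-going wave components
        left = np.zeros(n_nodes)   # left-going wave components
        right[excite_at] = left[excite_at] = 0.5  # unit impulse, split between rails
        output = []
        for _ in range(n_steps):
            new_right = np.zeros(n_nodes)
            new_left = np.zeros(n_nodes)
            # propagate one spatial step per time step
            new_right[1:] = right[:-1]
            new_left[:-1] = left[1:]
            # boundary terminations: incoming wave reflected back with r
            new_right[0] = r_left * left[0]
            new_left[-1] = r_right * right[-1]
            right, left = new_right, new_left
            # physical signal is the sum of the two wave components
            output.append(right[excite_at] + left[excite_at])
        return np.array(output)

    response = simulate_waveguide(n_nodes=8, n_steps=32, excite_at=4)
    ```

    A full mesh generalizes this to an N-port scattering junction at each node; the KW-hybrid termination concerns how the Kirchhoff (physical-variable) and wave (scattering) formulations meet at such a boundary.
    
    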

    The Semantic Web MIDI Tape: An Interface for Interlinking MIDI and Context Metadata

    The Linked Data paradigm has been used to publish a large number of musical datasets and ontologies on the Semantic Web, such as MusicBrainz, AcousticBrainz, and the Music Ontology. Recently, the MIDI Linked Data Cloud has been added to these datasets, representing more than 300,000 pieces in MIDI format as Linked Data and opening up the possibility of linking fine-grained symbolic music representations to existing music metadata databases. Although the dataset makes MIDI resources available in Web data standards such as RDF and SPARQL, the important issue of finding meaningful links between these MIDI resources and relevant contextual metadata in other datasets remains. A fundamental barrier to the provision and generation of such links is the difficulty users have in adding new MIDI performance data and metadata to the platform. In this paper, we propose the Semantic Web MIDI Tape, a set of tools and an associated interface for interacting with the MIDI Linked Data Cloud by enabling users to record, enrich, and retrieve MIDI performance data and related metadata in native Web data standards. The goal of such interactions is to find meaningful links between published MIDI resources and their relevant contextual metadata. We evaluate the Semantic Web MIDI Tape in various use cases involving user-contributed content, MIDI similarity querying, and entity recognition methods, and discuss their potential for finding links between MIDI resources and metadata.
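    The kind of link the abstract describes, between a MIDI resource and its contextual metadata, can be illustrated with a toy in-memory triple store and pattern query. In the actual platform this role is played by RDF graphs queried with SPARQL; every identifier below (prefixes, URIs, predicates) is invented for illustration:

    ```python
    # Toy triple store: a set of (subject, predicate, object) tuples.
    triples = set()

    def add(s, p, o):
        """Assert one triple."""
        triples.add((s, p, o))

    def query(s=None, p=None, o=None):
        """Return all triples matching the given pattern; None acts as a wildcard,
        like a SPARQL variable."""
        return sorted(t for t in triples
                      if (s is None or t[0] == s)
                      and (p is None or t[1] == p)
                      and (o is None or t[2] == o))

    # A MIDI resource linked to contextual metadata (all identifiers hypothetical)
    midi = "midi-ld:piece/0001"
    add(midi, "rdfs:label", "Example piece")
    add(midi, "mo:performer", "dbpedia:Jane_Doe")

    # Retrieve everything known about the MIDI resource
    for triple in query(s=midi):
        print(triple)
    ```

    The equivalent SPARQL pattern would be `SELECT ?p ?o WHERE { <midi-ld:piece/0001> ?p ?o }`; the value of the Linked Data approach is that `dbpedia:Jane_Doe` can then be dereferenced to pull in metadata from an entirely different dataset.
    
    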