    A study of human mood tagging of musical pieces

    We conducted a survey in which participants were asked to label the mood conveyed by a variety of musical pieces. Two representations of mood were used: the 2D emotion space and updated Hevner mood labels. The results show that responses under the two representations were both consistent and sensible. Among the musical characteristics examined, intensity/energy, tempo and beat strength consistently influenced participants' mood responses, while tonality and pitch did not. Finally, the survey raised several important questions about labelling musical pieces with mood, including how to handle a piece that conveys more than one mood simultaneously, or one whose mood changes rapidly.
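    The 2D emotion space places moods on valence (pleasant–unpleasant) and arousal (calm–energetic) axes. A minimal sketch of relating the two representations is nearest-neighbour lookup: map a 2D point to the closest mood label. The label set and coordinates below are illustrative assumptions, not the study's actual data or method.

    ```python
    import math

    # Assumed placements of a few Hevner-style mood labels in the
    # (valence, arousal) plane; purely illustrative coordinates.
    MOOD_COORDS = {
        "happy":    (0.8, 0.6),
        "calm":     (0.6, -0.6),
        "sad":      (-0.7, -0.5),
        "agitated": (-0.6, 0.7),
    }

    def nearest_mood(valence, arousal):
        """Map a point in the 2D emotion space to the closest mood label."""
        return min(MOOD_COORDS,
                   key=lambda m: math.dist((valence, arousal), MOOD_COORDS[m]))
    ```

    For example, `nearest_mood(0.9, 0.5)` falls in the high-valence, high-arousal region and resolves to "happy" under these assumed coordinates.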

    Diver navigation system acoustic signal encoding/decoding optimisation

    This thesis describes the optimisation of the encoding and decoding processes used to transmit and receive frequency-coded data tones acoustically during the operation of an underwater diver navigation system. The aim was to reduce the time required both to generate these data tones for transmission and to decode them on reception. Encoding of the data tones is performed using a phase-locked loop under the control of a microcontroller. A technique combining hardware and software modifications was developed that effectively halves the phase-locked-loop settling time, and therefore the time required to generate the tones. Decoding of the data tones is achieved using the Fast Fourier Transform. Alternative forms of the Discrete Fourier Transform were explored to find the most efficient in terms of execution time. Numerous software optimisations were then applied, reducing program execution time by 54% with no penalty in program complexity or length. Testing of the system under identical real-life operating conditions showed no evidence of any performance degradation as a result of these optimisations.
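    One well-known alternative form of the DFT for decoding a small set of known tone frequencies is the Goertzel algorithm, which evaluates a single DFT bin per candidate frequency instead of a full transform. The sketch below is illustrative only: the thesis's actual DFT variant, tone frequencies, and sample rates are not stated here, so all parameters are assumptions.

    ```python
    import math

    def goertzel(samples, sample_rate, target_freq):
        """Magnitude of one DFT bin via the Goertzel algorithm, an
        efficient choice when only a few frequencies must be checked."""
        n = len(samples)
        k = round(n * target_freq / sample_rate)  # nearest DFT bin index
        coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
        s_prev, s_prev2 = 0.0, 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        power = s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
        return math.sqrt(max(power, 0.0))

    def decode_tone(samples, sample_rate, candidate_freqs):
        """Return the candidate frequency with the strongest response."""
        return max(candidate_freqs,
                   key=lambda f: goertzel(samples, sample_rate, f))
    ```

    Per tone, Goertzel costs O(n) per candidate frequency versus O(n log n) for a full FFT, which is why it often wins on microcontrollers when the candidate set is small.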

    A neurally-inspired musical instrument classification system based upon the sound onset

    Physiological evidence suggests that sound-onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset: a gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, produce parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies based on mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context for the method. Classification success rates for the neurally inspired method are around 75%; the cepstral methods achieve between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested on data from an unrelated dataset.
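    The onset-emphasis mechanism described above can be sketched in a few lines: a depressing dynamic synapse responds strongly to the start of a sustained input and then fatigues, and a leaky integrate-and-fire neuron converts that transient into spikes. All constants below are illustrative assumptions, not the parameters used in the paper.

    ```python
    def depressing_synapse(inputs, recovery=0.05, use=0.5):
        """Depressing dynamic synapse: a releasable resource is consumed
        by input and recovers slowly, so a sustained input produces a
        strong transient at onset followed by a weaker steady response."""
        r, out = 1.0, []
        for x in inputs:
            release = use * r * x
            r += recovery * (1.0 - r) - release
            out.append(release)
        return out

    def lif_spike_train(drive, threshold=0.5, leak=0.9):
        """Leaky integrate-and-fire neuron: the membrane potential decays
        each step, accumulates input, and emits a spike (with reset)
        when it crosses threshold."""
        v, spikes = 0.0, []
        for x in drive:
            v = leak * v + x
            if v >= threshold:
                spikes.append(1)
                v = 0.0
            else:
                spikes.append(0)
        return spikes
    ```

    Feeding a step input (silence followed by a sustained tone) through both stages yields spikes clustered at the onset and none during the sustained portion, which is the behaviour the onset fingerprint exploits; in the full system one such channel would sit behind each gammatone filterbank band.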