    Change Point Determination in Audio Data Using Auditory Features

    This study investigates the properties of auditory-based features for audio change point detection. Two popular techniques were used in the analysis: a metric-based approach and the BIC scheme. Since the efficiency of change point detection depends on the type and size of the feature space, we compared two auditory-based feature sets (MFCC and GTEAD) in both detection schemes, and we propose a new technique based on multiscale analysis to determine content changes in audio data. The two typical change point detection techniques, each with the two feature spaces, were compared on a set of acoustic scenes containing a single change point. The results show that the accuracy of the detected positions depends on the feature type, the dimensionality of the feature space, the detection technique and the type of audio data. For the BIC approach, better accuracy was obtained with the MFCC feature space in most cases; however, change point detection with this feature yields a lower detection ratio than with the GTEAD feature. The proposed multiscale metric-based technique was evaluated under the same criteria as BIC, and in that case the GTEAD feature space led to better accuracy. We show that the proposed multiscale change point detection scheme is competitive with the BIC scheme using the MFCC feature space.
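
    As a concrete illustration of the BIC scheme discussed above, the following is a minimal sketch, not the authors' implementation: it scores each candidate frame in a window of feature vectors by comparing a single full-window Gaussian against two split models (the standard delta-BIC test). It assumes X is an N x d NumPy array of MFCC frames; the penalty weight lam and the search margin are illustrative parameters.

```python
import numpy as np

def delta_bic(X, i, lam=1.0):
    """Delta-BIC for a candidate change point at frame i of window X (N x d).
    Positive values favour the two-model hypothesis, i.e. a change at i."""
    N, d = X.shape
    logdet = lambda S: np.linalg.slogdet(S)[1]   # log-determinant of a covariance
    full = logdet(np.cov(X, rowvar=False))
    left = logdet(np.cov(X[:i], rowvar=False))
    right = logdet(np.cov(X[i:], rowvar=False))
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(N)   # model-complexity term
    return 0.5 * (N * full - i * left - (N - i) * right) - lam * penalty

def detect_change_point(X, margin=20):
    """Return the best-scoring change point, or None if no score is positive.
    The margin keeps enough frames on each side for stable covariance estimates."""
    scores = [delta_bic(X, i) for i in range(margin, len(X) - margin)]
    best = int(np.argmax(scores))
    return margin + best if scores[best] > 0 else None
```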

    Prominence Driven Character Animation

    This paper details the development of a fully automated system for character animation implemented in Autodesk Maya. The system uses prioritised speech events to algorithmically generate head, body, arm and leg movements alongside eye blinks, eyebrow movements and lip-synching. In addition, gaze tracking is generated automatically relative to the definition of focus objects: contextually important objects in the character's worldview. The plugin uses an animation profile to store the relevant controllers and movements for a specific character, allowing any character to run with the system. Once a profile has been created, an audio file can be loaded and animated with a single button click. The average time to animate is two to three minutes for one minute of speech, and the plugin can be used either as a first-pass system for high-quality work or as part of a batch animation workflow for larger amounts of content, as exemplified in television and online dissemination channels.
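
    To make the prominence-driven idea concrete, the sketch below maps prioritised speech events to head-nod keyframes whose amplitude scales with each event's prominence. This is an illustrative reconstruction under assumed data structures; the event format, controller attribute name and amplitudes are hypothetical and not the plugin's actual data model.

```python
# Hypothetical event format: (time_seconds, prominence in [0, 1]).
HEAD_ATTRIBUTE = "head_ctrl.rotateX"  # assumed Maya controller attribute name

def head_nod_keyframes(events, max_angle=8.0, nod_duration=0.3):
    """Convert prioritised speech events into (time, angle) keyframes
    describing a down-and-back head nod per event; more prominent events
    produce larger nods. All constants are illustrative."""
    keys = []
    for t, prominence in sorted(events):
        angle = max_angle * prominence
        keys += [(t, 0.0), (t + nod_duration / 2, angle), (t + nod_duration, 0.0)]
    return keys

# Example: two stressed syllables, the first more prominent than the second.
print(head_nod_keyframes([(0.8, 0.9), (2.1, 0.5)]))
```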

    Spectral-based Features Ranking for Gamelan Instruments Identification using Filter Techniques

    In this paper, we describe an approach to spectral-based feature ranking for Javanese gamelan instrument identification using filter techniques. The model extracts a set of spectral-based features from the signal using the Short Time Fourier Transform (STFT). The features were ranked using five algorithms: ReliefF, Chi-Squared, Information Gain, Gain Ratio, and Symmetric Uncertainty. We then tested the ranked features by cross-validation using a Support Vector Machine (SVM). The experiments showed that the Gain Ratio algorithm gave the best result, yielding an accuracy of 98.93%.
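
    The sketch below shows the general shape of such a filter-then-classify pipeline in scikit-learn. It is not the paper's implementation: mutual information stands in as one available filter criterion (scikit-learn has no built-in ReliefF or Gain Ratio), and top_k and the SVM settings are illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def rank_and_evaluate(X, y, top_k=10):
    """X: (n_samples, n_features) spectral features, y: instrument labels.
    Ranks features with a filter criterion, then scores the top_k subset
    with a cross-validated SVM."""
    scores = mutual_info_classif(X, y, random_state=0)
    ranking = np.argsort(scores)[::-1]              # best feature first
    selected = ranking[:top_k]
    accuracy = cross_val_score(SVC(kernel="rbf"), X[:, selected], y, cv=10).mean()
    return ranking, accuracy
```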

    Acoustic Scene Classification

    This work was supported by the Centre for Digital Music Platform (grant EP/K009559/1) and a Leadership Fellowship (EP/G007144/1), both from the United Kingdom Engineering and Physical Sciences Research Council.

    Biologically-inspired neural coding of sound onset for a musical sound classification task

    A biologically-inspired neural coding scheme for the early auditory system is outlined. The cochlea response is simulated with a passive gammatone filterbank. The output of each bandpass filter is spike-encoded using a zero-crossing based method over a range of sensitivity levels. The scheme is inspired by the highly parallelised nature of the auditory nerve innervation within the cochlea. A key aspect of early auditory processing is simulated, namely onset detection, using leaky integrate-and-fire neuron models. Finally, a time-domain neural network (the echo state network) is used to tackle the 'what' task of auditory perception using the output of the onset detection neuron alone.
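
    As an illustration of the onset-detection stage, the sketch below implements a basic leaky integrate-and-fire neuron driven by per-frame spike counts from one channel of a zero-crossing coder. Only a rapid burst of input spikes overcomes the leak, so firings cluster at sound onsets. All constants are illustrative, not the paper's parameters.

```python
def lif_onset(spike_counts, dt=0.001, tau=0.015, threshold=5.0, w=1.0):
    """Leaky integrate-and-fire onset neuron.
    spike_counts: afferent spikes per time step of length dt (seconds);
    tau is the membrane leak time constant, w the input weight.
    Returns the indices of time steps at which the neuron fires."""
    v = 0.0
    fired = []
    for t, s in enumerate(spike_counts):
        v += dt * (-v / tau) + w * s   # exponential leak plus weighted input
        if v >= threshold:
            fired.append(t)
            v = 0.0                    # reset membrane potential after a spike
    return fired

# Example: sparse background spikes, then a burst marking an onset.
print(lif_onset([0, 0, 0, 1, 0, 0, 6, 7, 5, 1, 0, 0]))
```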

    Determination of Pitch Range Based on Onset and Offset Analysis in Modulation Frequency Domain

    An auditory scene in a natural environment contains multiple sources. Auditory scene analysis (ASA) is the process by which the auditory system segregates a scene into streams corresponding to different sources. Determining the range of pitch frequency is necessary for segmentation. We propose a system that determines the range of pitch frequency by analysing onsets and offsets in the modulation frequency domain. In the proposed system, the modulation spectrum of speech is first calculated; then, onsets and offsets are detected in each subband. Segments are generated by matching corresponding onset and offset fronts. Finally, by choosing the desired segments, the range of pitch frequency is determined. Systematic evaluation shows that the range of pitch frequency is estimated with good accuracy.
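
    A minimal sketch of the per-subband detection step, under assumptions of our own: onsets are taken as peaks of the smoothed temporal derivative of a subband envelope and offsets as its troughs. The paper operates on modulation-spectral subbands and then matches onset and offset fronts across subbands, which is not shown here; the smoothing width and thresholds are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def onsets_offsets(envelope, sigma=4.0, on_thresh=0.05, off_thresh=-0.05):
    """Detect candidate onsets and offsets in one subband envelope.
    Onsets: local maxima of the smoothed derivative above on_thresh.
    Offsets: local minima of the smoothed derivative below off_thresh."""
    d = np.gradient(gaussian_filter1d(envelope, sigma))
    onsets = [i for i in range(1, len(d) - 1)
              if d[i] > on_thresh and d[i] >= d[i - 1] and d[i] >= d[i + 1]]
    offsets = [i for i in range(1, len(d) - 1)
               if d[i] < off_thresh and d[i] <= d[i - 1] and d[i] <= d[i + 1]]
    return onsets, offsets
```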

    An iterative model-based approach to cochannel speech separation
