41 research outputs found

    Audio Retrieval Using Multiple Feature Vectors

    A content-based audio retrieval system helps users find target audio material. Based on audio content analysis, signals are classified into speech, music, several types of environmental sounds, and silence. The extracted features are temporal curves of the average zero-crossing rate, the spectral centroid, the spectral flux, and the spectral roll-off. This dissertation uses these four features to retrieve audio from the database; combining multiple features increases the accuracy of retrieval from the audio database.
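    As a rough illustration only (a minimal Python sketch, not the dissertation's code; librosa and the file name "query.wav" are assumptions), the four feature curves described above could be extracted per file roughly as follows:

    # Minimal sketch: extract the four feature curves named in the abstract.
    # librosa and "query.wav" are assumptions, not part of the original work.
    import numpy as np
    import librosa

    y, sr = librosa.load("query.wav", sr=None, mono=True)

    # Frame-level curves of zero-crossing rate, spectral centroid and spectral roll-off.
    zcr = librosa.feature.zero_crossing_rate(y)[0]
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)[0]

    # Spectral flux: frame-to-frame change of the magnitude spectrum.
    S = np.abs(librosa.stft(y))
    flux = np.sqrt(np.sum(np.diff(S, axis=1) ** 2, axis=0))
    flux = np.concatenate([[0.0], flux])  # pad so all curves have matching length

    # One compact vector per file for matching against the database.
    feature_vector = np.array([zcr.mean(), centroid.mean(), rolloff.mean(), flux.mean()])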

    URBAN SOUND RECOGNITION USING DIFFERENT FEATURE EXTRACTION TECHNIQUES

    The application of advanced methods for noise analysis in urban areas, through the development of systems for classification of sound events, significantly improves and simplifies the process of noise assessment. The main purpose of sound recognition and classification systems is to develop algorithms that can detect and classify sound events occurring in a chosen environment and give an appropriate response to their users. In this research, a supervised system for recognition and classification of sound events was established by developing feature extraction techniques, based on digital signal processing of the audio signals, whose outputs are used as input parameters to machine learning algorithms for classification of the sound events. Various audio parameters were extracted and processed in order to choose the set of parameters that best identifies the class to which a sound belongs. The resulting acoustic event detection and classification (AED/C) system could be further implemented in sound sensors for automatic control of environmental noise using source classification, which reduces the amount of human validation required for sound level measurements since the target noise source is clearly identified.
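    As a hedged illustration of the pipeline described above (feature extraction feeding a supervised classifier), a minimal Python sketch might look as follows; librosa, scikit-learn, the clip list, and the labels are all assumptions, not the system described in the abstract:

    # Minimal sketch of a supervised sound-event classifier: MFCC summaries + random forest.
    # The file names and class labels below are placeholders for a real urban-sound dataset.
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def clip_features(path):
        """Summarize one clip as the mean and standard deviation of its MFCC curves."""
        y, sr = librosa.load(path, sr=22050, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    paths = ["clip_0001.wav", "clip_0002.wav"]   # placeholder file names
    labels = ["siren", "drilling"]               # placeholder class labels

    X = np.stack([clip_features(p) for p in paths])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)

    clf = RandomForestClassifier(n_estimators=300).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))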

    Grateful Live: Mixing Multiple Recordings of a Dead Performance into an Immersive Experience


    The potential of bioacoustics for surveying carrion insects

    Knowledge of the sequential colonization of cadavers by carrion insects is fundamental for post-mortem interval (PMI) estimation. Creating local empirical data on succession by trapping insects is time-consuming, dependent on accessibility and environmental conditions, and can be biased by sampling practices, including disturbance to decomposing remains and the sampling interval. To overcome these limitations, audio identification of species using their wing beats is being evaluated as a potential tool to survey and build local databases of carrion species. The results could guide the focus of forensic entomologists toward further developmental studies on the locally dominant species and, ultimately, improve PMI estimations. However, there are challenges associated with this approach that must be addressed. Wing beat frequency is influenced by both abiotic and biotic factors, including temperature, humidity, age, size, and sex. The audio recording and post-processing must be customized for different species and their influencing factors. Furthermore, detecting flight sounds amid background noise and a multitude of species in the field poses an additional challenge. Nonetheless, previous studies have successfully identified several fly species based on wing beat sounds. Combined with advances in machine learning, the analysis of bioacoustic data is likely to offer a powerful diagnostic tool for use in species identification.
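    To make the measurement concrete, the Python sketch below (assuming scipy; the file "flyby.wav" and the 80-1000 Hz search band are placeholders, not a published protocol) estimates the dominant wing-beat frequency of a recorded flight tone from its power spectrum:

    # Minimal sketch: estimate a dominant wing-beat frequency from a field recording.
    # "flyby.wav" and the search band are placeholder assumptions.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import welch

    sr, y = wavfile.read("flyby.wav")
    y = y.astype(np.float64)
    if y.ndim > 1:            # mix stereo to mono
        y = y.mean(axis=1)

    # Fly wing-beat fundamentals typically lie in the low hundreds of Hz, so look for
    # the spectral peak within a restricted band (the band limits are an assumption).
    f, pxx = welch(y, fs=sr, nperseg=8192)
    band = (f >= 80) & (f <= 1000)
    wing_beat_hz = f[band][np.argmax(pxx[band])]
    print(f"estimated wing-beat frequency: {wing_beat_hz:.1f} Hz")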

    Diversity-Robust Acoustic Feature Signatures Based on Multiscale Fractal Dimension for Similarity Search of Environmental Sounds

    This paper proposes new acoustic feature signatures based on the multiscale fractal dimension (MFD), which are robust against the diversity of environmental sounds, for content-based similarity search. Diversity of sound sources and acoustic compositions is a typical property of environmental sounds. Several acoustic features have been proposed for environmental sounds; among them are the widely used Mel-Frequency Cepstral Coefficients (MFCCs), which describe frequency-domain features. However, in addition to frequency-domain features, environmental sounds have other important features in the time domain at various time scales. In our previous paper, we proposed the enhanced multiscale fractal dimension signature (EMFD) for environmental sounds. This paper extends EMFD with the kernel density estimation method, which improves performance on similarity search tasks. Furthermore, it proposes another new acoustic feature signature based on MFD, the very-long-range multiscale fractal dimension signature (MFD-VL), which describes features of the time-varying envelope over long periods of time. The MFD-VL signature is stable and robust against background noise and the small fluctuations in sound-source parameters that arise in field recordings. We discuss the effectiveness of these signatures for similarity-based sound search by comparing them with acoustic features proposed in the DCASE 2018 challenges. Owing to the unique descriptiveness of the proposed signatures, we confirmed that they are effective when used alongside other acoustic features.
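    Since the abstract does not spell out the EMFD or MFD-VL algorithms, the Python sketch below only illustrates the general idea of a multiscale fractal-dimension descriptor, using Higuchi's estimator as a stand-in; it is not the paper's method, and the frame lengths are arbitrary placeholders:

    # Generic sketch of a multiscale fractal-dimension style signature (NOT the paper's EMFD/MFD-VL).
    import numpy as np

    def higuchi_fd(x, k_max=8):
        """Higuchi fractal dimension of a 1-D signal."""
        n = len(x)
        log_len, log_inv_k = [], []
        for k in range(1, k_max + 1):
            lengths = []
            for m in range(k):
                idx = np.arange(m, n, k)
                if len(idx) < 2:
                    continue
                norm = (n - 1) / ((len(idx) - 1) * k)
                lengths.append(np.abs(np.diff(x[idx])).sum() * norm / k)
            log_len.append(np.log(np.mean(lengths)))
            log_inv_k.append(np.log(1.0 / k))
        # Slope of log(curve length) vs log(1/k) estimates the fractal dimension.
        return np.polyfit(log_inv_k, log_len, 1)[0]

    def fd_signature(y, sr, frame_s=(0.05, 0.5, 5.0)):
        """Toy multiscale signature: mean Higuchi FD over frames of several lengths."""
        sig = []
        for w in frame_s:
            hop = max(2, int(sr * w))
            frames = [y[i:i + hop] for i in range(0, len(y) - hop + 1, hop)]
            sig.append(np.mean([higuchi_fd(f) for f in frames]) if frames else np.nan)
        return np.array(sig)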