3 research outputs found

    Detection of helicopters using neural nets

    Artificial neural networks (ANNs), in combination with parametric spectral representation techniques, are applied to the detection of helicopter sound. Training of the ANN detectors was based on simulated helicopter sound from four helicopters and a variety of non-helicopter sounds. Coding techniques based on linear prediction coefficients (LPCs) have been applied to obtain spectral estimates of the acoustic signals. Other forms of the LPC parameters, such as reflection coefficients, cepstrum coefficients, and line spectral pairs (LSPs), have also been used as feature vectors for training and testing the ANN detectors. We have also investigated the use of the wavelet transform for signal de-noising prior to feature extraction. The performance of the various feature extraction techniques is evaluated in terms of their detection accuracy.
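    A minimal sketch of the pipeline this abstract describes: wavelet de-noising followed by LPC analysis and an LPC-to-cepstrum conversion for one analysis frame. Library choices (librosa, PyWavelets), the db4 wavelet, the soft-threshold rule, and the model order of 12 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import librosa
import pywt

def denoise_wavelet(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet de-noising prior to feature extraction (illustrative settings)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from the finest detail band
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))         # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def lpc_features(frame, order=12):
    """LPC coefficients plus LPC-cepstrum for one frame (two of the parameterisations mentioned)."""
    a = librosa.lpc(frame.astype(float), order=order)         # a[0] == 1.0 by convention
    # Standard LPC-to-cepstrum recursion: c_n = -a_n - (1/n) * sum_{k<n} k * c_k * a_{n-k}
    c = np.zeros(order)
    for n in range(1, order + 1):
        c[n - 1] = -a[n] - sum((k / n) * c[k - 1] * a[n - k] for k in range(1, n))
    return a[1:], c
```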

    Implementation of an Intelligent Target Classifier with Bicoherence Feature Set

    This paper examines the feasibility of bispectral analysis of acoustic signals emanating from underwater targets for the purpose of classification. Higher-order analysis, especially bispectral analysis, has been widely used to analyse signals when non-Gaussianity and non-linearity are involved. Bicoherence, a normalized form of the bispectrum, has been used to extract source-specific features, which are finally fed to a neural network classifier. Vector quantization has been used to reduce the dimensionality of the feature set, thereby reducing computational costs. Simulations were carried out with linear, tan-sigmoid, and log-sigmoid transfer functions and with different codebook sizes. It is found that the bicoherence feature set can provide acceptable levels of classification accuracy with a properly trained neural network classifier.
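    A hedged sketch of segment-averaged bicoherence estimation, the normalized bispectrum used here as the feature set. The normalisation below follows a common textbook definition; segment length, overlap, and windowing are illustrative assumptions and may differ from the paper's exact formulation.

```python
import numpy as np

def bicoherence(x, nfft=256, noverlap=128):
    """Estimate bicoherence b(f1, f2) by averaging FFT triple products over overlapping segments."""
    step = nfft - noverlap
    segments = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, step)]
    window = np.hanning(nfft)

    num = np.zeros((nfft, nfft), dtype=complex)   # accumulates X(f1) X(f2) X*(f1+f2)
    p12 = np.zeros((nfft, nfft))                  # accumulates |X(f1) X(f2)|^2
    p3 = np.zeros(nfft)                           # accumulates |X(f)|^2

    f = np.arange(nfft)
    idx = (f[:, None] + f[None, :]) % nfft        # DFT index of f1 + f2 (periodic wrap-around)

    for seg in segments:
        X = np.fft.fft(window * (seg - seg.mean()))
        prod12 = X[:, None] * X[None, :]
        num += prod12 * np.conj(X[idx])
        p12 += np.abs(prod12) ** 2
        p3 += np.abs(X) ** 2

    # Values lie in [0, 1]; the physically meaningful (principal) domain is the
    # triangle f1 + f2 <= Nyquist, from which features would be drawn.
    return np.abs(num) / (np.sqrt(p12 * p3[idx]) + 1e-12)
```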

    Masked Conditional Neural Networks for Sound Recognition

    Sound recognition has been studied for decades to grant machines the human hearing ability. Advances in this field help in a range of applications, from industrial ones such as fault detection in machines and noise monitoring to household applications such as surveillance and hearing aids. The problem of sound recognition, like any pattern recognition task, involves the reliability of the extracted features and the recognition model. The problem has been approached through decades of crafted features used collaboratively with models based on neural networks or statistical models such as Gaussian mixtures and hidden Markov models. Neural networks are currently being considered as a method to automate the feature extraction stage together with their already established role in recognition, and the performance of such models is approaching that of handcrafted features. However, current neural-network-based models are not primarily designed around the nature of the sound signal and may not optimally harness its distinctive properties. This thesis proposes neural network models that exploit the nature of the time-frequency representation of the sound signal. We propose the ConditionaL Neural Network (CLNN) and the Masked ConditionaL Neural Network (MCLNN). The CLNN is designed to account for the temporal dimension of a signal and serves as the framework for the MCLNN. The MCLNN allows a filterbank-like behaviour to be embedded within the network using a specially designed binary mask. The masking subdivides the frequency range of a signal into bands and allows concurrent consideration of different feature combinations, analogous to the manual handcrafting of the optimum set of features for a recognition task. The proposed models have been evaluated through an extensive set of experiments on a range of publicly available datasets of music genres and environmental sounds, where they surpass state-of-the-art Convolutional Neural Networks and several hand-crafted attempts.
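    A minimal sketch of the binary band-mask idea behind the MCLNN: each hidden unit is tied to a contiguous block of frequency bins, giving the layer a filterbank-like behaviour. The bandwidth and overlap values, the matrix sizes, and the wrap-around band placement are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np

def band_mask(n_features, n_hidden, bandwidth, overlap):
    """Binary mask of shape (n_features, n_hidden); column j enables one frequency band."""
    mask = np.zeros((n_features, n_hidden))
    step = bandwidth - overlap                      # how far the active band shifts per hidden unit
    for j in range(n_hidden):
        start = (j * step) % n_features
        rows = (start + np.arange(bandwidth)) % n_features
        mask[rows, j] = 1.0                         # hidden unit j "sees" only this band of bins
    return mask

# Element-wise masking of a dense weight matrix restricts each hidden unit to its band,
# while different columns cover different feature (band) combinations concurrently.
W = np.random.randn(60, 100) * band_mask(60, 100, bandwidth=20, overlap=5)
```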