25,145 research outputs found

    An M-QAM Signal Modulation Recognition Algorithm in AWGN Channel

    Computing distinctive features from the input data prior to classification adds complexity to Automatic Modulation Classification (AMC) methods, which treat modulation classification as a pattern recognition problem. Although algorithms for Multilevel Quadrature Amplitude Modulation (M-QAM) under various channel scenarios are well documented, a search of the literature indicates that few studies address the classification of high-order M-QAM schemes such as 128-QAM, 256-QAM, 512-QAM, and 1024-QAM. This work investigates the capability of natural logarithmic properties and the extraction of Higher-Order Cumulant (HOC) features from the raw received data. The HOC features were extracted under an Additive White Gaussian Noise (AWGN) channel, with four effective parameters defined to distinguish modulation types from the set 4-QAM to 1024-QAM. This approach makes the recognizer more intelligent and improves the classification success rate. Simulation results obtained under statistical models of noisy channels show that the proposed algorithm recognizes M-QAM signals; most results were promising and showed that the logarithmic classifier works well over both AWGN and various fading channels, achieving a reliable recognition rate even at signal-to-noise ratios below zero. It can therefore be considered an integrated AMC system for identifying high-order M-QAM signals: the unique logarithmic classifier offers high versatility and superior performance compared with previous automatic modulation identification systems. Comment: 18 pages
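The higher-order cumulants mentioned in the abstract are standard statistics of the complex baseband signal. The sketch below is not the paper's implementation; it computes the textbook fourth-order cumulants C40 and C42 from sample moments and applies a logarithm to compress their dynamic range across QAM orders, which is the general idea the abstract describes. All variable names and the noise level are illustrative.

```python
import numpy as np

def hoc_features(x):
    """Fourth-order cumulant features (C40, C42) of a complex baseband
    signal x, using the standard moment-based definitions:
      C40 = M40 - 3*M20^2,   C42 = M42 - |M20|^2 - 2*M21^2."""
    m20 = np.mean(x**2)
    m21 = np.mean(np.abs(x)**2)
    m40 = np.mean(x**4)
    m42 = np.mean(np.abs(x)**4)
    c40 = m40 - 3 * m20**2
    c42 = m42 - np.abs(m20)**2 - 2 * m21**2
    # Log of the magnitude compresses the spread between QAM orders
    return np.log(np.abs(c40) + 1e-12), np.log(np.abs(c42) + 1e-12)

# Illustrative 4-QAM symbols observed through a mild AWGN channel
rng = np.random.default_rng(0)
sym = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
noisy = sym + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
f40, f42 = hoc_features(noisy)
```

For unit-power 4-QAM the theoretical values are C40 = C42 = -1, so both log-magnitude features come out near zero; higher QAM orders give distinct values, which is what makes these features usable for classification.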

    Speech and crosstalk detection in multichannel audio

    The analysis of scenarios in which a number of microphones record the activity of speakers, such as in a round-table meeting, presents a number of computational challenges. For example, if each participant wears a microphone, speech from both the microphone's wearer (local speech) and from other participants (crosstalk) is received. The recorded audio can be broadly classified in four ways: local speech, crosstalk plus local speech, crosstalk alone and silence. We describe two experiments related to the automatic classification of audio into these four classes. The first experiment attempted to optimize a set of acoustic features for use with a Gaussian mixture model (GMM) classifier. A large set of potential acoustic features was considered, some of which have been employed in previous studies. The best-performing features were found to be kurtosis, "fundamentalness," and cross-correlation metrics. The second experiment used these features to train an ergodic hidden Markov model classifier. Tests performed on a large corpus of recorded meetings show classification accuracies of up to 96%, and automatic speech recognition performance close to that obtained using ground truth segmentation
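Kurtosis works as a feature here because a single voice has a heavy-tailed amplitude distribution, while a mixture of overlapping voices tends toward Gaussian by the central limit theorem. The sketch below is not from the paper; it computes per-frame excess kurtosis and contrasts synthetic heavy-tailed "local speech" frames with near-Gaussian "overlap" frames. The data and thresholding are illustrative; the paper feeds such features into a GMM classifier.

```python
import numpy as np

def frame_kurtosis(frame):
    """Excess kurtosis of one audio frame: single-speaker speech is
    heavy-tailed (kurtosis well above 0), while summed crosstalk is
    closer to Gaussian (kurtosis near 0)."""
    f = frame - frame.mean()
    var = np.mean(f**2)
    return np.mean(f**4) / (var**2 + 1e-12) - 3.0

# Synthetic stand-ins: Laplacian samples mimic speech's heavy tails
# (excess kurtosis ~ 3); Gaussian samples mimic overlapped crosstalk.
rng = np.random.default_rng(1)
speech_like = rng.laplace(size=(50, 400))
overlap_like = rng.standard_normal((50, 400))
k_speech = np.array([frame_kurtosis(f) for f in speech_like])
k_overlap = np.array([frame_kurtosis(f) for f in overlap_like])
```

The two kurtosis distributions separate cleanly, which is why the feature ranked among the best in the study's comparison.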

    A Framework for Bioacoustic Vocalization Analysis Using Hidden Markov Models

    Using Hidden Markov Models (HMMs) as a recognition framework for automatic classification of animal vocalizations has a number of benefits, including the ability to handle duration variability through nonlinear time alignment, the ability to incorporate complex language or recognition constraints, and easy extendibility to continuous recognition and detection domains. In this work, we apply HMMs to several different species and bioacoustic tasks using generalized spectral features that can be easily adjusted across species and HMM network topologies suited to each task. This experimental work includes a simple call type classification task using one HMM per vocalization for repertoire analysis of Asian elephants, a language-constrained song recognition task using syllable models as base units for ortolan bunting vocalizations, and a stress stimulus differentiation task in poultry vocalizations using a non-sequential model via a one-state HMM with Gaussian mixtures. Results show strong performance across all tasks and illustrate the flexibility of the HMM framework for a variety of species, vocalization types, and analysis tasks
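The "non-sequential model via a one-state HMM" mentioned for the poultry task reduces to a per-class density model: with a single state there are no transitions, so classification is just picking the class whose emission distribution gives the test frames the highest total log-likelihood. The sketch below is not the paper's system (which uses Gaussian mixtures over spectral features); it uses a single diagonal-covariance Gaussian per class on synthetic data to show the reduction. All names and data are illustrative.

```python
import numpy as np

class OneStateHMM:
    """One-state HMM with a single diagonal-covariance Gaussian emission.
    With one state, scoring a sequence is the sum of per-frame Gaussian
    log-likelihoods; no Viterbi alignment is needed."""
    def fit(self, X):
        self.mu = X.mean(axis=0)
        self.var = X.var(axis=0) + 1e-6  # floor avoids divide-by-zero
        return self
    def score(self, X):
        z = (X - self.mu) ** 2 / self.var
        ll = -0.5 * (z + np.log(2 * np.pi * self.var))
        return ll.sum()

# Two synthetic "stimulus classes" with well-separated feature means
rng = np.random.default_rng(2)
model_a = OneStateHMM().fit(rng.normal(0.0, 1.0, (200, 8)))
model_b = OneStateHMM().fit(rng.normal(3.0, 1.0, (200, 8)))
test_frames = rng.normal(3.0, 1.0, (20, 8))  # drawn from class B
pred = max([("A", model_a), ("B", model_b)], key=lambda m: m[1].score(test_frames))[0]
```

Extending each class model to multiple states (for the call-type and song tasks) adds a transition matrix and Viterbi decoding, but the maximum-likelihood decision rule stays the same.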
