A Detailed Investigation into Low-Level Feature Detection in Spectrogram Images
Being the first stage of analysis within an image, low-level feature detection is a crucial step in the image analysis process and, as such, deserves suitable attention. This paper presents a systematic investigation into low-level feature detection in spectrogram images, the result of which is the identification of frequency tracks. Analysis of the literature identifies different strategies for accomplishing low-level feature detection; however, the advantages and disadvantages of each have not been explicitly investigated. Three model-based detection strategies are outlined, each extracting an increasing amount of information from the spectrogram, and, through ROC analysis, it is shown that detection rates increase with the level of extraction. Nevertheless, further investigation suggests that model-based detection has a limitation: it is not computationally feasible to fully evaluate the model of even a simple sinusoidal track. Therefore, alternative approaches, such as dimensionality reduction, are investigated to reduce the complex search space. It is shown that, if carefully selected, these techniques can approach the detection rates of model-based strategies that perform the same level of information extraction. The implementations used to derive the results presented in this paper are available online at http://stdetect.googlecode.com
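The ROC analysis used to compare the detection strategies above can be sketched as follows: sweep a decision threshold over per-candidate detector scores, record true-positive and false-positive rates, and integrate. This is a minimal generic sketch, not the paper's implementation; the score and label arrays are illustrative.

```python
def roc_points(scores, labels):
    """(FPR, TPR) points obtained by sweeping a threshold over scores.

    scores: detector response per candidate (higher = more track-like).
    labels: 1 if the candidate is a true frequency track, else 0.
    """
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points


def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area
```

A detector that ranks every true track above every false alarm traces the ideal curve through (0, 1) and scores an AUC of 1.0; comparing AUCs (or full curves) is how the three model-based strategies can be ranked.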
Automated Atrial Fibrillation Detection from Electrocardiogram
In this study, a novel Atrial Fibrillation (AFib) detection algorithm based on Electrocardiography (ECG) signals is presented. In particular, the spectrogram of the ECG signal is used as the input to a Convolutional Neural Network (CNN) to classify normal and AFib ECG signals. The model is shown to perform well, with an accuracy of 92.91% and an area under the ROC curve (AUC) of 0.9789. This study demonstrates the potential of using image classification methods and CNN models to detect abnormal biosignals in the presence of noise.
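The first step shared by this and the following abstracts is turning a 1-D signal into a spectrogram image. A minimal dependency-free sketch of that transform (Hann-windowed short-time DFT; the window length and hop are illustrative, not taken from any of these papers):

```python
import cmath
import math


def spectrogram(signal, win_len=64, hop=32):
    """Magnitude spectrogram of a 1-D signal (time frames x frequency bins).

    Each frame is Hann-windowed, then a DFT yields win_len // 2 + 1
    one-sided frequency bins per frame.
    """
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        # Apply a Hann window to the current frame.
        frame = [signal[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / win_len))
                 for n in range(win_len)]
        # One-sided DFT magnitudes for this frame.
        bins = []
        for k in range(win_len // 2 + 1):
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                    for n in range(win_len))
            bins.append(abs(s))
        frames.append(bins)
    return frames
```

A pure sinusoid at 8 cycles per window produces its energy peak in bin 8 of every frame, which is the property that makes the resulting image useful as CNN input.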
A Convolutional Neural Network model based on Neutrosophy for Noisy Speech Recognition
Convolutional neural networks are sensitive to unknown noisy conditions in the test phase, so their performance degrades on noisy data classification tasks, including noisy speech recognition. In this research, a new convolutional neural network (CNN) model with data-uncertainty handling, referred to as NCNN (Neutrosophic Convolutional Neural Network), is proposed for the classification task. Here, speech signals are used as input data and their noise is modeled as uncertainty. Using the speech spectrogram, a definition of uncertainty is proposed in the neutrosophic (NS) domain. Uncertainty is computed for each time-frequency point of the speech spectrogram, treated like a pixel, so an uncertainty matrix with the same size as the spectrogram is created in the NS domain. In the next step, a CNN classification model with two parallel paths is proposed: the speech spectrogram is used as the input of the first path and the uncertainty matrix as the input of the second. The outputs of the two paths are combined to compute the final output of the classifier. To show the effectiveness of the proposed method, it has been compared with a conventional CNN on the isolated words of the Aurora2 dataset. The proposed method achieves an average accuracy of 85.96% on noisy training data. It is more robust against Car, Airport and Subway noises, with accuracies of 90%, 88% and 81% on test sets A, B and C, respectively. Results show that the proposed method outperforms the conventional CNN with improvements of 6, 5 and 2 percentage points on test sets A, B and C, respectively, meaning that the proposed method is more robust against noisy data and handles such data effectively. Comment: International Conference on Pattern Recognition and Image Analysis (IPRIA 2019)
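The abstract's key data structure is an uncertainty matrix with the same shape as the spectrogram, one value per time-frequency "pixel". The paper defines uncertainty in the neutrosophic domain; the local-variance measure below is only a hypothetical stand-in used to illustrate the shape and normalisation of such a matrix:

```python
def uncertainty_matrix(spec):
    """Per-bin uncertainty map with the same shape as the spectrogram.

    Hypothetical stand-in for the paper's neutrosophic definition: each
    time-frequency point gets the variance of its 3x3 neighbourhood,
    normalised to [0, 1] across the whole map.
    """
    rows, cols = len(spec), len(spec[0])
    raw = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Clip the 3x3 neighbourhood at the spectrogram edges.
            patch = [spec[a][b]
                     for a in range(max(0, i - 1), min(rows, i + 2))
                     for b in range(max(0, j - 1), min(cols, j + 2))]
            m = sum(patch) / len(patch)
            raw[i][j] = sum((v - m) ** 2 for v in patch) / len(patch)
    peak = max(max(row) for row in raw) or 1.0  # avoid 0/0 on flat input
    return [[v / peak for v in row] for row in raw]
```

The spectrogram and this matrix would then feed the two parallel CNN paths, whose outputs are combined for the final classification.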
Classification of Arrhythmia by Using Deep Learning with 2-D ECG Spectral Image Representation
The electrocardiogram (ECG) is one of the most extensively employed signals used in the diagnosis and prediction of cardiovascular diseases (CVDs). ECG signals can capture the heart's rhythmic irregularities, commonly known as arrhythmias. A careful study of ECG signals is crucial for the precise diagnosis of patients' acute and chronic heart conditions. In this study, we propose a two-dimensional (2-D) convolutional neural network (CNN) model for the classification of ECG signals into eight classes, namely: normal beat, premature ventricular contraction beat, paced beat, right bundle branch block beat, left bundle branch block beat, atrial premature contraction beat, ventricular flutter wave beat, and ventricular escape beat. The one-dimensional ECG time-series signals are transformed into 2-D spectrograms through the short-time Fourier transform. The 2-D CNN model, consisting of four convolutional layers and four pooling layers, is designed to extract robust features from the input spectrograms. Our proposed methodology is evaluated on the publicly available MIT-BIH arrhythmia dataset. We achieved a state-of-the-art average classification accuracy of 99.11%, which is better than recently reported results for classifying similar types of arrhythmias. The performance is significant in other indices as well, including sensitivity and specificity, which indicates the success of the proposed method. Comment: 14 pages, 5 figures, accepted for future publication in the Remote Sensing MDPI Journal
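The four-convolution, four-pooling stack described above imposes simple shape arithmetic on the input spectrogram. The sketch below traces spatial dimensions through such a stack; the kernel size, padding scheme and stage count are illustrative assumptions, not the paper's exact architecture:

```python
def cnn_output_shape(h, w, stages=4, kernel=3, pool=2):
    """Spatial size after `stages` of valid convolution + max pooling.

    Each 'valid' kernel x kernel convolution removes kernel - 1 pixels
    per axis; each pool x pool pooling then divides (floor) by pool.
    """
    for _ in range(stages):
        h = (h - (kernel - 1)) // pool
        w = (w - (kernel - 1)) // pool
    return h, w
```

For a hypothetical 128 x 128 spectrogram, four 3x3 conv + 2x2 pool stages leave a 6 x 6 feature map, which is the kind of compact representation the final classification layers would consume.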
SubSpectralNet - Using Sub-Spectrogram based Convolutional Neural Networks for Acoustic Scene Classification
Acoustic Scene Classification (ASC) is one of the core research problems in the field of Computational Sound Scene Analysis. In this work, we present SubSpectralNet, a novel model which captures discriminative features by incorporating frequency band-level differences to model soundscapes. Using mel-spectrograms, we propose the idea of taking band-wise crops of the input time-frequency representations and training a convolutional neural network (CNN) on them. We also propose a modification of the training method for more efficient learning of the CNN models. We first motivate the use of sub-spectrograms through intuitive and statistical analyses, and finally develop a sub-spectrogram-based CNN architecture for ASC. The system is evaluated on the public ASC development dataset provided for the "Detection and Classification of Acoustic Scenes and Events" (DCASE) 2018 Challenge. Our best model achieves an improvement of +14% in classification accuracy with respect to the DCASE 2018 baseline system. Code and figures are available at https://github.com/ssrp/SubSpectralNet Comment: Accepted to the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 201
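The band-wise cropping the abstract describes amounts to slicing a mel-spectrogram along the frequency axis into (possibly overlapping) sub-spectrograms. A minimal sketch, with the band size and hop chosen purely for illustration:

```python
def sub_spectrograms(mel_spec, band_size=20, hop=10):
    """Band-wise frequency crops of a mel-spectrogram.

    mel_spec: list of rows, one per mel band (frequency x time).
    Returns overlapping band_size-row slices, stepped by hop bands,
    each of which would feed its own CNN input in a SubSpectralNet-style
    model (sizes here are illustrative, not the paper's settings).
    """
    n_mels = len(mel_spec)
    crops = []
    for lo in range(0, n_mels - band_size + 1, hop):
        crops.append(mel_spec[lo:lo + band_size])
    return crops
```

With 40 mel bands, a band size of 20 and a hop of 10, this yields three overlapping sub-spectrograms; each crop preserves the full time axis so a CNN can still model temporal structure within its frequency band.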