3 research outputs found
Contrastive Environmental Sound Representation Learning
Machine hearing of environmental sounds is an important problem in the audio recognition domain: it gives a machine the ability to discriminate between different input sounds and thereby guide its decision making. In this work we exploit a self-supervised contrastive technique and a shallow 1D CNN to extract distinctive audio features (audio representations) without using any explicit annotations. We generate representations of a given audio clip from both its raw waveform and its spectrogram, and evaluate whether the proposed learner is agnostic to the type of audio input. We further use canonical correlation analysis (CCA) to fuse the representations from the two input types and demonstrate that the fused global feature yields a more robust representation of the audio signal than either individual representation. The proposed technique is evaluated on both ESC-50 and UrbanSound8K. The results show that it extracts most of the salient features of environmental audio and yields improvements of 12.8% and 0.9% on the ESC-50 and UrbanSound8K datasets, respectively.
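The CCA fusion step described in the abstract can be sketched as follows. This is an illustrative numpy implementation, not the authors' code: the embedding dimensions, the regularization constant, and the toy random "waveform" and "spectrogram" embeddings are assumptions for the example.

```python
import numpy as np

def cca_fuse(X, Y, n_components=4, reg=1e-6):
    """Fuse two views of the same samples via CCA.

    X: (n, p) embeddings from view 1 (e.g. raw waveform encoder)
    Y: (n, q) embeddings from view 2 (e.g. spectrogram encoder)
    Returns (n, 2*n_components): the two projected views, concatenated.
    """
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    # Regularized covariance blocks
    Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):
        # Symmetric inverse square root via eigendecomposition
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

    Kx, Ky = inv_sqrt(Sxx), inv_sqrt(Syy)
    # SVD of the whitened cross-covariance gives canonical directions
    U, s, Vt = np.linalg.svd(Kx @ Sxy @ Ky)
    Wx = Kx @ U[:, :n_components]
    Wy = Ky @ Vt[:n_components].T
    # Concatenate the maximally correlated projections as the fused feature
    return np.hstack([Xc @ Wx, Yc @ Wy])

# Toy example: 100 clips, 32-d waveform and 40-d spectrogram embeddings
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))
Y = rng.standard_normal((100, 40))
fused = cca_fuse(X, Y, n_components=4)   # shape (100, 8)
```

The fused vector pairs each projected waveform component with its maximally correlated spectrogram component, which is one common way to build a joint feature from two views.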
A Review of Smishing Attack Mitigation Strategies
Mobile smishing crime has continued to escalate globally due to technological advances and people's growing dependence on smartphones and other devices. SMS facilitates the distribution of crucial information and is especially important for non-digitally-savvy users, who are often underprivileged. Smishing, also known as SMS phishing, involves sending deceptive text messages to lure recipients into revealing personal information or installing malware. The number of smishing incidents has increased tremendously as the internet and cellphones have spread to even the most remote regions of the globe.
Enhancing EEG signal classification using an LSTM-CNN architecture
Epilepsy is a condition that disrupts normal brain function and can lead to seizures, unusual sensations, and temporary loss of awareness. Electroencephalograph (EEG) records are commonly used for diagnosing epilepsy, but traditional analysis is subjective and prone to misclassification. Previous studies applied Deep Learning (DL) techniques to improve EEG classification, but their performance has been limited by the dynamic and non-stationary nature of EEG signals. In this paper, we propose a multi-channel EEG classification model called LConvNet, which combines Convolutional Neural Networks (CNN) for spatial feature extraction with Long Short-Term Memory (LSTM) networks for capturing temporal dependencies. The model is trained on open-source secondary EEG data from Temple University Hospital (TUH) to distinguish between epileptic and healthy EEG signals. Our model achieved an accuracy of 97%, surpassing existing EEG classification models used on similar tasks such as EEGNet, DeepConvNet and ShallowConvNet, which achieved 86%, 96% and 78%, respectively. Furthermore, our model demonstrated strong trainability, scalability and parameter efficiency in additional evaluations.
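The spatial-then-temporal design described above (a CNN extracting features across EEG channels, followed by an LSTM over time) can be illustrated with a minimal numpy forward pass. The channel count, filter sizes, hidden size and random weights below are assumptions for the sketch, not LConvNet's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    """Valid 1D convolution over time with ReLU.
    x: (channels, time), w: (filters, channels, k), b: (filters,)
    Returns (filters, time - k + 1)."""
    f, c, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.empty((f, t_out))
    for i in range(t_out):
        # Each filter mixes all EEG channels over a k-sample window
        out[:, i] = np.tensordot(w, x[:, i:i + k], axes=([1, 2], [0, 1])) + b
    return np.maximum(out, 0.0)

def lstm_last_hidden(seq, Wi, Wh, b):
    """Run a single LSTM over seq (time, features); return final hidden state.
    Gate order in the stacked weights: input, forget, cell, output."""
    h_dim = Wh.shape[1]
    h, c = np.zeros(h_dim), np.zeros(h_dim)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(seq.shape[0]):
        z = Wi @ seq[t] + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h

# Toy EEG segment: 19 channels, 250 samples (assumed dimensions)
x = rng.standard_normal((19, 250))
feat = conv1d_relu(x, rng.standard_normal((8, 19, 7)) * 0.1, np.zeros(8))
seq = feat.T                                   # time-major: (244, 8)
h = lstm_last_hidden(seq,
                     rng.standard_normal((64, 8)) * 0.1,   # 4 gates x 16 units
                     rng.standard_normal((64, 16)) * 0.1,
                     np.zeros(64))
# Binary read-out: probability that the segment is "epileptic"
p = 1.0 / (1.0 + np.exp(-(rng.standard_normal(16) * 0.1) @ h))
```

The convolution mixes information across EEG channels at each time step (the spatial part), while the LSTM integrates the resulting feature sequence over time (the temporal part), which is the division of labor the abstract attributes to LConvNet.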