Fingerprinting Smart Devices Through Embedded Acoustic Components
The widespread use of smart devices gives rise to both security and privacy
concerns. Fingerprinting smart devices can assist in authenticating physical
devices, but it can also jeopardize privacy by allowing remote identification
without user awareness. We propose a novel fingerprinting approach that uses
the microphones and speakers of smart phones to uniquely identify an individual
device. During fabrication, subtle imperfections arise in device microphones
and speakers which induce anomalies in produced and received sounds. We exploit
this observation to fingerprint smart devices through playback and recording of
audio samples. We use audio-metric tools to explore different acoustic
features and analyze their ability to successfully fingerprint smart
devices. Our experiments show that it is even possible to fingerprint devices
that have the same vendor and model; we were able to accurately distinguish
over 93% of all recorded audio clips from 15 different units of the same model.
Our study identifies the prominent acoustic features capable of fingerprinting
devices with a high success rate and examines the effect of background noise
and other variables on fingerprinting accuracy.
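As a rough illustration of the idea in this abstract (not the authors' actual feature set or pipeline), the sketch below fakes two units of the same phone model by giving each a slightly different harmonic response, then separates them with a simple spectral-centroid feature. All signals, frequencies, and gains here are invented for the example.

```python
import numpy as np

def spectral_features(signal, sr=16000):
    """Extract simple spectral features (centroid and 85% rolloff).

    These stand in for the richer audio-metric features a real
    fingerprinting system would use."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    cumulative = np.cumsum(spectrum)
    rolloff = freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])]
    return np.array([centroid, rolloff])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
base = np.sin(2 * np.pi * 440 * t)

# Model two units of the same model: each speaker/microphone chain imparts
# a slightly different gain on a harmonic, mimicking fabrication imperfections.
device_a = base + 0.05 * np.sin(2 * np.pi * 880 * t)
device_b = base + 0.20 * np.sin(2 * np.pi * 880 * t)

fa = spectral_features(device_a + 0.005 * rng.standard_normal(16000))
fb = spectral_features(device_b + 0.005 * rng.standard_normal(16000))
print(fa[0] < fb[0])  # True: device B's stronger harmonic raises its centroid
```

In a real deployment, many such features would be extracted from playback/recording of controlled audio samples and fed to a classifier; the point here is only that small per-unit spectral differences are measurable.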
Deepfake Audio Detection
Deepfakes, algorithms that use Machine Learning (ML) to generate fake yet realistic content, represent one of the premier security challenges of the 21st century. Deepfakes are not limited to videos: deepfake audio is a fast-growing field with an enormous number of applications. Recently, multiple Convolutional Neural Network (CNN) based techniques have been developed that generate realistic results that are difficult to distinguish from actual speech. In this work, we extracted audio features from real and synthesized audio files and determined that Mel-Frequency Cepstral Coefficients (MFCCs) in synthesized audio show a significant difference from the MFCCs in real audio. Using Deep Neural Networks (DNNs), experiments were conducted to train classifiers to detect synthesized audio in different datasets, with highly successful results.
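To make the MFCC comparison concrete, the sketch below implements a minimal MFCC pipeline in plain NumPy (framing, mel filterbank, log, DCT-II) and compares a harmonically rich "real" stand-in signal against a spectrally sparse "synthetic" one. The signals, parameters, and distance threshold are all invented for illustration; no DNN is trained here, and this is not the paper's dataset or classifier.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Minimal MFCC extraction: framing, mel filterbank, log, DCT-II."""
    # Frame the signal and take the power spectrum of each window.
    frames = [signal[i:i + n_fft] * np.hamming(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

    # Triangular mel filterbank between 0 Hz and Nyquist.
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_inv = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = mel_inv(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for j in range(n_mels):
        l, c, r = bins[j], bins[j + 1], bins[j + 2]
        fbank[j, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[j, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(power @ fbank.T + 1e-10)

    # DCT-II to decorrelate the log-mel energies into cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return logmel @ dct.T

t = np.linspace(0, 1, 16000, endpoint=False)
# "Real" speech stand-in: a rich harmonic stack; "synthetic": one bare tone.
real = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in range(1, 8))
fake = np.sin(2 * np.pi * 200 * t)

gap = np.linalg.norm(mfcc(real).mean(axis=0) - mfcc(fake).mean(axis=0))
print(gap > 1.0)  # the mean MFCC vectors differ clearly
```

In practice, a detector would feed per-frame MFCC matrices like these into a trained classifier rather than thresholding a single distance.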
Saron Music Transcription Based on Rhythmic Information Using HMM on Gamelan Orchestra
Nowadays, exploration of eastern music is needed to revive its popularity, which has waned among
the public, especially the younger generation. Onset detection in Gamelan music signals is needed to
help beginners follow the beats and the notation. We propose a Hidden Markov Model (HMM) method for
detecting the onset of each event in the saron sound. The average F-measure of onset detection was
analyzed to generate notations. The experiment demonstrates a 97.83% F-measure for music transcription.
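The HMM-based onset idea can be illustrated with a hand-rolled two-state model (silence/sustain vs. onset) decoded with Viterbi over a simple energy-jump observation. Everything below (the synthetic saron-like signal, the strike times, and the transition/emission probabilities) is invented for the sketch and is not the paper's trained model.

```python
import numpy as np

def viterbi(obs, log_a, log_b, log_pi):
    """Standard Viterbi decoding over log-probabilities."""
    T, S = len(obs), len(log_pi)
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_b[:, obs[0]]
    for t in range(1, T):
        for s in range(S):
            scores = delta[t - 1] + log_a[:, s]
            back[t, s] = int(np.argmax(scores))
            delta[t, s] = scores[back[t, s]] + log_b[s, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# Synthetic saron-like signal: exponentially decaying strikes at known times.
sr, hop = 16000, 512
t = np.linspace(0, 2, 2 * sr, endpoint=False)
sig = np.zeros_like(t)
for onset in (0.256, 0.896, 1.504):         # strike times, frame-aligned
    mask = t >= onset
    sig[mask] += np.exp(-6 * (t[mask] - onset)) * np.sin(2 * np.pi * 330 * t[mask])

# Observation symbols: 1 when frame energy jumps relative to the last frame.
energy = np.array([np.sum(sig[i:i + hop] ** 2)
                   for i in range(0, len(sig) - hop, hop)])
obs = (np.diff(energy, prepend=0) > 0.5 * energy.max()).astype(int)

# Two states: 0 = sustain/silence, 1 = onset. Onsets are rare and brief.
log_pi = np.log([0.9, 0.1])
log_a = np.log([[0.95, 0.05],
                [0.90, 0.10]])
log_b = np.log([[0.99, 0.01],               # state 0 almost never emits a jump
                [0.05, 0.95]])              # state 1 almost always does
states = np.array(viterbi(obs, log_a, log_b, log_pi))
print(np.flatnonzero(states) * hop / sr)    # decoded onset times in seconds
```

A real transcription system would then map decoded onsets to saron notation and score them against ground truth with the F-measure, as the abstract describes.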