
    Synesthesia: Detecting Screen Content via Remote Acoustic Side Channels

    We show that subtle acoustic noises emanating from within computer screens can be used to detect the content displayed on the screens. This sound can be picked up by ordinary microphones built into webcams or screens, and is inadvertently transmitted to other parties, e.g., during a videoconference call or in archived recordings. It can also be recorded by a smartphone or "smart speaker" placed on a desk next to the screen, or from as far as 10 meters away using a parabolic microphone. Empirically demonstrating various attack scenarios, we show how this channel can be used for real-time detection of on-screen text, or of users' input into on-screen virtual keyboards. We also demonstrate how an attacker can analyze the audio received during a video call (e.g., on Google Hangouts) to infer whether the other side is browsing the web instead of watching the video call, and which website is displayed on their screen.
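The paper's actual detection models are not described here, but the core idea of matching a recording against per-content acoustic signatures can be made concrete with a toy sketch. Everything below is an illustrative assumption: average magnitude spectra act as signatures, and an unknown recording is matched by cosine similarity.

```python
import numpy as np

def spectral_signature(x, n_fft=1024):
    # Average magnitude spectrum over fixed-length frames:
    # a crude acoustic fingerprint of a recording.
    frames = x[:len(x) // n_fft * n_fft].reshape(-1, n_fft)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

def classify(x, signatures):
    # Pick the reference signature most similar (cosine) to the recording.
    s = spectral_signature(x)
    scores = {name: np.dot(s, ref) / (np.linalg.norm(s) * np.linalg.norm(ref))
              for name, ref in signatures.items()}
    return max(scores, key=scores.get)
```

With two synthetic "screen contents" emitting tones at different frequencies, a noisy recording of one is matched to the right signature; a real attack would of course train on recordings of actual screens.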

    Gear wear process monitoring using acoustic signals

    Airborne acoustic signals contain valuable information from machines and can be detected remotely for condition monitoring. However, the signal is often seriously contaminated by various noises from the environment as well as nearby machines. This paper presents an acoustic-based method of monitoring a two-stage helical gearbox, a common power transmission system used in various industries. A single microphone is employed to measure the acoustics of the gearbox undergoing a run-to-failure test. To suppress the background noise and interferences from nearby machines, a modulation signal bispectrum (MSB) analysis is applied to the signal. It is shown that the analysis allows the meshing frequency components and the associated shaft modulating components to be captured more accurately, setting up a clear monitoring trend that indicates the tooth wear of the gears under test. The results demonstrate that acoustic signals, in conjunction with efficient signal processing methods, provide effective monitoring of the gear transmission process.
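The MSB estimate behind this approach can be sketched in a few lines. This is a minimal version assuming bin-aligned frequencies, no windowing, and no segment overlap, all of which a production implementation would add:

```python
import numpy as np

def msb(x, fs, seg_len, fc, fm):
    """Estimate the modulation signal bispectrum
    B(fc, fm) = E[X(fc+fm) X(fc-fm) X*(fc)^2]
    by averaging the segment-wise product over signal segments."""
    df = fs / seg_len                            # FFT bin spacing (Hz)
    k_c, k_m = round(fc / df), round(fm / df)    # carrier and modulation bins
    n_seg = len(x) // seg_len
    acc = 0j
    for s in range(n_seg):
        X = np.fft.rfft(x[s * seg_len:(s + 1) * seg_len])
        acc += X[k_c + k_m] * X[k_c - k_m] * np.conj(X[k_c]) ** 2
    return acc / n_seg
```

A meshing tone amplitude-modulated at the shaft rate yields a large |B(fc, fm)| at the true modulation frequency and a near-zero value elsewhere, which is what produces the clean wear-monitoring trend described above.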

    An unsupervised acoustic fall detection system using source separation for sound interference suppression

    We present a novel unsupervised fall detection system that employs the collected acoustic signals (footstep sound signals) from an elderly person's normal activities to construct a data description model to distinguish falls from non-falls. The measured acoustic signals are initially processed with a source separation (SS) technique to remove possible interferences from other background sound sources. Mel-frequency cepstral coefficient (MFCC) features are next extracted from the processed signals and used to construct a data description model based on a one-class support vector machine (OCSVM) method, which is finally applied to distinguish fall from non-fall sounds. Experiments on a recorded dataset confirm that our proposed fall detection system achieves better performance than existing single-microphone-based methods, especially under a high level of interference from other sound sources.
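The one-class "data description" step can be illustrated with a dependency-free stand-in: instead of the paper's OCSVM, the sketch below models normal-activity feature vectors as a single Gaussian and flags frames whose Mahalanobis distance exceeds a threshold learned on normal data. The MFCC and source-separation stages are omitted, and the class name and threshold rule are assumptions of this sketch:

```python
import numpy as np

class GaussianDataDescription:
    """Simplified stand-in for a one-class SVM: fit a Gaussian to
    normal-activity features, then flag outliers by Mahalanobis distance."""

    def fit(self, F, pct=99.0):
        # F: (n_frames, n_features) array of normal-activity features.
        self.mu = F.mean(axis=0)
        cov = np.cov(F, rowvar=False) + 1e-6 * np.eye(F.shape[1])
        self.cov_inv = np.linalg.inv(cov)
        # Threshold at a high percentile of the normal-data distances.
        self.thresh = np.percentile(self._dist(F), pct)
        return self

    def _dist(self, F):
        diff = F - self.mu
        return np.sqrt(np.einsum('ij,jk,ik->i', diff, self.cov_inv, diff))

    def predict(self, F):
        # +1 = normal sound, -1 = candidate fall (outlier).
        return np.where(self._dist(F) <= self.thresh, 1, -1)
```

The same fit/predict interface would apply to a real OCSVM; the Gaussian model just keeps the example self-contained.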

    Symphony: Localizing Multiple Acoustic Sources with a Single Microphone Array

    Sound recognition is an important and popular function of smart devices. The location of a sound is basic information associated with the acoustic source. Apart from sound recognition, whether acoustic sources can be localized largely affects the capability and quality of a smart device's interactive functions. In this work, we study the problem of concurrently localizing multiple acoustic sources with a smart device (e.g., a smart speaker like Amazon Alexa). Existing approaches either can only localize a single source, or require deploying a distributed network of microphone arrays to function. Our proposal, Symphony, is the first approach to tackle this problem with a single microphone array. The insight behind Symphony is that the geometric layout of microphones on the array determines the unique relationship among signals from the same source along the same arriving path, while the source's location determines the DoAs (directions of arrival) of signals along different arriving paths. Symphony therefore includes a geometry-based filtering module to distinguish signals from different sources along different paths and a coherence-based module to identify signals from the same source. We implement Symphony with different types of commercial off-the-shelf microphone arrays and evaluate its performance under different settings. The results show that Symphony has a median localization error of 0.694 m, which is 68% less than that of the state-of-the-art approach.
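To make the DoA building block concrete, here is the classic single-source case that Symphony generalizes: for one pair of microphones, the time difference of arrival (TDOA) recovered by cross-correlation maps to an angle via the array geometry. This is far simpler than Symphony's multipath-aware, multi-source method; the function name and parameters are illustrative:

```python
import numpy as np

def doa_two_mics(sig_ref, sig_delayed, fs, d, c=343.0):
    """Single-source DoA for a two-mic pair via cross-correlation TDOA.
    d: mic spacing in metres; returns the angle from broadside in degrees."""
    corr = np.correlate(sig_delayed, sig_ref, mode="full")
    lag = np.argmax(corr) - (len(sig_ref) - 1)   # delay in whole samples
    tdoa = lag / fs                              # delay in seconds
    # sin(theta) = tdoa * c / d for a far-field source.
    return np.degrees(np.arcsin(np.clip(tdoa * c / d, -1.0, 1.0)))
```

With more microphones, each pair contributes one such constraint; Symphony's contribution is disentangling which correlation peaks belong to which source and which propagation path.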

    Six Noise Type Military Sound Classifier

    Blast noise from military installations often has a negative impact on the quality of life of residents living in nearby communities. This negatively impacts the military's testing & training capabilities due to restrictions, curfews, or range closures enacted to address noise complaints. In order to more directly manage noise around military installations, accurate noise monitoring has become a necessity. Although most noise monitors are simple sound level meters, more recent ones are capable of discerning blasts from ambient noise with some success. Investigators at the University of Pittsburgh previously developed a more advanced noise classifier that can discern between wind, aircraft, and blast noise, while simultaneously lowering the measurement threshold. Recent work will be presented from the development of a more advanced classifier that identifies additional classes of noise such as machine gun fire, vehicles, and thunder. Additional signal metrics were explored given the increased complexity of the classifier. By broadening the types of noise the system can accurately classify and increasing the number of metrics, a new system was developed with increased blast-noise accuracy, fewer missed events, and significantly fewer false positives.
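The abstract does not list the specific signal metrics used, but classifiers of this kind typically start from generic per-segment acoustic metrics. The sketch below computes three common ones (RMS level, zero-crossing rate, spectral centroid) as an illustration, not as the Pittsburgh team's actual feature set:

```python
import numpy as np

def signal_metrics(x, fs):
    """Generic acoustic metrics of the kind fed to noise classifiers:
    RMS level, zero-crossing rate (fraction of sign flips between
    samples), and spectral centroid in Hz."""
    rms = np.sqrt(np.mean(x ** 2))
    zcr = np.mean(np.signbit(x[:-1]) != np.signbit(x[1:]))
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    centroid = np.sum(freqs * mag) / np.sum(mag)
    return rms, zcr, centroid
```

Impulsive blasts, tonal aircraft noise, and broadband wind separate reasonably well in such a metric space, which is why adding metrics can cut both missed events and false positives.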

    Protecting Voice Controlled Systems Using Sound Source Identification Based on Acoustic Cues

    Over the last few years, a rapidly increasing number of Internet-of-Things (IoT) systems that adopt voice as the primary user input have emerged. These systems have been shown to be vulnerable to various types of voice spoofing attacks. Existing defense techniques can usually only protect from a specific type of attack or require an additional authentication step that involves another device. Such defense strategies are either not strong enough or lower the usability of the system. Based on the fact that legitimate voice commands should only come from humans rather than a playback device, we propose a novel defense strategy that is able to detect the sound source of a voice command based on its acoustic features. The proposed defense strategy does not require any information other than the voice command itself and can protect a system from multiple types of spoofing attacks. Our proof-of-concept experiments verify the feasibility and effectiveness of this defense strategy.

    Comment: Proceedings of the 27th International Conference on Computer Communications and Networks (ICCCN), Hangzhou, China, July-August 2018. arXiv admin note: text overlap with arXiv:1803.0915
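One well-known acoustic cue for human-vs-loudspeaker discrimination is low-frequency content: small replay loudspeakers reproduce little sub-bass. The sketch below computes a band-energy ratio as a single illustrative cue of this kind; it is not the feature set proposed in the paper, and the cutoff value is an assumption:

```python
import numpy as np

def low_band_energy_ratio(x, fs, cutoff=200.0):
    """Fraction of spectral energy below `cutoff` Hz. A source classifier
    could threshold such band-energy ratios or feed several of them,
    over different bands, to a learned model."""
    mag2 = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return mag2[freqs < cutoff].sum() / mag2.sum()
```

A real system would combine many such cues and train a classifier on recordings of live speech versus replayed speech rather than rely on a single hand-set threshold.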
