Synesthesia: Detecting Screen Content via Remote Acoustic Side Channels
We show that subtle acoustic noises emanating from within computer screens
can be used to detect the content displayed on the screens. This sound can be
picked up by ordinary microphones built into webcams or screens, and is
inadvertently transmitted to other parties, e.g., during a videoconference call
or archived recordings. It can also be recorded by a smartphone or "smart
speaker" placed on a desk next to the screen, or from as far as 10 meters away
using a parabolic microphone.
Empirically demonstrating various attack scenarios, we show how this channel
can be used for real-time detection of on-screen text, or users' input into
on-screen virtual keyboards. We also demonstrate how an attacker can analyze
the audio received during a video call (e.g., on Google Hangout) to infer
whether the other side is browsing the web instead of watching the video call,
and which web site is displayed on their screen.
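The paper's detection pipeline is not reproduced here, but the underlying idea is that the screen's acoustic leakage contains content-dependent spectral peaks tied to the display's refresh cycle. As a purely illustrative sketch (the 60 Hz tone, noise level, and sample rate below are toy assumptions, not measured values; real emissions sit at much higher harmonics), one can locate the strongest spectral peak in a recording with an FFT:

```python
import numpy as np

def dominant_tone(signal, sample_rate):
    """Return the frequency (Hz) of the strongest spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Synthetic example: a 60 Hz component buried in noise stands in for
# the (much higher-frequency) refresh-rate harmonics of a real screen.
rate = 8000
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 60 * t) + 0.3 * rng.standard_normal(rate)

print(round(dominant_tone(signal, rate)))  # strongest peak near 60 Hz
```

In an actual attack, the attacker would instead compare such spectral signatures against fingerprints of known on-screen content.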
Efficient Invariant Features for Sensor Variability Compensation in Speaker Recognition
In this paper, we investigate the use of invariant features for speaker recognition. Owing to their characteristics, these features are introduced to cope with sensor variability, a difficult and challenging problem and an inherent source of performance degradation in speaker recognition systems. Our experiments show: (1) the effectiveness of these features under matched conditions; (2) the benefit of combining them with mel-frequency cepstral coefficients (MFCCs) to exploit their discriminative power under uncontrolled (mismatched) conditions. Consequently, the proposed invariant features yield a performance improvement, demonstrated by reductions in the equal error rate and the minimum decision cost function relative to GMM-UBM speaker recognition systems based on MFCC features.
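The equal error rate (EER) reported above is the operating point where the false-accept and false-reject rates coincide. A minimal sketch of computing it from trial scores (the score lists below are hypothetical, not values from the paper):

```python
def equal_error_rate(genuine, impostor):
    """Find the point where false-accept and false-reject rates meet.

    genuine:  scores from same-speaker trials (higher = more similar)
    impostor: scores from different-speaker trials
    """
    best_gap, best_eer = float("inf"), None
    for thr in sorted(genuine + impostor):
        far = sum(s >= thr for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < thr for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Hypothetical scores for illustration only.
genuine = [0.9, 0.8, 0.75, 0.6, 0.85]
impostor = [0.4, 0.5, 0.55, 0.3, 0.65]
print(equal_error_rate(genuine, impostor))  # → 0.2
```

A lower EER means the genuine and impostor score distributions overlap less, which is what the combined invariant+MFCC features aim to achieve.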
Towards Deep Learning Models for Psychological State Prediction using Smartphone Data: Challenges and Opportunities
There is an increasing interest in exploiting mobile sensing technologies and
machine learning techniques for mental health monitoring and intervention.
Researchers have effectively used contextual information, such as mobility,
communication and mobile phone usage patterns for quantifying individuals' mood
and wellbeing. In this paper, we investigate the effectiveness of neural
network models for predicting users' level of stress by using the location
information collected by smartphones. We characterize the mobility patterns of
individuals using the GPS metrics presented in the literature and employ these
metrics as input to the network. We evaluate our approach on the open-source
StudentLife dataset. Moreover, we discuss the challenges and trade-offs
involved in building machine learning models for digital mental health and
highlight potential future work in this direction.
Comment: 6 pages, 2 figures. In Proceedings of the NIPS Workshop on Machine
Learning for Healthcare 2017 (ML4H 2017), co-located with NIPS 2017.
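The "GPS metrics presented in the literature" typically include quantities such as total distance travelled and (log) location variance. The exact metrics used by the paper are not detailed here; as a hedged sketch, two common ones can be computed from raw (latitude, longitude) fixes as follows (the trace below is invented for illustration):

```python
import math
import statistics

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mobility_features(trace):
    """Two common mobility metrics from a list of (lat, lon) fixes."""
    total_km = sum(
        haversine_km(*trace[i], *trace[i + 1]) for i in range(len(trace) - 1)
    )
    lat_var = statistics.pvariance(p[0] for p in trace)
    lon_var = statistics.pvariance(p[1] for p in trace)
    # Log-scaled location variance, as commonly used in mobility studies.
    location_variance = math.log(lat_var + lon_var)
    return total_km, location_variance

# Invented trace of three fixes; real input would be a day of GPS samples.
trace = [(37.0, -122.0), (37.01, -122.0), (37.0, -122.01)]
total_km, loc_var = mobility_features(trace)
print(round(total_km, 2), round(loc_var, 2))
```

Vectors of such per-day metrics would then serve as the input features to the neural network models evaluated on the StudentLife dataset.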