MilliSonic: Pushing the Limits of Acoustic Motion Tracking
Recent years have seen interest in device tracking and localization using
acoustic signals. State-of-the-art acoustic motion tracking systems, however, do
not achieve millimeter accuracy and require large separation between
microphones and speakers, and as a result, do not meet the requirements for
many VR/AR applications. Further, tracking multiple concurrent acoustic
transmissions from VR devices today requires sacrificing accuracy or frame
rate. We present MilliSonic, a novel system that pushes the limits of
acoustic-based motion tracking. Our core contribution is a novel localization algorithm
that can provably achieve sub-millimeter 1D tracking accuracy in the presence
of multipath, while using only a single beacon with a small 4-microphone
array. Further, MilliSonic enables concurrent tracking of up to four smartphones
without reducing frame rate or accuracy. Our evaluation shows that MilliSonic
achieves 0.7mm median 1D accuracy and a 2.6mm median 3D accuracy for
smartphones, which is 5x more accurate than state-of-the-art systems.
MilliSonic enables two previously infeasible interaction applications: a) 3D
tracking of VR headsets using the smartphone as a beacon and b) fine-grained 3D
tracking for the Google Cardboard VR system using a small microphone array.
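As a rough illustration of the underlying ranging step (not MilliSonic's actual phase-based algorithm), the sketch below estimates 1D distance from the time-of-flight of an acoustic chirp via a cross-correlation peak. All parameters (48 kHz sampling, an 18-22 kHz chirp, 343 m/s sound speed) are assumptions for the example.

```python
import numpy as np

fs = 48_000          # assumed sampling rate (Hz)
c = 343.0            # speed of sound (m/s)

# 10 ms linear chirp sweeping 18-22 kHz (inaudible to most adults).
t = np.arange(int(0.010 * fs)) / fs
sweep_rate = 4_000 / 0.010  # Hz per second
chirp = np.sin(2 * np.pi * (18_000 * t + 0.5 * sweep_rate * t**2))

# Simulate a microphone that hears the chirp after a 1.5 m flight.
true_distance = 1.5
delay_samples = int(round(true_distance / c * fs))
rx = np.zeros(delay_samples + chirp.size)
rx[delay_samples:] = chirp

# Coarse time-of-flight from the cross-correlation peak.
corr = np.correlate(rx, chirp, mode="valid")
tof_samples = int(np.argmax(corr))
distance = tof_samples / fs * c
print(f"estimated distance: {distance:.4f} m")
```

Note that at 48 kHz one sample of delay corresponds to roughly 7 mm of path, so a correlation peak alone cannot reach the sub-millimeter regime the abstract claims; that is why systems in this space track the carrier phase on top of a coarse estimate like this one.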
DoubleEcho: Mitigating Context-Manipulation Attacks in Copresence Verification
Copresence verification based on context can improve usability and strengthen
security of many authentication and access control systems. By sensing and
comparing their surroundings, two or more devices can tell whether they are
copresent and use this information to make access control decisions. To the
best of our knowledge, all context-based copresence verification mechanisms to
date are susceptible to context-manipulation attacks. In such attacks, a
distributed adversary replicates the same context at the (different) locations
of the victim devices, and induces them to believe that they are copresent. In
this paper we propose DoubleEcho, a context-based copresence verification
technique that leverages acoustic Room Impulse Response (RIR) to mitigate
context-manipulation attacks. In DoubleEcho, one device emits a wide-band
audible chirp and all participating devices record reflections of the chirp
from the surrounding environment. Since RIR is, by its very nature, dependent
on the physical surroundings, it constitutes a unique location signature that
is hard for an adversary to replicate. We evaluate DoubleEcho by collecting RIR
data with various mobile devices and in a range of different locations. We show
that DoubleEcho mitigates context-manipulation attacks whereas all other
approaches to date are entirely vulnerable to such attacks. DoubleEcho detects
copresence (or lack thereof) in roughly 2 seconds and works on commodity
devices.
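The copresence decision described above can be caricatured as a similarity test between two recorded impulse responses. The sketch below is not DoubleEcho's actual pipeline: the RIRs are synthetic (random reflections with exponential decay), and the similarity metric and threshold are assumptions chosen for illustration.

```python
import numpy as np

def simulate_rir(seed, n_taps=512):
    # Toy RIR: decaying random reflections, standing in for a measured one.
    r = np.random.default_rng(seed)
    return r.standard_normal(n_taps) * np.exp(-np.arange(n_taps) / 100.0)

def rir_similarity(a, b):
    # Peak of the normalized cross-correlation (1.0 = identical shape).
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.max(np.correlate(a, b, mode="full")))

rng = np.random.default_rng(0)
room_a = simulate_rir(1)
room_a_again = room_a + 0.05 * rng.standard_normal(room_a.size)  # noisy re-measurement
room_b = simulate_rir(2)                                         # a different room

THRESHOLD = 0.8  # assumed decision threshold, not from the paper
copresent = rir_similarity(room_a, room_a_again) > THRESHOLD
remote = rir_similarity(room_a, room_b) > THRESHOLD
print(copresent, remote)
```

Two measurements of the same room correlate strongly despite measurement noise, while independent rooms do not, which is the property the abstract argues an adversary cannot cheaply replicate.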
Inferring Room Semantics Using Acoustic Monitoring
Knowledge of the user's environmental context, i.e., of their indoor location and
the semantics of their surroundings, can facilitate the development of many
location-aware applications. In this
paper, we propose an acoustic monitoring technique that infers semantic
knowledge about an indoor space \emph{over time,} using audio recordings from
it. Our technique uses the impulse response of these spaces as well as the
ambient sounds produced in them in order to determine a semantic label for
them. As we process more recordings, we update our confidence in the
assigned label. We evaluate our technique on a dataset of single-speaker human
speech recordings obtained in different types of rooms at three university
buildings. In our evaluation, the confidence for the true label
generally outstripped the confidence for all other labels and in some cases
converged to 100% with fewer than 30 samples.
Comment: 2017 IEEE International Workshop on Machine Learning for Signal
Processing, Sept. 25-28, 2017, Tokyo, Japan
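The confidence-update idea can be sketched as a Bayesian posterior over candidate room labels, folded in one recording at a time. The labels and the per-recording likelihoods below are hypothetical stand-ins for the output of an acoustic model, not values from the paper.

```python
labels = ["office", "lecture_hall", "corridor"]
posterior = {l: 1.0 / len(labels) for l in labels}  # uniform prior

def update(posterior, likelihoods):
    # Bayes' rule: posterior[l] is proportional to prior[l] * P(recording | l).
    post = {l: posterior[l] * likelihoods[l] for l in posterior}
    z = sum(post.values())
    return {l: p / z for l, p in post.items()}

# Each recording yields per-label likelihoods from some acoustic classifier.
recordings = [
    {"office": 0.6, "lecture_hall": 0.3, "corridor": 0.1},
    {"office": 0.5, "lecture_hall": 0.2, "corridor": 0.3},
    {"office": 0.7, "lecture_hall": 0.2, "corridor": 0.1},
]
for lik in recordings:
    posterior = update(posterior, lik)

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))  # confidence grows as evidence accumulates
```

With consistent evidence, the posterior mass concentrates on one label over time, mirroring the convergence behavior the abstract reports.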
Deep Room Recognition Using Inaudible Echos
Recent years have seen the increasing need of location awareness by mobile
applications. This paper presents a room-level indoor localization approach
based on the measured room's echos in response to a two-millisecond single-tone
inaudible chirp emitted by a smartphone's loudspeaker. Different from other
acoustics-based room recognition systems that record full-spectrum audio for up
to ten seconds, our approach records audio in a narrow inaudible band for only
0.1 seconds to preserve the user's privacy. However, the short-time and
narrowband audio signal carries limited information about the room's
characteristics, presenting challenges to accurate room recognition. This paper
applies deep learning to effectively capture the subtle fingerprints in the
rooms' acoustic responses. Our extensive experiments show that a two-layer
convolutional neural network fed with the spectrogram of the inaudible echoes
achieves the best performance, compared with alternative designs using other raw
data formats and deep models. Based on this result, we design a RoomRecognize
cloud service and its mobile client library that enable the mobile application
developers to readily implement the room recognition functionality without
resorting to any existing infrastructure or add-on hardware.
Extensive evaluation shows that RoomRecognize achieves 99.7%, 97.7%, 99%, and
89% accuracy in differentiating 22 and 50 residential/office rooms, 19 spots in
a quiet museum, and 15 spots in a crowded museum, respectively. Compared with
the state-of-the-art approaches based on support vector machine, RoomRecognize
significantly improves the Pareto frontier of recognition accuracy versus
robustness against interfering sounds (e.g., ambient music).
Comment: 29 pages
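The feature-extraction step described above — turning a short narrowband echo recording into a spectrogram a small CNN could classify — can be sketched as follows. The sampling rate, tone frequency, echo timing, and STFT parameters are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

fs = 44_100
tone_hz = 20_000                       # near-inaudible single tone
t = np.arange(int(0.002 * fs)) / fs    # 2 ms excitation
excitation = np.sin(2 * np.pi * tone_hz * t)

# Toy "room response": the direct tone plus one attenuated echo 5 ms later,
# captured in a 0.1 s recording window.
rec = np.zeros(int(0.1 * fs))
rec[: excitation.size] += excitation
echo_at = int(0.005 * fs)
rec[echo_at: echo_at + excitation.size] += 0.3 * excitation

def spectrogram(x, n_fft=256, hop=128):
    # Magnitude STFT with a Hann window: rows are frames, columns are bins.
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

spec = spectrogram(rec)
# Energy should concentrate in the bin nearest the 20 kHz excitation tone.
peak_bin = int(np.argmax(spec.sum(axis=0)))
tone_bin = int(round(tone_hz / fs * 256))
print(spec.shape, peak_bin, tone_bin)
```

Because the excitation is a short narrowband burst, the resulting spectrogram is mostly empty except around the tone's bin — exactly the "limited information" regime the abstract says makes this recognition problem hard and motivates a learned classifier.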