Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns
We introduce Deep Thermal Imaging, a new approach for close-range automatic
recognition of materials that enhances the understanding people and ubiquitous
technologies have of their proximal environment. Our approach uses a low-cost mobile
thermal camera integrated into a smartphone to capture thermal textures. A deep
neural network classifies these textures into material types. This approach
works effectively without the need for ambient light sources or direct contact
with materials. Furthermore, the use of a deep learning network removes the
need to handcraft the set of features for different materials. We evaluated the
performance of the system by training it to recognise 32 material types in both
indoor and outdoor environments. Our approach produced recognition accuracies
above 98% in 14,860 images of 15 indoor materials and above 89% in 26,584
images of 17 outdoor materials. We conclude by discussing its potentials for
real-time use in HCI applications and future directions.
Comment: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
Sensing motion using spectral and spatial analysis of WLAN RSSI
In this paper we present how motion sensing can be achieved simply by observing the WLAN radio signal strength and its fluctuations. We analyze the temporal, spectral, and spatial characteristics of the WLAN signal. Our analysis
confirms our claim that 'signal strength from access points appears to jump around more vigorously when the device is moving than when it is still, and the number of detectable access points varies considerably while the user is on the move'. Using this observation, we present a novel motion detection algorithm, Spectrally Spread Motion Detection (SpecSMD), based on spectral analysis of the
WLAN signal's RSSI. To benchmark the proposed algorithm, we used Spatially Spread Motion Detection (SpatSMD), which is inspired by the recent work of Sohn et al. Both algorithms were evaluated through extensive measurements
in a diverse set of conditions (indoors in different buildings and outdoors: city center, parking lot, university campus, etc.) and tested against the same
data sets. The proposed SpecSMD achieves 94% average classification accuracy, outperforming SpatSMD (87%). The motion detection algorithms presented in this paper provide ubiquitous methods for deriving the
state of the user, and can be implemented and run on a commodity device with WLAN capability without the need for any additional hardware support.
Living IoT: A Flying Wireless Platform on Live Insects
Sensor networks with devices capable of moving could enable applications
ranging from precision irrigation to environmental sensing. Using mechanical
drones to move sensors, however, severely limits operation time since flight
time is limited by the energy density of current battery technology. We explore
an alternative, biology-based solution: integrate sensing, computing and
communication functionalities onto live flying insects to create a mobile IoT
platform.
Such an approach takes advantage of these tiny, highly efficient biological
insects, which are ubiquitous in many outdoor ecosystems, to essentially provide
mobility for free. Doing so, however, requires addressing key technical
challenges of power, size, weight, and self-localization so that the
insects can perform location-dependent sensing operations as they carry our IoT
payload through the environment. We develop our platform, which includes
backscatter communication, low-power self-localization hardware, sensors, and a
power source, and deploy it on bumblebees. We show that our
platform is capable of sensing, backscattering data at 1 kbps when the insects
are back at the hive, and localizing itself up to distances of 80 m from the
access points, all within a total weight budget of 102 mg.
Comment: Co-primary authors: Vikram Iyer, Rajalakshmi Nandakumar, Anran Wang,
In Proceedings of MobiCom. ACM, New York, NY, USA, 15 pages, 201
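The 1 kbps backscatter uplink mentioned above can be illustrated by how a bit string maps to antenna switch states over time: in backscatter, the tag sends data by reflecting or absorbing an ambient carrier rather than generating its own radio signal. This is a schematic of on-off-keying backscatter in general, not the paper's actual encoding; the function name and the reflect/absorb convention are assumptions.

```python
def backscatter_frame(bits, bitrate=1000):
    """Map a bit string onto antenna switch states for on-off-keying
    backscatter. Returns (start_time_s, state) pairs: '1' reflects the
    carrier and '0' absorbs it, each symbol lasting 1/bitrate seconds."""
    symbol = 1.0 / bitrate  # 1 ms per bit at 1 kbps
    return [(i * symbol, 'reflect' if b == '1' else 'absorb')
            for i, b in enumerate(bits)]
```

Because the tag only toggles a switch instead of powering a transmitter, this scheme fits the sub-milligram power budgets that the insect-borne payload requires.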
IoTSan: Fortifying the Safety of IoT Systems
Today's IoT systems include event-driven smart applications (apps) that
interact with sensors and actuators. A problem specific to IoT systems is that
buggy apps, unforeseen bad app interactions, or device/communication failures,
can cause unsafe and dangerous physical states. Detecting flaws that lead to
such states, requires a holistic view of installed apps, component devices,
their configurations, and more importantly, how they interact. In this paper,
we design IoTSan, a novel practical system that uses model checking as a
building block to reveal "interaction-level" flaws by identifying events that
can lead the system to unsafe states. In building IoTSan, we design novel
techniques tailored to IoT systems, to alleviate the state explosion associated
with model checking. IoTSan also automatically translates IoT apps into a
format amenable to model checking. Finally, to understand the root cause of a
detected vulnerability, we design an attribution mechanism to identify
problematic and potentially malicious apps. We evaluate IoTSan on the Samsung
SmartThings platform. From 76 manually configured systems, IoTSan detects 147
vulnerabilities. We also evaluate IoTSan with malicious SmartThings apps from a
previous effort. IoTSan detects the potential safety violations and also
effectively attributes these apps as malicious.
Comment: Proc. of the 14th ACM CoNEXT, 201
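The model-checking core that IoTSan builds on can be illustrated as a breadth-first search over the joint state of apps and devices, returning the shortest event sequence that drives the system into an unsafe state. This is a minimal sketch, not IoTSan's implementation, which uses a full model checker plus state-explosion mitigations; the function and parameter names are invented.

```python
from collections import deque

def find_unsafe_trace(initial, events, transition, is_unsafe):
    """Breadth-first search over the joint system state space.

    transition(state, event) returns the successor state; is_unsafe(state)
    is the safety predicate. Returns the shortest event sequence reaching
    an unsafe state, or None if every reachable state is safe."""
    seen = {initial}
    queue = deque([(initial, [])])
    while queue:
        state, trace = queue.popleft()
        if is_unsafe(state):
            return trace
        for ev in events:
            nxt = transition(state, ev)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [ev]))
    return None
```

A classic interaction-level flaw fits this shape: one app unlocks the door when smoke is detected, another marks the user away, and only the combination of the two events reaches the unsafe "away and unlocked" state that neither app exhibits alone.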
ReCon: Revealing and Controlling PII Leaks in Mobile Network Traffic
It is well known that apps running on mobile devices extensively track and
leak users' personally identifiable information (PII); however, these users
have little visibility into PII leaked through the network traffic generated by
their devices, and have poor control over how, when and where that traffic is
sent and handled by third parties. In this paper, we present the design,
implementation, and evaluation of ReCon: a cross-platform system that reveals
PII leaks and gives users control over them without requiring any special
privileges or custom OSes. ReCon leverages machine learning to reveal potential
PII leaks by inspecting network traffic, and provides a visualization tool to
empower users with the ability to control these leaks via blocking or
substitution of PII. We evaluate ReCon's effectiveness with measurements from
controlled experiments using leaks from the 100 most popular iOS, Android, and
Windows Phone apps, and via an IRB-approved user study with 92 participants. We
show that ReCon is accurate, efficient, and identifies a wider range of PII
than previous approaches.
Comment: Please use MobiSys version when referencing this work: http://dl.acm.org/citation.cfm?id=2906392. 18 pages, recon.meddle.mob
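ReCon's key idea, inspecting network traffic for values that match a user's PII, can be sketched as a scanner over a request URL's query parameters. Note that ReCon itself learns classifiers from labeled traffic rather than matching known values, so this heuristic stand-in, along with the example labels, is an assumption for illustration.

```python
from urllib.parse import urlparse, parse_qsl

def find_pii_leaks(url, known_pii):
    """Scan a request URL's query parameters for known PII values.

    known_pii maps a label ('imei', 'email', ...) to the user's actual
    value for that identifier. Returns (label, parameter_name) pairs for
    every parameter whose value contains one of those PII strings."""
    leaks = []
    for key, value in parse_qsl(urlparse(url).query):
        for label, secret in known_pii.items():
            if secret and secret.lower() in value.lower():
                leaks.append((label, key))
    return leaks
```

Once a leaking parameter is identified, the interposition point can block the request or substitute a dummy value, which is the control mechanism the abstract describes.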
Emotion detection from touch interactions during text entry on smartphones
There are different modes of interaction with a software keyboard on a smartphone, such as typing and swyping. Patterns of such touch interactions on a keyboard may reflect a user's emotions. Since users may switch between touch modalities while using a keyboard, automatic detection of emotion from touch patterns must consider both modalities in combination. In this paper, we focus on identifying features of touch interactions with a smartphone keyboard that lead to a personalized model for inferring user emotion. Since distinguishing typing from swyping activity is important for recording the correct features, we designed a technique to correctly identify the modality. The ground truth labels for user emotion are collected directly from the user through periodic self-reports. We jointly model typing and swyping features and correlate them with user-provided self-reports to build a personalized machine learning model that detects four emotion states (happy, sad, stressed, relaxed). We combine these design choices into an Android application, TouchSense, and evaluate it in a 3-week in-the-wild study involving 22 participants. Our key evaluation results and post-study participant assessment demonstrate that it is possible to predict these emotion states with an average accuracy (AUCROC) of 73% (std. dev. 6%, maximum 87%) using these two touch interactions alone.
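The feature-extraction step the abstract describes, separating typing from swyping activity and computing per-stroke features, can be sketched as follows. This is an illustrative stand-in, not TouchSense's actual feature set; the event format and the 10-px tap/swipe threshold are assumptions.

```python
import math

def split_strokes(events):
    """Group (timestamp_s, x, y, action) touch events into strokes, where
    each stroke runs from a 'down' event through the next 'up' event."""
    strokes, cur = [], []
    for ev in events:
        cur.append(ev)
        if ev[3] == 'up':
            strokes.append(cur)
            cur = []
    return strokes

def stroke_features(stroke, swipe_threshold_px=10.0):
    """Classify one stroke as a tap (typing) or a swipe (swyping) and
    return its speed in px/s. The 10-px path-length cutoff is an assumed
    value, not one taken from the paper."""
    path = sum(math.hypot(b[1] - a[1], b[2] - a[2])
               for a, b in zip(stroke, stroke[1:]))
    duration = (stroke[-1][0] - stroke[0][0]) or 1e-3
    kind = 'swipe' if path > swipe_threshold_px else 'tap'
    return kind, path / duration
```

Per-stroke speeds and tap/swipe labels like these, aggregated over a typing session and paired with the periodic self-reports, are the kind of input a personalized emotion classifier would train on.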