Your Smart Home Can't Keep a Secret: Towards Automated Fingerprinting of IoT Traffic with Neural Networks
The IoT (Internet of Things) technology has been widely adopted in recent
years and has profoundly changed people's daily lives. At the same time,
this fast-growing technology has also introduced new privacy issues, which
need to be better understood and measured. In this work, we look into how
private information can be leaked from network traffic generated in a smart
home network. Although researchers have proposed techniques to infer IoT
device types or user behaviors under clean experimental setups, the
effectiveness of such approaches becomes questionable in complex but
realistic network environments, where common techniques like Network Address
and Port Translation (NAPT) and Virtual Private Network (VPN) are enabled.
Traffic analysis using traditional methods (e.g., classical machine-learning
models) is much less effective in those settings, as manually picked features
are no longer distinctive. In this work, we propose a traffic analysis
framework based on sequence-learning techniques such as LSTM, which leverages
the temporal relations between packets for device identification. We
evaluated it under different environmental settings (e.g., a pure-IoT
environment and a noisy one with multiple non-IoT devices). The results show
that our framework differentiates device types with high accuracy. This
suggests that IoT network communications pose prominent challenges to users'
privacy, even when they are protected by encryption and morphed by the
network gateway. As such, new privacy protection methods for IoT traffic
need to be developed to mitigate this new issue.
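To make the device-identification idea concrete, the following is a minimal sketch, assuming only numpy: a single LSTM cell folds a sequence of per-packet (size, direction) features into a hidden state, and a linear head maps that state to device-type probabilities. The packet metadata, weights, and class count are entirely hypothetical; the actual framework learns such parameters from real traffic captures.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates and candidate cell
    from the current input x and previous hidden state h."""
    z = W @ x + U @ h + b                 # stacked pre-activations, shape (4H,)
    i, f, o, g = np.split(z, 4)
    sigm = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sigm(f) * c + sigm(i) * np.tanh(g)
    h_new = sigm(o) * np.tanh(c_new)
    return h_new, c_new

# Hypothetical packet metadata sequence: (size, direction) per packet --
# the kind of side-channel features that survive encryption and NAPT.
seq = np.array([[250, 1], [90, -1], [1420, 1], [60, -1]], dtype=float)
seq[:, 0] /= 1500.0                       # normalise sizes by a typical MTU

H, D = 8, 2                               # hidden size, feature dimension
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in seq:                             # scan the packet sequence in order
    h, c = lstm_cell(x, h, c, W, U, b)

# Final hidden state -> device-type logits (3 illustrative device classes).
V = rng.normal(0, 0.1, (3, H))
logits = V @ h
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.round(3))
```

With trained weights, the softmax output would concentrate on the device type whose traffic shape best matches the observed sequence.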
Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks
It is common to implicitly assume access to intelligently captured inputs
(e.g., photos from a human photographer), yet autonomously capturing good
observations is itself a major challenge. We address the problem of learning to
look around: if a visual agent has the ability to voluntarily acquire new views
to observe its environment, how can it learn efficient exploratory behaviors to
acquire informative observations? We propose a reinforcement learning solution,
where the agent is rewarded for actions that reduce its uncertainty about the
unobserved portions of its environment. Based on this principle, we develop a
recurrent neural network-based approach to perform active completion of
panoramic natural scenes and 3D object shapes. Crucially, the learned policies
are not tied to any recognition task nor to the particular semantic content
seen during training. As a result, 1) the learned "look around" behavior is
relevant even for new tasks in unseen environments, and 2) training data
acquisition involves no manual labeling. Through tests in diverse settings, we
demonstrate that our approach learns useful generic policies that transfer to
new unseen tasks and environments. Completion episodes are shown at
https://goo.gl/BgWX3W
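The uncertainty-reduction reward can be illustrated with a toy sketch. Everything below is an illustrative stand-in: the panorama is a ring of view cells, "uncertainty" is simply the unobserved fraction, and a greedy policy replaces the learned recurrent policy; the reward signal, however, has the same telescoping structure as in the paper's principle.

```python
import numpy as np

# Toy panorama split into N view cells; a glimpse observes a cell and its
# two neighbours (wrap-around), removing their uncertainty.
N = 12
observed = np.zeros(N, dtype=bool)

def uncertainty():
    """Fraction of the environment still unobserved."""
    return (~observed).sum() / N

def glimpse(cell):
    """Take one glimpse; reward = uncertainty reduced by this action."""
    before = uncertainty()
    for d in (-1, 0, 1):
        observed[(cell + d) % N] = True
    return before - uncertainty()

# Greedy "look around" policy: pick the glimpse covering the most
# still-unobserved cells (a stand-in for the learned RL policy).
total = 0.0
while uncertainty() > 0:
    gains = [sum(~observed[(c + d) % N] for d in (-1, 0, 1))
             for c in range(N)]
    total += glimpse(int(np.argmax(gains)))
print(round(total, 3))  # total reward equals the initial uncertainty (1.0)
```

Because rewards telescope, the return of any episode that fully observes the scene equals the initial uncertainty; the learning problem is to reach that return in as few glimpses as possible.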
Deep Learning Methods for Human Activity Recognition using Wearables
Wearable sensors provide an infrastructure-less multi-modal sensing method. Current
trends point to a pervasive integration of wearables into our lives with these devices
providing the basis for wellness and healthcare applications across rehabilitation,
caring for a growing older population, and improving human performance.
Fundamental to these applications is our ability to automatically and accurately
recognise human activities from often tiny sensors embedded in wearables. In this
dissertation, we consider the problem of human activity recognition (HAR) using
multi-channel time-series data captured by wearable sensors.
Our collective know-how regarding the solution of HAR problems with wearables has
progressed immensely through the use of deep learning paradigms. Nevertheless, this
field still faces unique methodological challenges. As such, this dissertation focuses on
developing end-to-end deep learning frameworks to promote HAR application opportunities
using wearable sensor technologies and to mitigate specific associated challenges. In our
efforts, the investigated problems cover a diverse range of HAR challenges and span
from fully supervised to unsupervised problem domains.
In order to enhance automatic feature extraction from multi-channel time-series
data for HAR, the problem of learning enriched and highly discriminative activity
feature representations with deep neural networks is considered. Accordingly, novel
end-to-end network elements are designed which: (a) exploit the latent relationships
between multi-channel sensor modalities and specific activities, (b) employ effective
regularisation through data-agnostic augmentation for multi-modal sensor data
streams, and (c) incorporate optimization objectives to encourage minimal intra-class
representation differences, while maximising inter-class differences to achieve more
discriminative features.
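Objective (c) — minimal intra-class spread with maximal inter-class separation — can be sketched as a simple centre-based loss on toy 2-D embeddings. The class names, margin, and data below are invented for illustration; the dissertation applies such objectives inside an end-to-end network on learned features.

```python
import numpy as np

rng = np.random.default_rng(2)

def discriminative_loss(a, b, margin=2.0):
    """Pull samples toward their class centre (intra-class term);
    push the two centres at least `margin` apart (hinged inter-class term)."""
    ca, cb = a.mean(0), b.mean(0)
    intra = ((a - ca) ** 2).sum(1).mean() + ((b - cb) ** 2).sum(1).mean()
    inter = np.linalg.norm(ca - cb)
    return intra + max(0.0, margin - inter) ** 2

# Toy activity embeddings: compact and separated vs heavily overlapping.
walk = rng.normal([0.0, 0.0], 0.3, (20, 2))
sit = rng.normal([3.0, 3.0], 0.3, (20, 2))
mixed_a = rng.normal([0.0, 0.0], 1.5, (20, 2))
mixed_b = rng.normal([0.5, 0.5], 1.5, (20, 2))

good = discriminative_loss(walk, sit)
bad = discriminative_loss(mixed_a, mixed_b)
print(good < bad)  # separated, compact clusters incur the lower loss
```

Minimising such a loss during training therefore pressures the network toward exactly the feature geometry described above.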
In order to promote new opportunities in HAR with emerging battery-less sensing
platforms, the problem of learning from irregularly sampled and temporally sparse readings
captured by passive sensing modalities is considered. For the first time, an efficient
set-based deep learning framework is developed to address the problem. This
framework is able to learn directly from the generated data, bypassing the need for
the conventional interpolation pre-processing stage.

In order to address the multi-class window problem and create potential solutions
for the challenging task of concurrent human activity recognition, the problem of
enabling simultaneous prediction of multiple activities for sensory segments is considered.
As such, the flexibility provided by the emerging set learning concepts is further
leveraged to introduce a novel formulation of HAR. This formulation treats HAR
as a set prediction problem and elegantly caters for segments carrying sensor data
from multiple activities. To address this set prediction problem, a unified deep HAR
architecture is designed that: (a) incorporates a set objective to learn mappings from
raw input sensory segments to target activity sets, and (b) precedes the supervised
learning phase with unsupervised parameter pre-training to exploit unlabelled data
for better generalisation performance.
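One way to read the set objective is as learning an order-free mapping from a sensory segment to an indicator over an activity vocabulary. The sketch below uses a per-class sigmoid score with binary cross-entropy against the target set's indicator vector; the vocabulary, scores, and threshold are invented for illustration, and the dissertation's actual set objective and architecture differ.

```python
import numpy as np

# Hypothetical activity vocabulary for a smart-wearable deployment.
ACTIVITIES = ["walk", "sit", "stand", "type"]

def set_loss(scores, target_set):
    """Binary cross-entropy against the target set's indicator vector;
    order-free by construction, so it handles multi-activity segments."""
    y = np.array([a in target_set for a in ACTIVITIES], dtype=float)
    p = 1.0 / (1.0 + np.exp(-np.asarray(scores, dtype=float)))
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def predict_set(scores, thresh=0.5):
    """Decode a predicted activity *set* by thresholding the sigmoids."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(scores, dtype=float)))
    return {a for a, pi in zip(ACTIVITIES, p) if pi > thresh}

# A segment that overlaps two activities, e.g. a walk-to-sit transition.
scores = [4.0, 3.0, -5.0, -4.0]
print(predict_set(scores))  # {'walk', 'sit'}
```

Because the loss compares against a set indicator rather than a single label, a window carrying data from two concurrent or transitioning activities is a valid training target, not a labelling ambiguity.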
In order to leverage the easily accessible unlabelled activity data-streams to serve
downstream classification tasks, the problem of unsupervised representation learning from
multi-channel time-series data is considered. For the first time, a novel recurrent
generative adversarial network (GAN) framework is developed that explores the GAN’s latent
feature space to extract highly discriminating activity features in an unsupervised
fashion. The superiority of the learned representations is substantiated by their
ability to outperform the de facto unsupervised approaches based on autoencoder
frameworks. At the same time, they rival the recognition performance of fully
supervised models on downstream classification benchmarks.
In recognition of the scarcity of large-scale annotated sensor datasets and the
tediousness of collecting additional labelled data in this domain, the hitherto unexplored
problem of end-to-end clustering of human activities from unlabelled wearable data is
considered. To address this problem, a first study is presented for the purpose of
developing a stand-alone deep learning paradigm to discover semantically meaningful
clusters of human actions. In particular, the paradigm is intended to: (a) leverage
the inherently sequential nature of sensory data, (b) exploit self-supervision from
reconstruction and future prediction tasks, and (c) incorporate clustering-oriented
objectives to promote the formation of highly discriminative activity clusters. The
systematic investigations in this study create new opportunities for HAR to learn
human activities using unlabelled data that can be conveniently and cheaply collected
from wearables.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
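The pairing of self-supervision with clustering can be sketched on synthetic data. Here the self-supervised feature is the coefficient vector of a least-squares autoregressive fit per window (a crude stand-in for the future-prediction task), and a hand-rolled 2-means loop stands in for the clustering-oriented objectives; the activities are two synthetic oscillation speeds, not real sensor data.

```python
import numpy as np

rng = np.random.default_rng(3)

def ar_features(window, order=2):
    """Self-supervised feature: least-squares AR coefficients, i.e. the
    parameters of a 'predict the future of this window' task."""
    X = np.stack([window[i:len(window) - order + i] for i in range(order)], 1)
    y = window[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Two synthetic "activities": slow vs fast oscillation (toy accelerometer).
t = np.arange(64)
windows = ([np.sin(0.2 * t) + 0.05 * rng.normal(size=64) for _ in range(10)]
           + [np.sin(1.5 * t) + 0.05 * rng.normal(size=64) for _ in range(10)])
feats = np.array([ar_features(w) for w in windows])

# Minimal 2-means over the features (stand-in for clustering objectives).
centres = feats[[0, -1]].copy()           # init: one sample from each end
for _ in range(10):
    assign = np.argmin(((feats[:, None] - centres[None]) ** 2).sum(-1), 1)
    centres = np.array([feats[assign == k].mean(0) for k in (0, 1)])
print(assign)
```

Even this crude pipeline recovers the two underlying activities without labels, which is the promise the clustering study above develops with far richer sequential models.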