Multi-task Self-Supervised Learning for Human Activity Detection
Deep learning methods are successfully used in applications pertaining to
ubiquitous computing, health, and well-being. Specifically, the area of human
activity recognition (HAR) has been transformed primarily by convolutional and
recurrent neural networks, thanks to their ability to learn semantic
representations from raw input. However, extracting generalizable features
requires massive amounts of well-curated data, which are notoriously hard to
obtain due to privacy issues and annotation costs. Therefore,
unsupervised representation learning is of prime importance to leverage the
vast amount of unlabeled data produced by smart devices. In this work, we
propose a novel self-supervised technique for feature learning from sensory
data that does not require access to any form of semantic labels. We learn a
multi-task temporal convolutional network to recognize transformations applied
on an input signal. By exploiting these transformations, we demonstrate that
simple binary-classification auxiliary tasks provide a strong supervisory
signal for extracting features useful for the downstream task. We
extensively evaluate the proposed approach on several publicly available
datasets for smartphone-based HAR in unsupervised, semi-supervised, and
transfer learning settings. Our method achieves performance levels superior to
or comparable with fully-supervised networks, and it performs significantly
better than autoencoders. Notably, in the semi-supervised case, the
self-supervised features substantially boost the detection rate, attaining a
kappa score of 0.7-0.8 with only 10 labeled examples per class. We obtain
similarly impressive performance even when the features are transferred from a
different data source. While this paper focuses on HAR as the application
domain, the proposed technique is general and could be applied to a wide
variety of problems in other areas.
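The pretext-task idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the specific transformations (jittering, scaling, flipping, permutation) are common choices in the sensor-augmentation literature, and the function names and parameters are hypothetical.

```python
import numpy as np

# Illustrative signal transformations; each defines one binary pretext task:
# "was this transformation applied to the input signal or not?"
def add_noise(x, sigma=0.05):
    """Jittering: add Gaussian noise to the signal."""
    return x + np.random.normal(0.0, sigma, x.shape)

def scale(x, factor=1.5):
    """Scaling: multiply the signal by a constant factor."""
    return x * factor

def flip(x):
    """Flipping: reverse the signal along the time axis."""
    return x[::-1].copy()

def permute(x, n_segments=4):
    """Permutation: split the signal into segments and shuffle their order."""
    segments = np.array_split(x, n_segments)
    np.random.shuffle(segments)
    return np.concatenate(segments)

def make_pretext_batch(signals, transform):
    """Build a binary-classification batch for one auxiliary task:
    label 0 for original signals, label 1 for transformed copies."""
    transformed = np.stack([transform(s) for s in signals])
    x = np.concatenate([signals, transformed])
    y = np.concatenate([np.zeros(len(signals)), np.ones(len(signals))])
    return x, y
```

A multi-task network would share a convolutional trunk across all such tasks, with one binary output head per transformation; the shared trunk is what yields the transferable features.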
Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises the cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. This comprehensive, problem-oriented
review of advances in transfer learning has not only revealed the challenges
in transfer learning for visual recognition, but also identified the problems
(eight of the seventeen) that have scarcely been studied. The survey thus
offers researchers an up-to-date technical review, as well as a systematic
approach and reference for machine learning practitioners to categorise a real
problem and look up a possible solution accordingly.
apk2vec: Semi-supervised multi-view representation learning for profiling Android applications
Building behavior profiles of Android applications (apps) with holistic, rich,
and multi-view information (e.g., incorporating several semantic views of an
app, such as API sequences, system calls, etc.) would significantly benefit
downstream analytics tasks such as app categorization, recommendation, and
malware analysis. Towards this goal, we design a semi-supervised
Representation Learning (RL) framework named apk2vec to automatically generate
a compact representation (aka profile/embedding) for a given app. More
specifically, apk2vec has the following three characteristics that make it an
excellent choice for large-scale app profiling: (1) it encompasses
information from multiple semantic views such as API sequences, permissions,
etc., (2) being a semi-supervised embedding technique, it can make use of
labels associated with apps (e.g., malware family or app category labels) to
build high quality app profiles, and (3) it combines RL and feature hashing
which allows it to efficiently build profiles of apps that stream over time
(i.e., online learning). The resulting semi-supervised multi-view hash
embeddings of apps could then be used for a wide variety of downstream tasks
such as the ones mentioned above. Our extensive evaluations with more than
42,000 apps demonstrate that apk2vec's app profiles could significantly
outperform state-of-the-art techniques in four app analytics tasks namely,
malware detection, familial clustering, app clone detection and app
recommendation.
Comment: International Conference on Data Mining, 201
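The feature-hashing component mentioned in characteristic (3) can be sketched as below. This is a generic illustration of the hashing trick applied to multi-view app features, with assumed function names and feature strings; it is not apk2vec's actual embedding procedure, which additionally involves representation learning over the views.

```python
import hashlib
import numpy as np

def hashed_view_vector(features, dim=128):
    """Feature hashing: map an unbounded set of sparse features (e.g. API
    calls or permissions) into a fixed-size vector, so new features seen in
    streaming apps need no vocabulary growth (enabling online learning)."""
    vec = np.zeros(dim)
    for f in features:
        h = int(hashlib.md5(f.encode("utf-8")).hexdigest(), 16)
        idx = h % dim
        # Signed hashing: a second hash bit picks +1/-1, so collisions
        # cancel in expectation rather than accumulate.
        sign = 1.0 if (h // dim) % 2 == 0 else -1.0
        vec[idx] += sign
    return vec

def app_profile(views, dim=128):
    """Combine multiple semantic views (API sequences, permissions, ...)
    into one profile by concatenating a hashed vector per view."""
    return np.concatenate([hashed_view_vector(v, dim) for v in views])
```

Because the hash is deterministic, the same app always maps to the same profile, and profiles of streaming apps can be built incrementally without revisiting earlier data.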