Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Convolutional neural networks have been widely deployed in various
application scenarios. To extend their use to accuracy-critical domains,
researchers have investigated boosting accuracy through deeper or wider
network structures, which bring an exponential increase in computational and
storage cost and delay response time. In this paper, we propose a general
training framework named self distillation, which notably enhances the
performance (accuracy) of convolutional neural networks by shrinking the
network rather than enlarging it. Unlike traditional knowledge distillation,
a knowledge-transfer method between networks that forces student networks to
approximate the softmax outputs of pre-trained teacher networks, the proposed
self distillation framework distills knowledge within the network itself. The
network is first divided into several sections, and the knowledge in its
deeper sections is then squeezed into the shallow ones. Experiments further
demonstrate the generality of the framework: the average accuracy improvement
is 2.65%, ranging from a minimum of 0.61% on ResNeXt to a maximum of 4.07% on
VGG19. In addition, self distillation enables depth-wise scalable inference
on resource-limited edge devices. Our code will be released on GitHub soon.
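As an illustration of the training scheme the abstract describes, here is a minimal PyTorch sketch, assuming a backbone already split into sections. `SelfDistillNet`, `feat_dims`, and the hyperparameters `T` and `alpha` are illustrative placeholders rather than the paper's implementation, and the paper's additional feature-level hint loss is omitted.

```python
import torch.nn as nn
import torch.nn.functional as F

class SelfDistillNet(nn.Module):
    """Hypothetical wrapper: a backbone split into sections, each followed
    by a small auxiliary classifier head (pool + flatten + linear)."""
    def __init__(self, sections, feat_dims, num_classes):
        super().__init__()
        self.sections = nn.ModuleList(sections)  # e.g. the stages of a ResNet
        self.heads = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(d, num_classes))
            for d in feat_dims
        ])

    def forward(self, x):
        logits = []
        for section, head in zip(self.sections, self.heads):
            x = section(x)
            logits.append(head(x))
        return logits  # ordered shallow to deep; last entry is the full net

def self_distillation_loss(logits, target, T=3.0, alpha=0.3):
    """Cross-entropy on every head, plus a KL term pushing each shallow
    head toward the deepest head's softened predictions (T and alpha are
    illustrative values, not the paper's)."""
    deepest = logits[-1]
    loss = F.cross_entropy(deepest, target)
    soft_teacher = F.softmax(deepest.detach() / T, dim=1)
    for shallow in logits[:-1]:
        loss = loss + (1 - alpha) * F.cross_entropy(shallow, target)
        loss = loss + alpha * (T * T) * F.kl_div(
            F.log_softmax(shallow / T, dim=1), soft_teacher,
            reduction="batchmean")
    return loss
```

Because every section carries its own classifier, inference can stop at any head, which is what makes the depth-wise scalable inference on edge devices possible.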
Comparative Studies of Detecting Abusive Language on Twitter
The context-dependent nature of online aggression makes annotating large
collections of data extremely difficult. Previously studied datasets for
abusive language detection have been too small to train deep learning models
effectively. Recently, Hate and Abusive Speech on Twitter, a dataset much
larger and more reliable, has been released; however, it has not yet been
studied to its full potential. In this paper, we conduct the first
comparative study of various learning models on Hate and Abusive Speech on
Twitter and discuss the possibility of using additional features and context
data for further improvement. Experimental results show that a bidirectional
GRU network trained on word-level features, with Latent Topic Clustering
modules, is the most accurate model, scoring 0.805 F1.
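The best-performing configuration's backbone, minus the Latent Topic Clustering module, can be sketched as a word-level bidirectional GRU classifier. This is a minimal illustration with placeholder sizes, not the paper's code:

```python
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    """Word-level bidirectional GRU text classifier; the paper's Latent
    Topic Clustering module is omitted and all sizes are placeholders."""
    def __init__(self, vocab_size, num_classes, embed_dim=300, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        emb = self.embed(token_ids)
        _, h = self.gru(emb)                # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=1)  # join forward/backward states
        return self.out(h)                  # class logits
```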
Learning Privacy Preserving Encodings through Adversarial Training
We present a framework to learn privacy-preserving encodings of images that
inhibit inference of chosen private attributes, while allowing recovery of
other desirable information. Rather than simply defeating a given fixed
pre-trained estimator, our goal is that an estimator be unable to learn to
accurately predict the private attributes even with knowledge of the encoding
function. We use a natural adversarial optimization-based formulation: the
encoding function is trained against a classifier for the private attribute,
with both modeled as deep neural networks. The key contribution of our work
is a stable and convergent optimization approach that succeeds at learning an
encoder with the desired properties, maintaining utility while inhibiting
inference of private attributes, not only within the adversarial optimization
but also against classifiers trained after the encoder is fixed. We adopt a
rigorous experimental protocol for verification, in which classifiers are
trained exhaustively, until saturation, on the fixed encoders. We evaluate
our approach on tasks of real-world complexity, learning high-dimensional
encodings that inhibit detection of different scene categories, and find that
it yields encoders that remain resilient at maintaining privacy.
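The basic alternating game the abstract describes can be sketched as below. The function name and arguments are hypothetical, the snippet deliberately does not reproduce the stabilization scheme that is the paper's actual contribution, and the entropy-based encoder objective is one common choice, assumed here for concreteness:

```python
import torch.nn.functional as F

def adversarial_step(encoder, private_clf, utility_head,
                     opt_enc, opt_clf, x, private_y, utility_y, lam=1.0):
    """One alternating step of the adversarial game. opt_enc is assumed
    to cover both encoder and utility_head parameters."""
    # 1) Adversary step: fit the private-attribute classifier to the
    #    current (frozen) encoding.
    z = encoder(x).detach()
    clf_loss = F.cross_entropy(private_clf(z), private_y)
    opt_clf.zero_grad()
    clf_loss.backward()
    opt_clf.step()

    # 2) Encoder step: preserve the utility task while degrading the
    #    adversary's ability to predict the private attribute.
    z = encoder(x)
    utility_loss = F.cross_entropy(utility_head(z), utility_y)
    # Push the adversary's output toward maximum entropy (chance level);
    # an assumption for this sketch, not the paper's recipe.
    p = F.softmax(private_clf(z), dim=1)
    neg_entropy = (p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
    enc_loss = utility_loss + lam * neg_entropy
    opt_enc.zero_grad()
    enc_loss.backward()
    opt_enc.step()
    return clf_loss.item(), enc_loss.item()
```

The paper's evaluation protocol then freezes the trained encoder and trains fresh classifiers on its outputs until saturation, which is a stricter test than measuring the in-loop adversary alone.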
Deep learning for time series classification: a review
Time Series Classification (TSC) is an important and challenging problem in
data mining. With the growing availability of time series data, hundreds of
TSC algorithms have been proposed, yet only a few of them consider Deep
Neural Networks (DNNs) for this task. This is surprising given how successful
deep learning has been in recent years: DNNs have revolutionized computer
vision, especially with the advent of novel deeper architectures such as
Residual Networks and Convolutional Neural Networks, and sequential data such
as text and audio can likewise be processed with DNNs to reach
state-of-the-art performance in document classification and speech
recognition. In this article, we study the current state of the art in deep
learning for TSC by presenting an empirical study of the most recent DNN
architectures. We give an overview of the most successful deep learning
applications in various time series domains under a unified taxonomy of DNNs
for TSC. We also provide an open source deep learning framework to the TSC
community in which we implement each of the compared approaches and evaluate
them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate
time series datasets. By training 8,730 deep learning models on 97 time
series datasets, we present the most exhaustive study of DNNs for TSC to
date.
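For orientation, one standard DNN baseline benchmarked in this line of work is a fully convolutional network (FCN) for univariate series: stacked 1D convolution blocks followed by global average pooling. The sketch below is illustrative only; its layer sizes follow the commonly cited FCN configuration but should not be read as the exact models evaluated in the review:

```python
import torch.nn as nn

class FCNBaseline(nn.Module):
    """FCN-style 1D conv baseline for univariate TSC: three conv blocks
    followed by global average pooling (sizes are illustrative)."""
    def __init__(self, num_classes, in_channels=1):
        super().__init__()
        def block(c_in, c_out, k):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, k, padding=k // 2),
                nn.BatchNorm1d(c_out), nn.ReLU())
        self.body = nn.Sequential(block(in_channels, 128, 8),
                                  block(128, 256, 5),
                                  block(256, 128, 3))
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        # x: (batch, channels, time_steps)
        feats = self.body(x).mean(dim=2)  # global average pool over time
        return self.head(feats)
```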