Active discriminative network representation learning
© 2018 International Joint Conferences on Artificial Intelligence. All rights reserved. Most current network representation models are learned in an unsupervised fashion and therefore lack discriminative power when applied to network analysis tasks such as node classification. Label information is valuable for learning discriminative network representations; however, labels for all training nodes are difficult or expensive to obtain, and manually labeling every node for training is impractical. Moreover, different sets of labeled nodes lead to different network representation results. In this paper, we propose a novel method, termed ANRMAB, to learn active discriminative network representations with a multi-armed bandit mechanism in an active learning setting. Specifically, based on the network data and the learned network representations, we design three active learning query strategies. By deriving an effective reward scheme that is closely tied to the estimated performance measure of interest, ANRMAB uses a multi-armed bandit mechanism for adaptive decision making, selecting the most informative nodes for labeling. The updated set of labeled nodes is then used for further discriminative network representation learning. Experiments on three public data sets verify the effectiveness of ANRMAB.
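The bandit-driven choice among query strategies can be sketched as follows. This is a minimal EXP3-style illustration, not ANRMAB's exact reward scheme; the class name and parameters are illustrative assumptions, with each arm standing for one of the three query strategies.

```python
import math
import random

class StrategyBandit:
    """EXP3-style bandit over active learning query strategies.

    A sketch only: ANRMAB's reward scheme (tied to the estimated
    performance measure) is not reproduced here.
    """

    def __init__(self, n_arms, gamma=0.1):
        self.gamma = gamma            # exploration rate
        self.weights = [1.0] * n_arms

    def probabilities(self):
        total = sum(self.weights)
        k = len(self.weights)
        return [(1.0 - self.gamma) * w / total + self.gamma / k
                for w in self.weights]

    def select(self):
        # sample an arm (query strategy) from the current distribution
        r, acc = random.random(), 0.0
        probs = self.probabilities()
        for arm, p in enumerate(probs):
            acc += p
            if r <= acc:
                return arm
        return len(probs) - 1

    def update(self, arm, reward):
        # importance-weighted reward estimate, as in EXP3
        p = self.probabilities()[arm]
        estimate = reward / p
        self.weights[arm] *= math.exp(
            self.gamma * estimate / len(self.weights))
```

After each labeling round the bandit is updated with the observed reward, so strategies that recently yielded informative nodes are selected more often.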
Active Authentication using an Autoencoder regularized CNN-based One-Class Classifier
Active authentication refers to the process in which users are unobtrusively
monitored and authenticated continuously throughout their interactions with
mobile devices. Generally, active authentication is modelled as a one-class
classification problem due to the unavailability of data from impostor users.
Normally, the enrolled user is considered the target class (genuine) and
unauthorized users are considered unknown classes (impostor). We propose a
convolutional neural network (CNN) based approach to one-class classification
in which zero-centered Gaussian noise and an autoencoder are used,
respectively, to model the pseudo-negative class and to regularize the network
to learn meaningful feature representations for one-class data. The overall
network is trained using a combination of the
proposed approach is that any pre-trained CNN can be used as the base network
for one-class classification. The effectiveness of the proposed framework is
demonstrated on three publicly available face-based active authentication
datasets, where the proposed method achieves superior performance compared to
traditional one-class classification methods. The source code
is available at: github.com/otkupjnoz/oc-acnn.
Comment: Accepted and to appear at AFGR 201
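The pseudo-negative idea above can be sketched in a few lines. This is an illustrative assumption-laden simplification: the CNN base network and the autoencoder regularizer are omitted, and a plain linear classifier stands in for the network, trained to separate genuine features from zero-centered Gaussian noise.

```python
import math
import random

def train_pseudo_negative_classifier(target_feats, dim,
                                     noise_sigma=1.0, epochs=50,
                                     lr=0.1, seed=0):
    """Train a linear one-class scorer with pseudo-negatives.

    Genuine feature vectors (label 1) are contrasted against freshly
    drawn zero-centered Gaussian noise (label 0), the pseudo-negative
    class from the abstract. Only the one-class training signal is
    illustrated here, not the paper's full network.
    """
    rng = random.Random(seed)
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x in target_feats:
            noise = [rng.gauss(0.0, noise_sigma) for _ in range(dim)]
            for feats, label in ((x, 1.0), (noise, 0.0)):
                z = sum(wi * xi for wi, xi in zip(w, feats)) + b
                p = 1.0 / (1.0 + math.exp(-z))
                g = p - label  # gradient of cross-entropy loss wrt z
                w = [wi - lr * g * xi for wi, xi in zip(w, feats)]
                b -= lr * g
    return w, b

def score(w, b, x):
    """Probability that x belongs to the enrolled (genuine) user."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Because the negatives are drawn fresh at every step, the classifier never sees real impostor data, mirroring the one-class setting.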
Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis
The availability of large-scale annotated image datasets and recent advances
in supervised deep learning methods enable the end-to-end derivation of
representative image features that can impact a variety of image analysis
problems. Such supervised approaches, however, are difficult to implement in
the medical domain where large volumes of labelled data are difficult to obtain
due to the complexity of manual annotation and inter- and intra-observer
variability in label assignment. We propose a new convolutional sparse kernel
network (CSKN), which is a hierarchical unsupervised feature learning framework
that addresses the challenge of learning representative visual features in
medical image analysis domains where there is a lack of annotated training
data. Our framework has three contributions: (i) We extend kernel learning to
identify and represent invariant features across image sub-patches in an
unsupervised manner. (ii) We initialise our kernel learning with a layer-wise
pre-training scheme that leverages the sparsity inherent in medical images to
extract initial discriminative features. (iii) We adapt a multi-scale spatial
pyramid pooling (SPP) framework to capture subtle geometric differences between
learned visual features. We evaluated our framework in medical image retrieval
and classification on three public datasets. Our results show that CSKN
achieved higher accuracy than conventional unsupervised methods and accuracy
comparable to methods that used state-of-the-art supervised convolutional
neural networks (CNNs). Our findings indicate that our
unsupervised CSKN provides an opportunity to leverage unannotated big data in
medical imaging repositories.
Comment: Accepted by Medical Image Analysis (with a new title 'Convolutional
Sparse Kernel Network for Unsupervised Medical Image Analysis'). The
manuscript is available at https://doi.org/10.1016/j.media.2019.06.005
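The multi-scale spatial pyramid pooling step in contribution (iii) can be sketched as follows. The grid geometry and pyramid levels here are simplifying assumptions, not the paper's exact CSKN configuration: the feature map is max-pooled over grids of increasing resolution and the pooled values are concatenated into one fixed-length descriptor.

```python
def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a 2-D feature map over a pyramid of grids.

    At level n the map is split into an n x n grid and the maximum
    activation in each cell is kept, so coarse levels capture global
    structure and fine levels capture subtle local geometry.
    """
    h = len(feature_map)
    w = len(feature_map[0])
    descriptor = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # integer grid bounds; each cell covers at least one pixel
                r0 = i * h // n
                r1 = max((i + 1) * h // n, r0 + 1)
                c0 = j * w // n
                c1 = max((j + 1) * w // n, c0 + 1)
                descriptor.append(max(feature_map[r][c]
                                      for r in range(r0, r1)
                                      for c in range(c0, c1)))
    return descriptor
```

With levels (1, 2, 4) the descriptor has 1 + 4 + 16 = 21 entries regardless of the input map size, which is what makes pyramid pooling convenient for retrieval.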
Discriminative Recurrent Sparse Auto-Encoders
We present the discriminative recurrent sparse auto-encoder model, comprising
a recurrent encoder of rectified linear units, unrolled for a fixed number of
iterations, and connected to two linear decoders that reconstruct the input and
predict its supervised classification. Training via
backpropagation-through-time initially minimizes an unsupervised sparse
reconstruction error; the loss function is then augmented with a discriminative
term on the supervised classification. The depth implicit in the
temporally-unrolled form allows the system to exhibit all the power of deep
networks, while substantially reducing the number of trainable parameters.
From an initially unstructured network, the hidden units differentiate into
categorical-units, each of which represents an input prototype with a
well-defined class, and part-units, which represent deformations of these
prototypes. The learned organization of the recurrent encoder is hierarchical:
part-units are driven directly by the input, whereas the activity of
categorical-units builds up over time through interactions with the part-units.
Even using a small number of hidden units per layer, discriminative recurrent
sparse auto-encoders achieve excellent performance on MNIST.
Comment: Added clarifications suggested by reviewers. 15 pages, 10 figures
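The temporally-unrolled encoder described above can be sketched as a forward pass. The shapes and the exact thresholding form are illustrative assumptions following the abstract loosely; the two linear decoders and the training losses are omitted.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def drsae_forward(x, W_in, W_rec, b, n_steps):
    """Unrolled recurrent encoder of rectified linear units.

    The same weights (W_in, W_rec, b) are applied at every iteration,
    which is why the unrolled depth adds expressive power without
    adding trainable parameters.
    """
    h = [0.0] * len(b)
    drive = matvec(W_in, x)  # input drive, computed once
    for _ in range(n_steps):
        rec = matvec(W_rec, h)  # recurrent interaction between units
        h = relu([d + r - bi for d, r, bi in zip(drive, rec, b)])
    return h
```

In this picture, units driven mostly by `drive` behave like part-units, while units whose activity accumulates through `rec` over iterations behave like categorical-units.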