A multi-task learning CNN for image steganalysis
Convolutional neural network (CNN) based image steganalysis is increasingly popular because of its superior accuracy. The most straightforward way to employ a CNN for image steganalysis is to learn a CNN-based classifier that distinguishes whether secret messages have been embedded into an image. However, such a classifier is difficult to learn because stego signals are weak and the useful information is limited. To address this issue, this paper proposes a multi-task learning CNN. In addition to the typical use of a CNN, learning a CNN-based classifier for the whole image, our multi-task CNN is trained with an auxiliary task of pixel-level binary classification: estimating whether each pixel in an image has been modified by steganography. To the best of our knowledge, we are the first to employ a CNN to perform pixel-level classification of this type. Experimental results have justified the effectiveness and efficiency of the proposed multi-task learning CNN.
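The auxiliary pixel-level task amounts to adding a second loss term to the usual image-level objective. Below is a minimal numpy sketch of such a joint loss; the weighting `lam`, the toy probabilities, and the 4x4 "image" are illustrative assumptions, not the paper's architecture or settings:

```python
import numpy as np

def binary_cross_entropy(p, y):
    """Mean binary cross-entropy between predicted probabilities p and labels y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def multi_task_loss(image_prob, image_label, pixel_probs, pixel_labels, lam=0.5):
    """Joint objective: image-level stego/cover loss plus a weighted
    auxiliary per-pixel modification loss (lam is a hypothetical weight)."""
    image_loss = binary_cross_entropy(np.array([image_prob]), np.array([image_label]))
    pixel_loss = binary_cross_entropy(pixel_probs, pixel_labels)
    return image_loss + lam * pixel_loss

# Toy example: a 4x4 "image" in which two pixels were modified by embedding.
pixel_labels = np.zeros((4, 4)); pixel_labels[1, 2] = pixel_labels[3, 0] = 1.0
pixel_probs = np.full((4, 4), 0.1); pixel_probs[1, 2] = pixel_probs[3, 0] = 0.9
loss = multi_task_loss(image_prob=0.9, image_label=1.0,
                       pixel_probs=pixel_probs, pixel_labels=pixel_labels)
print(round(loss, 4))
```

The auxiliary term rewards the network for localizing embedding changes, which supplies a denser training signal than the single image-level label.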
Task-Projected Hyperdimensional Computing for Multi-Task Learning
Brain-inspired Hyperdimensional (HD) computing is an emerging technique for
cognitive tasks in the field of low-power design. As a fast-learning and
energy-efficient computational paradigm, HD computing has shown great success
in many real-world applications. However, an HD model incrementally trained on
multiple tasks suffers from catastrophic forgetting: the model forgets the
knowledge learned from previous tasks and focuses only on the current one. To
the best of our knowledge, no study has investigated the feasibility of
applying multi-task learning to HD computing. In this paper, we propose
Task-Projected Hyperdimensional Computing (TP-HDC) to make the HD model
support multiple tasks simultaneously by exploiting the redundant
dimensionality of the hyperspace. To mitigate interference between different
tasks, we project each task into a separate subspace for learning. Compared
with the baseline method, our approach efficiently utilizes the unused
capacity of the hyperspace and shows a 12.8% improvement in average accuracy
with negligible memory overhead.
Comment: To be published in the 16th International Conference on Artificial Intelligence Applications and Innovations
A Fused Elastic Net Logistic Regression Model for Multi-Task Binary Classification
Multi-task learning has been shown to significantly enhance the performance of
multiple related learning tasks in a variety of situations. We present fused
logistic regression, a sparse multi-task learning approach for binary
classification. Specifically, we introduce sparsity-inducing penalties over
the parameter differences of related logistic regression models to encode
similarity across related tasks. The resulting joint learning problem is cast
into a form that can be efficiently optimized with a recursive variant of the
alternating direction method of multipliers. We show results on synthetic
data, describe the regime of settings where our multi-task approach achieves
significant improvements over the single-task learning approach, and discuss
the implications of applying fused logistic regression in different real-world
settings.
Comment: 17 pages
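The joint objective described above (per-task logistic losses, an elastic-net penalty on each weight vector, and an L1 fusion penalty on parameter differences of related tasks) can be sketched in numpy. The optimizer itself (the recursive ADMM variant) is omitted; the penalty weights and the chain structure over consecutive tasks are illustrative assumptions:

```python
import numpy as np

def logistic_loss(w, X, y):
    """Average logistic loss for labels y in {-1, +1}."""
    margins = y * (X @ w)
    return float(np.mean(np.log1p(np.exp(-margins))))

def fused_elastic_net_objective(W, data, lam1=0.1, lam2=0.1, lam_fuse=0.1):
    """Joint multi-task objective: per-task logistic loss, an elastic-net
    penalty on each task's weights, and an L1 fusion penalty on parameter
    differences of consecutive (related) tasks. Weights are illustrative."""
    loss = sum(logistic_loss(W[t], X, y) for t, (X, y) in enumerate(data))
    elastic = sum(lam1 * np.abs(w).sum() + lam2 * (w ** 2).sum() for w in W)
    fuse = sum(lam_fuse * np.abs(W[t] - W[t + 1]).sum() for t in range(len(W) - 1))
    return loss + elastic + fuse

# Toy check: two identical (maximally related) tasks; similar weight vectors
# should score better than dissimilar ones under the fused objective.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 5))
w_true = np.ones(5)
y = np.sign(X @ w_true)
data = [(X, y), (X, y)]
W_shared = [w_true.copy(), w_true.copy()]
W_split = [w_true.copy(), -w_true.copy()]
print(fused_elastic_net_objective(W_shared, data) < fused_elastic_net_objective(W_split, data))
```

The fusion term is what couples the otherwise independent logistic regressions: shrinking `|w_t - w_{t+1}|` toward zero encodes the prior that related tasks should have similar parameters, while the L1 component of the elastic net keeps each model sparse.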
