A multi-task learning CNN for image steganalysis
Convolutional neural network (CNN) based image steganalysis is increasingly popular because of its superior accuracy. The most straightforward way to employ a CNN for image steganalysis is to learn a CNN-based classifier that distinguishes whether secret messages have been embedded into an image. However, learning such a classifier is difficult because the stego signal is weak and the useful information is limited. To address this issue, this paper proposes a multi-task learning CNN. In addition to the typical use of a CNN, learning an image-level classifier for the whole image, our multi-task CNN is trained with an auxiliary task of pixel-level binary classification, estimating whether each pixel in an image has been modified by steganography. To the best of our knowledge, we are the first to employ a CNN for pixel-level classification of this type. Experimental results justify the effectiveness and efficiency of the proposed multi-task learning CNN.
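Below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' exact architecture: a shared convolutional trunk with an image-level cover/stego head and an auxiliary per-pixel modification-prediction head, trained with a weighted sum of two binary cross-entropy losses. All layer sizes, names, and the loss weight are illustrative assumptions.

# A minimal sketch (not the paper's exact architecture): a CNN trunk with two
# heads, one classifying the whole image as cover/stego and one predicting,
# per pixel, whether it was modified by embedding.
import torch
import torch.nn as nn

class MultiTaskStegCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor, kept resolution-preserving so the
        # pixel-level head can predict a full-size modification map.
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Auxiliary task: per-pixel binary classification (modified or not).
        self.pixel_head = nn.Conv2d(32, 1, 1)
        # Main task: image-level binary classification (cover vs. stego).
        self.image_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, x):
        feats = self.trunk(x)
        return self.image_head(feats), self.pixel_head(feats)

# Joint training minimizes a weighted sum of the two binary losses.
model = MultiTaskStegCNN()
bce = nn.BCEWithLogitsLoss()
x = torch.randn(4, 1, 64, 64)                        # grayscale images
y_img = torch.randint(0, 2, (4, 1)).float()          # cover/stego labels
y_pix = torch.randint(0, 2, (4, 1, 64, 64)).float()  # per-pixel change map
img_logit, pix_logit = model(x)
loss = bce(img_logit, y_img) + 0.5 * bce(pix_logit, y_pix)  # 0.5: illustrative weight
loss.backward()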
Distributed Multi-Task Relationship Learning
Multi-task learning aims to learn multiple tasks jointly by exploiting their relatedness to improve the generalization performance of each task. Traditionally, performing multi-task learning requires centralizing data from all tasks on a single machine. In many real-world applications, however, the data of different tasks may be geo-distributed over different local machines. Because of the heavy communication caused by transmitting the data and concerns about data privacy and security, it is impossible to send the data of different tasks to a master machine to perform multi-task learning. Therefore, in this paper, we propose a distributed multi-task learning framework that simultaneously learns a predictive model for each task as well as the relationships between tasks, alternating between the two in the parameter-server paradigm. In our framework, we first derive a general dual form for a family of regularized multi-task relationship learning methods. We then propose a communication-efficient primal-dual distributed optimization algorithm that solves the dual problem by carefully designing local subproblems to make it decomposable. Moreover, we provide a theoretical convergence analysis for the proposed algorithm, specific to distributed multi-task relationship learning. We conduct extensive experiments on both synthetic and real-world datasets to evaluate the proposed framework in terms of effectiveness and convergence.
Comment: To appear in KDD 201
Self-Paced Multi-Task Learning
In this paper, we propose a novel multi-task learning (MTL) framework called Self-Paced Multi-Task Learning (SPMTL). Unlike previous works, which treat all tasks and instances equally during training, SPMTL learns the tasks jointly while taking the complexities of both tasks and instances into consideration. This is inspired by the cognitive process of the human brain, which often learns from the easy to the hard. We construct a compact SPMTL formulation by proposing a new task-oriented regularizer that jointly prioritizes tasks and instances, so the model can be interpreted as a self-paced learner for MTL. A simple yet effective algorithm is designed to optimize the proposed objective function, and an error bound for a simplified formulation is analyzed theoretically. Experimental results on toy and real-world datasets demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods.
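A minimal sketch of the self-paced mechanism, assuming the common hard-thresholding scheme rather than the paper's exact task-oriented regularizer: instances whose current loss falls below a threshold are admitted with weight 1, task weights follow from their instances, and the threshold grows so harder instances and tasks enter training gradually. Function names and the growth schedule are illustrative assumptions.

# Sketch of self-paced weighting across tasks and instances (hard thresholding).
import numpy as np

def self_paced_weights(losses_per_task, lam):
    """losses_per_task: list of 1-D arrays of per-instance losses, one per task."""
    inst_w = [(l < lam).astype(float) for l in losses_per_task]  # easy instances first
    task_w = np.array([w.mean() for w in inst_w])                # easier tasks count more
    return inst_w, task_w

def self_paced_rounds(losses_per_task, lam=0.5, growth=1.3, rounds=5):
    for _ in range(rounds):
        inst_w, task_w = self_paced_weights(losses_per_task, lam)
        # ... here one would re-fit each task's model with these weights ...
        lam *= growth                                            # admit harder examples next round
    return inst_w, task_w

# Usage on made-up losses for three tasks of increasing difficulty.
rng = np.random.default_rng(0)
losses = [rng.exponential(scale=s, size=20) for s in (0.3, 0.6, 1.0)]
inst_w, task_w = self_paced_rounds(losses)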
Latent Multi-task Architecture Learning
Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter-sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of these problems in isolation. In this work we present an approach that learns a latent multi-task architecture jointly addressing (a)-(c). We present experiments on synthetic data and data from OntoNotes 5.0, covering four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15% average error reductions over common approaches to MTL.
Comment: To appear in Proceedings of AAAI 201
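A rough PyTorch sketch of learning "how much to share" with trainable mixing coefficients between two task-specific columns, in the spirit of soft parameter sharing; this is not the paper's exact parameterization, and all dimensions and names are illustrative assumptions.

# Two task-specific columns whose hidden states are mixed by a learned 2x2
# matrix at each layer; the mixing weights decide how much the tasks share.
import torch
import torch.nn as nn

class SoftSharingLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.layer_a = nn.Linear(dim, dim)
        self.layer_b = nn.Linear(dim, dim)
        # Learned jointly with the task losses; initialized to no sharing.
        self.alpha = nn.Parameter(torch.eye(2))

    def forward(self, ha, hb):
        ha, hb = torch.relu(self.layer_a(ha)), torch.relu(self.layer_b(hb))
        mixed_a = self.alpha[0, 0] * ha + self.alpha[0, 1] * hb
        mixed_b = self.alpha[1, 0] * ha + self.alpha[1, 1] * hb
        return mixed_a, mixed_b

class LatentSharingNet(nn.Module):
    def __init__(self, dim=32, depth=3):
        super().__init__()
        self.blocks = nn.ModuleList([SoftSharingLayer(dim) for _ in range(depth)])
        self.head_a, self.head_b = nn.Linear(dim, 1), nn.Linear(dim, 1)

    def forward(self, x):
        ha = hb = x
        for blk in self.blocks:
            ha, hb = blk(ha, hb)
        return self.head_a(ha), self.head_b(hb)

# The relative weights of the two task losses can likewise be tuned or learned.
net = LatentSharingNet()
x = torch.randn(8, 32)
out_a, out_b = net(x)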