Distributed variance regularized Multitask Learning
Past research on Multitask Learning (MTL) has focused mainly on devising adequate regularizers and less on their scalability. In this paper, we present a method to scale up MTL methods which penalize the variance of the task weight vectors. The method builds upon the alternating direction method of multipliers to decouple the variance regularizer. It can be efficiently implemented by a distributed algorithm, in which the tasks are first solved independently and subsequently corrected to pool information from the other tasks. We show that the method works well in practice and converges in a few distributed iterations. Furthermore, we empirically observe that the number of iterations is nearly independent of the number of tasks, yielding a computational gain of O(T) over standard solvers. We also present experiments on a large URL classification dataset, which is challenging both in terms of the volume of data points and dimensionality. Our results confirm that MTL can obtain superior performance over either learning a common model or independent task learning.
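The decoupling idea described in the abstract lends itself to a short sketch. The following is a minimal illustration, not the authors' implementation: it applies ADMM to T squared-loss tasks with a variance penalty on the task weight vectors, and all names (variance_mtl_admm, lam, rho, etc.) are placeholders.

```python
import numpy as np

def variance_mtl_admm(X_tasks, y_tasks, lam=1.0, rho=1.0, n_iter=50):
    """ADMM sketch for  min_w  sum_t 0.5*||X_t w_t - y_t||^2
    + (lam/2) * sum_t ||w_t - mean(w)||^2  (variance penalty)."""
    T = len(X_tasks)
    d = X_tasks[0].shape[1]
    w = [np.zeros(d) for _ in range(T)]   # local task solutions
    z = [np.zeros(d) for _ in range(T)]   # consensus copies
    u = [np.zeros(d) for _ in range(T)]   # scaled dual variables

    for _ in range(n_iter):
        # 1) tasks solved independently (can run in parallel on workers)
        for t in range(T):
            X, y = X_tasks[t], y_tasks[t]
            A = X.T @ X + rho * np.eye(d)
            b = X.T @ y + rho * (z[t] - u[t])
            w[t] = np.linalg.solve(A, b)
        # 2) correction step pooling information across tasks:
        #    closed-form minimizer of the variance penalty + quadratic term
        v = [w[t] + u[t] for t in range(T)]
        v_bar = np.mean(v, axis=0)
        z = [(rho * v[t] + lam * v_bar) / (rho + lam) for t in range(T)]
        # 3) dual update
        u = [u[t] + w[t] - z[t] for t in range(T)]
    return z
```

Only the z and u variables need to be exchanged between the coordinator and the workers, which is consistent with the claim that the number of outer iterations, rather than the number of tasks, governs the overall cost.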
Adaptation and learning over networks for nonlinear system modeling
In this chapter, we analyze nonlinear filtering problems in distributed
environments, e.g., sensor networks or peer-to-peer protocols. In these
scenarios, the agents in the environment receive measurements in a streaming
fashion, and they are required to estimate a common (nonlinear) model by
alternating local computations and communications with their neighbors. We
focus on the important distinction between single-task problems, where the
underlying model is common to all agents, and multitask problems, where each
agent might converge to a different model due to, e.g., spatial dependencies or
other factors. Currently, most of the literature on distributed learning in the
nonlinear case has focused on the single-task case, which may be a strong
limitation in real-world scenarios. After introducing the problem and reviewing
the existing approaches, we describe a simple kernel-based algorithm tailored
for the multitask case. We evaluate the proposal on a simulated benchmark task,
and we conclude by detailing currently open problems and lines of research.
Comment: To be published as a chapter in 'Adaptive Learning Methods for Nonlinear System Modeling', Elsevier Publishing, Eds. D. Comminiello and J.C. Principe (2018).
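As a rough illustration of the kind of algorithm discussed here, and not the chapter's exact method, the sketch below lets each agent run LMS on a random-Fourier-feature approximation of a Gaussian kernel and then softly combine its weights with those of its neighbors; the feature map, neighbors, and beta are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rff(dim_in, n_features=100, sigma=1.0):
    """Random Fourier features approximating a Gaussian kernel."""
    W = rng.normal(scale=1.0 / sigma, size=(n_features, dim_in))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return lambda x: np.sqrt(2.0 / n_features) * np.cos(W @ x + b)

def multitask_diffusion_step(weights, streams, phi, neighbors, mu=0.1, beta=0.9):
    """One round: local kernel-LMS adaptation in feature space, then a soft
    combination with neighbors; beta < 1 lets each agent keep its own task bias."""
    adapted = []
    for k, w in enumerate(weights):
        x, d = streams[k]                 # current streaming sample of agent k
        z = phi(x)                        # finite-dimensional kernel features
        err = d - w @ z
        adapted.append(w + mu * err * z)  # local LMS update
    combined = []
    for k, w in enumerate(adapted):
        nbr_avg = np.mean([adapted[l] for l in neighbors[k]], axis=0)
        combined.append(beta * w + (1 - beta) * nbr_avg)  # multitask combine
    return combined
```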
Multitask Diffusion Adaptation over Networks
Adaptive networks are suitable for decentralized inference tasks, e.g., to
monitor complex natural phenomena. Recent research works have intensively
studied distributed optimization problems in the case where the nodes have to
estimate a single optimum parameter vector collaboratively. However, there are
many important applications that are multitask-oriented in the sense that there
are multiple optimum parameter vectors to be inferred simultaneously, in a
collaborative manner, over the area covered by the network. In this paper, we
employ diffusion strategies to develop distributed algorithms that address
multitask problems by minimizing an appropriate mean-square error criterion
with ℓ2-regularization. The stability and convergence of the algorithm in the mean and in the mean-square sense are analyzed. Simulations are conducted to
verify the theoretical findings, and to illustrate how the distributed strategy
can be used in several useful applications related to spectral sensing, target
localization, and hyperspectral data unmixing.
Comment: 29 pages, 11 figures, submitted for publication.
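A schematic version of this kind of update, written for a single node k with uniform regularization weights over its neighbors, is sketched below; it is only an illustration of a stochastic-gradient step on the regularized mean-square-error criterion, not the paper's exact recursion.

```python
import numpy as np

def multitask_lms_update(w, k, x_k, d_k, neighbors_k, mu=0.05, eta=0.1):
    """Stochastic-gradient step on node k's local MSE plus an l2 term that
    pulls its estimate toward those of its neighbors (multitask coupling)."""
    err = d_k - x_k @ w[k]                              # instantaneous error
    grad_reg = sum(w[l] - w[k] for l in neighbors_k)    # l2 co-regularization
    return w[k] + mu * (err * x_k + eta * grad_reg)
```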
Proximal Multitask Learning over Networks with Sparsity-inducing Coregularization
In this work, we consider multitask learning problems where clusters of nodes
are interested in estimating their own parameter vector. Cooperation among
clusters is beneficial when the optimal models of adjacent clusters have a good
number of similar entries. We propose a fully distributed algorithm for solving
this problem. The approach relies on minimizing a global mean-square error
criterion regularized by non-differentiable terms to promote cooperation among
neighboring clusters. A general diffusion forward-backward splitting strategy
is introduced. Then, it is specialized to the case of sparsity promoting
regularizers. A closed-form expression for the proximal operator of a weighted
sum of ℓ1-norms is derived to achieve higher efficiency. We also provide conditions on the step-sizes that ensure convergence of the algorithm in the mean and mean-square error sense. Simulations are conducted to illustrate the effectiveness of the strategy.
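The basic ingredient behind such closed forms is the proximal operator of a weighted ℓ1 term, which reduces to elementwise soft-thresholding; the paper's operator, for a weighted sum of ℓ1-norms of differences to neighbors, builds on the same idea but is more involved. A minimal sketch of the elementwise case follows, with the weights c treated as placeholders.

```python
import numpy as np

def prox_weighted_l1(v, c, step=1.0):
    """Proximal operator of  w -> sum_j c_j * |w_j|  evaluated at v:
    elementwise soft-thresholding with per-coordinate thresholds step*c."""
    thresh = step * np.asarray(c)
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

# Forward-backward (proximal gradient) step, schematically:
# w_next = prox_weighted_l1(w - step * grad_mse(w), c, step)
```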
A Multitask Diffusion Strategy with Optimized Inter-Cluster Cooperation
We consider a multitask estimation problem where nodes in a network are
divided into several connected clusters, with each cluster performing a
least-mean-squares estimation of a different random parameter vector. Inspired
by the adapt-then-combine diffusion strategy, we propose a multitask diffusion
strategy whose mean stability can be ensured whenever individual nodes are
stable in the mean, regardless of the inter-cluster cooperation weights. In
addition, the proposed strategy is able to achieve an asymptotically unbiased
estimation when the parameters have the same mean. We also develop an
inter-cluster cooperation weights selection scheme that allows each node in the
network to locally optimize its inter-cluster cooperation weights. Numerical
results demonstrate that our approach leads to a lower average steady-state
network mean-square deviation, compared with using weights selected by various
other commonly adopted methods in the literature.
Comment: 30 pages, 8 figures, submitted to IEEE Journal of Selected Topics in Signal Processing.
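The adapt-then-combine structure referred to above can be sketched as follows. The split into intra- and inter-cluster combination weights mirrors the description in the abstract, but the recursion and the weight-optimization scheme of the paper are not reproduced here; the weight matrices A_intra and A_inter are placeholders, with each row of their sum assumed to sum to one.

```python
import numpy as np

def atc_step(w, data, A_intra, A_inter, mu=0.05):
    """One adapt-then-combine round over N nodes. A_intra holds combination
    weights within a node's own cluster, A_inter the (typically smaller)
    inter-cluster cooperation weights."""
    N = len(w)
    # Adapt: local LMS step at every node
    psi = []
    for k in range(N):
        x_k, d_k = data[k]
        psi.append(w[k] + mu * (d_k - x_k @ w[k]) * x_k)
    # Combine: weighted average over intra- and inter-cluster neighbors
    w_next = []
    for k in range(N):
        w_next.append(sum((A_intra[k, l] + A_inter[k, l]) * psi[l]
                          for l in range(N)))
    return w_next
```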
ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning
We propose a new paradigm to continually evolve pretrained models, denoted
ColD Fusion. It provides the benefits of multitask learning but leverages
distributed computation with limited communication and eliminates the need for
shared data. Consequently, ColD Fusion can give rise to a synergistic loop,
where finetuned models can be recycled to continually improve the pretrained
model they are based upon. We show that ColD Fusion yields comparable benefits
to multitask training by producing a model that (a) attains strong performance
on all of the datasets it was trained on; and (b) is a better starting point
for finetuning on unseen datasets. We show that ColD Fusion outperforms RoBERTa
and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, the ColD Fusion-based model outperforms RoBERTa by 2.33 points on average without any changes to the architecture.
Comment: ACL 2023.
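At a high level, the loop alternates contributor-side finetuning with a fusion step on the shared model. The sketch below uses simple parameter averaging as the fusion operator and treats the finetune callable and the parameter dictionaries as placeholders, so it illustrates the collaborative loop rather than the paper's exact recipe.

```python
import numpy as np

def fuse(models):
    """Fuse contributor models by averaging parameters with shared names."""
    return {name: np.mean([m[name] for m in models], axis=0)
            for name in models[0]}

def cold_fusion_loop(pretrained, datasets, finetune, n_rounds=5):
    """Collaborative loop: each round, every contributor finetunes the current
    shared model on its own data; the finetuned copies are then fused into the
    next shared model, so no raw data is ever exchanged."""
    shared = dict(pretrained)
    for _ in range(n_rounds):
        contributions = [finetune(dict(shared), ds) for ds in datasets]
        shared = fuse(contributions)
    return shared
```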
Diffusion LMS for clustered multitask networks
Recent research works on distributed adaptive networks have intensively
studied the case where the nodes estimate a common parameter vector
collaboratively. However, there are many applications that are
multitask-oriented in the sense that there are multiple parameter vectors that
need to be inferred simultaneously. In this paper, we employ diffusion
strategies to develop distributed algorithms that address clustered multitask
problems by minimizing an appropriate mean-square error criterion with
ℓ2-regularization. Some results on the mean-square stability and convergence of the algorithm are also provided. Simulations are conducted to illustrate the theoretical findings.
Comment: 5 pages, 6 figures, submitted to ICASSP 2014.