Distributed Multi-Task Relationship Learning
Multi-task learning aims to learn multiple tasks jointly by exploiting their
relatedness to improve the generalization performance for each task.
Traditionally, to perform multi-task learning, one needs to centralize data
from all the tasks to a single machine. However, in many real-world
applications, data of different tasks may be geo-distributed over different
local machines. Due to the heavy communication cost of transmitting the data and
concerns over data privacy and security, it is often infeasible to send the data
of different tasks to a master machine to perform multi-task learning. Therefore,
in this paper, we propose a distributed multi-task learning framework that
simultaneously learns a predictive model for each task and the relationships
between tasks, alternating between the two in the parameter server paradigm. In
our framework, we first offer a general dual form for a family of regularized
multi-task relationship learning methods. Subsequently, we propose a
communication-efficient primal-dual distributed optimization algorithm to solve
the dual problem by carefully designing local subproblems to make the dual
problem decomposable. Moreover, we provide a theoretical convergence analysis
for the proposed algorithm, which is specific for distributed multi-task
relationship learning. We conduct extensive experiments on both synthetic and
real-world datasets to evaluate our proposed framework in terms of
effectiveness and convergence.
Comment: To appear in KDD 201
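The alternating scheme described above can be illustrated with a small centralized sketch. This is a hypothetical simplification, not the paper's distributed primal-dual algorithm: it alternates between ridge-style solves for the per-task weight matrix W and a closed-form update of a task-covariance matrix Omega under the penalty tr(W Omega^{-1} W^T). The function name, the constants, and the closed-form Omega step are assumptions for illustration.

```python
import numpy as np

def mtrl_alternating(Xs, ys, lam=0.1, iters=10):
    """Toy centralized sketch of regularized multi-task relationship
    learning (hypothetical simplification): alternate between per-task
    weights W and a task covariance Omega."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    Omega = np.eye(T) / T
    for _ in range(iters):
        # W-step: per-task ridge-like solve; task coupling enters via Omega^{-1}
        Oinv = np.linalg.inv(Omega)
        for t in range(T):
            A = Xs[t].T @ Xs[t] + lam * Oinv[t, t] * np.eye(d)
            b = Xs[t].T @ ys[t] - lam * (W @ Oinv[:, t] - W[:, t] * Oinv[t, t])
            W[:, t] = np.linalg.solve(A, b)
        # Omega-step: closed-form update Omega proportional to (W^T W)^{1/2}
        M = W.T @ W + 1e-8 * np.eye(T)
        vals, vecs = np.linalg.eigh(M)
        S = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T
        Omega = S / np.trace(S)
    return W, Omega
```

The learned Omega encodes pairwise task relatedness; the paper's contribution is to decompose the dual of this kind of objective so the W-step runs on geo-distributed machines.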
RHFedMTL: Resource-Aware Hierarchical Federated Multi-Task Learning
The rapid development of artificial intelligence (AI) across massive
applications, including the Internet of Things on cellular networks, raises
technical challenges such as privacy, heterogeneity, and resource efficiency.
Federated learning is an effective way to enable AI over massive distributed
nodes while preserving security.
However, conventional works mostly focus on learning a single global model
for one task across the network, and generally struggle to handle multi-task
learning (MTL) scenarios with stragglers at an acceptable computation and
communication cost. Meanwhile, it is challenging to ensure privacy while
maintaining coupled multi-task learning across multiple base stations (BSs)
and terminals. In this paper, inspired by the natural cloud-BS-terminal
hierarchy of cellular networks, we provide a viable
resource-aware hierarchical federated MTL (RHFedMTL) solution to meet the
heterogeneity of tasks by solving different tasks within the BSs and
aggregating the multi-task results in the cloud without compromising privacy.
Specifically, a primal-dual method is leveraged to transform the coupled MTL
problem into local optimization sub-problems within the BSs. Furthermore,
unlike existing methods that reduce resource cost simply by changing the
aggregation frequency, we examine the intricate relationship between resource
consumption and learning accuracy, and develop a resource-aware learning
strategy for local terminals and BSs that meets the resource budget. Extensive
simulation results
demonstrate the effectiveness and superiority of RHFedMTL in terms of improving
the learning accuracy and boosting the convergence rate.
Comment: 11 pages, 8 figures
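The cloud-BS-terminal hierarchy can be sketched as a two-level aggregation. This is a hypothetical illustration of the hierarchy only, not the paper's primal-dual resource-aware scheme: each BS averages its terminals' model updates, and the cloud averages the per-BS models weighted by how many terminals each serves. The function name and weighting rule are assumptions.

```python
import numpy as np

def hierarchical_fed_round(bs_groups):
    """Toy two-level aggregation sketch (hypothetical): bs_groups is a
    list with one entry per base station, each entry a list of terminal
    model-parameter vectors."""
    bs_models, bs_sizes = [], []
    for terminals in bs_groups:
        terminals = np.asarray(terminals, dtype=float)
        bs_models.append(terminals.mean(axis=0))   # BS-level averaging
        bs_sizes.append(len(terminals))
    weights = np.asarray(bs_sizes, dtype=float) / sum(bs_sizes)
    # Cloud-level aggregation, weighted by terminals per BS
    return np.average(np.stack(bs_models), axis=0, weights=weights)
```

In the paper's setting the BS level additionally solves its own task's sub-problem, so what flows upward is a multi-task result rather than a plain average.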
Learning in Immune Network Algorithm for Multi-Robot Cooperation
Multi-robot systems are frequently associated with problems of robot coordination and cooperation, as they require real-time and distributed control. This paper describes the biological immune system, the immune response, and immune learning through somatic hypermutation. The relationship between the immune system and multi-robot systems is presented to show the connection between the two. To improve cooperative behavior in multi-robot systems, an immune network algorithm with an extended learning ability is proposed. The Jerne and Farmer immune network models serve as the foundation of our approach. The proposed algorithm builds on our previous conceptual model and is designed specifically for a multi-robot foraging task with five different action strategies. The learning concept in the antibody is applied to the robot's action; the robot swarm is therefore expected to complete the task faster, since the robots adapt to the environment. As future work, the proposed algorithm will be implemented in the robot simulation environment ARGoS.
Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning
A lot of the recent success in natural language processing (NLP) has been
driven by distributed vector representations of words trained on large amounts
of text in an unsupervised manner. These representations are typically used as
general purpose features for words across a range of NLP problems. However,
extending this success to learning representations of sequences of words, such
as sentences, remains an open problem. Recent work has explored unsupervised as
well as supervised learning techniques with different training objectives to
learn general purpose fixed-length sentence representations. In this work, we
present a simple, effective multi-task learning framework for sentence
representations that combines the inductive biases of diverse training
objectives in a single model. We train this model on several data sources with
multiple training objectives on over 100 million sentences. Extensive
experiments demonstrate that sharing a single recurrent sentence encoder across
weakly related tasks leads to consistent improvements over previous methods. We
present substantial improvements in the context of transfer learning and
low-resource settings using our learned general-purpose representations.
Comment: Accepted at ICLR 201
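The architecture described above, one shared encoder with task-specific heads, can be sketched minimally. This is a hypothetical toy: a mean-pooled linear encoder stands in for the paper's recurrent encoder, and all sizes, names, and initializations are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedEncoderMTL:
    """Toy multi-task sentence-representation sketch (hypothetical):
    a single shared encoder maps a sentence (a sequence of word vectors)
    to a fixed-length vector; each task attaches its own linear head."""

    def __init__(self, d_word, d_sent, task_dims):
        self.enc = rng.normal(0, 0.1, (d_word, d_sent))    # shared weights
        self.heads = {t: rng.normal(0, 0.1, (d_sent, k))   # per-task heads
                      for t, k in task_dims.items()}

    def encode(self, words):
        # words: (n_tokens, d_word); mean-pool then project (stand-in
        # for the paper's recurrent encoder)
        return np.tanh(words.mean(axis=0) @ self.enc)

    def predict(self, words, task):
        return self.encode(words) @ self.heads[task]
```

Training would backpropagate each task's loss through its head and the shared encoder, so the encoder absorbs the inductive biases of all objectives at once.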
A Generative Model of Words and Relationships from Multiple Sources
Neural language models are a powerful tool to embed words into semantic
vector spaces. However, learning such models generally relies on the
availability of abundant and diverse training examples. In highly specialised
domains this requirement may not be met due to difficulties in obtaining a
large corpus, or the limited range of expression in average use. Such domains
may encode prior knowledge about entities in a knowledge base or ontology. We
propose a generative model which integrates evidence from diverse data sources,
enabling the sharing of semantic information. We achieve this by generalising
the concept of co-occurrence from distributional semantics to include other
relationships between entities or words, which we model as affine
transformations on the embedding space. We demonstrate the effectiveness of
this approach by outperforming recent models on a link prediction task and by
demonstrating its ability to profit from partially or fully unobserved training
labels. We further demonstrate the usefulness of learning from different data
sources with overlapping vocabularies.
Comment: 8 pages, 5 figures; incorporated feedback from reviewers; to appear
in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence
201
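The key idea of generalizing co-occurrence, modeling each relationship as an affine transformation on the embedding space, can be shown with a minimal scoring rule. This is a hypothetical sketch, not the paper's generative model: a relation's affine map is applied to the head embedding and the tail is scored by negative squared distance; with A = I and b = 0 this recovers plain embedding similarity, i.e. ordinary co-occurrence.

```python
import numpy as np

def relation_score(h, t, A, b):
    """Toy affine-relation score (hypothetical rule): apply the
    relation's affine map (A, b) to head embedding h and compare
    to tail embedding t by negative squared distance."""
    return -np.sum((A @ h + b - t) ** 2)
```

A knowledge-base triple then contributes evidence about (A, b) alongside distributional co-occurrence evidence, which is how the model shares semantic information across sources.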