6 research outputs found
Simple, Efficient and Convenient Decentralized Multi-Task Learning for Neural Networks
Artificial intelligence relying on machine learning is increasingly used on small, personal, network-connected devices such as smartphones and voice assistants, and these applications will likely evolve with the development of the Internet of Things. The learning process requires a lot of data, often real users' data, as well as computing power. Decentralized machine learning can help protect users' privacy by keeping sensitive training data on users' devices, and has the potential to alleviate the cost borne by service providers by off-loading some of the learning effort to user devices. Unfortunately, most approaches proposed so far for distributed learning with neural networks are single-task and do not transfer easily to multi-task problems, in which users seek to solve related but distinct learning tasks; the few existing multi-task approaches have serious limitations. In this paper, we propose a novel learning method for neural networks that is decentralized and multi-task, and that keeps users' data local. Our approach works with different learning algorithms and on various types of neural networks. We formally analyze the convergence of our method, and we evaluate its efficiency in different situations, on various kinds of neural networks and with different learning algorithms, demonstrating its benefits in terms of learning quality and convergence.
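The abstract does not spell out the training procedure, but the general idea of decentralized multi-task learning (peers share parameters that capture common structure while task-specific parts stay private and data never leaves the device) can be illustrated with a minimal sketch. Everything below is a hypothetical simplification, not the paper's actual algorithm: the Peer class, the split into a shared feature extractor and a private head, and the single global averaging step standing in for pairwise gossip are all invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    class Peer:
        """One user device: a shared feature extractor plus a private task head."""
        def __init__(self, in_dim, hid_dim, out_dim):
            self.W_shared = rng.normal(0, 0.1, (in_dim, hid_dim))  # exchanged with peers
            self.W_head = rng.normal(0, 0.1, (hid_dim, out_dim))   # stays local (task-specific)

        def local_sgd_step(self, x, y, lr=0.01):
            # One gradient step on local data (squared loss, linear layers for brevity).
            h = x @ self.W_shared
            err = h @ self.W_head - y
            grad_head = h.T @ err / len(x)
            grad_shared = x.T @ (err @ self.W_head.T) / len(x)
            self.W_head -= lr * grad_head
            self.W_shared -= lr * grad_shared

    def average_shared(peers):
        """Average only the shared parameters; heads remain personal.
        A real decentralized system would do this via pairwise gossip."""
        mean_shared = np.mean([p.W_shared for p in peers], axis=0)
        for p in peers:
            p.W_shared = mean_shared.copy()

    # Toy run: 4 peers, each training on its own local data.
    peers = [Peer(in_dim=5, hid_dim=8, out_dim=1) for _ in range(4)]
    for _ in range(100):
        for p in peers:
            x = rng.normal(size=(16, 5))
            y = x.sum(axis=1, keepdims=True) + rng.normal(0, 0.1, (16, 1))
            p.local_sgd_step(x, y)
        average_shared(peers)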
Fully Decentralized Joint Learning of Personalized Models and Collaboration Graphs
We consider the fully decentralized machine learning scenario where many users with personal datasets collaborate to learn models through local peer-to-peer exchanges, without a central coordinator. We propose to train personalized models that leverage a collaboration graph describing the relationships between the users' personal tasks, which we learn jointly with the models. Our fully decentralized optimization procedure alternates between training nonlinear models given the graph in a greedy boosting manner, and updating the collaboration graph (with controlled sparsity) given the models. Throughout the process, users exchange messages only with a small number of peers (their direct neighbors in the graph and a few random users), ensuring that the procedure naturally scales to large numbers of users. We analyze the convergence rate, memory and communication complexity of our approach, and demonstrate its benefits compared to competing techniques on synthetic and real datasets.
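The alternating structure described in the abstract (models given graph, graph given models) can be sketched in a few lines. This is a deliberately simplified illustration, not the paper's method: it substitutes linear models and a quadratic graph-smoothness penalty for the greedy boosting of nonlinear models, and all names (update_models, update_graph, thresh) are invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    n_users, dim, lam, thresh = 6, 4, 0.5, 0.3

    # Toy setup: each user has a personal linear regression task; the
    # ground-truth weights of "related" users are similar.
    base = rng.normal(size=dim)
    truths = [base + 0.3 * rng.normal(size=dim) for _ in range(n_users)]
    X = [rng.normal(size=(20, dim)) for _ in range(n_users)]
    y = [X[i] @ truths[i] for i in range(n_users)]

    models = [np.zeros(dim) for _ in range(n_users)]
    W = np.ones((n_users, n_users)) - np.eye(n_users)   # collaboration graph

    def update_models(models, W, lr=0.05):
        """One gradient step per user on the local loss plus a smoothness term
        that pulls each model toward the models of its collaborators."""
        out = []
        for i, m in enumerate(models):
            grad = X[i].T @ (X[i] @ m - y[i]) / len(y[i])
            smooth = sum(W[i, j] * (m - models[j]) for j in range(n_users))
            out.append(m - lr * (grad + lam * smooth))
        return out

    def update_graph(models):
        """Reweight edges by model similarity and drop weak edges (sparsity)."""
        W = np.zeros((n_users, n_users))
        for i in range(n_users):
            for j in range(i + 1, n_users):
                w = np.exp(-np.linalg.norm(models[i] - models[j]) ** 2)
                W[i, j] = W[j, i] = w if w >= thresh else 0.0
        return W

    for _ in range(100):   # alternate: models given graph, graph given models
        models = update_models(models, W)
        W = update_graph(models)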
Advances and Open Problems in Federated Learning
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Published in Foundations and Trends in Machine Learning Vol 4 Issue 1. See: https://www.nowpublishers.com/article/Details/MAL-08
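A common concrete instantiation of the setting this survey covers is federated averaging (FedAvg; McMahan et al., 2017): the server broadcasts the current model, clients train it locally on their own data, and the server averages the returned weights. Here is a minimal sketch under simplifying assumptions (linear least-squares model, full client participation, equal dataset sizes); all names are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    n_clients, dim, rounds, local_steps = 10, 5, 50, 5

    # Each client holds its own (never shared) dataset for a common linear task.
    truth = rng.normal(size=dim)
    client_X = [rng.normal(size=(30, dim)) for _ in range(n_clients)]
    client_y = [x @ truth + rng.normal(0, 0.1, size=30) for x in client_X]

    def client_update(global_w, X, y, lr=0.05):
        """Local training: several gradient steps starting from the server model.
        Raw data stays on the client; only the updated weights are returned."""
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * X.T @ (X @ w - y) / len(y)
        return w

    global_w = np.zeros(dim)
    for _ in range(rounds):
        # Server orchestration: broadcast the model, collect client updates,
        # and average them (equal weights here; in general, by dataset size).
        updates = [client_update(global_w, client_X[i], client_y[i])
                   for i in range(n_clients)]
        global_w = np.mean(updates, axis=0)

    print("distance to ground truth:", np.linalg.norm(global_w - truth))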