Vertical Federated Learning: A Structured Literature Review
Federated Learning (FL) has emerged as a promising distributed learning paradigm with the added advantage of data privacy. With the growing interest in collaboration among data owners, FL has gained significant attention from organizations. The idea of FL is to enable collaborating participants to train machine learning (ML) models on decentralized data without breaching privacy. In simpler words, federated learning is the approach of "bringing the model to the data, instead of bringing the data to the model". Federated learning, when applied to data that is partitioned vertically across participants, is able to build a complete ML model by combining local models trained only on the data with distinct features at the local sites. This architecture of FL is referred to as vertical federated learning (VFL), which differs from conventional FL on horizontally partitioned data. As VFL differs from conventional FL, it comes with its own issues and challenges. In this paper, we present a structured literature review discussing the state-of-the-art approaches in VFL. Additionally, the literature review highlights the existing solutions to challenges in VFL and provides potential research directions in this domain.
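The vertical setting described above — parties holding distinct features for the same aligned samples, combined into one complete model — can be sketched minimally. This is an illustrative toy (a split linear model trained with logistic regression; all names, shapes, and the two-party setup are assumptions for the example, not a method from any surveyed paper):

```python
import numpy as np

# Hypothetical vertical partition: two parties hold DIFFERENT feature
# columns for the SAME set of aligned samples.
rng = np.random.default_rng(0)
n_samples = 100
X_a = rng.normal(size=(n_samples, 3))  # party A's private features
X_b = rng.normal(size=(n_samples, 2))  # party B's private features
y = (X_a[:, 0] + X_b[:, 1] > 0).astype(float)  # shared labels

# Each party keeps a local sub-model over its own features only.
w_a = np.zeros(3)
w_b = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(200):
    # Each party computes a partial score locally; only these partial
    # scores (not raw features) are combined into the full prediction.
    partial_a = X_a @ w_a
    partial_b = X_b @ w_b
    pred = sigmoid(partial_a + partial_b)
    err = pred - y
    # Each party updates its own weights from the shared error signal.
    w_a -= lr * (X_a.T @ err) / n_samples
    w_b -= lr * (X_b.T @ err) / n_samples

accuracy = ((sigmoid(X_a @ w_a + X_b @ w_b) > 0.5) == y).mean()
```

The key property the sketch illustrates is that neither party ever sees the other's raw feature columns; only intermediate scores cross the boundary. Real VFL systems additionally protect those intermediate values (e.g. with encryption), which this sketch omits.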
Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments
Decentralized systems are a subset of distributed systems where multiple
authorities control different components and no authority is fully trusted by
all. This implies that any component in a decentralized system is potentially
adversarial. We review fifteen years of research on decentralization and
privacy, and provide an overview of key systems, as well as key insights for
designers of future systems. We show that decentralized designs can enhance
privacy, integrity, and availability but also require careful trade-offs in
terms of system complexity, properties provided, and degree of
decentralization. These trade-offs need to be understood and navigated by
designers. We argue that a combination of insights from cryptography,
distributed systems, and mechanism design, aligned with the development of
adequate incentives, are necessary to build scalable and successful
privacy-preserving decentralized systems.
Federated learning in gaze recognition (FLIGR)
The efficiency and generalizability of a deep learning model depend on the amount and diversity of training data. Although huge amounts of data are being collected, these data are not stored in centralized servers for further processing. It is often infeasible to collect and share data in centralized servers due to various medical data regulations. This need for diversely distributed data, combined with infeasible centralized storage, calls for Federated Learning (FL). FL is a clever way of utilizing privately stored data in model building without the need for data sharing. The idea is to train several models locally with the same architecture, share the model weights between the collaborators, aggregate the model weights, and use the resulting global weights for further model building. FL is an iterative algorithm that repeats the above steps over a defined number of rounds. By doing so, we negate the need for centralized data sharing and avoid the several regulations tied to it. In this work, federated learning is applied to gaze recognition, the task of identifying where the doctor is looking. A global model is built by repeatedly aggregating local models trained on data from 8 local institutions using the FL algorithm for 4 federated rounds. The results show an increase in the performance of the global model over the federated rounds. The study also shows that the global model can be trained once more locally at the end of FL at each institutional level to fine-tune the model to the local data.
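The iterative procedure described — train locally, share weights, average them, repeat — is essentially federated averaging. A minimal sketch on synthetic data (the 8-institution linear-regression setup, learning rates, and epoch counts are illustrative assumptions, not details from the FLIGR study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical private data: 8 institutions, each holding samples of the
# same underlying linear task but never sharing raw data.
true_w = np.array([2.0, -1.0, 0.5])
local_data = []
for _ in range(8):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    local_data.append((X, y))

def local_train(w, X, y, lr=0.1, epochs=20):
    """Each institution trains locally, starting from the global weights."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

global_w = np.zeros(3)
for _ in range(4):  # 4 federated rounds, as in the study above
    # Each collaborator trains on its own data; only weights are shared.
    local_ws = [local_train(global_w, X, y) for X, y in local_data]
    # Aggregate by averaging the model weights (FedAvg-style).
    global_w = np.mean(local_ws, axis=0)
```

After the rounds, `global_w` approximates the weights a centrally trained model would find, even though no institution's raw data ever left its site. The optional final step the abstract mentions (one more local pass per institution) corresponds to calling `local_train(global_w, X, y)` once more on each site's own data.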
- …