Empirical Analysis of Privacy Preservation Models for Cyber Physical Deployments from a Pragmatic Perspective
Privacy protection in cyber-physical deployments spans several sectors and calls for methods such as encryption, hashing, secure routing, obfuscation, and controlled data exchange. A privacy preservation model for cyber-physical deployments should therefore account for data privacy, location privacy, temporal privacy, node privacy, route privacy, and other privacy types. Incorporating these models into a wireless network is computationally demanding and affects quality-of-service (QoS) variables including end-to-end latency, throughput, energy use, and packet delivery ratio. Network designers must therefore choose privacy models that have the least negative influence on these QoS characteristics. In practice, designers have applied common privacy models to protect cyber-physical infrastructure without considering the interconnection and interfacing limitations of these installations; as a result, even as network security has increased, the network's overall quality of service has dropped. This research examines and analyzes state-of-the-art methods for preserving privacy in cyber-physical deployments without compromising their QoS performance, with the aim of lowering the likelihood of such degradation. The models are rated by the degree of privacy they provide, end-to-end data-transfer delay, energy consumption, and network throughput. The comparison will help network designers and researchers select the optimal models for their particular deployments, maximizing privacy while maintaining a high degree of service performance.
Additionally, the author of this book offers a variety of tactics that, when combined, can improve performance, and presents a range of proven machine learning approaches that network designers may consider and examine to enhance privacy performance.
Applications of Federated Learning in Smart Cities: Recent Advances, Taxonomy, and Open Challenges
Federated learning plays an important role in the development of smart cities.
With the growth of big data and artificial intelligence, data privacy
protection has become a pressing problem in this process, and federated
learning is capable of solving it. This paper starts with the current
developments of federated learning and its applications in various fields, and
conducts a comprehensive investigation. It summarizes the latest research on
the application of federated learning in various fields of smart cities,
providing an in-depth view of its current development in the Internet of
Things, transportation, communications, finance, medicine, and other fields.
Before that, we introduce the background, definition, and key technologies of
federated learning. Furthermore, we review the key technologies and the latest
results. Finally, we discuss the future applications and research directions
of federated learning in smart cities.
Federated Learning with Imbalanced and Agglomerated Data Distribution for Medical Image Classification
Federated learning (FL), training deep models from decentralized data without
privacy leakage, has drawn great attention recently. Two common issues in FL,
namely data heterogeneity from the local perspective and class imbalance from
the global perspective, have limited FL's performance. These two coupling
problems are under-explored, and the few existing studies may not be
sufficiently realistic in modeling data distributions in practical scenarios
(e.g., medical settings). One common observation is that the overall class distribution
across clients is imbalanced (e.g. common vs. rare diseases) and data tend to
be agglomerated to those more advanced clients (i.e., the data agglomeration
effect), which cannot be modeled by existing settings. Inspired by real medical
imaging datasets, we identify and formulate a new and more realistic data
distribution denoted as L2 distribution where global class distribution is
highly imbalanced and data distributions across clients are imbalanced but
forming a certain degree of data agglomeration. To pursue effective FL under
this distribution, we propose a novel privacy-preserving framework named FedIIC
that calibrates deep models to alleviate bias caused by imbalanced training. To
calibrate the feature extractor part, intra-client contrastive learning with a
modified similarity measure and inter-client contrastive learning guided by
shared global prototypes are introduced to produce a uniform embedding
distribution of all classes across clients. To calibrate the classification
heads, a softmax cross entropy loss with difficulty-aware logit adjustment is
constructed to ensure balanced decision boundaries of all classes. Experimental
results on publicly-available datasets demonstrate the superior performance of
FedIIC in dealing with both the proposed realistic modeling and the existing
modeling of the two coupling problems.
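As an illustration of logit adjustment for imbalanced softmax cross-entropy, the following sketch uses class-frequency priors as the additive offsets. This is an assumption for illustration only: FedIIC's actual offsets are difficulty-aware and estimated during federated training, which is not reproduced here.

```python
import numpy as np

def logit_adjusted_ce(logits, labels, class_counts, tau=1.0):
    """Softmax cross-entropy with additive, frequency-based logit adjustment.

    Frequent classes receive a larger additive offset, so minimizing the loss
    pushes decision boundaries away from rare classes, counteracting the
    global class imbalance described in the abstract.
    """
    priors = class_counts / class_counts.sum()
    adjusted = logits + tau * np.log(priors)            # per-class offsets
    z = adjusted - adjusted.max(axis=1, keepdims=True)  # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With uniform class counts the offset is a constant per row and cancels inside the softmax, recovering plain cross-entropy; the adjustment only changes behavior when the class distribution is skewed.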
Framework Construction of an Adversarial Federated Transfer Learning Classifier
As the Internet grows in popularity, more and more classification tasks, in
areas such as IoT, finance, and healthcare, rely on mobile edge computing to
advance machine learning. In the medical industry, however, good diagnostic
accuracy necessitates the combination of large amounts of labeled data to train
the model, which is difficult and expensive to collect and risks jeopardizing
patients' privacy. In this paper, we offer a novel medical diagnostic framework
that employs a federated learning platform to ensure patient data privacy by
transferring classification algorithms acquired in a labeled domain to a domain
with sparse or missing labeled data. Rather than using a generative adversarial
network, our framework uses a discriminative model to build multiple
classification loss functions with the goal of improving diagnostic accuracy.
It also avoids the difficulty of collecting large amounts of labeled data or
the high cost of generating large amounts of sample data. Experiments on
real-world image datasets demonstrate that the suggested adversarial federated
transfer learning method is promising for real-world medical diagnosis
applications that use image classification.
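The discriminative (non-generative) adversarial objective described above can be sketched as a combination of classification losses: a source-domain classifier loss plus a domain-discrimination loss that the feature extractor works against. The composition below is illustrative, not the paper's exact formulation.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Numerically stable softmax cross-entropy over a batch."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def adversarial_transfer_loss(src_logits, src_labels,
                              dom_logits, dom_labels, lam=0.1):
    """Feature-extractor objective for discriminative adversarial transfer.

    Minimizing the source classification loss while *maximizing* the domain
    classifier's loss (the minus sign, typically realized via gradient
    reversal) aligns source and target features without a generator.
    """
    cls_loss = cross_entropy(src_logits, src_labels)
    dom_loss = cross_entropy(dom_logits, dom_labels)
    return cls_loss - lam * dom_loss
```

In a federated setting, only the parameters of the networks producing these logits would be exchanged with the server, never the patient data itself.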
CROSS-BORDER DATA TRANSFER REGULATION: A COMPARATIVE STUDY OF CHINA AND EUROPE
With the so-called Industry 4.0 revolution ongoing, end-to-end digitalisation of all assets and
their integration into a digital ecosystem have led the world to unprecedented increases in
connectivity and global flows. Cross-border data flow has become the cornerstone of the
cross-border economy, especially for digital products; without it, there would be no
transactions. As a result, governments have started updating data-related policies, such as
restrictive measures for cross-border data flows or rules mandating local data storage. Against
this background, this study focuses on emerging research topics, starting with contemporary
public policies on cross-border data transfer.
The objective is to examine whether policymakers from both regions could better
achieve their goals of promoting the digital economy by establishing a mutual
understanding with industrial entities, while maintaining the balance between the
protection of personal information and innovation in digital markets. For that
purpose, this research explores the historical development of data transfer
regulatory measures in China, the EU, and the U.S., and studies the specific
challenges they encounter in the data globalisation era.
Part I studies the evolution of the CBDT rules. It points out that CBDT
regulation is a technology-led phenomenon, yet not a novel one: it responds to
an emerging threat to privacy posed by technological development, which has
attracted scrutiny from the public and the authorities. CBDT regulation
reflects the enforcement of national jurisdiction in cyberspace, which does not
enjoy an indisputable general consensus in contemporary international law, so
the rulemaking of CBDT cannot avoid the controversial debate over the
legitimacy of state supervision of the network. CBDT regulation originated in
the protection of personal data in the EU, yet disagreement over its philosophy
derives from the conflict of different legislative values: different
legislators have different understandings of the freedom of the free flow of
information and the right to personal information. The author also questions
the rationale of the EU data transfer rules by discussing the target validity
of the current rules, that is, their target validity for data protection.
Part II compares the EU's and China's data protection laws as well as their
respective CBDT rules. Challenges that CBDT restriction measures might face are
listed, since transborder data transmission is not a legislative measure by
nature. The processes of rulemaking and implementation face dual pressures from
home and abroad, categorised as technological, international legislative, and
theoretical challenges. Theoretically, cyberspace does not have a boundary
similar to a physical space; the theoretical premise that the EU CBDT rules
ignore is that the state must control the transborder transmission of data by
setting borders. Thus, for China, two aspects must be addressed: whether there
is an independent law of cyberspace, and where the boundary between the virtual
and real worlds lies. International legislative challenges arise from the
overseas data access of the U.S. government: the EU CBDT framework has limited
impact when facing such access under the cover of the U.S. FISA and CLOUD Act.
In particular, this dissertation discusses the potential for a free-flow data
transfer mechanism between the EU and China. It is worth exploring the
possibility of region-based bilateral collaboration, such as a free trade zone
in China, to seek the EU Commission's recognition of an adequate level of
protection of personal information. For general data-intensive entities,
binding corporate rules and standard contractual clauses remain the preferable
approaches.
Part III examines data protection implementation and data transfer compliance
in the context of the HEART project. By analysing the use cases that HEART
deployed, as well as the architecture it proposed, Chapter 6 studies the
privacy-enhancing measures from both the organisational and technical
perspectives. Specifically, a data classification system and dynamic data
security assessments are proposed. Chapter 7 studies the use case of a
federated recommender system within the HEART platform and its potential for
the promotion of GDPR compliance. The recommender system is thoroughly analysed
under the requirements of the GDPR, including the fundamental data processing
principles and threat assessment within the data processing.
Federated Learning as a Solution for Problems Related to Intergovernmental Data Sharing
To address global problems, intergovernmental collaboration is needed. Modern solutions to these problems often include data-driven methods like artificial intelligence (AI), which require large amounts of data to perform well. However, data sharing between governments is limited. A possible solution is federated learning (FL), a decentralised AI method created to utilise personal information on edge devices. Instead of sharing data, governments can build their own models and share only the model parameters with a centralised server that aggregates all parameters, resulting in a superior overall model. By conducting a structured literature review, we show how major intergovernmental data sharing challenges such as disincentives, legal and ethical issues, and technical constraints can be solved through FL. FL thus enables enhanced AI while maintaining privacy, allowing governments to collaboratively address global problems, which will positively impact governments and citizens.
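The parameter-sharing scheme described here, local training with a central server averaging the uploaded parameters, is essentially federated averaging (FedAvg). A minimal sketch, with hypothetical parameter shapes, might look like:

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Aggregate client model parameters without sharing raw data.

    Each client (here, a government) trains locally and uploads only its
    parameter vector; the server returns the average weighted by local
    dataset size, as in FedAvg.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                    # normalize to sum to 1
    stacked = np.stack(client_params)           # (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Example: three clients with different data volumes
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_params = federated_average(params, [10, 10, 20])  # -> [3.5, 4.5]
```

In practice each round repeats this cycle: the server broadcasts the aggregated model, clients train locally, and the server re-aggregates, so raw records never leave their jurisdiction.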
FedDrive: Generalizing Federated Learning to Semantic Segmentation in Autonomous Driving
Semantic Segmentation is essential to make self-driving vehicles autonomous,
enabling them to understand their surroundings by assigning individual pixels
to known categories. However, it operates on sensitive data collected from the
users' cars; thus, protecting the clients' privacy becomes a primary concern.
For similar reasons, Federated Learning has been recently introduced as a new
machine learning paradigm aiming to learn a global model while preserving
privacy and leveraging data on millions of remote devices. Despite several
efforts on this topic, no work has explicitly addressed the challenges of
federated learning in semantic segmentation for driving so far. To fill this
gap, we propose FedDrive, a new benchmark consisting of three settings and two
datasets, incorporating the real-world challenges of statistical heterogeneity
and domain generalization. We benchmark state-of-the-art algorithms from the
federated learning literature through an in-depth analysis, combining them with
style transfer methods to improve their generalization ability. We demonstrate
that correctly handling normalization statistics is crucial to deal with the
aforementioned challenges. Furthermore, style transfer improves performance
when dealing with significant appearance shifts. Official website:
https://feddrive.github.io
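The normalization-statistics handling mentioned above can be illustrated with a SiloBN-style aggregation sketch, in which BatchNorm running statistics stay client-local while all other parameters are averaged. This is an assumption for illustration; FedDrive benchmarks several such variants rather than prescribing one.

```python
import numpy as np

def aggregate_except_bn(client_states, client_sizes):
    """Average all parameters except BatchNorm running statistics.

    client_states: list of dicts mapping parameter name -> np.ndarray.
    Running means/variances encode each client's domain statistics
    (weather, city, sensor), so they are kept local rather than merged,
    which helps under statistical heterogeneity and domain shift.
    """
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    global_state = {}
    for name in client_states[0]:
        if "running_mean" in name or "running_var" in name:
            continue  # domain-specific statistics are not shared
        global_state[name] = sum(wi * s[name]
                                 for wi, s in zip(w, client_states))
    return global_state
```

Each client then loads the shared weights but retains its own normalization buffers, so inference is normalized with statistics matching its local appearance distribution.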
Deep Anatomical Federated Network (Dafne): an open client/server framework for the continuous collaborative improvement of deep-learning-based medical image segmentation
Semantic segmentation is a crucial step to extract quantitative information
from medical (and, specifically, radiological) images to aid the diagnostic
process and clinical follow-up, and to generate biomarkers for clinical research.
In recent years, machine learning algorithms have become the primary tool for
this task. However, their real-world performance is heavily reliant on the
comprehensiveness of training data. Dafne is the first decentralized,
collaborative solution that implements continuously evolving deep learning
models exploiting the collective knowledge of the users of the system. In the
Dafne workflow, the result of each automated segmentation is refined by the
user through an integrated interface, so that the new information is used to
continuously expand the training pool via federated incremental learning. The
models deployed through Dafne are able to improve their performance over time
and to generalize to data types not seen in the training sets, thus becoming a
viable and practical solution for real-life medical segmentation tasks.
Comment: 10 pages (main body), 5 figures. Work partially presented at the 2021
RSNA conference and at the 2023 ISMRM conference. In this new version: added
author and change in the acknowledgment.
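The collaborative workflow Dafne describes, automatic segmentation, expert refinement, and incremental expansion of the training pool, can be sketched roughly as below. All names here are hypothetical illustrations, not Dafne's actual client/server API.

```python
# Sketch of one Dafne-style collaborative round (hypothetical function
# and argument names; the real client/server protocol is not shown).
def incremental_round(model, image, user_refine, training_pool):
    """Auto-segment an image, let the user correct the result, and add
    the corrected pair to the pool used for the next incremental
    federated training step."""
    prediction = model(image)                  # automated segmentation
    corrected = user_refine(prediction)        # expert refinement in the UI
    training_pool.append((image, corrected))   # expand the training pool
    return corrected
```

Repeating this round across many users is what lets the deployed models keep improving and generalize to data types absent from the original training sets.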
OpenFed: A Comprehensive and Versatile Open-Source Federated Learning Framework
Recent developments in Artificial Intelligence techniques have enabled their
successful application across a spectrum of commercial and industrial settings.
However, these techniques require large volumes of data to be aggregated in a
centralized manner, forestalling their applicability to scenarios wherein the
data is sensitive or the cost of data transmission is prohibitive. Federated
Learning alleviates these problems by decentralizing model training, thereby
removing the need for data transfer and aggregation. To advance the adoption of
Federated Learning, more research and development needs to be conducted to
address some important open questions. In this work, we propose OpenFed, an
open-source software framework for end-to-end Federated Learning. OpenFed
reduces the barrier to entry for both researchers and downstream users of
Federated Learning by the targeted removal of existing pain points. For
researchers, OpenFed provides a framework wherein new methods can be easily
implemented and fairly evaluated against an extensive suite of benchmarks. For
downstream users, OpenFed allows Federated Learning to be plug and play within
different subject-matter contexts, removing the need for deep expertise in
Federated Learning.
Comment: 18 pages, 3 figures, 1 table.