Trustworthy Edge Machine Learning: A Survey
The convergence of Edge Computing (EC) and Machine Learning (ML), known as
Edge Machine Learning (EML), has become a prominent research area that
leverages distributed network resources to perform joint training and
inference cooperatively. However, EML faces various challenges due to resource
constraints, heterogeneous network environments, and diverse service
requirements of different applications, which together affect the
trustworthiness of EML in the eyes of its stakeholders. This survey provides a
comprehensive summary of definitions, attributes, frameworks, techniques, and
solutions for trustworthy EML. Specifically, we first emphasize the importance
of trustworthy EML within the context of Sixth-Generation (6G) networks. We
then discuss the necessity of trustworthiness from the perspective of
challenges encountered during deployment and real-world application scenarios.
Subsequently, we provide a preliminary definition of trustworthy EML and
explore its key attributes. Following this, we introduce fundamental frameworks
and enabling technologies for trustworthy EML systems, and provide an in-depth
literature review of the latest solutions to enhance trustworthiness of EML.
Finally, we discuss corresponding research challenges and open issues.
Comment: 27 pages, 7 figures, 10 tables
Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments
Decentralized systems are a subset of distributed systems where multiple
authorities control different components and no authority is fully trusted by
all. This implies that any component in a decentralized system is potentially
adversarial. We review fifteen years of research on decentralization and
privacy and provide an overview of key systems, as well as key insights for
designers of future systems. We show that decentralized designs can enhance
privacy, integrity, and availability but also require careful trade-offs in
terms of system complexity, properties provided, and degree of
decentralization. These trade-offs need to be understood and navigated by
designers. We argue that a combination of insights from cryptography,
distributed systems, and mechanism design, aligned with the development of
adequate incentives, is necessary to build scalable and successful
privacy-preserving decentralized systems.
Federated Learning in Intelligent Transportation Systems: Recent Applications and Open Problems
Intelligent transportation systems (ITSs) have been fueled by the rapid
development of communication technologies, sensor technologies, and the
Internet of Things (IoT). Nonetheless, due to the dynamic characteristics of
the vehicle networks, it is rather challenging to make timely and accurate
decisions about vehicle behavior. Moreover, in the presence of mobile wireless
communications, the privacy and security of vehicle information are at constant
risk. In this context, a new paradigm is urgently needed for various
applications in dynamic vehicle environments. As a distributed machine learning
technology, federated learning (FL) has received extensive attention due to its
outstanding privacy protection properties and easy scalability. We conduct a
comprehensive survey of the latest developments in FL for ITS. Specifically, we
first examine the prevalent challenges in ITS and elucidate the
motivations for applying FL from various perspectives. Subsequently, we review
existing deployments of FL in ITS across various scenarios, and discuss
specific potential issues in object recognition, traffic management, and
service provisioning scenarios. Furthermore, we analyze the new challenges
introduced by FL deployment and the inherent limitations that FL
alone cannot fully address, including uneven data distribution, limited storage
and computing power, and potential privacy and security concerns. We then
examine the existing collaborative technologies that can help mitigate these
challenges. Lastly, we discuss the open challenges that remain to be addressed
in applying FL in ITS and propose several future research directions.
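The FL workflow this abstract refers to — clients training locally and a server averaging their updates, so raw vehicle data never leaves the device — can be illustrated with a minimal federated averaging (FedAvg) loop. The linear model, synthetic client datasets, and hyperparameters below are illustrative assumptions, not taken from the surveyed work.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on squared loss.
    Only the updated weights leave the client, never the raw data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """Server aggregates client updates, weighted by local sample counts."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_sgd(w_global, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Synthetic "vehicle" datasets drawn from one shared linear model (illustrative).
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)

print(np.round(w, 2))  # converges toward w_true
```

Weighting by sample count is the standard FedAvg choice; real ITS deployments additionally handle client sampling, stragglers, and non-IID data, which this sketch omits.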
Decentralized Federated Learning: Fundamentals, State-of-the-art, Frameworks, Trends, and Challenges
In the last decade, Federated Learning (FL) has gained relevance in training
collaborative models without sharing sensitive data. Since its inception,
Centralized FL (CFL) has been the most common approach in the literature, where
a central entity creates a global model. However, a centralized approach leads
to increased latency due to bottlenecks, heightened vulnerability to system
failures, and trustworthiness concerns affecting the entity responsible for the
global model creation. Decentralized Federated Learning (DFL) emerged to
address these concerns by promoting decentralized model aggregation and
minimizing reliance on centralized architectures. However, despite the work
done in DFL, the literature has not (i) studied the main aspects
differentiating DFL and CFL; (ii) analyzed DFL frameworks to create and
evaluate new solutions; and (iii) reviewed application scenarios using DFL.
Thus, this article identifies and analyzes the main fundamentals of DFL in
terms of federation architectures, topologies, communication mechanisms,
security approaches, and key performance indicators. Additionally, this
article explores existing mechanisms to optimize critical DFL fundamentals.
Then, the most relevant features of current DFL frameworks are reviewed and
compared. After that, it analyzes the most common DFL application scenarios,
identifying solutions based on the fundamentals and frameworks previously
defined. Finally, the evolution of existing DFL solutions is studied to provide
a list of trends, lessons learned, and open challenges.
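The decentralized aggregation that DFL promotes — nodes exchanging and averaging models with their neighbours instead of reporting to a central server — can be sketched as gossip-style averaging over a ring topology. The topology, mixing weights, local model, and data below are illustrative assumptions, not a specific framework from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 5

def local_step(w, X, y, lr=0.1):
    """One local gradient step on squared loss (data never leaves the node)."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Ring topology: each node mixes with its two neighbours; no central server.
W_mix = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in (i - 1, i, i + 1):
        W_mix[i, j % n_nodes] = 1 / 3

# Illustrative per-node datasets drawn from one shared linear model.
w_true = np.array([1.5, -0.5])
data = []
for _ in range(n_nodes):
    X = rng.normal(size=(40, 2))
    y = X @ w_true + 0.01 * rng.normal(size=40)
    data.append((X, y))

models = np.zeros((n_nodes, 2))
for _ in range(100):
    # 1) local training step at every node
    models = np.stack([local_step(models[i], *data[i]) for i in range(n_nodes)])
    # 2) gossip: each node averages parameters with its neighbours
    models = W_mix @ models

print(np.round(models.mean(axis=0), 2))
```

The doubly stochastic mixing matrix drives the nodes toward consensus while each keeps training on its own data; sparser topologies trade communication cost against convergence speed, one of the DFL trade-offs the article analyzes.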
Federated Learning for Connected and Automated Vehicles: A Survey of Existing Approaches and Challenges
Machine learning (ML) is widely used for key tasks in Connected and Automated
Vehicles (CAV), including perception, planning, and control. However, its
reliance on vehicular data for model training presents significant challenges
related to in-vehicle user privacy and communication overhead generated by
massive data volumes. Federated learning (FL) is a decentralized ML approach
that enables multiple vehicles to collaboratively develop models, broadening
learning from various driving environments, enhancing overall performance, and
simultaneously securing local vehicle data privacy and security. This survey
paper presents a review of the advancements made in the application of FL for
CAV (FL4CAV). First, centralized and decentralized frameworks of FL are
analyzed, highlighting their key characteristics and methodologies. Second,
diverse data sources, models, and data security techniques relevant to FL in
CAVs are reviewed, emphasizing their significance in ensuring privacy and
confidentiality. Third, specific and important applications of FL are explored,
providing insight into the base models and datasets employed for each
application. Finally, existing challenges for FL4CAV are listed and potential
directions for future work are discussed to further enhance the effectiveness
and efficiency of FL in the context of CAV.
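Among the data-security techniques the abstract mentions for protecting vehicle updates, secure aggregation can be sketched with pairwise additive masks: the server learns only the sum of client updates, never any individual one. This is a deliberately simplified illustration (real protocols add key agreement and dropout handling); all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim = 4, 3  # participating vehicles and update dimension (illustrative)

updates = rng.normal(size=(n, dim))  # each vehicle's private model update

# Pairwise additive masking: clients i < j agree on a shared random mask;
# i adds it and j subtracts it, so masks cancel in the sum but hide each update.
masks = {}
for i in range(n):
    for j in range(i + 1, n):
        masks[(i, j)] = rng.normal(size=dim)

masked = updates.copy()
for (i, j), m in masks.items():
    masked[i] += m
    masked[j] -= m

# The server sees only masked updates, yet their sum equals the true sum.
server_sum = masked.sum(axis=0)
print(np.allclose(server_sum, updates.sum(axis=0)))  # True
```

Because every mask appears once with each sign, the aggregate is exact while each individual `masked[i]` is statistically unrelated to `updates[i]` from the server's view.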