Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence
Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth
of data that cannot be accumulated in a centralized repository for learning
supervised models due to privacy, bandwidth limitations, and the prohibitive
cost of annotations. Federated learning provides a compelling framework for
learning models from decentralized data, but conventionally, it assumes the
availability of labeled samples, whereas on-device data are generally either
unlabeled or cannot be annotated readily through user interaction. To address
these issues, we propose a self-supervised approach termed
\textit{scalogram-signal correspondence learning} based on wavelet transform to
learn useful representations from unlabeled sensor inputs, such as
electroencephalography, blood volume pulse, accelerometer, and WiFi channel
state information. Our auxiliary task requires a deep temporal neural network
to determine if a given pair of a signal and its complementary viewpoint (i.e.,
a scalogram generated with a wavelet transform) align with each other or not
through optimizing a contrastive objective. We extensively assess the quality
of learned features with our multi-view strategy on diverse public datasets,
achieving strong performance in all domains. We demonstrate the effectiveness
of representations learned from an unlabeled input collection on downstream
tasks by training a linear classifier over the pretrained network, as well as
their usefulness in the low-data regime, transfer learning, and
cross-validation. Our methodology
achieves competitive performance with fully-supervised networks, and it
outperforms pre-training with autoencoders in both central and federated
contexts. Notably, it improves generalization in the semi-supervised setting,
as leveraging self-supervised learning reduces the volume of labeled data
required. Comment: Accepted for publication at IEEE Internet of Things Journal
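The abstract describes the auxiliary task only at a high level. As a rough illustration of the idea, the sketch below pairs a raw signal with a toy wavelet scalogram and scores aligned versus mismatched pairs; the wavelet parameters, random-projection "encoders", and dimensions are all hypothetical stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def morlet(scale, n=64):
    # Real-valued Morlet wavelet sampled on a fixed grid, dilated by `scale`.
    t = np.linspace(-4, 4, n) / scale
    return np.cos(5 * t) * np.exp(-t**2 / 2)

def scalogram(signal, scales):
    # Toy continuous wavelet transform: convolve the signal with a Morlet
    # wavelet at each scale; rows correspond to scales, columns to time.
    return np.stack([np.convolve(signal, morlet(s), mode="same")
                     for s in scales])

def embed(x, W):
    # Stand-in encoder: linear projection followed by L2 normalization.
    z = W @ x.ravel()
    return z / np.linalg.norm(z)

scales = [1.0, 2.0, 4.0, 8.0]
sig_a = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256))  # "sensor" signal
sig_b = rng.standard_normal(256)                        # unrelated signal

W_sig = rng.standard_normal((16, 256))                  # signal encoder
W_sca = rng.standard_normal((16, len(scales) * 256))    # scalogram encoder

z_a = embed(sig_a, W_sig)
v_a = embed(scalogram(sig_a, scales), W_sca)
v_b = embed(scalogram(sig_b, scales), W_sca)

# Contrastive scoring: after training, an aligned (signal, own-scalogram)
# pair should score higher than a mismatched pair. Here the scores are just
# cosine similarities from the untrained random encoders.
print("aligned:", float(z_a @ v_a), "mismatched:", float(z_a @ v_b))
```

In the paper's setting the random projections would be replaced by a deep temporal network trained with a contrastive objective over many such pairs.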
ARES: Adaptive Resource-Aware Split Learning for Internet of Things
Distributed training of Machine Learning models in edge Internet of Things (IoT) environments is challenging for three main reasons. First, resource-constrained devices incur long training times and have limited energy budgets. Second, the resource heterogeneity of IoT devices slows down training of the global model because of slower devices (stragglers). Finally, varying operational conditions, such as network bandwidth and computing resources, significantly affect training time and energy consumption. Recent studies have proposed Split Learning (SL) for distributed model training with limited resources, but its efficient implementation on resource-constrained, decentralized, heterogeneous IoT devices remains minimally explored. We propose Adaptive REsource-aware Split learning (ARES), a scheme for efficient model training in IoT systems. ARES accelerates local training on resource-constrained devices and minimizes the effect of stragglers on training through device-targeted split points, while accounting for time-varying network throughput and computing resources. ARES takes application constraints into account to mitigate training optimization trade-offs between energy consumption and training time. We evaluate an ARES prototype on a real testbed comprising heterogeneous IoT devices running a widely adopted deep neural network and dataset. Results show that ARES accelerates model training on IoT devices by up to 48% and reduces energy consumption by up to 61.4% compared to Federated Learning (FL) and classic SL, without sacrificing model convergence or accuracy.
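The abstract does not detail how ARES chooses its split points. A simplified sketch of resource-aware split-point selection under an additive latency model follows; the layer costs, device/server speeds, and bandwidths are invented numbers for illustration, not values from the paper.

```python
# Per-layer forward FLOPs and activation sizes (bytes) for a hypothetical
# 5-layer network; the activation at index i is what crosses the split
# if the device keeps layers [0, i].
layer_flops = [2e8, 4e8, 4e8, 2e8, 1e8]
act_bytes   = [4e5, 2e5, 1e5, 5e4, 1e4]

def split_latency(cut, dev_flops, srv_flops, bw_bytes):
    # Device computes layers [0, cut), sends the activation at the cut,
    # and the server computes the rest. Returns estimated per-batch latency.
    dev  = sum(layer_flops[:cut]) / dev_flops
    comm = act_bytes[cut - 1] / bw_bytes
    srv  = sum(layer_flops[cut:]) / srv_flops
    return dev + comm + srv

def best_split(dev_flops, srv_flops, bw_bytes):
    # Keep at least one layer on each side; pick the latency-minimizing cut.
    cuts = range(1, len(layer_flops))
    return min(cuts, key=lambda c: split_latency(c, dev_flops, srv_flops, bw_bytes))

# A slow device on a fast link offloads almost everything (early cut);
# a fast device on a slow link keeps more layers local to shrink the
# activation it must transmit (late cut).
print(best_split(dev_flops=1e9, srv_flops=1e11, bw_bytes=1e7))   # -> 1
print(best_split(dev_flops=5e10, srv_flops=1e11, bw_bytes=1e5))  # -> 4
```

Per-device selection of this kind is one plausible reading of "device-targeted split points"; ARES additionally re-evaluates the choice as throughput and compute availability vary over time.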
Split Federated Learning for 6G Enabled-Networks: Requirements, Challenges and Future Directions
Sixth-generation (6G) networks anticipate intelligently supporting a wide
range of smart services and innovative applications. Such a context urges a
heavy usage of Machine Learning (ML) techniques, particularly Deep Learning
(DL), to foster innovation and ease the deployment of intelligent network
functions/operations, which are able to fulfill the various requirements of the
envisioned 6G services. Specifically, collaborative ML/DL consists of deploying
a set of distributed agents that collaboratively train learning models without
sharing their data, thus improving data privacy and reducing the
time/communication overhead. This work provides a comprehensive study on how
collaborative learning can be effectively deployed over 6G wireless networks.
In particular, our study focuses on Split Federated Learning (SFL), a recently
emerged technique that promises better performance than existing
collaborative learning approaches. We first provide an overview of three
emerging collaborative learning paradigms, including federated learning, split
learning, and split federated learning, as well as of 6G networks, their main
vision, and the timeline of their key developments. We then highlight the need
for split federated learning towards the upcoming 6G networks in every aspect,
including 6G technologies (e.g., intelligent physical layer, intelligent edge
computing, zero-touch network management, intelligent resource management) and
6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous
systems). Furthermore, we review existing datasets along with frameworks that
can help in implementing SFL for 6G networks. We finally identify key technical
challenges, open issues, and future research directions related to SFL-enabled
6G networks.
DFL: Dynamic Federated Split Learning in Heterogeneous IoT
Federated Learning (FL) in edge Internet of Things (IoT) environments is challenging due to the heterogeneous nature of the learning environment, mainly embodied in two aspects. First, statistically heterogeneous data, usually non-independent and identically distributed (non-IID), from geographically distributed clients can deteriorate FL training accuracy. Second, the heterogeneous computing and communication resources of IoT devices often result in unstable training processes that slow down training of a global model and affect energy consumption. Most existing solutions address only one side of the heterogeneity issue and neglect the joint problem of resource and data heterogeneity for resource-constrained IoT. In this article, we propose Dynamic Federated split Learning (DFL) to address the joint problem of data and resource heterogeneity for distributed training in IoT. DFL enhances training efficiency in heterogeneous, dynamic IoT through resource-aware split computing of deep neural networks and dynamic clustering of training participants based on the similarity of their sub-model layers. We evaluate DFL on a real testbed comprising heterogeneous IoT devices using two widely adopted datasets in various non-IID settings. Results show that DFL improves training performance in terms of training time by up to 48%, accuracy by up to 32%, and energy consumption by up to 62.8% compared to classic FL and Federated Split Learning in scenarios with both data and resource heterogeneity.
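The clustering step can be pictured with a small sketch: group clients whose sub-model updates point in similar directions, using greedy cosine-similarity grouping as a stand-in for whatever metric DFL actually uses. The client names, dimensions, and threshold below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_clients(updates, threshold=0.9):
    # Greedy clustering: a client joins the first cluster whose
    # representative update is cosine-similar above `threshold`;
    # otherwise it starts a new cluster.
    clusters = []  # list of (representative update, [client ids])
    for cid, u in updates.items():
        for rep, members in clusters:
            if cosine(rep, u) >= threshold:
                members.append(cid)
                break
        else:
            clusters.append((u, [cid]))
    return [members for _, members in clusters]

# Two latent data distributions produce two directions of sub-model
# updates; small per-client noise models local variation.
base_a, base_b = rng.standard_normal(32), rng.standard_normal(32)
updates = {f"dev{i}": base_a + 0.05 * rng.standard_normal(32) for i in range(3)}
updates.update({f"dev{i}": base_b + 0.05 * rng.standard_normal(32) for i in range(3, 5)})

print(cluster_clients(updates))  # two clusters, one per distribution
```

Clustering similar participants lets each group train a shared sub-model suited to its data, which is one way the abstract's combination of non-IID robustness and resource awareness can be realized.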