
    Federated Learning in Wireless Networks

    Artificial intelligence (AI) is transitioning from a long development period into reality. Notable instances such as AlphaGo, Tesla’s self-driving cars, and the recent innovation of ChatGPT stand as widely recognized exemplars of AI applications that collectively enhance the quality of human life. An increasing number of AI applications are expected to integrate seamlessly into our daily lives, further enriching our experiences. Although AI has demonstrated remarkable performance, it is accompanied by numerous challenges. At the forefront of AI’s advancement lies machine learning (ML), a cutting-edge technique that acquires knowledge by emulating the human brain’s cognitive processes. Like humans, ML requires a substantial amount of data to build its knowledge repository. Computational capabilities have surged in alignment with Moore’s law, leading to the realization of cloud computing services such as Amazon AWS. Presently, we find ourselves in the era of the Internet of Things (IoT), characterized by the ubiquitous presence of smartphones, smart speakers, and intelligent vehicles. This landscape facilitates decentralizing data processing tasks, shifting them from the cloud to local devices. At the same time, a growing emphasis on privacy protection has emerged, as individuals are increasingly concerned about sharing personal data with corporate giants such as Google and Meta. Federated learning (FL) is a new distributed machine learning paradigm in which clients collaborate by sharing learned models rather than raw data, thus safeguarding client data privacy while producing a collaborative and resilient model. FL promises to address privacy concerns, but it still faces many challenges, particularly within wireless networks. Within the FL landscape, four main challenges stand out: high communication costs, system heterogeneity, statistical heterogeneity, and privacy and security. When many clients participate in the learning process while wireless communication resources remain constrained, accommodating all participating clients becomes very complex. Contemporary deep learning relies on models encompassing millions and, in some cases, billions of parameters, exacerbating the communication overhead when transmitting these parameters. System heterogeneity manifests itself in device disparities, deployment scenarios, and connectivity capabilities, while statistical heterogeneity encompasses variations in data distribution and model composition. Furthermore, the distributed architecture makes FL susceptible to attacks from inside and outside the system. This dissertation presents a suite of algorithms designed to address these challenges effectively. New communication schemes are introduced, including Non-Orthogonal Multiple Access (NOMA), over-the-air computation, and approximate communication. These techniques are coupled with gradient compression, client scheduling, and power allocation, each significantly mitigating communication overhead. Asynchronous FL is adopted as a remedy for the intricate issue of system heterogeneity. Both independent and identically distributed (IID) and non-IID data are considered in all scenarios to account for statistical heterogeneity. Finally, the aggregation of model updates and individual client model initialization jointly address security and privacy issues.
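    To make the aggregation step concrete, the sketch below is a minimal NumPy illustration of the federated averaging idea that underpins FL: clients refine the global model on their own data and the server combines only the returned weights, so raw data never leaves the devices. The function names and the toy linear-regression data are illustrative assumptions, not the dissertation's algorithms.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.01, epochs=1):
    """Illustrative local step: a client refines the global model on its own data
    (a least-squares gradient step here) and returns only the updated weights."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
        w -= lr * grad
    return w

def federated_averaging(global_weights, clients, rounds=10):
    """Server aggregates client models weighted by local dataset size;
    raw data never leaves the clients."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data in clients:
            updates.append(local_update(global_weights, data))
            sizes.append(len(data[1]))
        sizes = np.array(sizes, dtype=float)
        global_weights = sum(s * u for s, u in zip(sizes / sizes.sum(), updates))
    return global_weights

# Toy usage: three clients with shifted (non-IID) linear-regression data
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 1.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))
w = federated_averaging(np.zeros(2), clients)
```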

    SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks

    Federated learning (FL) is a key enabler for efficient communication and computing, leveraging devices' distributed computing capabilities. However, applying FL in practice is challenging due to the local devices' heterogeneous energy, wireless channel conditions, and non-independently and identically distributed (non-IID) data distributions. To cope with these issues, this paper proposes a novel learning framework that integrates FL and width-adjustable slimmable neural networks (SNNs). Integrating FL with SNNs is challenging due to time-varying channel conditions and data distributions. In addition, existing multi-width SNN training algorithms are sensitive to the data distributions across devices, which makes SNNs ill-suited for FL. Motivated by this, we propose a communication- and energy-efficient SNN-based FL framework (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models. By applying SC, SlimFL exchanges a superposition of multiple width configurations, which are decoded into as many configurations as the available communication throughput allows. Leveraging ST, SlimFL aligns the forward propagation of different width configurations while avoiding inter-width interference during backpropagation. We formally prove the convergence of SlimFL. The result reveals that SlimFL is not only communication-efficient but also copes with non-IID data distributions and poor channel conditions, which is also corroborated by data-intensive simulations.
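    As a rough illustration of the superposition-training idea (a simplified sketch, not the paper's implementation), the toy example below trains a linear model whose half-width configuration uses only the first half of the weights; the losses of both widths are combined into a single gradient update so the shared parameters serve both configurations.

```python
import numpy as np

def st_update(w, X, y, lr=0.05, alphas=(0.5, 0.5)):
    """One superposition-training step on a toy linear model: the 'half-width'
    configuration uses only the first half of the weights, the 'full-width'
    configuration uses all of them, and their gradients are combined in a single
    update so both widths remain usable."""
    h = len(w) // 2
    grad = np.zeros_like(w)

    # Full-width forward/backward pass
    err_full = X @ w - y
    grad += alphas[0] * (X.T @ err_full) / len(y)

    # Half-width forward/backward pass (only the first h weights participate)
    err_half = X[:, :h] @ w[:h] - y
    grad[:h] += alphas[1] * (X[:, :h].T @ err_half) / len(y)

    return w - lr * grad

# Toy usage
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=200)
w = np.zeros(4)
for _ in range(300):
    w = st_update(w, X, y)
```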

    Device Scheduling for Relay-assisted Over-the-Air Aggregation in Federated Learning

    Federated learning (FL) leverages data distributed at the edge of the network to enable intelligent applications. The efficiency of FL can be improved by using over-the-air computation (AirComp) technology in the process of gradient aggregation. In this paper, we propose a relay-assisted large-scale FL framework and investigate the device scheduling problem in relay-assisted FL systems under power consumption and mean squared error (MSE) constraints. We formulate a joint device scheduling and power allocation problem to maximize the number of scheduled devices. We solve the resultant non-convex optimization problem by transforming it into multiple sparse optimization problems. Using the proposed device scheduling algorithm, these sparse sub-problems are solved and the maximum number of scheduled edge devices is obtained. Simulation results demonstrate the effectiveness of the proposed scheme compared with benchmark schemes.
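    The sketch below illustrates the basic over-the-air computation step assumed here (a minimal NumPy model, not the paper's relay-assisted scheme): each scheduled device pre-scales its gradient to invert its channel under a power limit, the transmissions superpose in the air, and the receiver recovers a noisy average whose distortion is measured by the MSE.

```python
import numpy as np

def aircomp_aggregate(gradients, channels, noise_std=0.1, p_max=1.0):
    """Illustrative over-the-air aggregation: each scheduled device inverts its
    channel (subject to a power limit), the signals add up in the air, and the
    receiver recovers a noisy sum of gradients in one channel use per entry."""
    eta = p_max * min(abs(h) ** 2 for h in channels)   # common scaling set by the weakest channel
    rx = np.zeros_like(gradients[0])
    for g, h in zip(gradients, channels):
        b = np.sqrt(eta) / h                            # pre-scaling (channel inversion)
        rx += np.real(h * b) * g                        # contributions superpose over the air
    rx += noise_std * np.random.default_rng(2).normal(size=rx.shape)
    est = rx / (np.sqrt(eta) * len(gradients))          # estimated average gradient
    true = sum(gradients) / len(gradients)
    mse = float(np.mean((est - true) ** 2))
    return est, mse

# Toy usage: three devices with different (hypothetical) channel gains
grads = [np.ones(5), 2 * np.ones(5), 3 * np.ones(5)]
avg, mse = aircomp_aggregate(grads, channels=[0.9, 0.6, 1.2])
```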

    Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Towards 6G

    The next wave of wireless technologies is proliferating in connecting things among themselves as well as to humans. In the era of the Internet of things (IoT), billions of sensors, machines, vehicles, drones, and robots will be connected, making the world around us smarter. The IoT will encompass devices that must wirelessly communicate a diverse set of data gathered from the environment for myriad new applications. The ultimate goal is to extract insights from this data and develop solutions that improve quality of life and generate new revenue. Providing large-scale, long-lasting, reliable, and near real-time connectivity is the major challenge in enabling a smart connected world. This paper provides a comprehensive survey on existing and emerging communication solutions for serving IoT applications in the context of cellular, wide-area, as well as non-terrestrial networks. Specifically, wireless technology enhancements for providing IoT access in fifth-generation (5G) and beyond cellular networks, and communication networks over the unlicensed spectrum are presented. Aligned with the main key performance indicators of 5G and beyond 5G networks, we investigate solutions and standards that enable energy efficiency, reliability, low latency, and scalability (connection density) of current and future IoT networks. The solutions include grant-free access and channel coding for short-packet communications, non-orthogonal multiple access, and on-device intelligence. Further, a vision of new paradigm shifts in communication networks in the 2030s is provided, and the integration of the associated new technologies like artificial intelligence, non-terrestrial networks, and new spectra is elaborated. Finally, future research directions toward beyond 5G IoT networks are pointed out.

    Deep Reinforcement Learning for Efficient Uplink NOMA SWIPT Transmissions

    Non-orthogonal multiple access (NOMA) is a key contending radio access technology for next-generation cellular communications, owing to its enhanced performance compared with existing multiple access techniques such as orthogonal frequency division multiple access (OFDMA). This thesis proposes a framework for an energy-efficient system geared towards the wireless exchange of intensive data collected from distributed Internet of Things (IoT) sensor nodes connected to an edge node acting as a cluster head (CH). The IoT nodes utilize an adaptive compression model as an extra degree of freedom to control the rate transmitted to the CH. The CH is an energy-constrained node, possibly battery operated, capable of radio frequency (RF) energy harvesting (EH) using simultaneous wireless information and power transfer (SWIPT). The proposed framework exploits deep reinforcement learning (DRL) mechanisms to achieve smart and efficient energy-constrained uplink NOMA transmissions in IoT applications requiring data compression. In particular, the DRL agent maximizes the harvested energy at the CH while enforcing the data compression ratio constraints at the transmitting nodes and satisfying the outage probability constraints at the CH. Data compression in this type of sensor network is vital in order to minimize the power consumption of the sensors (transmitting nodes), which extends their service lifetime.
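    For intuition, the following sketch models a power-splitting SWIPT receiver for an uplink NOMA cluster: a fraction rho of the received RF power is harvested while the remainder is decoded with successive interference cancellation, strongest user first. The function, parameter values, and channel gains are illustrative assumptions, not the thesis's DRL agent.

```python
import math

def swipt_noma_uplink(powers, gains, rho=0.5, eta=0.7, noise=1e-9, bw=1e6):
    """Illustrative power-splitting SWIPT receiver for an uplink NOMA cluster head:
    a fraction rho of the received RF power is harvested, the rest is decoded
    with successive interference cancellation (SIC), strongest user first."""
    received = [p * g for p, g in zip(powers, gains)]
    harvested = eta * rho * sum(received)               # energy harvested per unit time

    # Decode in order of decreasing received power; undetected users act as interference
    order = sorted(range(len(received)), key=lambda i: received[i], reverse=True)
    rates = {}
    remaining = sum(received)
    for i in order:
        interference = remaining - received[i]
        sinr = (1 - rho) * received[i] / ((1 - rho) * interference + noise)
        rates[i] = bw * math.log2(1 + sinr)             # achievable rate in bit/s
        remaining -= received[i]                        # SIC removes the decoded signal
    return harvested, rates

# Toy usage: two IoT nodes transmitting to an energy-constrained cluster head
harvested, rates = swipt_noma_uplink(powers=[0.1, 0.2], gains=[1e-3, 4e-4])
```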

    A Joint Learning and Communications Framework for Federated Learning over Wireless Networks

    In this paper, the problem of training federated learning (FL) algorithms over a realistic wireless network is studied. In particular, in the considered model, wireless users execute an FL algorithm while training their local FL models using their own data and transmitting the trained local FL models to a base station (BS) that will generate a global FL model and send it back to the users. Since all training parameters are transmitted over wireless links, the quality of the training will be affected by wireless factors such as packet errors and the availability of wireless resources. Meanwhile, due to the limited wireless bandwidth, the BS must select an appropriate subset of users to execute the FL algorithm so as to build a global FL model accurately. This joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm. To address this problem, a closed-form expression for the expected convergence rate of the FL algorithm is first derived to quantify the impact of wireless factors on FL. Then, based on the expected convergence rate of the FL algorithm, the optimal transmit power for each user is derived, under a given user selection and uplink resource block (RB) allocation scheme. Finally, the user selection and uplink RB allocation are optimized so as to minimize the FL loss function. Simulation results show that the proposed joint federated learning and communication framework can reduce the FL loss function value by up to 10% and 16%, respectively, compared to: 1) an optimal user selection algorithm with random resource allocation, and 2) a standard FL algorithm with random user selection and resource allocation. (This paper has been accepted by IEEE Transactions on Wireless Communications.)
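    The paper derives the optimal user selection and RB allocation; purely as a simplified illustration, the greedy heuristic below assigns each resource block to the user expected to deliver the most training data, under the assumption that the loss penalty scales with data size times packet error probability. All names and numbers are hypothetical.

```python
import numpy as np

def greedy_user_rb_assignment(data_sizes, error_prob):
    """Illustrative greedy heuristic (not the paper's optimal method): give each
    uplink resource block to the user whose update is expected to contribute the
    most data that actually arrives, i.e. data_size * (1 - packet_error_prob)."""
    n_users, n_rbs = error_prob.shape
    gain = data_sizes[:, None] * (1.0 - error_prob)     # expected delivered data per (user, RB)
    assignment, used_users, used_rbs = {}, set(), set()
    for _ in range(min(n_users, n_rbs)):
        best, best_val = None, -1.0
        for i in range(n_users):
            if i in used_users:
                continue
            for j in range(n_rbs):
                if j in used_rbs:
                    continue
                if gain[i, j] > best_val:
                    best, best_val = (i, j), gain[i, j]
        i, j = best
        assignment[i] = j                               # schedule user i on RB j
        used_users.add(i)
        used_rbs.add(j)
    return assignment

# Toy usage: 4 users, 2 resource blocks with user/RB-dependent packet error rates
rng = np.random.default_rng(3)
sizes = np.array([100, 400, 250, 300], dtype=float)
q = rng.uniform(0.01, 0.3, size=(4, 2))
print(greedy_user_rb_assignment(sizes, q))
```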