
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.

    A Very Brief Introduction to Machine Learning With Applications to Communication Systems

    Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modelling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplary applications to communication networks are discussed, distinguishing tasks carried out at the edge and at the cloud segments of the network, at different layers of the protocol stack.
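
    As a concrete illustration of the supervised-learning setting that the paper motivates, the following minimal sketch (an illustrative toy example, not code from the paper; the 4-PAM constellation and tanh channel are assumptions) learns a demodulator from labelled transmit/receive pairs instead of from an explicit channel model:

```python
# Minimal sketch (not from the paper): a data-driven demodulator learned from
# labelled examples of a channel whose nonlinearity is treated as unknown.
import numpy as np

rng = np.random.default_rng(0)

# 4-PAM constellation; the channel applies an unmodelled nonlinearity plus noise.
constellation = np.array([-3.0, -1.0, 1.0, 3.0])
def channel(x):                       # treated as a black box by the "designer"
    return np.tanh(0.4 * x) + 0.05 * rng.standard_normal(x.shape)

# Training data: transmitted symbol indices (labels) and received samples (features).
labels = rng.integers(0, 4, size=5000)
received = channel(constellation[labels])

# "Learning": estimate one centroid per symbol class from the labelled data.
centroids = np.array([received[labels == k].mean() for k in range(4)])

# Demodulation: map each received sample to the nearest learned centroid.
def demodulate(y):
    return np.argmin(np.abs(y[:, None] - centroids[None, :]), axis=1)

# Evaluate on fresh data.
test_labels = rng.integers(0, 4, size=2000)
test_rx = channel(constellation[test_labels])
error_rate = np.mean(demodulate(test_rx) != test_labels)
print(f"symbol error rate: {error_rate:.3f}")
```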

    Multiuser MIMO techniques with feedback

    Cooperative antenna systems have recently become a hot research topic, as they promise significantly higher spectral efficiency than conventional cellular systems. The gain is acquired by eliminating inter-cell interference (ICI) through coordination of the base station antenna transmissions. Recently, distributed organization methods have been suggested. One of the main challenges of a distributed cooperative antenna system is channel estimation for the downlink channel, especially when FDD is used. All of the associated base stations in the cooperative area need to know the full channel state information to calculate the corresponding precoding weight matrix. This information has to be transferred from the mobile stations to the base stations using uplink resources. As several base stations and several mobile stations are involved in cooperative antenna systems, and each base station and mobile station may be equipped with multiple antennas, the number of channel state parameters to be fed back is expected to be large. In this thesis, efficient feedback techniques for the downlink channel state information are proposed for the multi-user multiple-input multiple-output case, targeting distributed cooperative antenna systems in particular. First, a subspace-based channel quantization method is proposed which employs a predefined codebook. An iterative codebook design algorithm is proposed which converges to a locally optimal codebook. Furthermore, feedback overhead reduction methods are devised that exploit the temporal correlation of the channel. It is shown that the proposed adaptive codebook method, in conjunction with a data compression scheme, achieves a performance close to the perfect-channel case while requiring much less feedback overhead than other techniques. The subspace-based channel quantization method is extended by introducing multiple antennas at the transmitter and/or receiver side, and the performance of a regularized block diagonalization (RBD) precoding/decoding scheme is investigated as well as that of a zero-forcing (ZF) precoding scheme. A cost-efficient decoding matrix quantization method is proposed which avoids complex computation at the mobile station while showing only a slight degradation. The thesis concludes by comparing the proposed feedback methods in terms of their performance, their required feedback overhead, and their computational complexity. The techniques developed in this thesis can be useful for 5G, which is envisioned to support high-granularity/high-resolution codebooks and efficient schemes for their deployment. Keywords: MU-MIMO, COOPA, limited feedback, CSI, CQ, feedback overhead reduction, Givens rotation
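
    To make the codebook-based limited-feedback principle concrete, the following minimal sketch (a simplified illustration, not the thesis implementation; the random codebook, the 4 transmit antennas, and the 6 feedback bits are assumptions) quantizes a channel direction against a predefined codebook so that only the codeword index is fed back:

```python
# Minimal sketch of codebook-based CSI feedback (illustrative, not the thesis code):
# the mobile quantizes its channel direction to the best-aligned codeword of a
# predefined codebook and feeds back only the codeword index.
import numpy as np

rng = np.random.default_rng(1)
n_tx, codebook_bits = 4, 6
codebook_size = 2 ** codebook_bits

# Predefined random codebook of unit-norm vectors, known to both ends of the link.
C = rng.standard_normal((codebook_size, n_tx)) + 1j * rng.standard_normal((codebook_size, n_tx))
C /= np.linalg.norm(C, axis=1, keepdims=True)

# Downlink channel as estimated at the mobile station.
h = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)
h_dir = h / np.linalg.norm(h)

# Mobile: pick the codeword best aligned with the channel direction and feed back
# only its index (codebook_bits bits instead of 2 * n_tx real-valued coefficients).
index = int(np.argmax(np.abs(C.conj() @ h_dir)))

# Base station: reconstruct the quantized channel direction from the index.
h_hat = C[index]
alignment = np.abs(np.vdot(h_hat, h_dir)) ** 2   # quantization quality in [0, 1]
print(f"fed back {codebook_bits} bits, alignment |<h_hat, h>|^2 = {alignment:.3f}")
```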

    D4.2 Intelligent D-Band wireless systems and networks initial designs

    This deliverable gives the results of the ARIADNE project's Task 4.2: Machine Learning based network intelligence. It presents the work conducted on various aspects of network management to deliver system-level, qualitative solutions that leverage diverse machine learning techniques. The different chapters present system-level, simulation, and algorithmic models based on multi-agent reinforcement learning, deep reinforcement learning, learning automata for complex event forecasting, a system-level model for proactive handovers and resource allocation, model-driven deep-learning-based channel estimation and feedback, as well as strategies for the deployment of machine-learning-based solutions. In short, D4.2 provides results on promising AI- and ML-based methods, along with their limitations and potential, as investigated in the ARIADNE project.

    Federated Learning and Meta Learning: Approaches, Applications, and Directions

    Over the past few years, significant advancements have been made in the field of machine learning (ML) to address resource management, interference management, autonomy, and decision-making in wireless networks. Traditional ML approaches rely on centralized methods, where data is collected at a central server for training. However, this approach poses a challenge in terms of preserving the data privacy of devices. To address this issue, federated learning (FL) has emerged as an effective solution that allows edge devices to collaboratively train ML models without compromising data privacy. In FL, local datasets are not shared, and the focus is on learning a global model for a specific task involving all devices. However, FL has limitations when it comes to adapting the model to devices with different data distributions. In such cases, meta learning is considered, as it enables the adaptation of learning models to different data distributions using only a few data samples. In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta). Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks. We also analyze the relationships among these learning algorithms and examine their advantages and disadvantages in real-world applications.
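
    As a minimal sketch of the FL workflow described above (a toy least-squares model and plain federated averaging are assumed here; this is not code from the paper), each device updates the shared model on its private data and only the model parameters are sent to the server:

```python
# Minimal federated-averaging sketch (illustrative toy model, not from the paper):
# each device refines the shared linear model on its private data with local
# gradient steps; only model parameters, never raw data, reach the server.
import numpy as np

rng = np.random.default_rng(2)
n_devices, n_features, local_steps, lr = 5, 3, 10, 0.05

# Private local datasets with slightly different input distributions (never shared).
true_w = rng.standard_normal(n_features)
local_data = []
for _ in range(n_devices):
    X = rng.standard_normal((50, n_features)) + 0.5 * rng.standard_normal(n_features)
    y = X @ true_w + 0.1 * rng.standard_normal(50)
    local_data.append((X, y))

w_global = np.zeros(n_features)
for _ in range(20):                            # communication rounds
    local_models = []
    for X, y in local_data:                    # each device trains locally
        w = w_global.copy()
        for _ in range(local_steps):           # local least-squares gradient steps
            w -= lr * 2 * X.T @ (X @ w - y) / len(y)
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)   # server averages parameters only

print("distance to true model:", np.linalg.norm(w_global - true_w))
```

    Meta learning, as discussed in the paper, would go one step further and adapt the averaged model to each device's own data distribution using only a few local samples.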

    Enabling AI in Future Wireless Networks: A Data Life Cycle Perspective

    Recent years have seen rapid deployment of mobile computing and Internet of Things (IoT) networks, which can be mostly attributed to the increasing communication and sensing capabilities of wireless systems. Big data analysis, pervasive computing, and eventually artificial intelligence (AI) are envisaged to be deployed on top of the IoT and create a new world featured by data-driven AI. In this context, a novel paradigm of merging AI and wireless communications, called Wireless AI, which pushes AI frontiers to the network edge, is widely regarded as a key enabler for future intelligent network evolution. To this end, we present a comprehensive survey of the latest studies in wireless AI from the data-driven perspective. Specifically, we first propose a novel Wireless AI architecture that covers five key data-driven AI themes in wireless networks: Sensing AI, Network Device AI, Access AI, User Device AI, and Data-provenance AI. Then, for each data-driven AI theme, we present an overview of the use of AI approaches to solve the emerging data-related problems and show how AI can empower wireless network functionalities. In particular, compared with other related survey papers, we provide an in-depth discussion of Wireless AI applications in various data-driven domains wherein AI proves extremely useful for wireless network design and optimization. Finally, research challenges and future visions are also discussed to spur further research in this promising area.

    Tensor-based signal processing with applications to MIMO-OFDM systems and intelligent reflecting surfaces

    The use of tensor algebra techniques in signal processing has been growing over the last two decades. Applications such as image processing, biomedical signal processing, radar, machine and deep learning, and communications in general largely employ tensor-based techniques for recovering, estimating, and classifying signals. One of the main reasons for using tensor signal processing is the exploitation of the multidimensional structure of signals, while benefiting from the uniqueness properties of tensor decompositions. In wireless communications, for example, the signals can have several "dimensions", e.g., space, time, frequency, polarization, beamspace, etc. This thesis is divided into two parts. The first part addresses the application of tensor-based algorithms to multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems in the presence of phase-noise impairments. In this part, we propose a two-stage tensor-based receiver for joint channel, phase-noise, and data estimation in MIMO-OFDM systems. In the first stage, we show that the received signal at the pilot subcarriers can be modeled as a third-order PARAFAC tensor. Based on this model, we propose two algorithms for channel and phase-noise estimation at the pilot subcarriers. The second stage consists of data estimation, for which we propose a zero-forcing (ZF) receiver that capitalizes on the tensor structure of the received signal at the data subcarriers using the proposed selective Kronecker product (SKP) operator. Numerical simulations show that the proposed receiver achieves improved performance compared with state-of-the-art receivers in terms of symbol error rate (SER) and normalized mean square error (NMSE) of the estimated channel and phase-noise matrices. The second part of this thesis focuses on the application of tensor modeling to reduce the control signaling overhead in future wireless systems aided by intelligent reconfigurable surfaces (IRSs). To this end, we propose a low-rank approximation of the near-optimal IRS phase shifts, whose direct feedback would incur a prohibitively high communication overhead on the BS-IRS control links. The key idea is to represent the potentially large IRS phase-shift vector, which may have hundreds or thousands of elements, using a low-rank tensor model. This is achieved by factorizing a tensorized version of the IRS phase-shift vector, where each component is modeled as the Kronecker product of a predefined number of factors of smaller size, which can be obtained via tensor decomposition algorithms. We show that the proposed low-rank models drastically reduce the feedback requirements associated with the BS-IRS control links. Simulation results indicate that the proposed method is especially attractive in scenarios with a strong line-of-sight component, in which case nearly the same spectral efficiency is reached as with near-optimal phase shifts, but with a drastically reduced communication overhead.
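
    To illustrate the Kronecker-factorization idea from the second part, the following minimal sketch (a simplified assumption, not the thesis algorithm: a two-factor, rank-1 model fitted via an SVD of the reshaped vector, with an artificially separable LoS-like phase profile) represents a 256-element phase-shift vector by two 16-element factors:

```python
# Minimal sketch (illustrative assumption, not the thesis algorithm): approximate a
# long IRS phase-shift vector by a Kronecker product of two short factors, so that
# only the factors need to be sent over the BS-IRS control link.
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 16, 16                         # factor lengths; the IRS has n1 * n2 = 256 elements

# Structured (LoS-like) phase profile: separable phase ramps along the two panel axes
# plus a small perturbation, mimicking the strong line-of-sight case discussed above.
ramp1 = np.exp(1j * np.pi * 0.3 * np.arange(n1))
ramp2 = np.exp(1j * np.pi * 0.7 * np.arange(n2))
theta = np.kron(ramp1, ramp2) * np.exp(1j * 0.05 * rng.standard_normal(n1 * n2))

# Rank-1 Kronecker factorization: reshape to n1 x n2 and keep the leading SVD component,
# since kron(a, b) reshapes (row-major) to the rank-1 matrix outer(a, b).
T = theta.reshape(n1, n2)
U, s, Vh = np.linalg.svd(T, full_matrices=False)
a = np.sqrt(s[0]) * U[:, 0]             # factor of length n1, sent over the control link
b = np.sqrt(s[0]) * Vh[0, :]            # factor of length n2, sent over the control link

theta_hat = np.kron(a, b)               # reconstructed phase-shift vector at the IRS
rel_err = np.linalg.norm(theta - theta_hat) / np.linalg.norm(theta)
print(f"feedback: {n1 + n2} values instead of {n1 * n2}, relative error {rel_err:.3f}")
```

    In this nearly separable case only n1 + n2 values cross the control link instead of n1 * n2; the thesis obtains such factors via tensor decomposition algorithms and allows a predefined number of factors rather than just two.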