
    Power control with Machine Learning Techniques in Massive MIMO cellular and cell-free systems

    This PhD thesis presents a comprehensive investigation into power control (PC) optimization in cellular (CL) and cell-free (CF) massive multiple-input multiple-output (mMIMO) systems using machine learning (ML) techniques. The primary focus is on enhancing the sum spectral efficiency (SE) of these systems by leveraging various ML methods. To begin with, two existing datasets are combined and extended, resulting in a unique dataset tailored for this research. The weighted minimum mean square error (WMMSE) method, a popular heuristic approach, is used as the baseline for the sum SE maximization problem. Its performance is compared with that of the deep Q-network (DQN) method through training on the complete dataset in both CL- and CF-mMIMO systems. Furthermore, the PC problem in CL/CF-mMIMO systems is tackled through the application of ML-based algorithms, which provide highly efficient solutions with significantly reduced computational complexity [3]. Several ML methods tailored to the PC problem are proposed for CL/CF-mMIMO systems, among them a Fuzzy/DQN method, a deep neural network (DNN)/genetic algorithm (GA) method, a support vector machine (SVM) method, an SVM/RBF method, a decision tree (DT) method, a K-nearest neighbour (KNN) method, a linear regression (LR) method, and a novel fusion scheme. The fusion schemes combine multiple ML methods under two system models, system model 1 (DNN, DNN/GA, DQN, Fuzzy/DQN, and SVM algorithms) and system model 2 (DNN, SVM-RBF, DQL, LR, KNN, and DT algorithms), which are evaluated to maximize the sum SE and offer a viable alternative to computationally intensive heuristic algorithms. Subsequently, the DNN method is singled out for its strong performance and subjected to in-depth analysis. Each ML method is trained on the merged dataset to extract a novel feature vector, and their performances are compared against the WMMSE method in the context of CL/CF-mMIMO systems. This research paves the way for more robust and efficient PC solutions, ensuring enhanced SE and advancing the field of CL/CF-mMIMO systems. The results reveal that the DNN method outperforms the other ML methods in terms of sum SE while exhibiting significantly lower computational complexity than the WMMSE algorithm. The DNN method is therefore chosen for examining its transferability across two datasets (datasets A and B) based on their shared features. Three transfer learning scenarios are devised: training the DNN method on dataset B (S1), applying model A to dataset B (S2), and retraining model A on dataset B (S3). These scenarios are evaluated to assess the effectiveness of the transfer learning approach. Furthermore, three setups of the DNN architecture (DNN1, DNN2, and DNN3) are employed and compared to the WMMSE method using performance metrics such as mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE). Moreover, the research evaluates the impact of the number of base stations (BSs), access points (APs), and users on PC in CL/CF-mMIMO systems using the ML methodology. Datasets capturing diverse scenarios and configurations of mMIMO systems were carefully assembled.
Extensive simulations were conducted to analyze how an increasing number of BSs/APs affects the dimensionality of the input vector of the DNN algorithm. The observed improvements in system performance are quantified by the enhanced discriminative power of the model, illustrated through the cumulative distribution function (CDF), which captures the model's ability to distinguish patterns across diverse scenarios and configurations within mMIMO systems. The CDF reports a probability: an increased area under the CDF corresponds to a higher probability of the random variable falling below a given threshold, which here indicates improved model performance and greater precision in predicting outcomes. Interestingly, the number of users was found to have a limited effect on system performance. The comparison between the DNN-based PC method and the conventional WMMSE method revealed the superior performance and efficiency of the DNN algorithm. Lastly, a comprehensive assessment of the DNN method against the WMMSE method was conducted for the PC optimization problem in both CL and CF system architectures. In addition, this thesis focuses on enhancing SE in wireless communication systems, particularly within cell-free (CF) mmWave massive MIMO environments. It explores the challenges of optimizing SE through traditional methods, including the weighted minimum mean squared error (WMMSE), fractional programming (FP), water-filling, and max-min fairness approaches. The prevalence of access points (APs) over user equipment (UE) highlights the importance of zero-forcing precoding (ZFP) in CF-mMIMO. However, ZFP faces issues related to channel aging and resource utilization. To address these challenges, a novel scheme called delay-tolerant zero-forcing precoding (DT-ZFP) is introduced, leveraging deep learning-aided channel prediction to mitigate channel aging effects. Additionally, a new power control method, HARP-PC, is proposed, combining a heterogeneous graph neural network (HGNN), an adaptive neuro-fuzzy inference system (ANFIS), and reinforcement learning (RL) to optimize SE in dynamic CF mmWave-mMIMO systems. This research advances the field by addressing these challenges and introducing innovative approaches to enhance PC and SE in contemporary wireless communication networks. Overall, this research contributes to the advancement of PC optimization in CL/CF-mMIMO systems through the application of ML techniques, demonstrating the potential of the DNN method and providing insights into system performance under various scenarios and network configurations.
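The core supervised step described above (learning a mapping from channel statistics to power-control coefficients and judging it by the resulting sum SE) can be illustrated with a minimal sketch. The PyTorch snippet below is an assumed, simplified illustration, not the thesis's actual architecture or dataset: the layer sizes, the sigmoid power normalization, and the toy sum-SE surrogate (which ignores beamforming gains) are placeholders.

```python
# Minimal sketch (PyTorch), assuming a thesis-style setup of learning a mapping from
# channel gains to downlink power-control coefficients; layer sizes and the SE formula
# below are illustrative placeholders, not the architecture reported in the thesis.
import torch
import torch.nn as nn

K = 10  # number of users (assumed)

class PowerControlDNN(nn.Module):
    def __init__(self, num_users: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_users, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_users), nn.Sigmoid(),  # powers normalized to [0, 1]
        )

    def forward(self, beta):
        # beta: large-scale fading coefficients per user, shape (batch, num_users)
        return self.net(beta)

def sum_spectral_efficiency(p, beta, noise_power=1.0):
    # Toy sum-SE surrogate: each user's SINR is its own received power divided by the
    # interference from the other users plus noise (no beamforming gains modelled).
    signal = p * beta
    interference = signal.sum(dim=1, keepdim=True) - signal
    sinr = signal / (interference + noise_power)
    return torch.log2(1.0 + sinr).sum(dim=1).mean()

model = PowerControlDNN(K)
beta = torch.rand(32, K)                  # a random mini-batch of channel statistics
p = model(beta)
loss = -sum_spectral_efficiency(p, beta)  # maximize sum SE by minimizing its negative
loss.backward()
```

Under the transfer-learning scenarios above, S3 would correspond to taking such a model pre-trained on dataset A and re-training it for a few epochs on dataset B.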

    Adversarial Machine Learning in Wireless Communication Systems

    We consider adversarial machine learning settings in wireless communication systems with adversaries that attempt to manipulate deep learning (DL)-based wireless communication tasks, such as modulation classification and signal classification. In particular, we consider the evasion attack, i.e., the adversarial attack, to which deep neural networks (DNNs) are known to be highly susceptible even under small-scale attacks. The shared and broadcast nature of the wireless medium increases the potential for adversaries to tamper with DL-based wireless communication tasks. In this dissertation, we study the vulnerability of the DNNs used for various wireless communication applications to adversarial attacks. First, we present channel-aware adversarial attacks against DL-based wireless signal classifiers, where a DNN is used at each receiver to classify over-the-air received signals into modulation types. We propose realistic attacks by considering channel effects from the adversary to each receiver, and a broadcast adversarial attack that crafts a common adversarial perturbation to simultaneously fool classifiers at different receivers. To mitigate the effect of the adversarial attack, we develop a certified defense scheme to guarantee the robustness of the classifier. Next, we consider an adversary that transmits adversarial perturbations using its multiple antennas to fool the classifier into misclassifying the received signals. From the adversarial machine learning perspective, we show how to utilize multiple antennas at the adversary to improve the adversarial attack performance. We consider power allocation among antennas and the utilization of channel diversity while exploiting the multiple antennas at the adversary, and show that attack success increases as the number of antennas at the adversary increases. Then, we consider the privacy of wireless communications from an eavesdropper that employs a DL classifier to detect transmissions. In this setting, a transmitter transmits to its receiver in the presence of an eavesdropper, while a cooperative jammer (CJ) with multiple antennas transmits carefully crafted adversarial perturbations over the air to fool the eavesdropper into classifying the received superposition of signals as noise. We show that this adversarial perturbation causes the eavesdropper to misclassify the received signals as noise with high probability while increasing the bit error rate (BER) at the legitimate receiver only slightly. Next, we consider an adversary that generates adversarial perturbations using a surrogate DNN model trained at the adversary. This surrogate model may differ from the transmitter's classifier significantly, because the adversary and the transmitter experience different channels from the background emitter and therefore their classifiers are trained with different distributions of inputs. We consider different topologies to investigate how the surrogate models trained by the adversary (depending on the differences in the channel effects experienced by the adversary) affect the performance of the adversarial attack. Then, we consider the beam prediction problem using a DNN for initial access (IA) in 5G and beyond communication systems, where the user equipments (UEs) select the beam with the highest received signal strength (RSS) to establish their initial connection. We propose an adversarial attack that manipulates the over-the-air captured RSSs used as the input to the DNN.
This attack reduces the IA performance significantly and fools the DNN into choosing beams with small RSSs. Next, we consider adversarial attacks on power allocation, where the base station (BS) allocates its transmit power to multiple orthogonal subcarriers by using a DNN to serve multiple UEs. The DNN corresponds to a regression model that is trained with channel gains as the input and allocated transmit powers as the output. While the BS allocates the transmit power to the UEs to maximize the rates for all UEs, an adversary aims to minimize these rates. We show that the regression-based DNN is susceptible to adversarial attacks that significantly reduce the achieved communication rates. Finally, we consider reconfigurable intelligent surface (RIS)-aided wireless communication systems, which improve the spectral efficiency and coverage of wireless systems by electronically controlling the electromagnetic material, in the presence of an eavesdropper. While there is an ongoing transmission boosted by the RIS, both the intended receiver and the eavesdropper individually aim to detect this transmission using their own DNN classifiers. The RIS interaction vector is designed by balancing two potentially conflicting objectives: focusing the transmitted signal on the receiver and keeping the transmitted signal away from the eavesdropper. To boost covert communications, adversarial perturbations are added to the signals at the transmitter to fool the eavesdropper's classifier while keeping the effect on the receiver low. We show that the adversarial perturbation and the RIS interaction vector can be jointly designed to effectively increase the signal detection accuracy at the receiver while reducing the detection accuracy at the eavesdropper, enabling covert communications.
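As a rough illustration of the evasion attacks discussed above, the sketch below crafts a power-constrained, FGSM-style perturbation against a modulation classifier. It is a simplified, assumption-laden example: the classifier, the I/Q input shape, and the perturbation-to-signal ratio are hypothetical, and channel effects from the adversary to the receiver as well as broadcast (common) perturbation crafting are omitted.

```python
# Minimal sketch (PyTorch) of a power-constrained FGSM-style perturbation against a
# modulation classifier, in the spirit of the evasion attacks described above; the
# classifier, the input shape, and the perturbation-to-signal ratio (PSR) are assumed,
# and channel effects from adversary to receiver are omitted for brevity.
import torch
import torch.nn.functional as F

def craft_perturbation(classifier, x, true_label, psr_db=-10.0):
    # x: received I/Q samples, shape (batch, 2, num_samples)
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(x), true_label)
    loss.backward()
    grad = x.grad.detach()

    # Scale the gradient direction so the perturbation power obeys the PSR budget.
    signal_power = x.detach().pow(2).mean(dim=(1, 2), keepdim=True)
    budget = signal_power * 10.0 ** (psr_db / 10.0)
    grad_power = grad.pow(2).mean(dim=(1, 2), keepdim=True) + 1e-12
    delta = grad * torch.sqrt(budget / grad_power)
    return (x + delta).detach()   # perturbed signal pushed away from the true class
```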

    Antennas and Propagation Aspects for Emerging Wireless Communication Technologies

    The increasing demand for high-data-rate applications and the delivery of zero-latency multimedia content drive technological evolution towards the design and implementation of next-generation broadband wireless networks. In this context, various novel technologies have been introduced, such as millimeter wave (mmWave) transmission, massive multiple-input multiple-output (MIMO) systems, and non-orthogonal multiple access (NOMA) schemes, in order to support the vision of fifth-generation (5G) wireless cellular networks. The introduction of these technologies, however, is inextricably connected with a holistic redesign of current transceiver structures as well as a reconfiguration of the network architecture. To this end, ultra-dense network deployment along with distributed massive MIMO technologies and intermediate relay nodes have been proposed, among others, in order to ensure an improved quality of service to all mobile users. In the same framework, the design and evaluation of novel antenna configurations able to support wideband applications is of utmost importance for 5G. Furthermore, in order to design reliable 5G systems, channel characterization at these frequencies and in complex propagation environments plays a significant role and cannot be ignored. In this Special Issue, fourteen papers are published, covering various aspects of novel antenna designs for broadband applications, propagation models at mmWave bands, the deployment of NOMA techniques, radio network planning for 5G networks, and multi-beam antenna technologies for 5G wireless communications.

    Multidimensional Channel Estimation for MIMO-OFDM

    Digital wireless communication started in the 1990s with the wide-spread deployment of GSM. Since then, wireless systems have evolved dramatically. Current wireless standards approach the goal of an omnipresent communication system, which fulfils the wish to communicate with anyone, anywhere, at any time. Nowadays, the acceptance of smartphones and tablets is huge and the mobile internet is the core application. Given the current growth, the estimated data traffic in wireless networks in 2020 might be 1000 times higher than that of 2010, exceeding 127 exabytes. Unfortunately, the available radio spectrum is scarce and hence needs to be utilized efficiently. Key technologies, such as multiple-input multiple-output (MIMO), orthogonal frequency-division multiplexing (OFDM), and various MIMO precoding techniques, increase the theoretically achievable channel capacity considerably and are used in the majority of wireless standards. On the one hand, MIMO-OFDM promises substantial diversity and/or capacity gains. On the other hand, the complexity of optimum maximum-likelihood detection grows exponentially and is thus not sustainable. Additionally, the required signaling overhead increases with the number of antennas and thereby reduces the bandwidth efficiency. Iterative receivers, which jointly carry out channel estimation and data detection, are a potential enabler for reducing the pilot overhead and approaching optimum capacity, often at reduced complexity. In this thesis, a graph-based receiver is developed, which iteratively performs joint data detection and channel estimation. The proposed multi-dimensional factor graph introduces transfer nodes that exploit the correlation of adjacent channel coefficients in an arbitrary number of dimensions (e.g. time, frequency, and space). This establishes a simple and flexible receiver structure that facilitates soft channel estimation and data detection in multi-dimensional dispersive channels, and supports arbitrary modulation and channel coding schemes. However, the factor graph exhibits cycles, which make message passing suboptimal. In order to reach the maximum performance, the message exchange schedule, the process of combining messages, and the initialization are adapted. Unlike conventional approaches, which merge nodes of the factor graph to avoid cycles, the proposed message combining methods mitigate the impairing effects of short cycles and retain a low computational complexity. Furthermore, a novel detection algorithm is presented, which combines tree-based MIMO detection with a Gaussian detector. The resulting detector, termed Gaussian tree search detection, integrates well within the factor graph framework and further reduces the overall complexity of the receiver. Additionally, particle swarm optimization (PSO) is investigated for the purpose of initial channel estimation. The bio-inspired algorithm is particularly interesting because of its fast convergence to a reasonable MSE and its versatile adaptation to a variety of optimization problems. It is especially suited for initialization since no a priori information is required. A cooperative approach to PSO is proposed for large-scale antenna implementations, as well as a multi-objective PSO for time-varying frequency-selective channels. The performance of the multi-dimensional graph-based soft iterative receiver is evaluated by means of Monte Carlo simulations. The achieved results are compared to the performance of an iterative state-of-the-art receiver.
It is shown that a similar or better performance is achieved at a lower complexity. An appealing feature of iterative semi-blind channel estimation is that the supported pilot spacings may exceed the limits given by the Nyquist-Shannon sampling theorem. In this thesis, a relation between pilot spacing and channel code is formulated. Depending on the chosen channel code and code rate, the maximum spacing approaches the proposed “coded sampling bound”.
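For context on the pilot-spacing discussion above, the standard Nyquist-type sampling conditions on the pilot grid are sketched below in LaTeX; the thesis's “coded sampling bound” characterizes how far these spacings may be exceeded for a given channel code and rate. The symbols are generic textbook quantities, not the thesis's own notation.

```latex
% Standard (Nyquist-type) pilot-spacing conditions for a channel with maximum Doppler
% frequency f_D and maximum delay spread \tau_max, OFDM symbol duration T_s and
% subcarrier spacing \Delta f; D_t and D_f are the pilot spacings in OFDM symbols and
% subcarriers, respectively.
D_t \,\le\, \frac{1}{2 f_D T_s}, \qquad
D_f \,\le\, \frac{1}{2 \tau_{\max} \Delta f}
```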

    Intelligent and Efficient Ultra-Dense Heterogeneous Networks for 5G and Beyond

    The ultra-dense heterogeneous network (HetNet), in which densified small cells overlay the conventional macro-cells, is a promising technique for the fifth-generation (5G) mobile network. The dense and multi-tier network architecture is able to support extensive data traffic and diverse quality of service (QoS), but meanwhile raises several challenges, especially in interference coordination and resource management. In this thesis, three novel network schemes are proposed to achieve intelligent and efficient operation based on deep learning-enabled network awareness. Both optimization and deep learning methods are developed to achieve intelligent and efficient resource allocation in these proposed network schemes. To improve the cost and energy efficiency of ultra-dense HetNets, a hotspot prediction based virtual small cell (VSC) network is proposed. A VSC is formed only when the traffic volume and user density are extremely high. We leverage the feature extraction capabilities of deep learning techniques and exploit a long short-term memory (LSTM) neural network to predict potential hotspots and form VSCs. Large-scale antenna array enabled hybrid beamforming is also adaptively adjusted for highly directional transmission to cover these VSCs. Within each VSC, one user equipment (UE) is selected as a cell head (CH), which collects the intra-cell traffic using the unlicensed band and relays the aggregated traffic to the macro-cell base station (MBS) in the licensed band. The inter-cell interference can thus be reduced, and the spectrum efficiency can be improved. Numerical results show that the proposed VSCs can reduce power consumption by 55% in comparison with traditional small cells. In addition to the smart VSC deployment, a novel multi-dimensional intelligent multiple access (MD-IMA) scheme is proposed to meet the stringent and diverse QoS requirements of emerging 5G applications with disparate resource constraints. Multiple access (MA) schemes in multi-dimensional resources are adaptively scheduled to accommodate dynamic QoS requirements and network states. The MD-IMA scheme learns the integrated quality of system experience (I-QoSE) by monitoring and predicting QoS through the LSTM neural network. Resource allocation in the MD-IMA scheme is formulated as an optimization problem to maximize the I-QoSE as well as minimize the non-orthogonality (NO) in view of implementation constraints. In order to solve this problem, both model-based optimization algorithms and model-free deep reinforcement learning (DRL) approaches are utilized. Simulation results demonstrate that the achievable I-QoSE gain of MD-IMA over traditional MA is 15% to 18%. In the final part of the thesis, a Software-Defined Networking (SDN) enabled 5G vehicular ad hoc network (VANET) is designed to support the growing vehicle-generated data traffic. In this integrated architecture, to reduce the signaling overhead, vehicles are clustered under the coordination of SDN and one vehicle in each cluster is selected as a gateway to aggregate intra-cluster traffic. To ensure the capacity of the trunk link between the gateway and the macro base station, a Non-orthogonal Multiplexed Modulation (NOMM) scheme is proposed to split the aggregated data stream into multiple layers and use sparse spreading codes to partially superpose the modulated symbols on several resource blocks. Simulation results show that the energy efficiency of the proposed NOMM scheme is around 1.5 to 2 times that of the typical orthogonal transmission scheme.
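The hotspot-prediction step that triggers VSC formation can be sketched as a small LSTM regressor. The following PyTorch snippet is an assumed, minimal illustration: the input features (traffic volume and user density per slot), the window length, and the activation threshold are placeholders rather than the configuration used in the thesis.

```python
# Minimal sketch (PyTorch) of LSTM-based traffic prediction used to decide whether to
# form a virtual small cell (VSC); window length, feature choice, and the activation
# threshold are illustrative assumptions, not the thesis configuration.
import torch
import torch.nn as nn

class HotspotPredictor(nn.Module):
    def __init__(self, num_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, time_steps, features), e.g. [traffic volume, user density] per slot
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)  # predicted load in the next slot

predictor = HotspotPredictor()
history = torch.rand(1, 24, 2)                 # last 24 observation slots (assumed)
predicted_load = predictor(history)
form_vsc = bool(predicted_load.item() > 0.8)   # form a VSC only above a load threshold
```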

    Satellite Communications

    This study is motivated by the need to give the reader a broad view of the developments, key concepts, and technologies related to the evolution of the information society, with a focus on wireless communications and geoinformation technologies and their role in the environment. To give perspective, it aims to assist people active in industry, the public sector, and the Earth sciences by providing a basis for their continued work and thinking.

    Optimization Methods for Heterogeneous Wireless Communication Networks: Planning, Configuration and Operation

    With the fourth generation of wireless radio communication networks reaching maturity, the upcoming fifth generation (5G) is a major subject of current research. 5G networks are designed to achieve a multitude of performance gains and the ability to provide services dedicated to various application scenarios. These applications include those that require increased network throughput, low latency, high reliability, and support for a very high number of connected devices. Since the achieved throughput on a single point-to-point transmission is already close to the theoretical optimum, more effort needs to be invested to enable further performance gains in 5G. Technology candidates for future wireless networks include using very large antenna arrays with hundreds of antenna elements or expanding the bandwidth used for transmission into the millimeter-wave spectrum. Both these and other envisioned approaches require significant changes to the network architecture and a high economic commitment from the network operator. An already well-established technology for expanding the throughput of a wireless communication network is densification of the cellular layout. This is achieved by supplementing the existing, usually high-power, macro cells with a larger number of low-power small cells, resulting in a so-called heterogeneous network (HetNet). This approach builds upon the existing network infrastructure and has been shown to support the aforementioned technologies requiring more sophisticated hardware. Network densification using small cells can therefore be considered a suitable bridging technology to pave the way for 5G and subsequent generations of mobile communication networks. The most significant challenge associated with HetNets is that densification is only beneficial for the overall network performance up to a certain density, and can be harmful beyond that point. The network throughput is limited by the additional interference caused by the close proximity of cells, and the economic operability of the network is limited by the vastly increased energy consumption and hardware cost associated with dense cell deployment. This dissertation addresses the challenge of enabling reliable performance gains through network densification while guaranteeing quality-of-service conditions and economic operability. The proposed approach is to address the underlying problem vertically over multiple layers, which differ in the time horizon on which network optimization measures are initiated, necessary information is gathered, and optimized solutions are found. These time horizons are classified as the network planning phase, network configuration phase, and network operation phase. Optimization schemes for resource and energy consumption are developed that operate mostly in the network configuration phase. Since these approaches require a load-balanced network, schemes to achieve and maintain load balancing between cells are introduced for the network planning phase and operation phase, respectively. For the network planning phase, an approach is proposed for optimizing the locations of additional small cells in an existing wireless network architecture, and for scheduling their activity phases in advance according to data demand forecasts. Optimizing the locations of multiple cells jointly is shown to be superior to deploying them one by one based on greedy heuristic approaches.
Furthermore, the cell activity scheduling obtains the highest load balancing performance if the time schedule and the durations of activity periods are jointly optimized, an approach originating from process engineering. Simulation results show that the load levels of overloaded cells can be effectively decreased in the network planning phase by choosing optimized deployment locations and cell activity periods. Operating the network with high resource efficiency while ensuring quality-of-service constraints is addressed using resource optimization in the network configuration phase. An optimization problem is designed to minimize the resource consumption of the network by operating multiple separated resource slices. The original problem, which is computationally intractable for large networks, is reformulated with a linear inner approximation that is shown to achieve close-to-optimal performance. The interference is approximated with a dynamic model that achieves a closer approximation of the actual cell load than the static worst-case model established in comparable state-of-the-art approaches. In order to mitigate the increase in energy consumption associated with the increase in cell density, an energy minimization problem is proposed that jointly optimizes the transmit power and activity status of all cells in the network. An original problem formulation is designed and an inner approximation with better computational tractability is proposed. Energy consumption levels of a HetNet are simulated for multiple energy minimization approaches. The proposed method achieves lower energy consumption levels than approaches based on an exhaustive search over all cell activity configurations or heuristic power scaling. Additionally, in simulations, the likelihood of finding an energy-minimized solution that satisfies quality-of-service constraints is shown to be significantly higher for the proposed approach. Finally, the problem of maintaining load balancing while the network is in operation is addressed with a decentralized scheme based on a learning system using multi-class support vector machines. Established methods often require significant information exchange between network entities and a centralized optimization of the network to achieve load balancing. In this dissertation, a decentralized learning system is proposed that globally balances the load levels close to the optimal solution while requiring only limited local information exchange.
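The joint transmit-power and cell-activation energy minimization described above can be written, in a generic hedged form that is not the dissertation's exact formulation, as the mixed-integer program below.

```latex
% Generic form of the joint power/activation energy-minimization problem sketched above
% (not the dissertation's exact formulation): x_c \in {0,1} is the on/off state of cell
% c, p_c its transmit power, P^{static}_c its idle consumption, \Delta_c the slope of
% its load-dependent consumption, and R_u(p,x) the rate delivered to user u, which must
% meet the QoS target R^{min}_u.
\min_{x,\,p} \;\; \sum_{c \in \mathcal{C}} \left( x_c P^{\mathrm{static}}_c + \Delta_c\, p_c \right)
\quad \text{s.t.} \quad R_u(p, x) \ge R^{\min}_u \;\; \forall u, \qquad
0 \le p_c \le x_c P^{\max}_c, \;\; x_c \in \{0,1\} \;\; \forall c.
```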

    Visible Light Communication (VLC)

    Visible light communication (VLC) using light-emitting diodes (LEDs) or laser diodes (LDs) has been envisioned as one of the key enabling technologies for 6G and Internet of Things (IoT) systems, owing to its appealing advantages, including abundant and unregulated spectrum resources, no electromagnetic interference (EMI) radiation and high security. However, despite its many advantages, VLC faces several technical challenges, such as the limited bandwidth and severe nonlinearity of opto-electronic devices, link blockage and user mobility. Therefore, significant efforts are needed from the global VLC community to develop VLC technology further. This Special Issue, “Visible Light Communication (VLC)”, provides an opportunity for global researchers to share their new ideas and cutting-edge techniques to address the above-mentioned challenges. The 16 papers published in this Special Issue represent the fascinating progress of VLC in various contexts, including general indoor and underwater scenarios, and the emerging application of machine learning/artificial intelligence (ML/AI) techniques in VLC

    Advanced receivers for distributed cooperation in mobile ad hoc networks

    Mobile ad hoc networks (MANETs) are rapidly deployable wireless communication systems, operating with minimal coordination in order to avoid spectral efficiency losses caused by overhead. Cooperative transmission schemes are attractive for MANETs, but the distributed nature of such protocols comes with an increased level of interference, whose impact is further amplified by the need to push the limits of energy and spectral efficiency. Hence, the impact of interference has to be mitigated through the use of PHY-layer signal processing algorithms with reasonable computational complexity. Recent advances in iterative digital receiver design exploit approximate Bayesian inference and derivative message passing techniques to improve the capabilities of well-established turbo detectors. In particular, expectation propagation (EP) is a flexible technique which offers attractive complexity-performance trade-offs in situations where conventional belief propagation is limited by computational complexity. Moreover, thanks to emerging techniques in deep learning, such iterative structures can be cast into deep detection networks, where learning the algorithmic hyper-parameters further improves receiver performance. In this thesis, EP-based finite-impulse-response decision feedback equalizers are designed, and they achieve significant improvements, especially in high spectral efficiency applications, over more conventional turbo-equalization techniques, while having the advantage of being asymptotically predictable. A framework for designing frequency-domain EP-based receivers is proposed in order to obtain detection architectures with low computational complexity. This framework is theoretically and numerically analysed with a focus on channel equalization, and it is then extended to handle detection for time-varying channels and multiple-antenna systems. The design of multiple-user detectors and the impact of channel estimation are also explored to understand the capabilities and limits of this framework. Finally, a finite-length performance prediction method is presented for carrying out link abstraction for the EP-based frequency-domain equalizer. The impact of accurate physical layer modelling is evaluated in the context of cooperative broadcasting in tactical MANETs, thanks to a flexible MAC-level simulator.
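As background for the frequency-domain receivers mentioned above, the sketch below implements a plain (non-iterative) MMSE frequency-domain equalizer in NumPy. This is only the conventional starting point that an EP-based frequency-domain receiver would refine with soft feedback from the decoder; the block length, channel taps, and noise level in the toy usage are assumptions.

```python
# Minimal sketch (NumPy) of a plain MMSE frequency-domain equalizer for a cyclic-prefix
# single-carrier block; this is the non-iterative baseline that the EP-based
# frequency-domain receiver described above refines iteratively with soft feedback.
import numpy as np

def mmse_fde(rx_block, channel_taps, noise_var):
    n = rx_block.size
    H = np.fft.fft(channel_taps, n)                 # channel frequency response
    Y = np.fft.fft(rx_block)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)   # per-tone MMSE coefficients
    return np.fft.ifft(W * Y)                       # time-domain symbol estimates

# Toy usage: a BPSK block through a 3-tap channel with additive noise (all assumed).
rng = np.random.default_rng(0)
symbols = 2 * rng.integers(0, 2, 64) - 1
h = np.array([0.8, 0.5, 0.3])
# Circular convolution models the cyclic-prefix transmission.
rx = np.real(np.fft.ifft(np.fft.fft(symbols) * np.fft.fft(h, 64)))
rx = rx + 0.05 * rng.standard_normal(64)
est = np.real(mmse_fde(rx, h, noise_var=0.05 ** 2))
```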