
    Spatial modulation schemes and modem architectures for millimeter wave radio systems

    The rapid growth of the wireless industry opens the door to several use cases, such as the Internet of Things and device-to-device communications, which require boosting the reliability and the spectral efficiency of the wireless access network while reducing the energy consumption at the terminals. The vast spectrum available in the millimeter-wave (mmWave) band is one of the most promising candidates for achieving high-speed communications. However, the propagation of radio signals at high carrier frequencies suffers from severe path loss, which reduces the coverage area. Fortunately, the small wavelength of mmWave signals allows packing a large number of antennas not only at the base station (BS) but also at the user terminal (UT). These massive antenna arrays can be exploited to attain high beamforming and combining gains and thereby overcome the path loss associated with mmWave propagation. In conventional (fully digital) multiple-input multiple-output (MIMO) transceivers, each antenna is connected to a dedicated radio-frequency (RF) chain and a high-resolution analog-to-digital converter. Unfortunately, these devices are expensive and power hungry, especially in the mmWave band and when operating over large bandwidths. With this in mind, several MIMO transceiver architectures have been proposed with the purpose of reducing hardware cost and energy consumption. Fully connected hybrid analog-digital precoding schemes were proposed with the aim of replacing some of the conventional RF chains by energy-efficient analog devices. This fully connected mapping, however, requires many analog devices, which leads to non-negligible energy consumption. Partially connected hybrid architectures have therefore been proposed to improve the energy efficiency of fully connected transceivers by reducing the number of analog devices. Simplifying the transceiver architecture to reduce power consumption, however, degrades the attainable spectral efficiency.

    In this PhD dissertation, we propose novel modulation schemes and massive MIMO transceiver designs to address the challenges of mmWave cellular systems. The structure of the doctoral manuscript is as follows. In Chapter 1, we introduce the transceiver design challenges of mmWave cellular communications, illustrate several state-of-the-art architectures and highlight their limitations, and then propose a scheme that attains both high energy efficiency and high spectral efficiency. In Chapter 2, we first mathematically describe the state of the art of spatial modulation (SM) and highlight the main challenges these schemes face in the mmWave band. To address these challenges (for example, high cost and high power consumption), we propose novel SM schemes specifically designed for mmWave massive MIMO systems, and we explain how they can be exploited to attain an energy-efficient UT architecture. Finally, we present the channel model, the system assumptions and the power consumption models of the transceiver devices. In Chapter 3, we consider a single-user SM system. First, we propose a downlink (DL) receive SM (RSM) scheme in which the UT can be implemented with a single or multiple RF chains and the BS can use a fully digital or a hybrid architecture. Moreover, we consider different precoders at the BS and propose low-complexity, efficient antenna selection schemes for narrowband and wideband transmissions.
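
    To make the receive spatial modulation (RSM) principle mentioned above concrete, the toy sketch below uses a zero-forcing precoder at the BS to steer each transmission onto a single UT receive antenna, so that the index of that antenna carries information alongside a QPSK symbol. The array sizes, the ZF precoder, the max-energy detector and all parameters are assumptions made for illustration, not the dissertation's specific designs.

        # Toy receive spatial modulation (RSM) link with zero-forcing precoding.
        # Dimensions, precoder and detector are illustrative assumptions only.
        import numpy as np

        rng = np.random.default_rng(0)
        Nt, Nr = 16, 4                      # BS / UT antennas (hypothetical sizes)
        H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

        # Zero-forcing precoder: column i steers the signal onto receive antenna i only.
        P = H.conj().T @ np.linalg.inv(H @ H.conj().T)
        P /= np.linalg.norm(P, axis=0, keepdims=True)    # per-beam power normalisation

        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

        def rsm_transmit(ant_idx, sym_idx):
            """Antenna index carries log2(Nr) bits, the QPSK symbol carries 2 more."""
            return P[:, ant_idx] * qpsk[sym_idx]

        def rsm_detect(y):
            """Pick the receive antenna with the largest energy, then demap its symbol."""
            ant_hat = int(np.argmax(np.abs(y)))
            sym_hat = int(np.argmin(np.abs(qpsk - y[ant_hat] / np.abs(y[ant_hat]))))
            return ant_hat, sym_hat

        # One noisy transmission: antenna 2 active, QPSK symbol 1.
        snr_lin = 10 ** (15 / 10)
        x = rsm_transmit(2, 1)
        noise = (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) * np.sqrt(0.5 / snr_lin)
        y = H @ x + noise
        print(rsm_detect(y))                 # ideally (2, 1)

    In a full design, the precoder, the number of RF chains and the antenna selection would be optimised jointly, which is what the chapters summarised above address.
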
    Still in Chapter 3, we then propose a joint uplink-downlink SM scheme in which RSM is used in the DL and transmit SM (TSM) in the uplink (UL), built on an energy-efficient hybrid UT architecture. In Chapter 4, we extend the SM system to the multi-user case. Specifically, we develop joint multi-user power allocation, user selection and antenna selection algorithms for the broadcast and multiple access channels. Chapter 5 concludes the thesis and proposes future research directions.

    Considering the demanding requirements of next-generation services, current network infrastructures have been forced to evolve in how they handle the different network and computing resources. To this end, new technologies have emerged to support the functionalities required for this evolution, which also implies a major paradigm shift in the design of architectures for future network deployments. In this regard, this doctoral thesis presents an analysis of these technologies, focused on the case of inter/intra Data Centre networks. Consequently, the introduction of technologies based on optical networks has been studied in order to identify current problems that can be solved through the design and application of new techniques, as well as through the development or extension of network architecture components. To this end, a series of proposals has been defined around crucial aspects, such as the SDN control of optical devices to enable the management of hybrid networks, the need to define an optical topology discovery mechanism capable of exposing accurate information, and the analysis of the existing gaps in defining a common architecture to support 5G communications. To validate these proposals, a series of experimental validations is presented by means of specific test scenarios, demonstrating advances in the control, orchestration, virtualization and management of resources with the aim of optimizing their utilization. The results presented, besides corroborating the correct operation of the proposed methods and components, open the way to new ways of adapting current network deployments to the challenges posed at the start of a new era of telecommunications.
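
    As a small illustration of the optical topology discovery aspect mentioned above, the sketch below assembles a network graph from hypothetical link advertisements and exposes it for path computation. The report format, node names and attributes are assumptions made for this sketch, not the thesis's actual discovery mechanism or its SDN controller integration.

        # Toy optical topology discovery: assemble a graph from link advertisements.
        # The advertisement format and attributes are hypothetical.
        import networkx as nx

        # Hypothetical link reports as they might be collected by an SDN controller.
        link_reports = [
            {"a": "roadm-1", "b": "roadm-2", "length_km": 40, "available_channels": 80},
            {"a": "roadm-2", "b": "roadm-3", "length_km": 25, "available_channels": 64},
            {"a": "roadm-1", "b": "dc-gw-1", "length_km": 2,  "available_channels": 96},
        ]

        topology = nx.Graph()
        for link in link_reports:
            topology.add_edge(link["a"], link["b"],
                              length_km=link["length_km"],
                              available_channels=link["available_channels"])

        # The exposed topology can then feed path computation, e.g. shortest path by length.
        path = nx.shortest_path(topology, "dc-gw-1", "roadm-3", weight="length_km")
        print(path)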

    Interference-Aware Scheduling for Connectivity in MIMO Ad Hoc Multicast Networks

    We consider a multicast scenario involving an ad hoc network of co-channel MIMO nodes in which a source node attempts to share a streaming message with all nodes in the network via some pre-defined multi-hop routing tree. The message is assumed to be broken down into packets, and the transmission is conducted over multiple frames. Each frame is divided into time slots, and each link in the routing tree is assigned one time slot in which to transmit its current packet. We present an algorithm for determining the number of time slots and the scheduling of the links in these time slots in order to optimize the connectivity of the network, which we define to be the probability that all links can achieve the required throughput. In addition to time multiplexing, the MIMO nodes also employ beamforming to manage interference when links are simultaneously active, and the beamformers are designed with the maximum connectivity metric in mind. The effects of outdated channel state information (CSI) are taken into account in both the scheduling and the beamforming designs. We also derive bounds on the network connectivity and sum transmit power in order to illustrate the impact of interference on network performance. Our simulation results demonstrate that the choice of the number of time slots is critical in optimizing network performance, and illustrate the significant advantage provided by multiple antennas in improving network connectivity.
    Comment: 34 pages, 12 figures, accepted by IEEE Transactions on Vehicular Technology, Dec. 201
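
    The scheduling problem described above can be illustrated with a much simpler greedy heuristic: pack the routing-tree links into as few time slots as possible while every co-scheduled link keeps its SINR above the throughput threshold. The link set, gain model and threshold below are assumptions, and the sketch omits the paper's beamforming design and outdated-CSI modelling.

        # Simplified greedy interference-aware slot assignment for a multicast routing tree.
        import numpy as np

        rng = np.random.default_rng(1)
        num_nodes = 8
        # Hypothetical routing-tree links (transmitter, receiver) node indices.
        links = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6), (6, 7)]
        # Random pairwise power gains between nodes (gain[i, j]: from node i to node j).
        gain = rng.exponential(1.0, size=(num_nodes, num_nodes))
        noise_power = 0.1
        sinr_min = 2.0          # required SINR for a link to achieve its target throughput

        def slot_is_feasible(slot, candidate):
            """Check that every link in the slot (plus the candidate) still meets sinr_min."""
            active = slot + [candidate]
            for tx, rx in active:
                interference = sum(gain[other_tx, rx] for other_tx, other_rx in active
                                   if (other_tx, other_rx) != (tx, rx))
                if gain[tx, rx] / (noise_power + interference) < sinr_min:
                    return False
            return True

        slots = []
        for link in links:
            for slot in slots:
                if slot_is_feasible(slot, link):
                    slot.append(link)
                    break
            else:
                slots.append([link])   # open a new time slot (even if this link must run alone)

        for t, slot in enumerate(slots):
            print(f"slot {t}: {slot}")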

    Novel LDPC coding and decoding strategies: design, analysis, and algorithms

    In this digital era, modern communication systems play an essential part in nearly every aspect of life, with examples ranging from mobile networks and satellite communications to the Internet and data transfer. Unfortunately, all practical communication systems are noisy, which means we can either improve the physical characteristics of the channel or adopt a systematic remedy, namely error control coding. The history of error control coding dates back to 1948, when Claude Shannon published his celebrated work “A Mathematical Theory of Communication”, which built a framework for channel coding, source coding and information theory. For the first time, there was evidence for the existence of channel codes that enable reliable communication as long as the information rate of the code does not surpass the so-called channel capacity. Nevertheless, in the following decades no codes were shown to closely approach this theoretical bound until the arrival of turbo codes and the renaissance of LDPC codes. As a strong contender to turbo codes, LDPC codes offer parallel implementation of the decoding algorithms and, more crucially, graphical construction of the codes. However, they also have drawbacks, e.g. significant performance degradation in the presence of short cycles and very high decoding latency. In this thesis, we focus on the practical realisation of finite-length LDPC codes and devise algorithms to tackle these issues.

    Firstly, rate-compatible (RC) LDPC codes with short/moderate block lengths are investigated by optimising the structure of the Tanner graph (TG), in order to achieve a variety of code rates (0.1 < R < 0.9) using only a single encoder-decoder pair. As is widely recognised in the literature, the presence of short cycles considerably reduces the overall performance of LDPC codes, which significantly limits their application in communication systems. To effectively reduce the impact of short cycles for different code rates, algorithms for counting short cycles and a graph-related metric called Extrinsic Message Degree (EMD) are applied in the development of the proposed puncturing and extension techniques. A complete set of simulations demonstrates that the proposed RC designs can largely minimise the performance loss caused by puncturing or extension.

    Secondly, at the decoding end, we study novel decoding strategies that compensate for the negative effect of short cycles by reweighting part of the extrinsic messages exchanged between the nodes of a TG. The proposed reweighted belief propagation (BP) algorithms aim at efficient decoding, i.e. accurate signal reconstruction with low decoding latency, for LDPC codes via various design methods. A variable factor appearance probability belief propagation (VFAP-BP) algorithm is proposed along with an improved version, the locally-optimised reweighted (LOW)-BP algorithm, both of which significantly enhance decoding performance for regular and irregular LDPC codes. More importantly, the optimisation of the reweighting parameters takes place entirely offline, so no additional computational complexity is incurred during real-time decoding.
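
    The reweighting idea can be sketched with a uniformly scaled min-sum decoder: check-to-variable messages are damped by a single factor rho before being recombined at the variable nodes. This uniform factor (normalised min-sum) is only a stand-in for the VFAP-BP and LOW-BP rules developed in the thesis, and the (7,4) Hamming parity-check matrix below is a toy stand-in for a real LDPC code.

        # Normalised (uniformly reweighted) min-sum decoding over BPSK/AWGN.
        # The single factor "rho" replaces the thesis's locally optimised factors.
        import numpy as np

        def minsum_decode(H, llr_ch, rho=0.8, max_iter=50):
            m, n = H.shape
            edges = H.astype(bool)
            V = np.where(edges, llr_ch, 0.0)            # variable-to-check messages
            for _ in range(max_iter):
                # Check-node update (min-sum), scaled by the reweighting factor rho.
                C = np.zeros_like(V)
                for i in range(m):
                    idx = np.flatnonzero(edges[i])
                    msgs = V[i, idx]
                    signs = np.sign(msgs) + (msgs == 0)          # treat 0 as +1
                    mags = np.abs(msgs)
                    total_sign = np.prod(signs)
                    order = np.argsort(mags)
                    min1, min2 = mags[order[0]], mags[order[1]]
                    for k, j in enumerate(idx):
                        mag = min2 if k == order[0] else min1    # exclude own message
                        C[i, j] = rho * total_sign * signs[k] * mag
                # Variable-node update and tentative hard decision.
                col_sum = llr_ch + C.sum(axis=0)
                V = np.where(edges, col_sum - C, 0.0)
                x_hat = (col_sum < 0).astype(int)
                if not np.any(H @ x_hat % 2):                    # valid codeword: stop
                    break
            return x_hat

        # Toy example: (7,4) Hamming parity-check matrix, all-zero codeword, BPSK +1.
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        rng = np.random.default_rng(0)
        sigma = 0.5
        y = np.ones(7) + sigma * rng.standard_normal(7)
        llr_ch = 2 * y / sigma**2
        print(minsum_decode(H, llr_ch))          # ideally the all-zero codeword
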
    Lastly, two iterative detection and decoding (IDD) receivers are presented for multiple-input multiple-output (MIMO) systems operating in a spatial multiplexing configuration. QR decomposition (QRD)-type IDD receivers utilise the proposed multiple-feedback (MF)-QRD or variable-M (VM)-QRD detection algorithm together with a standard BP decoding algorithm, while knowledge-aided (KA)-type receivers are equipped with a simple soft parallel interference cancellation (PIC) detector and the proposed reweighted BP decoders. In the uncoded scenario, the proposed MF-QRD and VM-QRD algorithms are shown to approach optimal performance while requiring reduced computational complexity. In the LDPC-coded scenario, simulation results illustrate that the proposed QRD-type IDD receivers offer near-optimal performance after a small number of detection/decoding iterations, and that the proposed KA-type IDD receivers significantly outperform receivers using alternative decoding algorithms while requiring similar decoding complexity.
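
    Both the rate-compatible constructions and the reweighted decoders above revolve around short cycles in the Tanner graph. A minimal way to count the shortest (length-4) cycles directly from a parity-check matrix is sketched below: every pair of check nodes that shares two or more variable nodes closes 4-cycles. The toy matrix is illustrative only; the thesis's counting algorithms and the EMD metric go further than this.

        # Count length-4 cycles in the Tanner graph of a parity-check matrix H.
        import numpy as np
        from math import comb

        def count_4cycles(H):
            H = np.asarray(H, dtype=int)
            overlap = H @ H.T              # overlap[i, j]: variables shared by checks i and j
            total = 0
            m = H.shape[0]
            for i in range(m):
                for j in range(i + 1, m):
                    total += comb(int(overlap[i, j]), 2)   # each shared pair closes one 4-cycle
            return total

        H = np.array([[1, 1, 0, 1, 0, 0],
                      [1, 0, 1, 0, 1, 0],
                      [0, 1, 1, 0, 0, 1],
                      [1, 1, 1, 0, 0, 0]])
        print(count_4cycles(H))            # 3 for this toy matrix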

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in optical communication networks is motivated by the unprecedented growth in complexity that optical networks have faced in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing and the compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
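
    As one concrete flavour of the network-data analysis discussed above, the sketch below trains a regressor on synthetic lightpath features to predict a quality-of-transmission proxy. The feature set, the synthetic data-generating rule and the random-forest model are assumptions made for this sketch; the paper surveys many such use cases rather than prescribing this one.

        # Illustrative ML task for optical networks: regress a synthetic OSNR proxy
        # from hypothetical lightpath features (all data here is synthetic).
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        # Hypothetical features: length (km), spans, co-propagating channels, launch power (dBm).
        X = np.column_stack([
            rng.uniform(50, 2000, n),
            rng.integers(1, 30, n),
            rng.integers(1, 80, n),
            rng.uniform(-2, 3, n),
        ])
        # Synthetic OSNR: degrades with length, spans and load (toy model only).
        y = 35 - 0.008 * X[:, 0] - 0.3 * X[:, 1] - 0.05 * X[:, 2] + 1.5 * X[:, 3] \
            + rng.normal(0, 1.0, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("test R^2:", round(model.score(X_te, y_te), 3))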