2 research outputs found

    Learning-Based Adaptive Transmission for Limited Feedback Multiuser MIMO-OFDM

    Performing link adaptation in a multiantenna, multiuser system is challenging because of the coupling between precoding, user selection, spatial mode selection, and the use of limited feedback about the channel. The problem is exacerbated by the difficulty of selecting the proper modulation and coding scheme (MCS) when using orthogonal frequency division multiplexing (OFDM). This paper presents a data-driven approach to link adaptation for multiuser multiple-input multiple-output (MIMO) OFDM systems. A machine learning classifier selects the modulation and coding scheme, taking as input the SNR values in the different subcarriers and spatial streams. A new approximation is developed to estimate the unknown interuser interference due to the use of limited feedback; this approximation makes it possible to obtain SNR information at the transmitter with minimal communication overhead. A greedy algorithm performs spatial mode and user selection with affordable complexity, without resorting to an exhaustive search. The proposed adaptation is studied in the context of the IEEE 802.11ac standard and is shown to schedule users and adjust the transmission parameters to the channel conditions as well as to the rate of the feedback channel.
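    The abstract describes two concrete components: an MCS classifier fed by per-subcarrier/per-stream SNRs, and a greedy sum-rate user/mode selection. The Python below is a minimal illustrative sketch of that combination, not the paper's actual algorithm: the k-nearest-neighbors classifier, the synthetic training data, the MCS rate table, and the fixed per-stream interference penalty (a crude stand-in for the paper's interuser-interference approximation) are all assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# --- Toy MCS classifier: maps a per-subcarrier SNR vector to an MCS index. ---
# Training data is synthetic; a real system would train on simulated or
# measured (SNR vector, highest MCS meeting a target error rate) pairs.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 30.0, size=(500, 16))              # 16 subcarrier SNRs (dB)
y_train = np.clip((X_train.mean(axis=1) / 4).astype(int), 0, 7)  # MCS index 0..7
mcs_clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

MCS_RATES = [6.5, 13.0, 19.5, 26.0, 39.0, 52.0, 58.5, 65.0]  # Mb/s per stream (illustrative)
INTERF_PENALTY_DB = 3.0  # assumed SNR loss per extra co-scheduled stream

def stream_rate(snr_db, n_streams):
    """Rate of one stream after penalizing SNR for co-scheduled streams."""
    eff = snr_db - INTERF_PENALTY_DB * (n_streams - 1)
    mcs = int(mcs_clf.predict(eff.reshape(1, -1))[0])
    return MCS_RATES[mcs]

def greedy_select(snr_per_user, max_streams=4):
    """Greedy user/mode selection: add the user that most increases the sum rate."""
    selected, best_sum = [], 0.0
    candidates = set(range(len(snr_per_user)))
    while candidates and len(selected) < max_streams:
        n = len(selected) + 1
        sums = {u: sum(stream_rate(snr_per_user[v], n) for v in selected + [u])
                for u in candidates}
        u = max(sums, key=sums.get)
        if sums[u] <= best_sum:   # another stream no longer pays off: stop
            break
        selected.append(u)
        best_sum = sums[u]
        candidates.remove(u)
    return selected, best_sum

users = rng.uniform(5.0, 28.0, size=(8, 16))  # 8 users' per-subcarrier SNRs (dB)
print(greedy_select(users))
```

    Because each greedy step re-evaluates every remaining user against the current selection, the cost is quadratic in the number of users rather than the exponential cost of an exhaustive search over user/stream combinations, which is the complexity trade-off the abstract refers to.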

    Machine Learning for Wireless Communications in the Internet of Things: A Comprehensive Survey

    The Internet of Things (IoT) is expected to require more effective and efficient wireless communications than ever before. For this reason, techniques such as spectrum sharing, dynamic spectrum access, extraction of signal intelligence, and optimized routing will soon become essential components of the IoT wireless communication paradigm. Given that the majority of the IoT will be composed of tiny, mobile, and energy-constrained devices, traditional techniques based on a priori network optimization may not be suitable, since (i) an accurate model of the environment may not be readily available in practical scenarios, and (ii) the computational requirements of traditional optimization techniques may prove prohibitive for IoT devices. To address these challenges, much research has been devoted to exploring the use of machine learning for problems in the IoT wireless communications domain. This work provides a comprehensive survey of the state of the art in the application of machine learning techniques to key problems in IoT wireless communications, with an emphasis on its ad hoc networking aspect. First, we present extensive background notions of machine learning techniques. Then, adopting a bottom-up approach, we examine existing work on machine learning for the IoT at the physical, data-link, and network layers of the protocol stack. Thereafter, we discuss directions taken by the community towards hardware implementation to ensure the feasibility of these techniques. Additionally, before concluding, we provide a brief discussion of the application of machine learning in the IoT beyond wireless communication. Finally, each of these discussions is accompanied by a detailed analysis of the related open problems and challenges.