
    A Joint Tensor Completion and Prediction Scheme for Multi-Dimensional Spectrum Map Construction

    Spectrum data, which are usually characterized by many dimensions, such as location, frequency, time, and signal strength, present formidable challenges in terms of acquisition, processing, and visualization. In practice, a portion of spectrum data entries may be unavailable due to interference during the acquisition process or compression during the sensing process. Nevertheless, completion of multi-dimensional spectrum data has drawn little attention from researchers working in the field. In this paper, we first put forward the concept of the spectrum tensor to depict multi-dimensional spectrum data. Then, we develop a joint tensor completion and prediction scheme, which combines an improved tensor completion algorithm with prediction models to retrieve the incomplete measurements. Moreover, we build an experimental platform using Universal Software Radio Peripheral to collect real-world spectrum tensor data. Experimental results demonstrate that the proposed joint tensor processing scheme outperforms relying on completion or prediction alone.
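The paper's own improved completion algorithm is not spelled out in the abstract. As a generic illustration of the underlying idea only (assuming nothing about the authors' method), the sketch below fits a low-rank CP model to just the observed entries of a synthetic location x frequency x time tensor by gradient descent and uses the fitted model to fill in the missing entries; the shapes, rank, and step size are all illustrative.

```python
import numpy as np

def cp_complete(T, mask, rank, n_iters=5000, lr=0.005, seed=0):
    """Fill missing entries of a 3-D tensor with a rank-`rank` CP model,
    fitted by gradient descent on the observed entries only."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = 0.5 * rng.standard_normal((I, rank))
    B = 0.5 * rng.standard_normal((J, rank))
    C = 0.5 * rng.standard_normal((K, rank))
    for _ in range(n_iters):
        est = np.einsum('ir,jr,kr->ijk', A, B, C)
        resid = np.where(mask, est - T, 0.0)   # error on observed entries only
        A -= lr * np.einsum('ijk,jr,kr->ir', resid, B, C)
        B -= lr * np.einsum('ijk,ir,kr->jr', resid, A, C)
        C -= lr * np.einsum('ijk,ir,jr->kr', resid, A, B)
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Synthetic rank-2 "spectrum tensor": location x frequency x time.
rng = np.random.default_rng(1)
T = np.einsum('ir,jr,kr->ijk',
              rng.standard_normal((8, 2)),
              rng.standard_normal((9, 2)),
              rng.standard_normal((10, 2)))
mask = rng.random(T.shape) < 0.6               # observe 60% of the entries
T_hat = cp_complete(T, mask, rank=2)
rel_err = np.linalg.norm(T_hat[~mask] - T[~mask]) / np.linalg.norm(T[~mask])
```

Because the synthetic tensor is exactly low rank and well over half of its entries are observed, the missing entries can be recovered from the fitted factors.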

    A Tutorial on Environment-Aware Communications via Channel Knowledge Map for 6G

    Sixth-generation (6G) mobile communication networks are expected to have dense infrastructures, large-dimensional channels, cost-effective hardware, diversified positioning methods, and enhanced intelligence. Such trends bring both new challenges and opportunities for the practical design of 6G. On one hand, acquiring channel state information (CSI) in real time for all wireless links becomes quite challenging in 6G. On the other hand, there would be numerous data sources in 6G containing high-quality location-tagged channel data, making it possible to better learn the local wireless environment. By exploiting such new opportunities to tackle the CSI acquisition challenge, there is a promising paradigm shift from the conventional environment-unaware communications to the new environment-aware communications based on the novel approach of channel knowledge map (CKM). This article aims to provide a comprehensive tutorial overview of environment-aware communications enabled by CKM to fully harness its benefits for 6G. First, the basic concept of CKM is presented, and a comparison of CKM with various existing channel inference techniques is discussed. Next, the main techniques for CKM construction are discussed, including both the model-free and model-assisted approaches. Furthermore, a general framework is presented for the utilization of CKM to achieve environment-aware communications, followed by some typical CKM-aided communication scenarios. Finally, important open problems in CKM research are highlighted and potential solutions are discussed to inspire future work.
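As a toy illustration of the basic CKM idea (purely hypothetical code, not from the article), the sketch below stores location-tagged channel-gain samples in a coarse spatial grid and answers queries from the stored data when the queried cell has been visited, falling back to conventional CSI acquisition otherwise; the class name, cell size, and gain values are all invented for the example.

```python
import numpy as np

class ChannelKnowledgeMap:
    """Minimal grid-based CKM: stores location-tagged channel-gain samples
    and answers queries by averaging the samples in the matching grid cell."""

    def __init__(self, cell_size=10.0):
        self.cell = cell_size
        self.samples = {}                     # (ix, iy) -> list of gains in dB

    def _key(self, xy):
        return (int(xy[0] // self.cell), int(xy[1] // self.cell))

    def add(self, xy, gain_db):
        self.samples.setdefault(self._key(xy), []).append(gain_db)

    def query(self, xy):
        vals = self.samples.get(self._key(xy))
        if vals:                              # environment-aware estimate
            return float(np.mean(vals))
        return None                           # no data: fall back to CSI acquisition

ckm = ChannelKnowledgeMap(cell_size=10.0)
ckm.add((12.0, 3.0), -70.0)
ckm.add((14.0, 7.0), -74.0)
estimate = ckm.query((11.0, 5.0))             # same cell as both samples -> -72.0
```

A real CKM would interpolate across cells and fuse the map with model-based inference (the model-assisted approaches mentioned above), but the lookup structure captures the core shift: channel knowledge is indexed by location rather than re-acquired per link.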

    Machine Learning Tools for Radio Map Estimation in Fading-Impaired Channels

    In spectrum cartography, also known as radio map estimation, one constructs maps that provide the value of a given channel metric, such as the received power, power spectral density (PSD), electromagnetic absorption, or channel gain, for every spatial location in the geographic area of interest. The main idea is to deploy sensors that measure the target channel metric at a set of locations and to interpolate or extrapolate the measurements. Radio maps find a myriad of applications in wireless communications, such as network planning, interference coordination, power control, spectrum management, resource allocation, handoff optimization, dynamic spectrum access, and cognitive radio. More recently, radio maps have been widely recognized as an enabling technology for unmanned aerial vehicle (UAV) communications because they allow autonomous UAVs to account for communication constraints when planning a mission. Additional use cases include radio tomography and source localization.
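As a minimal illustration of the interpolation step (one possible estimator, not the specific machine learning tools of the work above), the sketch below fits a Gaussian-kernel ridge regressor to scattered received-power measurements and evaluates it on a grid; the transmitter location, path-loss model, and kernel parameters are all illustrative.

```python
import numpy as np

def kernel_radio_map(sensor_xy, power_dbm, query_xy, length_scale=20.0, reg=1e-3):
    """Estimate received power at `query_xy` from scattered sensor
    measurements using Gaussian-kernel ridge regression."""
    def k(X, Y):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length_scale ** 2))
    K = k(sensor_xy, sensor_xy)
    alpha = np.linalg.solve(K + reg * np.eye(len(K)), power_dbm)
    return k(query_xy, sensor_xy) @ alpha

# Toy scene: one transmitter at (50, 50), log-distance path loss in dBm.
rng = np.random.default_rng(0)
tx = np.array([50.0, 50.0])
sensors = rng.uniform(0.0, 100.0, size=(60, 2))
d = np.linalg.norm(sensors - tx, axis=1)
p = -30.0 - 20.0 * np.log10(np.maximum(d, 1.0))

xs = np.linspace(0.0, 100.0, 21)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
rmap = kernel_radio_map(sensors, p, grid)      # power estimate per grid point
```

The kernel length scale trades off smoothing against fidelity to individual measurements; in a fading-impaired channel it would be tuned to average out small-scale fading while preserving the large-scale path-loss structure.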

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
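As a concrete instance of the tensor train format emphasized above, the sketch below implements the standard TT-SVD procedure (sequential truncated SVDs of successive unfoldings) together with a routine that contracts the cores back into the full tensor; the tensor sizes and ranks are illustrative, and with untruncated ranks the reconstruction is exact.

```python
import numpy as np

def tt_svd(T, ranks):
    """Decompose a tensor into tensor-train (TT) cores by sequential
    truncated SVDs of its successive unfoldings."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(ranks[k], len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=(out.ndim - 1, 0))
    return out.reshape([G.shape[1] for G in cores])

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
cores = tt_svd(T, ranks=[4, 6])        # untruncated ranks: exact representation
T_rec = tt_full(cores)
```

Truncating the internal ranks below their maximal values yields the super-compressed, approximate representations the monograph discusses: storage drops from the product of the mode sizes to a sum of small three-way cores.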
