Channel Acquisition for HF Skywave Massive MIMO-OFDM Communications
In this paper, we investigate channel acquisition for high frequency (HF)
skywave massive multiple-input multiple-output (MIMO) communications with
orthogonal frequency division multiplexing (OFDM) modulation. We first
introduce the concept of triple beams (TBs) in the space-frequency-time (SFT)
domain and establish a TB based channel model using sampled triple steering
vectors. With the established channel model, we then investigate the optimal
channel estimation and pilot design for pilot segments. Specifically, we find
the conditions that allow pilot reuse among multiple user terminals (UTs),
which significantly reduces pilot overhead. Moreover, we propose a channel
prediction method for data segments based on the estimated TB domain channel.
To reduce complexity, we formulate channel estimation as a sparse signal
recovery problem, leveraging the channel sparsity in the TB domain, and
obtain the channel with the proposed constrained Bethe free energy
minimization (CBFEM) based channel estimation algorithm, which can be
implemented with low complexity by exploiting the structure of the TB matrix
together with the chirp z-transform (CZT). Simulation results demonstrate the
superior performance of the proposed channel acquisition approach. (Comment: 30 pages, 4 figures)
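The abstract's low-complexity claim rests on the chirp z-transform. As a generic illustration of the CZT itself (not the paper's algorithm; the function name and parameters below are mine), a direct-evaluation CZT can be checked against the FFT, to which it reduces when the evaluation points are uniformly spaced on the unit circle:

```python
import numpy as np

def czt(x, M, w, a=1.0):
    """Chirp z-transform by direct evaluation:
    X[k] = sum_n x[n] * a**(-n) * w**(n*k), for k = 0..M-1."""
    n = np.arange(len(x))
    k = np.arange(M)
    return (x * a ** (-n)) @ (w ** np.outer(n, k))

# With a = 1, w = exp(-2j*pi/N) and M = N, the CZT reduces to the DFT.
rng = np.random.default_rng(0)
N = 16
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X_czt = czt(x, N, np.exp(-2j * np.pi / N))
assert np.allclose(X_czt, np.fft.fft(x))
```

The direct evaluation above costs O(NM); Bluestein's algorithm (used in practice, e.g. `scipy.signal.czt`) reaches FFT-like complexity, which is the kind of saving the abstract alludes to.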
Low-resolution ADC receiver design, MIMO interference cancellation prototyping, and PHY secrecy analysis.
This dissertation studies three independent research topics in the general field of wireless communications.

The first topic focuses on new receiver design with low-resolution analog-to-digital converters (ADCs). In future massive multiple-input multiple-output (MIMO) systems, multiple high-speed high-resolution ADCs will become a bottleneck for practical applications because of their hardware complexity and power consumption. One solution to this problem is to adopt low-cost low-precision ADCs instead. In Chapter II, MU-MIMO-OFDM systems equipped only with low-precision ADCs are considered, and a new turbo receiver structure is proposed to improve the overall system performance. Meanwhile, ultra-low-cost communication devices can enable massive deployment of disposable wireless relays. In Chapter III, the feasibility of using a one-bit relay cluster to help a power-constrained transmitter with distant communication is investigated, and nonlinear estimators are applied to enable effective decoding.

The second topic focuses on the prototyping and verification of an LTE and WiFi coexistence system, where the operation of LTE in unlicensed spectrum (LTE-U) is discussed. LTE-U extends the benefits of LTE and LTE Advanced to unlicensed spectrum, enabling mobile operators to offload data traffic onto unlicensed frequencies more efficiently and effectively. With LTE-U, operators can offer consumers a more robust and seamless mobile broadband experience with better coverage and higher download speeds. Because the coexistence leads to considerable performance instability for both LTE and WiFi transmissions, LTE and WiFi receivers with a MIMO interference canceller are designed and prototyped in Chapter IV to support the coexistence.

The third topic focuses on the theoretical analysis of physical-layer secrecy with finite blocklength. Unlike upper-layer security approaches, physical-layer communication security can guarantee information-theoretic secrecy. Current studies of physical-layer secrecy are all based on infinite blocklength; however, these asymptotic results are unrealistic, and the finite-blocklength effect is crucial for practical secrecy communication. In Chapter V, a practical analysis of secure lattice codes is provided.
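Chapters II and III both revolve around one-bit quantization. As a generic sketch of what a one-bit ADC does to a Gaussian baseband signal (my own illustration under a unit-variance assumption, not the dissertation's receiver; the helper name `one_bit_adc` is mine), the Bussgang decomposition splits the quantizer output into a scaled copy of the input plus an uncorrelated distortion:

```python
import numpy as np

def one_bit_adc(x):
    """One-bit ADC model: keep only the signs of the I and Q components."""
    return np.sign(x.real) + 1j * np.sign(x.imag)

rng = np.random.default_rng(1)
n = 200_000
# Assumed unit-variance Gaussian per I/Q component.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = one_bit_adc(x)

# Bussgang decomposition y = g*x + d, with d uncorrelated with x;
# for a Gaussian input the linear gain is g = sqrt(2/pi) per component.
g = np.vdot(x, y).real / np.vdot(x, x).real
print(g)  # close to sqrt(2/pi) ~ 0.798
```

The gain `g` is what linearized analyses of one-bit receivers build on; everything the linear term misses must be handled by nonlinear estimators of the kind the dissertation applies.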
On the Intersection of Communication and Machine Learning
The intersection of communication and machine learning is attracting increasing interest from both communities. On the one hand, the development of modern communication systems brings large amounts of data and demanding performance requirements, which challenge the classic analytical, derivation-based study philosophy and encourage researchers to explore data-driven methods, such as machine learning, to solve problems of high complexity and large scale. On the other hand, the use of distributed machine learning introduces the communication cost as one of the basic considerations in the design of machine learning algorithms and systems.

In this thesis, we first explore the application of machine learning to one of the classic problems in wireless networks, resource allocation, for heterogeneous millimeter wave networks in highly dynamic environments. We address practical concerns by providing an efficient online and distributed framework. In the second part, sampling-based communication-efficient distributed learning algorithms are proposed. We exploit the trade-off between local computation and total communication cost and propose algorithms with good theoretical bounds. In more detail, this thesis makes the following contributions.

We introduce a reinforcement learning framework to solve resource allocation problems in heterogeneous millimeter wave networks. The large state/action space is decomposed according to the topology of the network and handled by an efficient distributed message passing algorithm. We further speed up the inference process with an online updating procedure.

We propose a distributed coreset-based boosting framework. An efficient coreset construction algorithm is proposed based on the prior knowledge provided by clustering, and the coreset is then integrated with boosting with an improved convergence rate.
We extend the proposed boosting framework to the distributed setting, where the communication cost is reduced thanks to the good approximation quality of the coreset. Finally, we propose a selective sampling framework to construct a subset of samples that effectively represents the model space. Based on the prior distribution of the model space, or on a large number of samples drawn from it, we derive a computationally efficient method to construct such a subset by minimizing the error of classifying a classifier.
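The coreset idea can be made concrete with a standard sensitivity-style importance-sampling sketch (my own illustration under simplifying assumptions, not the thesis's algorithm; all names are mine): given cluster centers as prior knowledge, sample points with probability roughly proportional to their contribution to the clustering cost, and reweight so that weighted costs on the small sample remain unbiased estimates of costs on the full data:

```python
import numpy as np

def coreset_sample(X, centers, m, rng):
    """Importance-sampling coreset sketch: sample points with probability
    proportional to a regularized squared distance to the nearest cluster
    center, and weight each sampled point by the inverse of its expected
    sampling count so cost estimates stay unbiased."""
    d2 = np.min(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    p = d2 + d2.mean()            # regularize so every point can be drawn
    p /= p.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])
    return X[idx], weights

rng = np.random.default_rng(4)
# Two synthetic clusters around (0, 0) and (5, 5).
X = rng.standard_normal((5000, 2)) + rng.integers(0, 2, (5000, 1)) * 5.0
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
S, w = coreset_sample(X, centers, 200, rng)

# The weighted coreset cost approximates the full k-means cost.
cost_full = np.min(((X[:, None] - centers[None]) ** 2).sum(-1), 1).sum()
cost_core = (w * np.min(((S[:, None] - centers[None]) ** 2).sum(-1), 1)).sum()
print(cost_full, cost_core)
```

In a distributed setting, only the 200 weighted points (rather than the 5000 originals) would be communicated, which is the source of the communication savings claimed above.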
Domain Generalization in Machine Learning Models for Wireless Communications: Concepts, State-of-the-Art, and Open Issues
Data-driven machine learning (ML) is promoted as one potential technology to
be used in next-generation wireless systems. This has led to a large body of
research work that applies ML techniques to solve problems in different layers
of the wireless transmission link. However, most of these applications rely on
supervised learning which assumes that the source (training) and target (test)
data are independent and identically distributed (i.i.d). This assumption is
often violated in the real world due to domain or distribution shifts between
the source and the target data. Thus, it is important to ensure that these
algorithms generalize to out-of-distribution (OOD) data. In this context,
domain generalization (DG) tackles the OOD-related issues by learning models on
different and distinct source domains/datasets with generalization capabilities
to unseen new domains without additional finetuning. Motivated by the
importance of DG requirements for wireless applications, we present a
comprehensive overview of the recent developments in DG and the different
sources of domain shift. We also summarize the existing DG methods and review
their applications in selected wireless communication problems, and conclude
with insights and open questions.
Advanced receivers for distributed cooperation in mobile ad hoc networks
Mobile ad hoc networks (MANETs) are rapidly deployable wireless communications systems, operating with minimal coordination in order to avoid the spectral efficiency losses caused by overhead. Cooperative transmission schemes are attractive for MANETs, but the distributed nature of such protocols comes with an increased level of interference, whose impact is further amplified by the need to push the limits of energy and spectral efficiency. Hence, the impact of interference has to be mitigated through the use of PHY-layer signal processing algorithms with reasonable computational complexity. Recent advances in iterative digital receiver design exploit approximate Bayesian inference and message passing techniques derived from it to improve the capabilities of well-established turbo detectors. In particular, expectation propagation (EP) is a flexible technique which offers attractive complexity-performance trade-offs in situations where conventional belief propagation is limited by computational complexity. Moreover, thanks to emerging techniques in deep learning, such iterative structures can be cast into deep detection networks, where learning the algorithmic hyper-parameters further improves receiver performance. In this thesis, EP-based finite-impulse-response decision feedback equalizers are designed; they achieve significant improvements over more conventional turbo equalization techniques, especially in high spectral efficiency applications, while having the advantage of being asymptotically predictable. A framework for designing frequency-domain EP-based receivers is proposed, in order to obtain detection architectures with low computational complexity. This framework is theoretically and numerically analysed with a focus on channel equalization, and it is then extended to handle detection for time-varying channels and multiple-antenna systems.
The design of multiple-user detectors and the impact of channel estimation are also explored to understand the capabilities and limits of this framework. Finally, a finite-length performance prediction method is presented for carrying out link abstraction for the EP-based frequency-domain equalizer. The impact of accurate physical-layer modelling is evaluated in the context of cooperative broadcasting in tactical MANETs, thanks to a flexible MAC-level simulator.
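As context for the frequency-domain framework, the per-tone MMSE equalizer below is the classical building block that EP-based frequency-domain receivers iteratively refine with decoder feedback. This is a minimal sketch, not the EP receiver designed in the thesis; all names, the channel taps, and the unit-power-symbol assumption are mine:

```python
import numpy as np

def fd_mmse_equalize(y, h, noise_var):
    """Frequency-domain MMSE equalization for one cyclic-prefix block:
    per-tone scalar filter W[k] = conj(H[k]) / (|H[k]|^2 + noise_var),
    assuming unit-power transmitted symbols."""
    H = np.fft.fft(h, len(y))
    Y = np.fft.fft(y)
    X_hat = np.conj(H) * Y / (np.abs(H) ** 2 + noise_var)
    return np.fft.ifft(X_hat)

# QPSK block over a 3-tap channel; circular convolution models a
# cyclic-prefix transmission after CP removal.
rng = np.random.default_rng(2)
N = 64
bits = rng.integers(0, 2, (N, 2))
x = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
h = np.array([1.0, 0.4 + 0.2j, 0.2])
noise_var = 0.01
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N))
y += np.sqrt(noise_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x_hat = fd_mmse_equalize(y, h, noise_var)
print(np.mean(np.abs(x_hat - x) ** 2))  # small residual MSE
```

An EP-based receiver replaces the fixed symbol prior with extrinsic means and variances updated each iteration, so the per-tone filter above is recomputed as the soft information sharpens.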
Accelerated and Deep Expectation Maximization for One-Bit MIMO-OFDM Detection
In this paper, we study the expectation maximization (EM) technique for
one-bit MIMO-OFDM detection (OMOD). Arising from the recent interest in massive
MIMO with one-bit analog-to-digital converters, OMOD is a massive-scale
problem. EM is an iterative method that can exploit the OFDM structure to
process the problem in a per-iteration efficient fashion. In this study we
analyze the convergence rate of EM for a class of approximate
maximum-likelihood OMOD formulations, or, in a broader sense, a class of
problems involving regression from quantized data. We show how the SNR and
channel conditions can have an impact on the convergence rate. We do so by
making a connection between the EM and the proximal gradient methods in the
context of OMOD. This connection also gives us insight into building new accelerated
and/or inexact EM schemes. The accelerated scheme has faster convergence in
theory, and the inexact scheme provides us with the flexibility to implement EM
more efficiently, with a convergence guarantee. Furthermore, we develop a deep EM
algorithm, wherein we take the structure of our inexact EM algorithm and apply
deep unfolding to train an efficient structured deep net. Simulation results
show that our accelerated exact/inexact EM algorithms run much faster than
their standard EM counterparts, and that the deep EM algorithm gives promising
detection and runtime performance.
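The EM iteration for regression from one-bit data can be sketched in its textbook real-valued form (a simplified illustration of the general technique the abstract refers to, not the paper's OMOD algorithm; all names and parameters are mine): the E-step replaces each unquantized measurement by its truncated-Gaussian posterior mean, and the M-step solves a least-squares problem:

```python
import math
import numpy as np

def em_one_bit(y, H, sigma, n_iter=50):
    """EM for one-bit regression: y = sign(H x + n), n ~ N(0, sigma^2 I).
    E-step: posterior mean of the unquantized measurement z given y and
    the current estimate (a truncated-Gaussian mean).
    M-step: least-squares fit of that mean onto the columns of H."""
    pdf = lambda t: np.exp(-t ** 2 / 2) / math.sqrt(2 * math.pi)
    cdf = np.vectorize(lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2))))
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        m = H @ x
        u = y * m / sigma
        z_bar = m + y * sigma * pdf(u) / np.maximum(cdf(u), 1e-12)
        x, *_ = np.linalg.lstsq(H, z_bar, rcond=None)
    return x

rng = np.random.default_rng(3)
m_dim, n_dim, sigma = 400, 8, 0.3
H = rng.standard_normal((m_dim, n_dim))
x_true = rng.standard_normal(n_dim)
y = np.sign(H @ x_true + sigma * rng.standard_normal(m_dim))
x_hat = em_one_bit(y, H, sigma)

# One-bit data loses the overall scale, so compare directions only.
cos = x_hat @ x_true / (np.linalg.norm(x_hat) * np.linalg.norm(x_true))
print(cos)  # close to 1
```

In the paper's setting the M-step additionally exploits the OFDM structure for per-iteration efficiency, and the accelerated/inexact variants modify how this inner problem is solved.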