    Symbol Detection in 5G and Beyond Networks

    Beyond-5G networks are expected to provide excellent quality of service in terms of delay and reliability, even for users traveling at high speeds (e.g., 500 km/h), while achieving better spectral efficiency. To support these demands, advanced wireless architectures have been proposed, namely orthogonal time frequency space (OTFS) modulation and multiple-input multiple-output (MIMO), which handle high-mobility communications and increase the network’s spectral efficiency, respectively. Symbol detection in these advanced wireless architectures is essential to satisfy reliability requirements. On the one hand, the optimal maximum likelihood symbol detector is prohibitively complex, as the underlying detection problem is non-deterministic polynomial-time (NP)-hard. On the other hand, conventional low-complexity symbol detectors suffer a significant performance loss compared to the optimal detector and thus cannot satisfy high-reliability requirements. One solution to this problem is to develop a low-complexity algorithm that achieves near-optimal performance in a particular scenario (e.g., massive MIMO). Nevertheless, there are cases where such low-complexity algorithms cannot be designed. To alleviate this issue, deep learning networks can be integrated into an existing algorithm and trained on a dataset obtained by simulating the corresponding scenario. In this thesis, we design symbol detectors for advanced wireless architectures (i.e., MIMO and OTFS) to support excellent quality of service in terms of delay and reliability, as well as better spectral efficiency, in beyond-5G networks.
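
    As a rough illustration of the complexity gap this abstract describes, the sketch below (all parameters are assumptions for illustration, not taken from the thesis) compares brute-force maximum likelihood detection, whose search space grows exponentially with the number of antennas, against a cheap zero-forcing detector on a small MIMO link:

        # Hedged sketch: brute-force ML vs. zero-forcing for a 4x4 QPSK MIMO
        # link (antenna counts, SNR, and constellation are assumed for illustration).
        import numpy as np
        from itertools import product

        rng = np.random.default_rng(0)
        nt, nr = 4, 4                                  # transmit / receive antennas
        qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        x = rng.choice(qpsk, nt)                       # transmitted symbols
        noise_std = np.sqrt(10 ** (-10 / 10) / 2)      # 10 dB SNR
        y = H @ x + noise_std * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))

        # Maximum likelihood: exhaustive search over 4^nt = 256 hypotheses;
        # this count grows exponentially in nt, which is why ML does not scale.
        ml = np.array(min(product(qpsk, repeat=nt),
                          key=lambda s: np.linalg.norm(y - H @ np.array(s))))

        # Zero forcing: one pseudo-inverse plus per-symbol slicing -- cheap,
        # but it amplifies noise on ill-conditioned channels.
        x_zf = np.linalg.pinv(H) @ y
        zf = qpsk[np.argmin(np.abs(x_zf[:, None] - qpsk[None, :]), axis=1)]

        print("sent:", x, "\nML  :", ml, "\nZF  :", zf)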

    Signal Processing and Learning for Next Generation Multiple Access in 6G

    Wireless communication systems to date primarily rely on the orthogonality of resources to facilitate design and implementation, from user access to data transmission. Emerging applications and scenarios in sixth generation (6G) wireless systems will require massive connectivity and the transmission of a deluge of data, which calls for design concepts with more flexibility that go beyond orthogonality. Furthermore, recent advances in signal processing and learning have attracted considerable attention, as they provide promising approaches to various complex and previously intractable problems of signal processing in many fields. This article provides an overview of research efforts to date in the field of signal processing and learning for next-generation multiple access (NGMA), with an emphasis on massive random access and non-orthogonal multiple access. The promising interplay with new technologies and the challenges of learning-based NGMA are discussed.
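
    As a concrete taste of non-orthogonality, the following minimal sketch (power split, channel, and noise level are illustrative assumptions, not values from the article) shows power-domain NOMA with successive interference cancellation (SIC): two users share one resource, and the receiver decodes the stronger signal first, subtracts it, then decodes its own:

        # Hedged sketch of power-domain NOMA with SIC; the power split and
        # noise level are illustrative assumptions, not values from the article.
        import numpy as np

        rng = np.random.default_rng(1)
        bpsk = lambda bits: 2.0 * bits - 1.0           # map {0,1} -> {-1,+1}

        b_far, b_near = rng.integers(0, 2, 8), rng.integers(0, 2, 8)
        p_far, p_near = 0.8, 0.2                       # more power to the far (weak) user
        s = np.sqrt(p_far) * bpsk(b_far) + np.sqrt(p_near) * bpsk(b_near)
        y = s + 0.05 * rng.standard_normal(8)          # near user's received signal

        # The near user decodes the stronger (far-user) message first ...
        b_far_hat = (y > 0).astype(int)
        # ... subtracts its reconstruction (SIC), then decodes its own message.
        residual = y - np.sqrt(p_far) * bpsk(b_far_hat)
        b_near_hat = (residual > 0).astype(int)

        print("far user recovered :", np.array_equal(b_far, b_far_hat))
        print("near user recovered:", np.array_equal(b_near, b_near_hat))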

    Improved 3D MR Image Acquisition and Processing in Congenital Heart Disease

    Congenital heart disease (CHD) is the most common type of birth defect, affecting about 1% of the population. MRI is an essential tool in the assessment of CHD, including diagnosis, intervention planning and follow-up. Three-dimensional MRI can provide particularly rich visualization and information. However, it is often complicated by long scan times, cardiorespiratory motion, injection of contrast agents, and complex and time-consuming postprocessing. This thesis comprises four pieces of work that address some of these challenges. The first aims to enable fast acquisition of 3D time-resolved cardiac imaging during free breathing. Rapid imaging was achieved using an efficient spiral sequence and a sparse parallel imaging reconstruction. The feasibility of this approach was demonstrated on a population of 10 patients with CHD, and areas of improvement were identified. The second is an integrated software tool designed to simplify and accelerate the development of machine learning (ML) applications in MRI research. It also exploits the strengths of recently developed ML libraries for efficient MR image reconstruction and processing. The third aims to reduce contrast dose in contrast-enhanced MR angiography (MRA), which would reduce the risks and costs associated with contrast agents. A deep learning-based contrast enhancement technique was developed and shown to improve image quality in real low-dose MRA in a population of 40 children and adults with CHD. The fourth and final piece of work aims to simplify the creation of computational models for hemodynamic assessment of the great arteries. A deep learning technique for 3D segmentation of the aorta and the pulmonary arteries was developed and shown to enable accurate calculation of clinically relevant biomarkers in a population of 10 patients with CHD.
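
    The sparse reconstruction idea mentioned above can be illustrated with a toy 1-D analogue (an assumption for illustration only; the thesis itself uses a spiral sequence with parallel imaging): recovering a sparse signal from undersampled Fourier measurements by iterative soft-thresholding (ISTA):

        # Toy 1-D analogue of sparse reconstruction from undersampled Fourier
        # data via ISTA (dimensions and sparsity are assumptions for illustration).
        import numpy as np

        rng = np.random.default_rng(2)
        n, k, m = 128, 5, 48                           # length, sparsity, measurements

        x = np.zeros(n)
        idx = rng.choice(n, k, replace=False)
        x[idx] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)

        F = np.fft.fft(np.eye(n)) / np.sqrt(n)         # unitary DFT matrix
        A = F[rng.choice(n, m, replace=False)]         # random "k-space" rows
        y = A @ x                                      # undersampled measurements

        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
        xh, lam = np.zeros(n), 0.02
        for _ in range(300):                           # ISTA: gradient step + shrink
            xh = soft(np.real(xh + A.conj().T @ (y - A @ xh)), lam)

        print("relative error:", np.linalg.norm(xh - x) / np.linalg.norm(x))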

    When Machine Learning Meets Information Theory: Some Practical Applications to Data Storage

    Machine learning and information theory are closely inter-related areas. In this dissertation, we explore topics in their intersection with some practical applications to data storage. Firstly, we explore how machine learning techniques can be used to improve data reliability in non-volatile memories (NVMs). NVMs, such as flash memories, store large volumes of data. However, as devices scale down towards small feature sizes, they suffer from various kinds of noise and disturbances, significantly reducing their reliability. This dissertation explores machine learning techniques to design decoders that make use of natural redundancy (NR) in data for error correction. By NR, we mean redundancy inherent in data, which is not added artificially for error correction. This work studies two different schemes for NR-based error-correcting decoders. In the first scheme, the NR-based decoding algorithm is aware of the data representation scheme (e.g., compression, mapping of symbols to bits, meta-data, etc.) and uses that information for error correction. In the second scheme, the NR-decoder is oblivious to the representation scheme and uses deep neural networks (DNNs) to recognize the file type as well as perform soft decoding on it based on NR. In both cases, these NR-based decoders can be combined with traditional error correction codes (ECCs) to substantially improve their performance. Secondly, we use concepts from ECCs for designing robust DNNs in hardware. Non-volatile memory devices like memristors and phase-change memories are used to store the weights of hardware-implemented DNNs. Errors and faults in these devices (e.g., random noise, stuck-at faults, cell-level drift, etc.) might degrade the performance of such DNNs in hardware. We use concepts from analog error-correcting codes to protect the weights of noisy neural networks and to design robust neural networks in hardware. To summarize, this dissertation explores two important directions in the intersection of information theory and machine learning. We explore how machine learning techniques can be useful in improving the performance of ECCs. Conversely, we show how information-theoretic concepts can be used to design robust neural networks in hardware.
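
    A minimal sketch of the second direction follows; the repetition-plus-median decoder below is a simple stand-in for illustration, not the analog error-correcting codes of the dissertation. Storing each weight redundantly lets a vote suppress cell noise and stuck-at faults:

        # Hedged sketch: a repetition code with a median vote as a simple
        # stand-in for analog ECC protection of DNN weights stored in noisy cells.
        import numpy as np

        rng = np.random.default_rng(3)
        w = rng.standard_normal(1000)                  # "true" network weights

        copies = np.tile(w, (3, 1))                    # 3 memory cells per weight
        copies += 0.05 * rng.standard_normal(copies.shape)  # cell noise / drift
        copies[rng.random(copies.shape) < 0.02] = 0.0  # 2% stuck-at-zero faults

        w_unprotected = copies[0]                      # read a single cell
        w_protected = np.median(copies, axis=0)        # median vote across cells

        print("unprotected MSE:", np.mean((w_unprotected - w) ** 2))
        print("protected   MSE:", np.mean((w_protected - w) ** 2))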

    Machine Learning Approaches for Faster-than-Nyquist (FTN) Signaling Detection

    There will be significant demand for fast and reliable wireless communication systems in the future. Since bandwidth and bit rate are tightly connected, one approach is to increase the bandwidth. However, the number of wireless devices is growing exponentially, and the bandwidth available for allocation is finite. Increasing the bit rate for a given bandwidth, i.e., improving the spectral efficiency (SE), is therefore another promising approach to fast and reliable wireless communication. Faster-than-Nyquist (FTN) signaling is one candidate for improving the SE, but this improvement comes at the expense of the complexity of removing the introduced inter-symbol interference (ISI). In this thesis, we propose two algorithms to decrease the computational complexity of removing the ISI in FTN signaling. In the first main contribution of the thesis, we introduce an equivalent FTN signaling model based on orthonormal basis pulses to transform the non-orthogonal FTN transmission into an orthogonal transmission carrying real-valued constellations. We then propose a deep learning (DL) based algorithm to decrease the computational complexity of the known list sphere decoding (LSD) algorithm. In essence, the LSD is one of the algorithms that can be used for FTN signaling detection, albeit at huge computational complexity. Simulation results show that the proposed DL-based LSD reduces the computational complexity by orders of magnitude while maintaining close-to-optimal performance. In the second main contribution of the thesis, we view the FTN signaling detection problem as a classification problem, where the received FTN signal is viewed as an unlabeled sample belonging to a set of all potential class samples. Assuming N received samples, conventional detectors search over an N-dimensional space, which is computationally expensive, especially for large N. We propose a low-complexity classifier (LCC) that performs the classification in an N_p-dimensional space, where N_p ≪ N. The proposed LCC's ability to balance performance and complexity is demonstrated by simulation results.
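
    The root of the FTN detection problem can be shown in a few lines: packing Nyquist pulses closer than the symbol period T destroys their orthogonality, producing the ISI that the thesis's detectors must remove. The sketch below (pulse shape and parameters are illustrative assumptions, not from the thesis) computes the resulting ISI matrix:

        # Hedged sketch (assumed pulse shape and parameters): packing sinc
        # pulses at spacing tau*T with tau < 1 makes them non-orthogonal, so the
        # matched-filter (Gram) matrix picks up the off-diagonal ISI terms that
        # FTN detectors must undo.
        import numpy as np

        sps, span = 20, 40                             # samples per T, window in T
        t = np.arange(-span * sps, span * sps + 1) / sps
        g = np.sinc(t)                                 # Nyquist pulse, unit T
        g /= np.linalg.norm(g)

        for tau in (1.0, 0.8):                         # Nyquist vs. faster-than-Nyquist
            shift = int(round(tau * sps))              # symbol spacing in samples
            pulses = [np.roll(g, j * shift) for j in range(-3, 4)]
            G = np.array([[a @ b for b in pulses] for a in pulses])
            off = np.max(np.abs(G - np.diag(np.diag(G))))
            print(f"tau={tau}: max off-diagonal ISI = {off:.3f}")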

    Convolutional Sparse Support Estimator Network (CSEN) : From Energy-Efficient Support Estimation to Learning-Aided Compressive Sensing

    Support estimation (SE) of a sparse signal refers to finding the location indices of the nonzero elements in a sparse representation. Most of the traditional approaches dealing with SE problems are iterative algorithms based on greedy methods or optimization techniques. Indeed, a vast majority of them use sparse signal recovery (SR) techniques to obtain support sets instead of directly mapping the nonzero locations from denser measurements (e.g., compressively sensed measurements). This study proposes a novel approach for learning such a mapping from a training set. To accomplish this objective, the convolutional sparse support estimator networks (CSENs), each with a compact configuration, are designed. The proposed CSEN can be a crucial tool for the following scenarios: 1) real-time and low-cost SE that can be applied in any mobile and low-power edge device for anomaly localization, simultaneous face recognition, and so on; and 2) CSEN’s output can directly be used as “prior information,” which improves the performance of sparse SR algorithms. The results over the benchmark datasets show that state-of-the-art performance levels can be achieved by the proposed approach with a significantly reduced computational complexity.
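
    For contrast with the learned mapping, the sketch below (dimensions and signal model are illustrative assumptions) shows the kind of iterative greedy baseline the paper refers to: support estimation via orthogonal matching pursuit (OMP), which recovers one nonzero location per pass:

        # Hedged sketch of the iterative greedy baseline (OMP) that CSEN's
        # learned mapping replaces; all dimensions are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(4)
        n, m, k = 100, 40, 4                           # signal length, measurements, sparsity
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        true_support = rng.choice(n, k, replace=False)
        x[true_support] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)
        y = A @ x                                      # compressive measurements

        support, r = [], y.copy()
        for _ in range(k):                             # one support index per pass
            support.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated atom
            xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ xs                 # update residual

        print("true support     :", sorted(true_support))
        print("estimated support:", sorted(support))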

    Radio Map Estimation: A Data-Driven Approach to Spectrum Cartography

    Radio maps characterize quantities of interest in radio communication environments, such as the received signal strength and channel attenuation, at every point of a geographical region. Radio map estimation typically entails interpolative inference based on spatially distributed measurements. In this tutorial article, after presenting some representative applications of radio maps, the most prominent radio map estimation methods are discussed. Starting from simple regression, the exposition gradually delves into more sophisticated algorithms, eventually touching upon state-of-the-art techniques. To gain insight into this versatile toolkit, illustrative toy examples are also presented.
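
    A minimal sketch of the interpolative inference described above (the path-loss ground truth, sensor layout, and kernel width are illustrative assumptions): estimating a received-signal-strength map from scattered measurements with Gaussian-kernel regression:

        # Minimal sketch: Gaussian-kernel (Nadaraya-Watson) regression of a
        # radio map from scattered RSS readings; the log-distance path-loss
        # "ground truth" and kernel width are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(5)
        tx = np.array([50.0, 50.0])                    # transmitter location (m)

        def rss_dbm(p):                                # toy log-distance path loss
            d = np.maximum(np.linalg.norm(p - tx, axis=-1), 1.0)
            return -30.0 - 35.0 * np.log10(d)

        sensors = rng.uniform(0, 100, (60, 2))         # scattered measurement sites
        z = rss_dbm(sensors) + rng.normal(0, 2.0, 60)  # noisy readings (dBm)

        gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
        grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

        h = 15.0                                       # kernel bandwidth (m)
        d2 = ((grid[:, None, :] - sensors[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * h * h))                  # Gaussian weights
        radio_map = (w @ z) / w.sum(axis=1)            # weighted average per pixel

        print(f"mean absolute map error: {np.abs(radio_map - rss_dbm(grid)).mean():.2f} dB")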