548 research outputs found
Analysis and Design of Non-Orthogonal Multiple Access (NOMA) Techniques for Next Generation Wireless Communication Systems
The current surge in wireless connectivity, expected to grow substantially in future wireless technologies, brings a new wave of users. Since bandwidth cannot expand indefinitely, there is a pressing need for communication techniques that serve this growing user base efficiently with limited resources. Multiple Access (MA) techniques, notably Orthogonal Multiple Access (OMA), have long addressed bandwidth constraints, but as user numbers escalate, OMA’s orthogonality becomes a limitation for emerging wireless technologies. Non-Orthogonal Multiple Access (NOMA) uses superposition coding to serve more users within the same bandwidth as OMA by allocating different power levels to users, whose signals can then be separated at the receiver by exploiting the power gap between them, thus offering superior spectral efficiency and massive connectivity. This thesis examines the integration of NOMA with cooperative relaying, EXtrinsic Information Transfer (EXIT) chart analysis, and deep learning to enhance 6G-and-beyond communication systems. The adopted methodology optimizes system performance across metrics ranging from bit-error rate (BER) versus signal-to-noise ratio (SNR) to overall system efficiency and data rates. In the cooperative relaying context, NOMA notably improved diversity gains, demonstrating the superiority of combining NOMA with cooperative relaying over NOMA alone. With EXIT chart analysis, NOMA achieved low BER at mid-range SNR as well as optimal user fairness in the power allocation stage. Finally, employing a trained neural network enhanced signal detection for NOMA in the deep learning scenario, yielding a simpler detector that addresses NOMA’s complex receiver problem.
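The power-domain mechanism the abstract describes can be sketched in a few lines: two users' symbols are superposed with unequal power, and the stronger user removes the weaker user's signal via successive interference cancellation (SIC) before detecting its own. The power split, noise level, and BPSK modulation below are illustrative choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-user power-domain NOMA downlink with BPSK symbols.
# The weak (far) user gets more power so it can treat the strong
# user's signal as noise; the strong (near) user applies SIC.
p_weak, p_strong = 0.8, 0.2
n = 10_000
bits_weak = rng.integers(0, 2, n)
bits_strong = rng.integers(0, 2, n)
s_weak = 2 * bits_weak - 1
s_strong = 2 * bits_strong - 1

# Superposition coding: one transmitted signal carries both users.
x = np.sqrt(p_weak) * s_weak + np.sqrt(p_strong) * s_strong
y = x + 0.05 * rng.standard_normal(n)  # received at the strong user

# SIC: decode the weak user's symbol first, subtract its estimated
# contribution, then detect the strong user's own symbol.
s_weak_hat = np.sign(y)
y_clean = y - np.sqrt(p_weak) * s_weak_hat
s_strong_hat = np.sign(y_clean)

ber_strong = np.mean(s_strong_hat != s_strong)
```

At this noise level the residual signal after cancellation is detected essentially error-free, which is the gain the power gap between users buys.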
Information Encoding for Flow Watermarking and Binding Keys to Biometric Data
Given the current level of telecommunications development, fifth-generation (5G) communication systems are expected to provide higher data rates, lower latency, and improved scalability. To ensure the security and reliability of data traffic generated by wireless sources, 5G networks must be designed to support security protocols and reliable communication applications. The coding and processing of information during the transmission of both binary and non-binary data over nonstandard communication channels are described. A subclass of linear binary codes is considered, namely Varshamov-Tenengolts codes, which are used for channels with insertions and deletions of symbols. The use of these codes is compared with Hidden Markov Model (HMM)-based systems for detecting network intrusions using flow watermarking; both provide a high true-positive rate. The principles of using Bose-Chaudhuri-Hocquenghem (BCH) codes, non-binary Reed-Solomon codes, and turbo codes, as well as concatenated code structures, to ensure noise immunity when reproducing information in Helper-Data Systems are considered. Examples of biometric systems built on these codes, operating on the basis of the Fuzzy Commitment Scheme (FCS) and providing FRR < 1% for authentication, are given.
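The Varshamov-Tenengolts codes mentioned above have a simple defining condition: a binary word x of length n belongs to VT_a(n) when the weighted sum of its bits satisfies sum(i·x_i) ≡ a (mod n+1), which is what makes them suitable for single insertion/deletion correction. A minimal membership check, with an enumeration of VT_0(4) as a sanity demo:

```python
def vt_syndrome(x):
    """Weighted checksum sum(i * x_i) mod (n+1), positions 1-indexed."""
    n = len(x)
    return sum((i + 1) * b for i, b in enumerate(x)) % (n + 1)

def in_vt_code(x, a=0):
    """True if the binary word x lies in the VT_a code of its length."""
    return vt_syndrome(x) == a

# Enumerate all length-4 binary words and keep those in VT_0(4).
words = [[(w >> i) & 1 for i in range(4)] for w in range(16)]
vt0 = [tuple(w) for w in words if in_vt_code(w)]
```

For n = 4 this yields four codewords, matching the known code size of roughly 2^n/(n+1); the syndrome of a deleted-symbol word pins down where the deletion occurred.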
Factor Graph Neural Networks
In recent years, we have witnessed a surge of Graph Neural Networks (GNNs),
most of which can learn powerful representations in an end-to-end fashion with
great success in many real-world applications. They have resemblance to
Probabilistic Graphical Models (PGMs), but break free from some limitations of
PGMs. By aiming to provide expressive methods for representation learning
instead of computing marginals or most likely configurations, GNNs provide
flexibility in the choice of information flowing rules while maintaining good
performance. Despite their success and inspirations, they lack efficient ways
to represent and learn higher-order relations among variables/nodes. More
expressive higher-order GNNs which operate on k-tuples of nodes need increased
computational resources in order to process higher-order tensors. We propose
Factor Graph Neural Networks (FGNNs) to effectively capture higher-order
relations for inference and learning. To do so, we first derive an efficient
approximate Sum-Product loopy belief propagation inference algorithm for
discrete higher-order PGMs. We then neuralize the novel message passing scheme
into a Factor Graph Neural Network (FGNN) module by allowing richer
representations of the message update rules; this facilitates both efficient
inference and powerful end-to-end learning. We further show that with a
suitable choice of message aggregation operators, our FGNN is also able to
represent Max-Product belief propagation, providing a single family of
architecture that can represent both Max and Sum-Product loopy belief
propagation. Our extensive experimental evaluation on synthetic as well as real
datasets demonstrates the potential of the proposed model.
Comment: Accepted by JMLR
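The abstract's claim that one message-passing family covers both Sum- and Max-Product reduces, at the level of a single factor-to-variable update, to a choice of aggregation operator. A toy illustration with an invented two-variable factor (the numbers are arbitrary, not from the paper):

```python
import numpy as np

# Toy factor over two binary variables (x1, x2); rows index x1, cols x2.
factor = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
msg_x2_to_f = np.array([0.6, 0.4])  # incoming variable-to-factor message

# Sum-product factor-to-variable message: weight the factor by the
# incoming message and marginalize out x2.
m_sum = (factor * msg_x2_to_f).sum(axis=1)

# Max-product: the identical update with sum replaced by max --
# the aggregation-operator swap the FGNN abstract refers to.
m_max = (factor * msg_x2_to_f).max(axis=1)
```

Everything except the final reduction is shared, which is why a single neuralized module parameterizing the update rule can represent both variants.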
A Tutorial on Coding Methods for DNA-based Molecular Communications and Storage
Exponential increase of data has motivated advances of data storage
technologies. As a promising storage media, DeoxyriboNucleic Acid (DNA) storage
provides a much higher data density and superior durability, compared with
state-of-the-art media. In this paper, we provide a tutorial on DNA storage and
its role in molecular communications. Firstly, we introduce fundamentals of
DNA-based molecular communications and storage (MCS), discussing the basic
process of performing DNA storage in MCS. Furthermore, we provide tutorials on
how conventional coding schemes that are used in wireless communications can be
applied to DNA-based MCS, along with numerical results. Finally, promising
research directions on DNA-based data storage in molecular communications are
introduced and discussed in this paper.
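The basic encoding step such tutorials cover can be sketched directly: bits are mapped two-per-nucleotide, and candidate strands are screened against the biochemical constraints (bounded homopolymer runs, balanced GC content) that synthesis and sequencing impose. The mapping and thresholds below are illustrative conventions, not the paper's.

```python
# Minimal sketch of DNA-storage encoding with constraint screening.
BITS_TO_NT = {"00": "A", "01": "C", "10": "G", "11": "T"}

def encode(bits):
    """Map a bit string (even length) to a nucleotide strand, 2 bits/nt."""
    assert len(bits) % 2 == 0
    return "".join(BITS_TO_NT[bits[i:i + 2]] for i in range(0, len(bits), 2))

def max_homopolymer(seq):
    """Length of the longest run of identical nucleotides."""
    run = best = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def satisfies_constraints(seq, max_run=3, gc_lo=0.4, gc_hi=0.6):
    """Typical (illustrative) screening applied before synthesis."""
    return max_homopolymer(seq) <= max_run and gc_lo <= gc_content(seq) <= gc_hi

strand = encode("0110110001")
```

Strands failing the screen are re-encoded (e.g. by scrambling or constrained coding), which is where the conventional coding schemes discussed in the tutorial enter.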
Achievable Information Rates and Concatenated Codes for the DNA Nanopore Sequencing Channel
The errors occurring in DNA-based storage are correlated in nature, which is
a direct consequence of the synthesis and sequencing processes. In this paper,
we consider the nanopore channel model with memory recently introduced by Hamoum
et al., which captures the inherent memory of the channel. We derive the maximum
a posteriori (MAP) decoder for this channel model. The derived MAP decoder
allows us to compute achievable information rates for the true DNA storage
channel assuming a mismatched decoder matched to the channel model with memory,
and to quantify the loss in performance for a small memory
length, and hence limited decoding complexity. Furthermore, the derived MAP
decoder can be used to design error-correcting codes tailored to the DNA
storage channel. We show that a concatenated coding scheme with an outer
low-density parity-check code and an inner convolutional code yields excellent
performance.
Comment: This paper has been accepted and is awaiting publication in the information theory workshop (ITW) 2023
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning
The paper introduces the application of information geometry to describe the
ground states of Ising models by utilizing parity-check matrices of cyclic and
quasi-cyclic codes on toric and spherical topologies. The approach establishes
a connection between machine learning and error-correcting coding. This
proposed approach has implications for the development of new embedding methods
based on trapping sets. Statistical physics and number geometry are applied to
optimize error-correcting codes, leading to these embedding and sparse
factorization methods. The paper establishes a direct connection between DNN
architecture and error-correcting coding by demonstrating how state-of-the-art
architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range
arena can be equivalent to block and convolutional LDPC codes (Cage-graph,
Repeat Accumulate). QC codes correspond to certain types of chemical elements,
with the carbon element being represented by the mixed automorphism
Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and
the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix
are elaborated upon in detail. The Quantum Approximate Optimization Algorithm
(QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous
to the back-propagation loss function landscape in training DNNs. This
similarity creates a comparable problem with TS pseudo-codeword, resembling the
belief propagation method. Additionally, the layer depth in QAOA correlates to
the number of decoding belief propagation iterations in the Wiberg decoding
tree. Overall, this work has the potential to advance multiple fields, from
Information Theory, DNN architecture design (sparse and structured prior graph
topology), and efficient hardware design for quantum and classical DPU/TPU (graph,
quantization, and shift-register architectures) to Materials Science and beyond.
Comment: 71 pages, 42 Figures, 1 Table, 1 Appendix. arXiv admin note: text overlap with arXiv:2109.08184 by other authors
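The code-as-Ising-ground-state view underlying the abstract can be made concrete with a tiny example: define the energy of a spin/bit configuration as the number of parity checks it violates, and the codewords of the parity-check matrix are exactly the zero-energy ground states. The matrix below is an invented toy, not one of the codes named in the paper.

```python
import numpy as np
from itertools import product

# Toy parity-check matrix; its codewords will be the ground states.
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])

def energy(x):
    """Ising-style energy: count of unsatisfied parity checks of x."""
    return int(((H @ x) % 2).sum())

# Exhaustively find the zero-energy (ground) states.
ground = [x for x in product((0, 1), repeat=4) if energy(np.array(x)) == 0]
```

With two independent checks on four bits there are 2^(4−2) = 4 ground states, i.e. the codewords; this is the correspondence that lets parity-check structure describe Ising MRF ground states.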
Neural Distributed Compressor Discovers Binning
We consider lossy compression of an information source when the decoder has
lossless access to a correlated one. This setup, also known as the Wyner-Ziv
problem, is a special case of distributed source coding. To this day, practical
approaches for the Wyner-Ziv problem have neither been fully developed nor
heavily investigated. We propose a data-driven method based on machine learning
that leverages the universal function approximation capability of artificial
neural networks. We find that our neural network-based compression scheme,
based on variational vector quantization, recovers some principles of the
optimum theoretical solution of the Wyner-Ziv setup, such as binning in the
source space as well as optimal combination of the quantization index and side
information, for exemplary sources. These behaviors emerge although no
structure exploiting knowledge of the source distributions was imposed. Binning
is a widely used tool in information theoretic proofs and methods, and to our
knowledge, this is the first time it has been explicitly observed to emerge
from data-driven learning.
Comment: draft of a journal version of our previous ISIT 2023 paper (available at: arXiv:2305.04380). arXiv admin note: substantial text overlap with arXiv:2305.04380
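The binning behavior the paper observes emerging from learning can be shown by hand in the scalar case: the encoder quantizes the source and transmits only the quantizer index modulo a small number of bins; the decoder resolves the ambiguity using its correlated side information. All parameters below (step size, bin count, noise level) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n, step, n_bins = 5000, 1.0, 4
x = 3.0 * rng.standard_normal(n)
y = x + 0.1 * rng.standard_normal(n)  # decoder's correlated side information

q = np.round(x / step).astype(int)  # quantizer index (not transmitted)
bins = q % n_bins                   # transmitted bin index: the "binning"

# Decoder: among all quantizer indices congruent to the received bin,
# pick the one nearest the side information's own quantizer index.
cand = np.round(y / step).astype(int)
offset = (bins - cand) % n_bins
offset = np.where(offset > n_bins // 2, offset - n_bins, offset)
q_hat = cand + offset
x_hat = q_hat * step

mse = np.mean((x - x_hat) ** 2)
```

Only log2(4) = 2 bits per sample cross the channel, yet the reconstruction error stays near the plain quantization error (step²/12 ≈ 0.083) because the side information disambiguates the bin, which is the Wyner-Ziv principle the learned compressor rediscovers.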
Architecture and Advanced Electronics Pathways Toward Highly Adaptive Energy-Efficient Computing
With the explosion of the number of compute nodes, the bottleneck of future computing systems lies in the network architecture connecting the nodes. Addressing the bottleneck requires replacing current backplane-based network topologies. We propose to revolutionize computing electronics by realizing embedded optical waveguides for onboard networking and wireless chip-to-chip links at a 200-GHz carrier frequency connecting neighboring boards in a rack. The control of novel rate-adaptive optical and mm-wave transceivers needs tight interlinking with the system software for runtime resource management.
Compression Ratio Learning and Semantic Communications for Video Imaging
Camera sensors have been widely used in intelligent robotic systems.
Developing camera sensors with high sensing efficiency has always been
important to reduce the power, memory, and other related resources. Inspired by
recent success on programmable sensors and deep optic methods, we design a
novel video compressed sensing system with spatially-variant compression
ratios, which achieves higher imaging quality than the existing snapshot
compressed imaging methods with the same sensing costs. In this article, we
also investigate the data transmission methods for programmable sensors, where
the performance of communication systems is evaluated by the reconstructed
images or videos rather than the transmission of sensor data itself. Usually,
different reconstruction algorithms are designed for applications in high
dynamic range imaging, video compressive sensing, or motion deblurring. This
task-aware property inspires a semantic communication framework for
programmable sensors. In this work, a policy-gradient-based reinforcement
learning method is introduced to achieve an explicit trade-off between the
compression (or transmission) rate and the image distortion. Numerical results
show the superiority of the proposed methods over existing baselines.
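The rate-distortion trade-off being optimized can be made concrete with a Lagrangian formulation: per block, choose the compression ratio minimizing D + λ·R, where λ sets the operating point. The abstract solves this with policy-gradient RL over sensed content; the exhaustive version below, on made-up rate and distortion numbers, only illustrates the objective.

```python
# Hypothetical per-block compression-ratio selection by minimizing the
# Lagrangian cost D + lam * R. Rates and distortions are toy models.
ratios = [2, 4, 8, 16]                         # candidate compression ratios
rate = {r: 1.0 / r for r in ratios}            # fraction of data transmitted
dist = {r: 0.01 * r ** 1.5 for r in ratios}    # toy distortion growth model

def best_ratio(lam):
    """Compression ratio minimizing distortion + lam * rate."""
    return min(ratios, key=lambda r: dist[r] + lam * rate[r])

low_lam, high_lam = best_ratio(0.01), best_ratio(1.0)
```

A small λ (rate is cheap) favors mild compression for image quality; a large λ pushes toward heavier compression, which is the spatially-variant adaptation the learned policy performs per region.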