Protograph-Based LDPC Code Design for Probabilistic Shaping with On-Off Keying
This work investigates protograph-based LDPC codes for the AWGN channel with
OOK modulation. A non-uniform distribution of the OOK modulation symbols is
considered to improve power efficiency, especially at low SNRs. To this
end, a specific transmitter architecture based on time sharing is proposed that
allows probabilistic shaping of (some) OOK modulation symbols. Tailored
protograph-based LDPC code designs outperform standard schemes with uniform
signaling and off-the-shelf codes by 1.1 dB for a transmission rate of 0.25
bits/channel use.
Comment: Invited Paper for CISS 201
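The shaping effect this abstract targets can be illustrated with a small numerical experiment (our own toy sketch, not the paper's protograph construction): under an average-power constraint, making the "on" symbol rarer lets each pulse carry more energy, which raises the mutual information of OOK over AWGN at low SNR. The function name and grid parameters below are illustrative.

```python
import numpy as np

def ook_mutual_information(p_on, snr_db, n_grid=4001):
    """Mutual information (bits/channel use) of OOK over a real AWGN channel.

    Symbols are 0 and A with P(X = A) = p_on; A is scaled so the average
    power p_on * A**2 matches the target SNR (unit noise variance).
    """
    snr = 10.0 ** (snr_db / 10.0)
    A = np.sqrt(snr / p_on)                      # average-power constraint
    y = np.linspace(-6.0, A + 6.0, n_grid)
    dy = y[1] - y[0]
    p0 = np.exp(-y**2 / 2.0) / np.sqrt(2.0 * np.pi)        # p(y | off)
    p1 = np.exp(-(y - A)**2 / 2.0) / np.sqrt(2.0 * np.pi)  # p(y | on)
    py = (1.0 - p_on) * p0 + p_on * p1                     # output density
    eps = 1e-300                                           # avoid log(0)
    i0 = np.sum(p0 * np.log2((p0 + eps) / (py + eps))) * dy
    i1 = np.sum(p1 * np.log2((p1 + eps) / (py + eps))) * dy
    return (1.0 - p_on) * i0 + p_on * i1

# At -3 dB SNR, a sparser 'on' symbol (more energy per pulse) beats uniform
# signaling -- the effect that probabilistic shaping of OOK exploits.
uniform = ook_mutual_information(0.5, -3.0)
shaped = ook_mutual_information(0.25, -3.0)
```

The non-uniform input does not reach this gain for free in practice; the paper's time-sharing transmitter and tailored LDPC designs are what make it usable with a practical decoder.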
Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping
In this paper, we provide for the first time a systematic comparison of
distribution matching (DM) and sphere shaping (SpSh) algorithms for short
blocklength probabilistic amplitude shaping. For asymptotically large
blocklengths, constant composition distribution matching (CCDM) is known to
generate the target capacity-achieving distribution. As the blocklength
decreases, however, the resulting rate loss diminishes the efficiency of CCDM.
We claim that for such short blocklengths and over the additive white Gaussian
noise (AWGN) channel, the objective of shaping should be reformulated as obtaining
the most energy-efficient signal space for a given rate (rather than matching
distributions). In light of this interpretation, multiset-partition DM (MPDM),
enumerative sphere shaping (ESS), and shell mapping (SM) are reviewed as
energy-efficient shaping techniques. Numerical results show that MPDM and SpSh
have smaller rate losses than CCDM. SpSh--whose sole objective is to maximize
the energy efficiency--is shown to have the minimum rate loss amongst all. We
provide simulation results of the end-to-end decoding performance showing that
up to 1 dB improvement in power efficiency over uniform signaling can be
obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a
discussion on the complexity of these algorithms from the perspective of
latency, storage and computations.
Comment: 18 pages, 10 figures
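The rate loss driving this comparison can be computed in closed form for a binary constant-composition code: the code rate is (1/n) log2 of the multinomial coefficient, and the loss is measured against the entropy of the empirical distribution. The sketch below (our own illustration, not any of the paper's algorithms) shows the loss shrinking as the blocklength grows.

```python
import math

def ccdm_rate_loss(counts):
    """Rate loss (bits/symbol) of a constant-composition block code.

    counts[i] is the number of occurrences of symbol i in every codeword.
    The rate is (1/n) * log2 of the multinomial coefficient; the loss is
    measured against the entropy of the empirical distribution counts/n.
    """
    n = sum(counts)
    log_mult = math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)
    rate = log_mult / math.log(2) / n
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return entropy - rate

short_loss = ccdm_rate_loss([150, 50])      # blocklength n = 200
long_loss = ccdm_rate_loss([15000, 5000])   # blocklength n = 20000
```

At n = 200 the loss is on the order of hundredths of a bit per symbol, which is exactly the regime where the paper argues energy-efficient shaping (MPDM, ESS, SM) pays off over CCDM.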
Application-Based Coexistence of Different Waveforms on Non-orthogonal Multiple Access
The coexistence of different wireless communication systems, such as LTE and
Wi-Fi sharing the unlicensed band, is well studied in the literature. In these
studies, various methods are proposed to support the coexistence of such
systems, including listen-before-talk mechanisms and joint user association and
resource allocation. In this study, by contrast, the coexistence of different
waveform structures in the same resource elements is studied under the theory
of non-orthogonal multiple access (NOMA). This study introduces a paradigm
shift in NOMA towards application-centric waveform coexistence. Throughout the
paper, the coexistence of different waveforms is explained with two specific
use cases, which are power-balanced NOMA and joint radar-sensing and
communication with NOMA. In addition, some previous works in the literature on
non-orthogonal waveform coexistence are reviewed; the concept, however, is not
limited to these use cases. With the rapid development of
wireless technology, next-generation wireless systems are proposed to be
flexible and hybrid, having different kinds of capabilities such as sensing,
security, intelligence, control, and computing. Therefore, the coexistence of
different waveforms to meet these needs is attracting growing interest from
researchers.
Comment: Submitted to IEEE for possible publication. arXiv admin note: text
overlap with arXiv:2007.05753, arXiv:2003.0554
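The power-balanced NOMA use case mentioned above builds on classical power-domain superposition with successive interference cancellation (SIC). The following minimal sketch (our own two-user BPSK toy, with an illustrative power split and noise level, not the paper's waveform design) shows the near user cancelling the stronger far-user layer before decoding its own.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two users share the same resource element via power-domain superposition.
p_far, p_near = 0.8, 0.2                       # illustrative power split
bits_far = rng.integers(0, 2, n)
bits_near = rng.integers(0, 2, n)
x_far = np.sqrt(p_far) * (2 * bits_far - 1)    # BPSK symbols
x_near = np.sqrt(p_near) * (2 * bits_near - 1)
y = x_far + x_near + 0.05 * rng.standard_normal(n)   # high-SNR illustration

# The near user decodes the stronger (far-user) layer first, subtracts it,
# then decodes its own layer: successive interference cancellation (SIC).
far_hat = np.sign(y)
residual = y - np.sqrt(p_far) * far_hat
near_hat = np.sign(residual)

ber_near = np.mean(near_hat != (2 * bits_near - 1))
```

The same superposition-and-cancellation principle is what lets two different waveform structures, rather than two users of the same waveform, occupy the same resource elements.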
Integer-Forcing Linear Receivers
Linear receivers are often used to reduce the implementation complexity of
multiple-antenna systems. In a traditional linear receiver architecture, the
receive antennas are used to separate out the codewords sent by each transmit
antenna, which can then be decoded individually. Although easy to implement,
this approach can be highly suboptimal when the channel matrix is near
singular. This paper develops a new linear receiver architecture that uses the
receive antennas to create an effective channel matrix with integer-valued
entries. Rather than attempting to recover transmitted codewords directly, the
decoder recovers integer combinations of the codewords according to the entries
of the effective channel matrix. The codewords are all generated using the same
linear code which guarantees that these integer combinations are themselves
codewords. Provided that the effective channel is full rank, these integer
combinations can then be digitally solved for the original codewords. This
paper focuses on the special case where there is no coding across transmit
antennas and no channel state information at the transmitter(s), which
corresponds either to a multi-user uplink scenario or to single-user V-BLAST
encoding. In this setting, the proposed integer-forcing linear receiver
significantly outperforms conventional linear architectures such as the
zero-forcing and linear MMSE receiver. In the high SNR regime, the proposed
receiver attains the optimal diversity-multiplexing tradeoff for the standard
MIMO channel with no coding across transmit antennas. It is further shown that
in an extended MIMO model with interference, the integer-forcing linear
receiver achieves the optimal generalized degrees-of-freedom.
Comment: 40 pages, 16 figures, to appear in the IEEE Transactions on
Information Theory
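The core idea, recovering an integer combination of codewords instead of a single codeword, can be made concrete with a small numerical sketch (our own illustration of the principle, not the paper's receiver). For a near-singular channel, projecting onto a well-aligned integer combination amplifies the noise far less than zero forcing does; the channel matrix and search range below are illustrative.

```python
import itertools

import numpy as np

# A near-singular 2x2 channel: separating each codeword directly (zero
# forcing) projects the received signal onto near-parallel directions and
# greatly amplifies the noise.
H = np.array([[1.0, 0.95],
              [0.95, 1.0]])
Hinv = np.linalg.inv(H)

def noise_amplification(a):
    """Post-projection noise power when recovering the integer combination a."""
    return float(np.sum((np.asarray(a, dtype=float) @ Hinv) ** 2))

zf_amp = noise_amplification([1, 0])   # recover codeword 1 directly

# Integer forcing instead recovers a well-aligned integer combination first;
# because all transmitters use the same linear code, a1*x1 + a2*x2 is itself
# a codeword and can be decoded at far lower effective noise.
best = min(
    (a for a in itertools.product(range(-3, 4), repeat=2) if any(a)),
    key=noise_amplification,
)
if_amp = noise_amplification(best)
```

For this channel the best small integer vector is (1, 1) up to sign, and its noise amplification is orders of magnitude below that of direct separation; once enough independent combinations are decoded, the original codewords are recovered by digitally inverting the integer matrix.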
Unequal Error Protection Raptor Codes
We design Unequal Error Protection (UEP) Raptor codes in which the UEP property is provided by the precode part of the Raptor code, which is usually a low-density parity-check (LDPC) code. Existing UEP Raptor codes apply the UEP property to the Luby transform (LT) part of the Raptor code. That approach lowers the bit erasure rate (BER) of the more important bits (MIB) decoded by the LT part of the Raptor decoder at the expense of degrading the BER of the less important bits (LIB); hence, the overall BER of the data passed from the LT part to the LDPC part of the decoder is higher than when an equal error protection (EEP) LT code is used. The proposed UEP Raptor code design combines a UEP LDPC code with an EEP LT code, so it has the advantage of passing data blocks with lower BER from the LT part to the LDPC part of the decoder. This advantage translates into improved performance, in terms of required overhead and achieved BER, on both the MIB and LIB compared to UEP Raptor codes that apply the UEP property to the LT part. We propose two design schemes. The first combines a partially regular LDPC code that has UEP properties with an EEP LT code; the second uses two LDPC codes with different code rates in the precode part, such that the MIB are encoded with the lower-rate LDPC code, while the LT part is EEP. Simulations of both designs exhibit improved BER performance on both the MIB and LIB while consuming smaller overheads. The second design can also provide unequal protection when the MIB comprise a fraction of more than 0.4 of the source data, a case where UEP Raptor codes with UEP LT codes perform poorly.
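The LT stage referred to throughout can be illustrated with a toy encoder and peeling decoder (our own sketch of the generic LT mechanism only, not the paper's design): each output symbol XORs a random subset of source bits, and decoding repeatedly resolves degree-1 symbols and substitutes them back. The flat degree distribution below is a stand-in for the robust soliton distribution used in real LT codes.

```python
import random

def lt_encode(source, n_out, rng):
    """Toy LT encoder: each output symbol XORs a random subset of source bits.

    A flat degree distribution over {1,...,4} stands in for the robust
    soliton distribution used in practice.
    """
    k = len(source)
    coded = []
    for _ in range(n_out):
        d = rng.choice([1, 2, 3, 4])
        idx = rng.sample(range(k), d)
        val = 0
        for i in idx:
            val ^= source[i]
        coded.append((set(idx), val))
    return coded

def lt_peel(coded, k):
    """Peeling decoder: resolve degree-1 symbols, substitute, repeat."""
    known = {}
    pending = [[set(idx), val] for idx, val in coded]
    progress = True
    while progress and len(known) < k:
        progress = False
        for sym in pending:
            resolved = sym[0] & known.keys()
            for i in resolved:
                sym[1] ^= known[i]          # substitute known bits
            sym[0] -= resolved
            if len(sym[0]) == 1:            # a degree-1 symbol reveals a bit
                (i,) = sym[0]
                if i not in known:
                    known[i] = sym[1]
                    progress = True
    return known

rng = random.Random(1)
source = [rng.randint(0, 1) for _ in range(20)]
coded = lt_encode(source, 60, rng)
recovered = lt_peel(coded, 20)
```

In a Raptor code, the LDPC precode then cleans up whatever fraction of bits the LT peeling leaves unresolved, which is why the paper's choice of where to place the UEP property (precode versus LT part) matters.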
Decryption Failure Attacks on Post-Quantum Cryptography
This dissertation mainly discusses new cryptanalytic results related to securely implementing the next generation of asymmetric cryptography, or public-key cryptography (PKC). PKC, as deployed to date, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved in polynomial time on a quantum computer using Peter Shor's algorithm. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards standardization of post-quantum cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software while resisting attacks from quantum computers, and it is well suited to replacing the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely on well-known mathematical problems, with theoretical proofs of security under established models such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, concerning both theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are called side-channels, currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions, in general: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates? And 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to the practice of software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytic results.
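Why decryption failures are "low but non-zero" in lattice-based schemes can be sketched with a toy model (our own illustration; the parameters are not those of any NIST candidate): decryption fails when accumulated noise exceeds a rounding threshold in any coefficient, so the failure probability is a Gaussian tail scaled up by the number of coefficients.

```python
import math

def failure_probability(sigma, threshold, n_coeffs):
    """Toy model of decryption failure in an LWE-style scheme.

    Decryption fails if any of n_coeffs Gaussian error terms (std sigma)
    exceeds the rounding threshold; a union bound over coefficients is used.
    All parameters are illustrative, not those of any real scheme.
    """
    per_coeff = math.erfc(threshold / (sigma * math.sqrt(2.0)))  # two-sided tail
    return min(1.0, n_coeffs * per_coeff)
```

Designers pick parameters deep in the tail (e.g. failure rates around 2^-128), but the probability is never exactly zero, and the dissertation's attacks exploit precisely that residual failure behavior, including through side channels.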