351 research outputs found

    Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip

    Full text link
    Although Shannon theory states that it is asymptotically optimal to separate source and channel coding into two independent processes, in many practical communication scenarios this decomposition is limited by the finite bit-length and the computational power available for decoding. Recently, neural joint source-channel coding (NECST) was proposed to sidestep this problem. While it leverages advances in amortized inference and deep learning to improve the encoding and decoding process, it still cannot always achieve compelling compression and error-correction performance due to the limited robustness of its learned coding networks. In this paper, motivated by the inherent connections between neural joint source-channel coding and discrete representation learning, we propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme. More specifically, on the encoder side, we explicitly maximize the mutual information between the codeword and the data, while on the decoder side, the amortized reconstruction is regularized within an adversarial framework. Extensive experiments conducted on various real-world datasets show that IABF achieves state-of-the-art performance on both compression and error-correction benchmarks and outperforms the baselines by a significant margin. Comment: AAAI202
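    To make the kind of scheme the abstract describes concrete, the sketch below shows a toy neural joint source-channel autoencoder in PyTorch: binary codewords pass through a bit-flip (binary symmetric) channel, and an auxiliary routine flips the code bits the reconstruction loss is most sensitive to, a rough stand-in for an adversarial bit flip. All module names, dimensions and the flip probability are illustrative assumptions, not the paper's actual IABF implementation.

    import torch
    import torch.nn as nn

    class JointCodingAutoencoder(nn.Module):
        """Toy joint source-channel autoencoder with a binary-symmetric channel.
        Layer sizes and flip probability are illustrative assumptions."""
        def __init__(self, data_dim=784, code_bits=100, flip_prob=0.1):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                                         nn.Linear(256, code_bits))
            self.decoder = nn.Sequential(nn.Linear(code_bits, 256), nn.ReLU(),
                                         nn.Linear(256, data_dim))
            self.flip_prob = flip_prob

        def forward(self, x):
            probs = torch.sigmoid(self.encoder(x))
            # Straight-through estimator: hard bits forward, soft gradient backward.
            bits = (probs > 0.5).float() + probs - probs.detach()
            # Binary symmetric channel: each bit flips independently with flip_prob.
            flips = (torch.rand_like(bits) < self.flip_prob).float()
            noisy = bits * (1 - flips) + (1 - bits) * flips
            return self.decoder(noisy)

    def most_sensitive_flip(model, x, k=1):
        """Flip the k code bits whose change affects the reconstruction loss most,
        a crude approximation of an adversarial bit flip."""
        bits = (torch.sigmoid(model.encoder(x)) > 0.5).float().requires_grad_(True)
        loss = nn.functional.mse_loss(model.decoder(bits), x)
        grad, = torch.autograd.grad(loss, bits)
        idx = grad.abs().topk(k, dim=1).indices
        flipped = bits.detach().clone()
        flipped.scatter_(1, idx, 1.0 - flipped.gather(1, idx))
        return flipped

    The mutual-information term on the encoder side is omitted here; in practice an estimator of the mutual information between codeword and data would be added to the training objective.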

    Iterative decoding scheme for cooperative communications

    Get PDF

    Decoding of Decode and Forward (DF) Relay Protocol using Min-Sum Based Low Density Parity Check (LDPC) System

    Get PDF
    High decoding complexity is a major issue in designing a decode-and-forward (DF) relay protocol, so a low-complexity decoding system would be beneficial for supporting DF relaying. This paper reviews existing methods for min-sum based LDPC decoding as such a low-complexity decoding system; reference lists of the chosen articles were further reviewed for associated publications. The paper introduces a comprehensive system model representing and describing the methods developed for LDPC-based DF relay protocols. It consists of several components: (1) encoding and modulation at the source node, (2) demodulation, decoding, re-encoding and modulation at the relay node, and (3) demodulation and decoding at the destination node. The paper also proposes a new taxonomy for min-sum based LDPC decoding techniques, highlights some of the most important aspects such as the data used and the reported performance, and profiles the Variable and Check Node (VCN) operation methods that have the potential to be used in DF relay protocols. Min-sum based LDPC decoding methods can provide an objective measure of the best tradeoff between low decoding complexity and decoding error performance, and emerge as a cost-effective solution for practical applications.
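    For context, the min-sum algorithm referred to above replaces the hyperbolic-tangent check-node computation of sum-product decoding with a sign-and-minimum rule. The NumPy sketch below is a generic, dense-matrix illustration of such a decoder, not code from the reviewed systems; practical implementations operate on sparse parity-check structures.

    import numpy as np

    def check_node_update(H, var_to_check):
        """Min-sum check-node rule: to each connected variable, send the product of
        the signs of the other incoming messages times their minimum magnitude."""
        M, N = H.shape
        check_to_var = np.zeros((M, N))
        for m in range(M):
            connected = np.flatnonzero(H[m])
            msgs = var_to_check[m, connected]
            for j, n in enumerate(connected):
                others = np.delete(msgs, j)
                check_to_var[m, n] = np.prod(np.sign(others)) * np.min(np.abs(others))
        return check_to_var

    def variable_node_update(H, llr, check_to_var):
        """Each variable sends its channel LLR plus the messages from the other checks."""
        M, N = H.shape
        var_to_check = np.zeros((M, N))
        total = llr + check_to_var.sum(axis=0)
        for m in range(M):
            for n in np.flatnonzero(H[m]):
                var_to_check[m, n] = total[n] - check_to_var[m, n]
        return var_to_check

    def min_sum_decode(H, llr, max_iters=20):
        """Iterate message passing until all parity checks are satisfied."""
        var_to_check = H * llr                      # initialise edges with channel LLRs
        hard = (llr < 0).astype(int)
        for _ in range(max_iters):
            check_to_var = check_node_update(H, var_to_check)
            hard = ((llr + check_to_var.sum(axis=0)) < 0).astype(int)
            if not np.any((H @ hard) % 2):          # valid codeword found
                break
            var_to_check = variable_node_update(H, llr, check_to_var)
        return hard

    Normalised or offset min-sum variants, common in the reviewed literature, scale or shift the minimum magnitude to recover part of the sum-product performance at little extra cost.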

    LDPC-coded modulation for transmission over AWGN and flat rayleigh fading channels

    Get PDF
    Coded modulation is a bandwidth-efficient transmission technique that integrates channel coding and modulation into a single entity in order to improve performance while keeping the same spectral efficiency as uncoded modulation. Low-density parity-check (LDPC) codes are among the most powerful error-correcting codes, approaching the Shannon limit while having relatively low decoding complexity. The idea of combining LDPC codes with bandwidth-efficient modulation has therefore been considered by many researchers. In this thesis, we study a coded modulation scheme that is both powerful and bandwidth-efficient, with excellent bit error rate performance and low implementation complexity. This is achieved by using a fast encoder, a low-complexity decoder and no interleaver. The performance of the proposed system for transmission over an additive white Gaussian noise channel and a flat Rayleigh fading channel is evaluated through simulations. The numerical results show that the coded modulation scheme using M-ary quadrature amplitude modulation (M-QAM) can achieve excellent performance over a wide range of spectral efficiencies. Another contribution of this thesis is a simple method for realizing adaptive coded modulation with LDPC codes for transmission over slow, flat Rayleigh fading channels. In this method, six encoder/modulator pairs are employed for frame-by-frame adaptation, and the average spectral efficiency varies between 0.5 and 5 bits/s/Hz during transmission. Simulation results show that adaptive coded modulation with LDPC codes offers better spectral efficiency while maintaining acceptable error performance.
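    The frame-by-frame adaptation described in the last paragraph can be pictured as a simple mode-selection rule: estimate the channel SNR for the coming frame and pick the most spectrally efficient encoder/modulator pair whose threshold it meets. The six modes and SNR thresholds in the sketch below are illustrative placeholders spanning 0.5 to 5 bits/s/Hz, not the thesis' actual design.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Mode:
        name: str
        spectral_efficiency: float   # bits/s/Hz = code rate x bits per symbol
        snr_threshold_db: float      # assumed minimum SNR for the target error rate

    # Placeholder mode table; thresholds are illustrative, not measured values.
    MODES = [
        Mode("LDPC 1/2 + BPSK",   0.5,  1.0),
        Mode("LDPC 1/2 + QPSK",   1.0,  4.0),
        Mode("LDPC 3/4 + QPSK",   1.5,  6.0),
        Mode("LDPC 1/2 + 16-QAM", 2.0,  9.0),
        Mode("LDPC 3/4 + 16-QAM", 3.0, 12.0),
        Mode("LDPC 5/6 + 64-QAM", 5.0, 18.0),
    ]

    def select_mode(estimated_snr_db: float) -> Optional[Mode]:
        """Return the most efficient mode whose SNR threshold is met, if any."""
        feasible = [m for m in MODES if estimated_snr_db >= m.snr_threshold_db]
        return max(feasible, key=lambda m: m.spectral_efficiency) if feasible else None

    print(select_mode(2.5).name)    # deep fade: fall back to "LDPC 1/2 + BPSK"
    print(select_mode(20.0).name)   # good channel: "LDPC 5/6 + 64-QAM"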

    Mobile and Wireless Communications

    Get PDF
    Mobile and wireless communications have been one of the major revolutions of the late twentieth century. We are witnessing very fast growth in these technologies, and mobile and wireless communications have become ubiquitous in our society and indispensable for our daily lives. The relentless demand for higher data rates with better quality of service to support state-of-the-art applications has revolutionized the wireless communication field and led to the emergence of new technologies such as Bluetooth, WiFi, WiMAX, ultra-wideband and OFDMA. Moreover, market trends confirm that this revolution is not going to stop in the foreseeable future. Mobile and wireless communications applications cover diverse areas including entertainment, industry, biomedicine, medicine, safety and security, and others, and are definitely improving our daily life. Wireless communication networking is a multidisciplinary field addressing aspects ranging from theoretical analysis and system architecture design to hardware and software implementations. As new applications demand higher data rates, better quality of service and longer mobile battery life, new development, advanced research and new system and circuit designs are necessary to keep pace with market requirements. This book covers the most advanced research and development topics in mobile and wireless communication networks. It is divided into two parts with a total of thirty-four stand-alone chapters covering various areas of wireless communications, including: physical layer and network layer, access methods and scheduling, techniques and technologies, antenna and amplifier design, integrated circuit design, and applications and systems. These chapters present novel, cutting-edge results and developments related to wireless communication, offering readers the opportunity to enrich their knowledge of specific topics as well as to explore the whole field of rapidly emerging mobile and wireless networks. We hope that this book will be useful for students, researchers and practitioners in their research studies.

    Bootstrapped Masked Autoencoders for Vision BERT Pretraining

    Full text link
    We propose bootstrapped masked autoencoders (BootMAE), a new approach for vision BERT pretraining. BootMAE improves the original masked autoencoder (MAE) with two core designs: 1) a momentum encoder that provides online features as extra BERT prediction targets; 2) a target-aware decoder that reduces the pressure on the encoder to memorize target-specific information during BERT pretraining. The first design is motivated by the observation that using a pretrained MAE to extract features as the BERT prediction target for masked tokens can achieve better pretraining performance. We therefore add a momentum encoder in parallel with the original MAE encoder, which bootstraps the pretraining performance by using its own representation as the BERT prediction target. In the second design, we feed target-specific information (e.g., pixel values of unmasked patches) from the encoder directly to the decoder, reducing the pressure on the encoder to memorize the target-specific information. The encoder thus focuses on semantic modeling, which is the goal of BERT pretraining, and does not need to waste its capacity memorizing information about unmasked tokens related to the prediction target. Through extensive experiments, BootMAE achieves 84.2% Top-1 accuracy on ImageNet-1K with a ViT-B backbone, outperforming MAE by +0.8% under the same pretraining epochs. BootMAE also gains +1.0 mIoU on semantic segmentation on ADE20K and +1.3 box AP and +1.4 mask AP on object detection and segmentation on the COCO dataset. Code is released at https://github.com/LightDXY/BootMAE. Comment: ECCV 2022, code is available at https://github.com/LightDXY/BootMA
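    The momentum-encoder design mentioned above follows a familiar pattern: a target network tracks the online encoder through an exponential moving average, and its features serve as the prediction target for the masked tokens. The PyTorch fragment below sketches only that pattern; the module interfaces, loss choice and the 0.999 momentum are assumptions for illustration, not the released BootMAE code.

    import copy
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def ema_update(online_encoder, momentum_encoder, momentum=0.999):
        """Move each momentum-encoder parameter a small step toward the online one."""
        for p_online, p_target in zip(online_encoder.parameters(),
                                      momentum_encoder.parameters()):
            p_target.mul_(momentum).add_(p_online, alpha=1.0 - momentum)

    def training_step(online_encoder, momentum_encoder, decoder, optimizer, images, mask):
        """One bootstrapped step: predict momentum-encoder features for masked patches."""
        visible_feats = online_encoder(images, mask)      # encode visible patches only
        with torch.no_grad():
            target_feats = momentum_encoder(images)       # features of the full image
        pred = decoder(visible_feats, mask)               # predictions at masked positions
        loss = F.mse_loss(pred, target_feats)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        ema_update(online_encoder, momentum_encoder)      # bootstrap the target encoder
        return loss.item()

    # The momentum encoder would typically start as a copy of the online encoder:
    # momentum_encoder = copy.deepcopy(online_encoder)

    The target-aware decoder of the abstract, which additionally receives pixel information from unmasked patches, is not shown in this fragment.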

    Decryption Failure Attacks on Post-Quantum Cryptography

    Get PDF
    This dissertation mainly presents new cryptanalytic results related to securely implementing the next generation of asymmetric cryptography, or public-key cryptography (PKC). PKC, as deployed until today, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved by Peter Shor's algorithm for quantum computers, which finds the answers in polynomial time. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards standardization of post-quantum cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software while being resistant to attacks from quantum computers, and it is well suited for replacing the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely on well-known mathematical problems with theoretical proofs of security under established models, such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, both concerning theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are called side-channels, currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates? And 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to the practice of software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and further theoretical cryptanalytic results.

    Towards Better Image Embeddings Using Neural Networks

    Get PDF
    The primary focus of this dissertation is to study image embeddings extracted by neural networks. Deep Learning (DL) is preferred over traditional Machine Learning (ML) for the reason that feature representations can be automatically constructed from data without human involvement. On account of the effectiveness of deep features, the last decade has witnessed unprecedented advances in Computer Vision (CV), and more real-world applications are expected to be introduced in the coming years. A diverse collection of studies has been included, covering areas such as person re-identification, vehicle attribute recognition, neural image compression, clustering and unsupervised anomaly detection. More specifically, three aspects of feature representations have been thoroughly analyzed. Firstly, features should be distinctive, i.e., features of samples from distinct categories ought to differ significantly. Extracting distinctive features is essential for image retrieval systems, in which an algorithm finds the gallery sample that is closest to a query sample. Secondly, features should be privacy-preserving, i.e., inferring sensitive information from features must be infeasible. With the widespread adoption of Machine Learning as a Service (MLaaS), utilizing privacy-preserving features prevents privacy violations even if the server has been compromised. Thirdly, features should be compressible, i.e., compact features are preferable as they require less storage space. Obtaining compressible features plays a vital role in data compression. Towards the goal of deriving distinctive, privacy-preserving and compressible feature representations, research articles included in this dissertation reveal different approaches to improving image embeddings learned by neural networks. This topic remains a fundamental challenge in Machine Learning, and further research is needed to gain a deeper understanding
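    The retrieval setting mentioned for distinctive features can be made concrete in a few lines of NumPy: embed the query and the gallery, then return the gallery items with the highest cosine similarity. This is a generic illustration under assumed embedding shapes, not code from the included articles.

    import numpy as np

    def retrieve(query, gallery, k=5):
        """Indices of the k gallery embeddings most similar to the query (cosine)."""
        q = query / np.linalg.norm(query)
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        similarity = g @ q                    # cosine similarity per gallery item
        return np.argsort(-similarity)[:k]    # most similar first

    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(1000, 128))              # stand-in embeddings
    query = gallery[42] + 0.01 * rng.normal(size=128)   # near-duplicate of item 42
    print(retrieve(query, gallery)[0])                  # 42, the true match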