Deep Learning Based on Orthogonal Approximate Message Passing for CP-Free OFDM
Channel estimation and signal detection are very challenging for an
orthogonal frequency division multiplexing (OFDM) system without cyclic prefix
(CP). In this article, deep learning based on orthogonal approximate message
passing (DL-OAMP) is used to address these problems. The DL-OAMP receiver
includes a channel estimation neural network (CE-Net) and a signal detection
neural network based on OAMP, called OAMP-Net. The CE-Net is initialized by the
least-squares (LS) channel estimation algorithm and refined by a minimum mean-squared
error (MMSE) neural network. The OAMP-Net is established by unfolding the
iterative OAMP algorithm and adding some trainable parameters to improve the
detection performance. The DL-OAMP receiver has low complexity and can
estimate time-varying channels after a single training phase. Simulation results
demonstrate that the bit-error rate (BER) of the proposed scheme is lower than
that of competing algorithms for high-order modulation.
Comment: 5 pages, 4 figures, updated manuscript; International Conference on
Acoustics, Speech and Signal Processing (ICASSP 2019). arXiv admin note:
substantial text overlap with arXiv:1903.0476
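The core idea here, deep unfolding, turns each iteration of a classical
algorithm into a network layer with a handful of trainable parameters. A
minimal PyTorch sketch of that pattern follows; the matched-filter linear
step, the soft-threshold denoiser, and the parameter names (gamma, theta)
are illustrative assumptions, not the exact OAMP-Net design from the paper.

```python
# Minimal sketch of deep-unfolded, OAMP-style detection. The linear step
# and denoiser below are simplified stand-ins for the OAMP linear estimator
# and constellation-matched denoiser; the unfolding pattern is the point.
import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    def __init__(self, num_layers: int = 4):
        super().__init__()
        # One trainable step size and one trainable denoiser scale per
        # unfolded iteration ("adding some trainable parameters").
        self.gamma = nn.Parameter(torch.ones(num_layers))
        self.theta = nn.Parameter(torch.ones(num_layers))
        self.num_layers = num_layers

    def forward(self, y: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        # y: (batch, m) received signal; H: (batch, m, n) channel estimate.
        x = torch.zeros(y.shape[0], H.shape[-1], device=y.device)
        Ht = H.transpose(-1, -2)
        for t in range(self.num_layers):
            # Linear estimate: residual correction with a trainable step size.
            residual = y - torch.einsum("bmn,bn->bm", H, x)
            r = x + self.gamma[t] * torch.einsum("bnm,bm->bn", Ht, residual)
            # Nonlinear estimate: soft threshold scaled by a trainable knob,
            # standing in for the symbol-wise MMSE denoiser.
            x = torch.sign(r) * torch.clamp(r.abs() - 0.1 * self.theta[t], min=0.0)
        return x
```

Because the layer count is fixed and only a few scalars are learned per
layer, such a network stays cheap to train and run, which is consistent with
the low-complexity claim in the abstract.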
Segatron: Segment-Aware Transformer for Language Modeling and Understanding
Transformers are powerful for sequence modeling. Nearly all state-of-the-art
language models and pre-trained language models are based on the Transformer
architecture. However, the Transformer distinguishes sequential tokens only by the token
position index. We hypothesize that better contextual representations can be
generated from the Transformer with richer positional information. To verify
this, we propose a segment-aware Transformer (Segatron), by replacing the
original token position encoding with a combined position encoding of
paragraph, sentence, and token. We first introduce the segment-aware mechanism
to Transformer-XL, which is a popular Transformer-based language model with
memory extension and relative position encoding. We find that our method can
further improve the Transformer-XL base model and large model, achieving 17.1
perplexity on the WikiText-103 dataset. We further investigate the pre-training
masked language modeling task with Segatron. Experimental results show that
BERT pre-trained with Segatron (SegaBERT) can outperform BERT with vanilla
Transformer on various NLP tasks, and outperforms RoBERTa on zero-shot sentence
representation learning.
Comment: Accepted by AAAI 2021
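The combined position encoding is the heart of the method. Below is a rough
sketch of how the three position signals could be merged; the additive
combination and embedding sizes are assumptions for illustration, and the
Transformer-XL variant in the paper works with relative, not absolute,
position encodings.

```python
# Sketch of a segment-aware position encoding: the usual token-position
# embedding is replaced by the sum of paragraph-, sentence-, and
# token-position embeddings (sizes are arbitrary placeholders).
import torch
import torch.nn as nn

class SegmentAwarePositions(nn.Module):
    def __init__(self, d_model: int = 768, max_para: int = 64,
                 max_sent: int = 128, max_tok: int = 512):
        super().__init__()
        self.para = nn.Embedding(max_para, d_model)
        self.sent = nn.Embedding(max_sent, d_model)
        self.tok = nn.Embedding(max_tok, d_model)

    def forward(self, para_idx, sent_idx, tok_idx):
        # Each index tensor is (batch, seq_len): for every token, its
        # paragraph index, sentence index within the paragraph, and token
        # index within the sentence.
        return self.para(para_idx) + self.sent(sent_idx) + self.tok(tok_idx)
```

Such an encoding slots in exactly where a vanilla position embedding would
go, so the rest of the Transformer stack is unchanged.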
Triminimal Parametrization of Quark Mixing Matrix
Starting from a new zeroth-order basis for the quark mixing (CKM) matrix based on
the quark-lepton complementarity and the tri-bimaximal pattern of lepton
mixing, we derive a triminimal parametrization of the CKM matrix with three small
angles and a CP-violating phase as its parameters. This new triminimal
parametrization has the merits of fast convergence and simplicity in
application. Using the quark-lepton complementarity relations, we derive
relations between the two unified triminimal parametrizations: the one for
quark mixing obtained in this work and the one for lepton mixing obtained by
Pakvasa-Rodejohann-Weiler. A parametrization deviating from quark-lepton
complementarity is also discussed.
Comment: 9 pages, no figures
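For context, a triminimal parametrization expands each mixing angle around a
zeroth-order value to first order in three small deviations. Schematically,
in generic notation (the specific zeroth-order basis derived in the paper is
not reproduced here):

```latex
% Generic triminimal expansion: each angle is a zeroth-order value plus a
% small deviation, and the mixing matrix is expanded to first order.
\theta_{ij} = \theta_{ij}^{0} + \epsilon_{ij}, \qquad |\epsilon_{ij}| \ll 1,
\qquad
V_{\rm CKM} = V^{0}
  + \sum_{ij} \epsilon_{ij}
    \left.\frac{\partial V}{\partial \theta_{ij}}\right|_{\theta=\theta^{0}}
  + \mathcal{O}(\epsilon^{2}).
```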