620 research outputs found
Hamming Compressed Sensing
Compressed sensing (CS) and 1-bit CS cannot directly recover quantized
signals and require time-consuming recovery. In this paper, we introduce
\textit{Hamming compressed sensing} (HCS), which directly recovers a k-bit
quantized signal of dimension n from its 1-bit measurements via n invocations
of a Kullback-Leibler divergence based nearest neighbor search.
Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes
considerably less (linear) recovery time, and requires substantially fewer
measurements. Moreover, HCS recovery can accelerate the
subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of
HCS for general signals and an "HCS+dequantizer" recovery error bound for sparse
signals. Extensive numerical simulations verify the appealing accuracy,
robustness, efficiency and consistency of HCS.
Comment: 33 pages, 8 figures
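The abstract does not spell out how the per-entry search works, but the primitive it names is standard: among the k-bit quantization levels, pick the one whose predicted distribution is closest in Kullback-Leibler divergence to an empirical one. Below is a minimal Python sketch of that primitive only; the mapping from 1-bit measurements to a Bernoulli parameter (`level_probs`, `p_hat`) is a hypothetical placeholder, not the construction from the paper.

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between two Bernoulli distributions with means p and q."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def kl_nearest_level(p_hat, level_probs):
    """Index of the quantization level whose (hypothetical) predicted
    Bernoulli parameter is KL-closest to the empirical one."""
    divs = [kl_bernoulli(p_hat, q) for q in level_probs]
    return int(np.argmin(divs))

# Toy usage: a 3-bit quantizer gives 8 candidate levels, each associated with a
# placeholder probability of observing +1 in the 1-bit measurements.
level_probs = np.linspace(0.05, 0.95, 8)   # illustrative model, not from the paper
p_hat = 0.62                               # empirical fraction of +1 measurements
print(kl_nearest_level(p_hat, level_probs))
```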
Bilateral Random Projections
Low-rank structures have been profoundly studied in data mining and machine
learning. In this paper, we show that a dense matrix's low-rank approximation
can be rapidly built from its left and right random projections,
or bilateral random projection (BRP). We then show that a power scheme
can further improve the precision. The deterministic, average and deviation
bounds of the proposed method and its power scheme modification are proved
theoretically. The effectiveness and efficiency of BRP based low-rank
approximation are empirically verified on both artificial and real datasets.
Comment: 17 pages, 3 figures, technical report
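As a rough illustration, here is a minimal NumPy sketch of the commonly cited BRP closed form, assuming Gaussian sketches Y1 = X A1 and Y2 = X^T A2 and the rank-r approximation L = Y1 (A2^T Y1)^{-1} Y2^T; the power-scheme refinement mentioned in the abstract is omitted, and details may differ from the paper.

```python
import numpy as np

def brp_lowrank(X, r, seed=0):
    """Rank-r approximation of X via bilateral random projections
    (a sketch of the commonly cited BRP closed form)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A1 = rng.standard_normal((n, r))      # right random projection matrix
    A2 = rng.standard_normal((m, r))      # left random projection matrix
    Y1 = X @ A1                           # left sketch,  m x r
    Y2 = X.T @ A2                         # right sketch, n x r
    # L = Y1 (A2^T Y1)^{-1} Y2^T; pinv guards against an ill-conditioned r x r core.
    return Y1 @ np.linalg.pinv(A2.T @ Y1) @ Y2.T

# Toy usage on an exactly rank-5 matrix: the relative error should be near zero.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
L = brp_lowrank(X, r=5)
print(np.linalg.norm(X - L) / np.linalg.norm(X))
```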
Unsupervised Domain Adaptation on Reading Comprehension
Reading comprehension (RC) has been studied on a variety of datasets, with
performance boosted by deep neural networks. However, the
generalization capability of these models across different domains remains
unclear. To alleviate this issue, we investigate unsupervised
domain adaptation on RC, wherein a model is trained on a labeled source domain
and applied to a target domain with only unlabeled samples. We first
show that even with the powerful BERT contextual representation, the
performance is still unsatisfactory when a model trained on one dataset is
directly applied to another target dataset. To solve this, we propose a novel
conditional adversarial self-training method (CASe). Specifically, our approach
leverages a BERT model fine-tuned on the source dataset, along with
confidence filtering, to generate reliable pseudo-labeled samples in the target
domain for self-training. It then further reduces the domain
distribution discrepancy through conditional adversarial learning across
domains. Extensive experiments show our approach achieves accuracy comparable
to supervised models on multiple large-scale benchmark datasets.
Comment: 8 pages, 6 figures, 5 tables, Accepted by AAAI 2020
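The pseudo-labeling step described above (a source-fine-tuned model plus confidence filtering) can be illustrated with a small, framework-agnostic sketch; the softmax-confidence criterion and the 0.9 threshold below are illustrative assumptions, not the exact rule used in CASe.

```python
import numpy as np

def confidence_filter(logits, threshold=0.9):
    """Keep unlabeled examples whose top predicted class probability exceeds
    a threshold, and return their indices with the pseudo-labels."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    pseudo_labels = probs.argmax(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), pseudo_labels[keep]

# Toy usage: logits from a source-fine-tuned model on unlabeled target samples.
logits = np.array([[4.0, 0.1], [0.3, 0.4], [0.0, 5.0]])
idx, labels = confidence_filter(logits)
print(idx, labels)   # only the high-confidence rows survive for self-training
```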