Quantization of Binary-Input Discrete Memoryless Channels
The quantization of the output of a binary-input discrete memoryless channel
to a smaller number of levels is considered. An algorithm is given that finds
an optimal quantizer, in the sense of maximizing the mutual information between
the channel input and the quantizer output. This result holds for
arbitrary channels, in contrast to previous results for restricted channels or
a restricted number of quantizer outputs. In the worst case, the algorithm
complexity is cubic in the number of channel outputs. Optimality is
proved using the theorem of Burshtein, Della Pietra, Kanevsky, and Nádas for
mappings which minimize average impurity for classification and regression
trees. Source code is available at http://brian.kurkoski.org.
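A minimal numpy sketch of such a dynamic program over quantizer-cell boundaries, using the key structural fact that an optimal quantizer merges channel outputs that are contiguous once sorted by log-likelihood ratio. The function and variable names are ours, and the channel is assumed given as a transition matrix P(y|x) plus an input distribution:

```python
import numpy as np

def optimal_quantizer(p_y_given_x, p_x, K):
    """Maximize I(X; Z) over K-level quantizers of a binary-input DMC.

    Dynamic program over cell boundaries, assuming an optimal quantizer
    merges outputs that are contiguous in LLR order.  Cost is O(K * N^2)
    for N channel outputs, i.e. at most cubic in N.
    """
    p_x = np.asarray(p_x, dtype=float)                    # P(X), shape (2,)
    p_xy = p_x[:, None] * np.asarray(p_y_given_x, float)  # P(X, Y), shape (2, N)
    order = np.argsort(np.log(p_xy[0] + 1e-300) - np.log(p_xy[1] + 1e-300))
    p_xy = p_xy[:, order]                                 # LLR-sorted outputs
    N = p_xy.shape[1]
    cum = np.hstack([np.zeros((2, 1)), np.cumsum(p_xy, axis=1)])

    def g(a, b):
        """Partial mutual information of one quantizer cell {a, ..., b-1}."""
        q = cum[:, b] - cum[:, a]                         # P(X=x, Z=cell)
        qz = q.sum()
        return sum(q[x] * np.log2(q[x] / (p_x[x] * qz))
                   for x in range(2) if q[x] > 0)

    S = np.full((N + 1, K + 1), -np.inf)                  # best partial MI
    S[0, 0] = 0.0
    back = np.zeros((N + 1, K + 1), dtype=int)            # boundary backtrack
    for k in range(1, K + 1):
        for b in range(k, N + 1):
            for a in range(k - 1, b):
                v = S[a, k - 1] + g(a, b)
                if v > S[b, k]:
                    S[b, k], back[b, k] = v, a
    bounds, b = [N], N                                    # recover boundaries
    for k in range(K, 0, -1):
        b = back[b, k]
        bounds.append(b)
    return S[N, K], bounds[::-1], order
```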
Greedy-Merge Degrading has Optimal Power-Law
Consider a channel with a given input distribution. Our aim is to degrade it
to a channel with at most L output letters. One such degradation method is the
so-called "greedy-merge" algorithm. We derive an upper bound on the resulting reduction
in mutual information between input and output. For fixed input alphabet size
and variable L, the upper bound is within a constant factor of an
algorithm-independent lower bound. Thus, we establish that greedy-merge is
optimal in the power-law sense.
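As a rough illustration of the algorithm being analyzed, one common form of greedy-merge repeatedly merges the pair of output letters whose merger loses the least mutual information. The sketch below (numpy, our naming) uses a naive all-pairs search per step:

```python
import numpy as np

def letter_mi(q, p_x):
    """Contribution of one output letter (joint column q = P(X, Y=y)) to I(X;Y)."""
    qz = q.sum()
    if qz <= 0:
        return 0.0
    mask = q > 0
    return float(np.sum(q[mask] * np.log2(q[mask] / (p_x[mask] * qz))))

def greedy_merge(p_y_given_x, p_x, L):
    """Degrade a channel to at most L output letters by repeatedly merging
    the pair of letters whose merger costs the least mutual information."""
    p_x = np.asarray(p_x, dtype=float)
    cols = [p_x * c for c in np.asarray(p_y_given_x, dtype=float).T]
    while len(cols) > L:
        best_loss, best_pair = np.inf, None
        for i in range(len(cols)):
            for j in range(i + 1, len(cols)):
                # MI loss of merging letters i and j (nonnegative by log-sum).
                loss = (letter_mi(cols[i], p_x) + letter_mi(cols[j], p_x)
                        - letter_mi(cols[i] + cols[j], p_x))
                if loss < best_loss:
                    best_loss, best_pair = loss, (i, j)
        i, j = best_pair
        cols[i] = cols[i] + cols[j]
        del cols[j]
    return np.stack(cols, axis=1)   # degraded joint distribution P(X, Z)
```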
Joint Quantizer Optimization based on Neural Quantizer for Sum-Product Decoder
A low-precision analog-to-digital converter (ADC) is required to implement a
frontend device of wideband digital communication systems in order to reduce
its power consumption. The goal of this paper is to present a novel joint
quantizer optimization method for designing low-precision quantizers matched
to the sum-product algorithm. The principal idea is to introduce a quantizer
that includes a feed-forward neural network and the soft staircase function.
Since the soft staircase function is differentiable and has non-zero gradient
values everywhere, we can exploit backpropagation and a stochastic gradient
descent method to train the feed-forward neural network in the quantizer. The
expected loss between the channel input and the decoder output is minimized
in a supervised training phase. The experimental results indicate that the
joint quantizer optimization method successfully provides an 8-level quantizer
for a low-density parity-check (LDPC) code that achieves only a 0.1-dB
performance loss compared to the unquantized system.
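A hedged PyTorch sketch of this idea: the soft staircase below is a sum of shifted sigmoids with a temperature parameter, so it is differentiable with nonzero gradient everywhere. The layer sizes, level count, and temperature are our assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

class SoftStaircase(nn.Module):
    """Sum of shifted sigmoids: a differentiable surrogate for a uniform
    staircase quantizer.  As the temperature T -> 0 it approaches a hard
    staircase, but its gradient is nonzero everywhere, so it can be
    trained with backpropagation."""
    def __init__(self, levels=8, step=1.0, temperature=0.1):
        super().__init__()
        edges = step * (torch.arange(levels - 1, dtype=torch.float32)
                        - (levels - 2) / 2.0)
        self.register_buffer("edges", edges)   # thresholds between levels
        self.step, self.T = step, temperature

    def forward(self, x):
        # One sigmoid per threshold; their sum climbs one step per edge.
        return self.step * torch.sigmoid(
            (x.unsqueeze(-1) - self.edges) / self.T).sum(-1)

class NeuralQuantizer(nn.Module):
    """Feed-forward network followed by the soft staircase (layer sizes
    and level count are our guesses, not the paper's architecture)."""
    def __init__(self, hidden=32, levels=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
        self.staircase = SoftStaircase(levels=levels)

    def forward(self, y):              # y: channel output samples, shape (batch,)
        z = self.net(y.unsqueeze(-1)).squeeze(-1)
        return self.staircase(z)
```

In the paper's setting such a module would be trained end-to-end with stochastic gradient descent through a supervised loss on the sum-product decoder's output.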
On the Construction of Polar Codes for Channels with Moderate Input Alphabet Sizes
Current deterministic algorithms for the construction of polar codes can only
be argued to be practical for channels with small input alphabet sizes. In this
paper, we show that any construction algorithm for channels with moderate input
alphabet size which follows the paradigm of "degrading after each polarization
step" will inherently be impractical with respect to a certain "hard"
underlying channel. This result also sheds light on why the construction of
LDPC codes using density evolution is impractical for channels with
moderate-sized input alphabets.
Single-bit Quantization Capacity of Binary-input Continuous-output Channels
We consider a channel with discrete binary input X that is corrupted by a
given continuous noise to produce a continuous-valued output Y. A quantizer is
then used to quantize the continuous-valued output Y to the final binary output
Z. The goal is to design an optimal quantizer Q* and also find the optimal
input distribution p*(X) that maximizes the mutual information I(X; Z) between
the binary input and the binary quantized output. A search procedure with
linear time complexity is proposed. Based on the properties of the optimal
quantizer and the optimal input distribution, we reduce the search range,
which results in a faster algorithm. Both theoretical and
numerical results are provided to illustrate our method.
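For concreteness, here is a brute-force baseline for the same objective under an assumed BPSK-over-AWGN model; the noise model, grid, and names are ours, and the paper's point is precisely that the search range can be reduced well below such a full grid:

```python
import numpy as np
from scipy.stats import norm

def mutual_info_binary(p, q0, q1):
    """I(X;Z) in bits for P(X=1)=p, P(Z=1|X=0)=q0, P(Z=1|X=1)=q1."""
    def h2(t):
        t = np.clip(t, 1e-12, 1 - 1e-12)
        return -t * np.log2(t) - (1 - t) * np.log2(1 - t)
    pz1 = (1 - p) * q0 + p * q1
    return h2(pz1) - (1 - p) * h2(q0) - p * h2(q1)

def best_threshold_and_input(sigma=1.0, grid=1001):
    """Joint grid search over the quantizer threshold t and P(X=1)=p,
    assuming BPSK (X=0 -> -1, X=1 -> +1) corrupted by Gaussian noise."""
    best = (-1.0, None, None)
    for t in np.linspace(-3, 3, grid):
        q0 = norm.sf((t + 1) / sigma)   # P(Y > t | -1 sent)
        q1 = norm.sf((t - 1) / sigma)   # P(Y > t | +1 sent)
        for p in np.linspace(0.01, 0.99, 99):
            mi = mutual_info_binary(p, q0, q1)
            if mi > best[0]:
                best = (mi, t, p)
    return best   # (I(X;Z), threshold, input distribution)
```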
Categorical Feature Compression via Submodular Optimization
In the era of big data, learning from categorical features with very large
vocabularies (e.g., 28 million for the Criteo click prediction dataset) has
become a practical challenge for machine learning researchers and
practitioners. We design a highly-scalable vocabulary compression algorithm
that seeks to maximize the mutual information between the compressed
categorical feature and the target binary labels, and we furthermore show that
its solution is guaranteed to be within a 1-1/e factor of the
globally optimal solution. To achieve this, we introduce a novel
re-parametrization of the mutual information objective, which we prove is
submodular, and design a data structure to query the submodular function in
amortized O(log n) time (where n is the input vocabulary size). Our
complete algorithm is shown to operate in O(n log n) time. Additionally, we
design a distributed implementation in which the query data structure is
decomposed across machines such that each machine only requires sublinear
space, while still preserving the approximation guarantee and using only
logarithmic rounds of computation. We also provide an analysis of simple
alternative heuristic compression methods to demonstrate they cannot achieve
any approximation guarantee. Using the large-scale Criteo learning task, we
demonstrate better performance in retaining mutual information and also verify
competitive learning performance compared to other baseline methods.
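As a sketch of the objective, the snippet below computes the mutual information retained by a compressed vocabulary; for binary labels, categories can be sorted by P(y=1 | v) so that only contiguous buckets need to be considered. The equal-mass splits used here are a naive heuristic stand-in for the paper's submodular greedy optimization, and all names are ours:

```python
import numpy as np

def mutual_information(joint):
    """I(V; Y) in bits for a joint count table (rows: feature values)."""
    joint = joint / joint.sum()
    pr = joint.sum(axis=1, keepdims=True)
    pc = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2((joint / (pr * pc))[mask])))

def compress_vocab(counts, k):
    """Compress a vocabulary to at most k buckets for a binary target.

    counts: (V, 2) co-occurrence counts of each category with the label;
    every category is assumed to occur at least once.  For binary labels,
    a mutual-information-optimal merge only groups categories that are
    contiguous after sorting by P(y=1 | v), so compression reduces to
    choosing k-1 split points.  Equal-mass splits below are a heuristic,
    not the paper's submodular greedy choice.
    """
    counts = np.asarray(counts, dtype=float)
    order = np.argsort(counts[:, 1] / counts.sum(axis=1))  # sort by P(y=1|v)
    sorted_counts = counts[order]
    mass = np.cumsum(sorted_counts.sum(axis=1))
    targets = mass[-1] * np.arange(1, k) / k
    cuts = np.unique(np.searchsorted(mass, targets))
    buckets = np.split(sorted_counts, cuts + 1)
    joint = np.array([b.sum(axis=0) for b in buckets if len(b)])
    return mutual_information(joint), order, cuts
```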
Deep Log-Likelihood Ratio Quantization
In this work, a deep learning-based method for log-likelihood ratio (LLR)
lossy compression and quantization is proposed, with emphasis on a single-input
single-output uncorrelated fading communication setting. A deep autoencoder
network is trained to compress, quantize and reconstruct the bit log-likelihood
ratios corresponding to a single transmitted symbol. Specifically, the encoder
maps to a latent space with dimension equal to the number of sufficient
statistics required to recover the inputs - equal to three in this case - while
the decoder aims to reconstruct a noisy version of the latent representation
with the purpose of modeling quantization effects in a differentiable way.
Simulation results show that, when applied to a standard rate-1/2 low-density
parity-check (LDPC) code, a compression factor of nearly three is achieved in
finite precision when storing an entire codeword, with an incurred loss of
performance lower than 0.1 dB compared to straightforward scalar quantization
of the log-likelihood ratios.
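A hedged PyTorch sketch of this architecture: a 3-dimensional latent (the number of sufficient statistics cited above) with additive noise injected on the latent during training to model quantization differentiably. The layer widths, noise scale, and synthetic training data are our assumptions:

```python
import torch
import torch.nn as nn

class LLRAutoencoder(nn.Module):
    """Compress a per-symbol LLR vector to a 3-dim latent and reconstruct;
    noise on the latent stands in for quantization during training."""
    def __init__(self, n_llrs=4, latent=3, hidden=64, noise_std=0.05):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_llrs, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_llrs))
        self.noise_std = noise_std

    def forward(self, llr):
        z = self.encoder(llr)
        if self.training:                     # quantization modeled as noise
            z = z + self.noise_std * torch.randn_like(z)
        return self.decoder(z)

# Training-loop sketch: minimize LLR reconstruction error.
model = LLRAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    llr = torch.randn(256, 4) * 3.0           # stand-in for simulated LLRs
    loss = nn.functional.mse_loss(model(llr), llr)
    opt.zero_grad()
    loss.backward()
    opt.step()
```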
Communication-Channel Optimized Partition
An original discrete source X with distribution p_X is corrupted by noise to
produce noisy data Y, with the joint distribution p(X, Y) given. A
quantizer/classifier Q : Y -> Z is then used to
classify/quantize the data Y to the discrete partitioned output Z with
probability distribution p_Z. Next, Z is transmitted over a deterministic
channel with a given channel matrix A that produces the final discrete output
T. One wants to design the optimal quantizer/classifier Q^* such that the cost
function F(X; T) between the input X and the final output T is minimized while
the probability distribution of the partitioned output Z satisfies a concave
constraint G(p_Z) < C. Our results generalize several well-known previous
results. First, an iterative algorithm with linear time complexity per
iteration is proposed to find a locally optimal quantizer. Second, we show that
the optimal partition is a hard partition, equivalent to cuts by hyperplanes in
the probability space of the posterior probability p(X|Y). This result finally
provides a polynomial-time algorithm to find the globally optimal quantizer.
Entropy-Constrained Maximizing Mutual Information Quantization
In this paper, we investigate quantization of the output of a binary-input
discrete memoryless channel that maximizes the mutual information between the
input and the quantized output under an entropy constraint on the quantized
output. A polynomial-time algorithm is introduced that can find the truly
globally optimal quantizer. These results hold for binary-input channels with
an arbitrary number of quantized outputs. Finally, we extend these results to
binary-input continuous-output channels and show a sufficient condition under
which a single-threshold quantizer is optimal. Both theoretical and numerical
results are provided to justify our techniques.
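To make the objective and constraint concrete, the brute-force sketch below searches LLR-contiguous K-level quantizers of a binary-input DMC and keeps the best I(X;Z) subject to H(Z) <= C. It is exponential in K and purely illustrative; the paper's algorithm finds the global optimum in polynomial time:

```python
import numpy as np
from itertools import combinations

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def ec_mmi_quantizer(p_y_given_x, p_x, K, C):
    """Exhaustive entropy-constrained MMI quantization of a binary-input DMC:
    maximize I(X;Z) over K-level LLR-contiguous quantizers with H(Z) <= C."""
    p_x = np.asarray(p_x, dtype=float)
    p_xy = p_x[:, None] * np.asarray(p_y_given_x, float)   # P(X, Y)
    order = np.argsort(np.log(p_xy[0] + 1e-300) - np.log(p_xy[1] + 1e-300))
    p_xy = p_xy[:, order]                                  # LLR-sorted outputs
    N = p_xy.shape[1]
    best = (-np.inf, None)
    for cuts in combinations(range(1, N), K - 1):
        bounds = (0, *cuts, N)
        # q[z, x] = P(X=x, Z=z) for this candidate quantizer.
        q = np.array([p_xy[:, a:b].sum(1) for a, b in zip(bounds, bounds[1:])])
        p_z = q.sum(1)
        if entropy(p_z) > C:                               # entropy constraint
            continue
        mask = q > 0
        mi = float(np.sum(q[mask] * np.log2((q / (p_z[:, None] * p_x))[mask])))
        if mi > best[0]:
            best = (mi, bounds)
    return best   # (I(X;Z), cell boundaries in LLR order)
```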
LDPC Decoding with Limited-Precision Soft Information in Flash Memories
This paper investigates the application of low-density parity-check (LDPC)
codes to Flash memories. Multiple cell reads with distinct word-line voltages
provide limited-precision soft information for the LDPC decoder. The values of
the word-line voltages (also called reference voltages) are optimized by
maximizing the mutual information (MI) between the input and output of the
multiple-read channel. Constraining the maximum mutual-information (MMI)
quantization to enforce a constant-ratio constraint provides a significant
simplification with no noticeable loss in performance.
Our simulation results suggest that for a well-designed LDPC code, the
quantization that maximizes the mutual information will also minimize the frame
error rate. However, care must be taken to design the code to perform well in
the quantized channel. An LDPC code designed for a full-precision Gaussian
channel may perform poorly in the quantized setting. Our LDPC code designs
provide an example where quantization increases the importance of absorbing
sets, thus changing how the LDPC code should be optimized.
Simulation results show that small increases in precision enable the LDPC
code to significantly outperform a BCH code with comparable rate and block
length (but without the benefit of the soft information) over a range of frame
error rates.
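As an illustration of MMI threshold selection, the sketch below grid-searches two reference voltages for an SLC-like cell modeled as +/-1 plus Gaussian noise, maximizing the mutual information of the resulting three-output read channel. The channel model, grid, and names are our assumptions, not the paper's memory model:

```python
import numpy as np
from scipy.stats import norm

def mi_from_transition(P, p_x):
    """I(X;Z) in bits for transition matrix P[x, z] and input distribution p_x."""
    joint = p_x[:, None] * P
    p_z = joint.sum(0)
    mask = joint > 0
    return float(np.sum(joint[mask] *
                        np.log2((joint / (p_x[:, None] * p_z))[mask])))

def best_two_reads(sigma=0.6, grid=None):
    """Choose two word-line (reference) voltages t1 < t2 that maximize the
    mutual information of the 3-output channel induced by two cell reads."""
    if grid is None:
        grid = np.linspace(-1.5, 1.5, 121)
    p_x = np.array([0.5, 0.5])
    best = (-1.0, None)
    for i, t1 in enumerate(grid):
        for t2 in grid[i + 1:]:
            # Rows: written level -1 or +1; columns: Y < t1, t1 <= Y < t2, Y >= t2.
            P = np.array(
                [[norm.cdf((t1 - m) / sigma),
                  norm.cdf((t2 - m) / sigma) - norm.cdf((t1 - m) / sigma),
                  norm.sf((t2 - m) / sigma)] for m in (-1.0, 1.0)])
            mi = mi_from_transition(P, p_x)
            if mi > best[0]:
                best = (mi, (t1, t2))
    return best   # (I(X;Z), (t1, t2))
```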