2,903 research outputs found
Decoding by Sampling: A Randomized Lattice Algorithm for Bounded Distance Decoding
Despite its reduced complexity, lattice reduction-aided decoding exhibits a
widening gap to maximum-likelihood (ML) performance as the dimension increases.
To improve its performance, this paper presents randomized lattice decoding
based on Klein's sampling technique, which is a randomized version of Babai's
nearest plane algorithm (i.e., successive interference cancellation (SIC)). To
find the closest lattice point, Klein's algorithm is used to sample some
lattice points and the closest among those samples is chosen. Lattice reduction
increases the probability of finding the closest lattice point, and only needs
to be run once during pre-processing. Further, the sampling can operate very
efficiently in parallel. The technical contribution of this paper is two-fold:
we analyze and optimize the decoding radius of sampling decoding resulting in
better error performance than Klein's original algorithm, and propose a very
efficient implementation of random rounding. Of particular interest is that a
fixed gain in the decoding radius compared to Babai's decoding can be achieved
at polynomial complexity. The proposed decoder is useful for moderate
dimensions where sphere decoding becomes computationally intensive, while
lattice reduction-aided decoding starts to suffer considerable loss. Simulation
results demonstrate that near-ML performance is achieved with a moderate number
of samples, even when the dimension is as high as 32.
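The decoder described above can be sketched in NumPy as follows. This is a minimal illustration, not the paper's optimized implementation: the function names, the fixed Klein parameter s, the truncated sampling window, and the sample count K are all assumptions for the sketch.

```python
# Sketch: Babai's nearest-plane decoder and Klein-style randomized rounding,
# combined into a "sample K points, keep the closest" decoder.
import numpy as np

rng = np.random.default_rng(0)

def babai_nearest_plane(B, t):
    """Babai's nearest-plane (SIC): QR-decompose B, then back-substitute,
    rounding each coefficient deterministically."""
    Q, R = np.linalg.qr(B)
    y = Q.T @ t
    n = B.shape[1]
    z = np.zeros(n)
    for i in range(n - 1, -1, -1):
        c = (y[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i]
        z[i] = np.round(c)
    return B @ z

def klein_sample(B, t, s=1.0):
    """One Klein sample: instead of rounding, draw each integer coefficient
    from a 1-D discrete Gaussian centred at c with width s/|r_ii|.
    (The window of 13 integers around c is an illustrative truncation.)"""
    Q, R = np.linalg.qr(B)
    y = Q.T @ t
    n = B.shape[1]
    z = np.zeros(n)
    for i in range(n - 1, -1, -1):
        c = (y[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i]
        sigma = s / abs(R[i, i])
        ks = np.arange(np.floor(c) - 6, np.floor(c) + 7)
        w = np.exp(-(ks - c) ** 2 / (2 * sigma ** 2))
        z[i] = rng.choice(ks, p=w / w.sum())
    return B @ z

def sampling_decode(B, t, K=30):
    """Draw K Klein samples (plus the Babai point) and keep the candidate
    lattice point closest to the target t."""
    cands = [babai_nearest_plane(B, t)] + [klein_sample(B, t) for _ in range(K)]
    return min(cands, key=lambda x: np.linalg.norm(t - x))
```

Note that each of the K samples is independent, which is what makes the parallel implementation mentioned in the abstract straightforward.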
2-D Compass Codes
The compass model on a square lattice provides a natural template for
building subsystem stabilizer codes. The surface code and the Bacon-Shor code
represent two extremes of possible codes depending on how many gauge qubits are
fixed. We explore threshold behavior in this broad class of local codes by
trading locality for asymmetry and gauge degrees of freedom for stabilizer
syndrome information. We analyze these codes with asymmetric and spatially
inhomogeneous Pauli noise in the code capacity and phenomenological models. In
these idealized settings, we observe considerably higher thresholds against
asymmetric noise. At the circuit level, these codes inherit the bare-ancilla
fault-tolerance of the Bacon-Shor code.
Comment: 10 pages, 7 figures, added discussion on fault-tolerance
Hardness of Bounded Distance Decoding on Lattices in ℓ_p Norms
Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) \ 2ℤ and constants C > 1, ε > 0, there is no 2^((1-ε)n/C)-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available.
Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p_0 ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
Lattice Gaussian Sampling by Markov Chain Monte Carlo: Bounded Distance Decoding and Trapdoor Sampling
Sampling from the lattice Gaussian distribution plays an important role in
various research fields. In this paper, the Markov chain Monte Carlo
(MCMC)-based sampling technique is advanced on several fronts. Firstly, the
spectral gap for the independent Metropolis-Hastings-Klein (MHK) algorithm is
derived, which is then extended to Peikert's algorithm and rejection sampling;
we show that independent MHK exhibits faster convergence. Then, the performance
of bounded distance decoding using MCMC is analyzed, revealing a flexible
trade-off between the decoding radius and complexity. MCMC is further applied
to trapdoor sampling, again offering a trade-off between security and
complexity. Finally, the independent multiple-try Metropolis-Klein (MTMK)
algorithm is proposed to enhance the convergence rate. The proposed algorithms
allow parallel implementation, which is beneficial for practical applications.
Comment: submitted to Transactions on Information Theory
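The independent Metropolis-Hastings idea underlying these samplers can be illustrated in one dimension: propose from a fixed distribution that ignores the current state, and accept with the usual MH ratio so that the chain targets the exact lattice (integer) Gaussian. The centre, widths, and truncated support below are illustrative assumptions, not parameters from the paper.

```python
# Simplified 1-D sketch of an independent Metropolis-Hastings sampler
# whose stationary distribution is a discrete (integer) Gaussian.
import numpy as np

rng = np.random.default_rng(1)
C, SIGMA, TAU = 0.4, 1.2, 2.0       # centre, target width, proposal width (assumed)
SUPPORT = np.arange(-12, 13)        # truncated integer support (assumed)

def _pmf(width):
    w = np.exp(-(SUPPORT - C) ** 2 / (2 * width ** 2))
    return w / w.sum()

target_p, prop_p = _pmf(SIGMA), _pmf(TAU)

def mh_chain(n_steps):
    """Independent MH: since the proposal q does not depend on the current
    state x, the acceptance ratio reduces to (pi(y)/q(y)) / (pi(x)/q(x))."""
    idx = rng.choice(len(SUPPORT), p=prop_p)  # initial state drawn from q
    out = []
    for _ in range(n_steps):
        cand = rng.choice(len(SUPPORT), p=prop_p)
        ratio = (target_p[cand] / prop_p[cand]) / (target_p[idx] / prop_p[idx])
        if rng.random() < min(1.0, ratio):
            idx = cand
        out.append(SUPPORT[idx])
    return np.array(out)
```

In the paper's setting the proposal is Klein's algorithm over a full lattice rather than a 1-D table, but the accept/reject structure is the same, and independent proposals are what make the spectral-gap analysis and parallel implementation tractable.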
Markov Chain Monte Carlo Algorithms for Lattice Gaussian Sampling
Sampling from a lattice Gaussian distribution is emerging as an important
problem in various areas such as coding and cryptography. The default sampling
algorithm, Klein's algorithm, yields a distribution close to the lattice
Gaussian only if the standard deviation is sufficiently large. In this paper,
we propose the Markov chain Monte Carlo (MCMC) method for lattice Gaussian
sampling when this condition is not satisfied. In particular, we present a
sampling algorithm based on Gibbs sampling, which converges to the target
lattice Gaussian distribution for any value of the standard deviation. To
improve the convergence rate, a more efficient algorithm referred to as
Gibbs-Klein sampling is proposed, which samples block by block using Klein's
algorithm. We show that Gibbs-Klein sampling yields a distribution close to the
target lattice Gaussian, under a less stringent condition than that of the
original Klein algorithm.
Comment: 5 pages, 1 figure, IEEE International Symposium on Information Theory (ISIT) 201
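A single-coordinate Gibbs sweep for a lattice Gaussian can be sketched as follows. This is a minimal illustration of sampling each integer coordinate from its exact 1-D conditional (which is itself a discrete Gaussian), not the block Gibbs-Klein algorithm of the paper; the truncation window and all numeric values are assumptions.

```python
# Sketch: coordinate-wise Gibbs sampling for the lattice Gaussian
#   pi(z) ∝ exp(-||B z - c||^2 / (2 s^2))  over integer vectors z.
import numpy as np

rng = np.random.default_rng(2)

def gibbs_lattice_gaussian(B, c, s, n_iter=1000):
    n = B.shape[1]
    z = np.zeros(n)
    samples = []
    for _ in range(n_iter):
        for i in range(n):
            # Residual with coordinate i removed: r = c - B z + b_i z_i.
            b = B[:, i]
            r = c - B @ z + b * z[i]
            # ||r - b k||^2 is quadratic in k, so the conditional of z_i is a
            # 1-D discrete Gaussian with centre mu = <b, r>/||b||^2 and
            # width s/||b||.
            mu = b @ r / (b @ b)
            sig = s / np.linalg.norm(b)
            ks = np.arange(np.floor(mu) - 8, np.floor(mu) + 9)  # truncated window
            w = np.exp(-(ks - mu) ** 2 / (2 * sig ** 2))
            z[i] = rng.choice(ks, p=w / w.sum())
        samples.append(z.copy())
    return np.array(samples)
```

Unlike Klein's algorithm, each sweep here is exact with respect to the conditionals for any standard deviation s, which is the property the Gibbs-based approach exploits when s is too small for Klein's algorithm to be accurate.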
Construction of Capacity-Achieving Lattice Codes: Polar Lattices
In this paper, we propose a new class of lattices constructed from polar
codes, namely polar lattices, to achieve the capacity \frac{1}{2}\log(1+\SNR)
of the additive white Gaussian-noise (AWGN) channel. Our construction follows
the multilevel approach of Forney \textit{et al.}, where we construct a
capacity-achieving polar code on each level. The component polar codes are
shown to be naturally nested, thereby fulfilling the requirement of the
multilevel lattice construction. We prove that polar lattices are
\emph{AWGN-good}. Furthermore, using the technique of source polarization, we
propose discrete Gaussian shaping over the polar lattice to satisfy the power
constraint. Both the construction and shaping are explicit, and the overall
complexity of encoding and decoding is O(N log² N) for any fixed target error
probability.
Comment: full version of the paper to appear in IEEE Trans. Communications
Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater Than One
We show the first dimension-preserving search-to-decision reductions for
approximate SVP and CVP. In particular, for any γ ≤ 1 + O(log n / n), we obtain
an efficient dimension-preserving reduction from γ-SVP to γ-GapSVP and an
efficient dimension-preserving reduction from γ-CVP to γ-GapCVP. These results
generalize the known equivalences of the search and decision versions of these
problems in the exact case when γ = 1. For SVP, we actually obtain something
slightly stronger than a search-to-decision reduction: we reduce γ-SVP to
γ-unique SVP, a potentially easier problem than γ-GapSVP.
Comment: Updated to acknowledge additional prior work