
    Lattice sampling algorithms for communications

    No full text
    In this thesis, we investigate the problem of decoding for wireless communications from the perspective of lattice sampling. In particular, computationally efficient lattice sampling algorithms are exploited to enhance system performance, with the sample size providing a tunable tradeoff between performance and complexity. Based on this idea, several novel lattice sampling algorithms are presented. First, to address the inherent issues of random sampling, a derandomized sampling algorithm is proposed: by setting a probability threshold for sampling candidates, the whole sampling procedure becomes deterministic, leading to considerable performance improvement and complexity reduction over randomized sampling. Through analysis and optimization, the correct decoding radius is given under the optimized parameter setting, and an upper bound on the sample size required for near-maximum-likelihood (ML) performance is derived. The derandomized sampling algorithm is then introduced into the soft-output decoding of MIMO bit-interleaved coded modulation (BICM) systems, where we show that it achieves near-maximum a posteriori (MAP) performance. We then extend the well-known Markov chain Monte Carlo methods to sampling from the lattice Gaussian distribution, which has emerged as a common theme in lattice coding and decoding, cryptography, and mathematics. We first show that classical Gibbs sampling is capable of performing lattice Gaussian sampling. Then, a more efficient algorithm referred to as Gibbs-Klein sampling is proposed, which samples multiple variables block by block using Klein's algorithm. After that, to improve the convergence rate, we introduce conventional Metropolis-Hastings (MH) sampling into lattice Gaussian distributions and propose three MH-based sampling algorithms. The first, the MH multivariate sampling algorithm, is demonstrated to have a faster convergence rate than Gibbs-Klein sampling. Next, the symmetric distribution generated by Klein's algorithm is taken as the proposal distribution, which offers an efficient way to perform Metropolis sampling over high-dimensional models. Finally, the independent Metropolis-Hastings-Klein (MHK) algorithm is proposed, whose Markov chain is proven to converge to the stationary distribution exponentially fast; its convergence rate can be explicitly calculated in terms of the theta series, making it possible to predict the exact mixing time of the underlying Markov chain.
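
    As a concrete illustration of Klein's algorithm, which underpins both the Gibbs-Klein and MHK samplers above, the following NumPy sketch draws one lattice point from an approximation of the lattice Gaussian centred at a target point. The helper discrete_gaussian_1d, the truncation window tau, and the Gaussian parameter s are illustrative assumptions, not the thesis's exact settings.

        import numpy as np

        def discrete_gaussian_1d(c, s, tau=6):
            # Sample an integer from the 1-D discrete Gaussian D_{Z,s,c},
            # truncated to the window [c - tau*s, c + tau*s].
            zs = np.arange(int(np.floor(c - tau * s)), int(np.ceil(c + tau * s)) + 1)
            p = np.exp(-np.pi * (zs - c) ** 2 / s ** 2)
            return np.random.choice(zs, p=p / p.sum())

        def klein_sample(B, t, s):
            # One sample from (an approximation of) the lattice Gaussian
            # D_{L(B),s,t} via Klein's randomized nearest-plane procedure.
            n = B.shape[1]
            Q, R = np.linalg.qr(B)          # Gram-Schmidt data: B = Q R
            tp = Q.T @ t                    # target in the orthogonalized frame
            z = np.zeros(n)
            for i in range(n - 1, -1, -1):  # back-substitution order
                c = (tp[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i]
                z[i] = discrete_gaussian_1d(c, s / abs(R[i, i]))
            return B @ z                    # a lattice point near t

    Per coordinate, the Gaussian parameter is scaled by the Gram-Schmidt norm |R[i,i]|; this randomized rounding is what distinguishes Klein's sampler from plain Babai nearest-plane decoding.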

    Reinforcement Learning of Speech Recognition System Based on Policy Gradient and Hypothesis Selection

    Full text link
    Speech recognition systems have achieved high recognition performance on several tasks. However, that performance depends on tremendously costly development work: preparing vast amounts of task-matched transcribed speech data for supervised training. The key problem here is the cost of transcribing speech data, which must be paid again for every new language and new task. Assuming broad network services that transcribe speech data for many users, a system would become more self-sufficient and more useful if it could learn from very light feedback from its users without annoying them. In this paper, we propose a general reinforcement learning framework for speech recognition systems based on the policy gradient method. As a particular instance of the framework, we also propose a hypothesis selection-based reinforcement learning method. The proposed framework provides a new view of several existing training and adaptation methods. The experimental results show that the proposed method improves recognition performance compared to unsupervised adaptation.
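
    To make the hypothesis-selection idea concrete, here is a minimal NumPy sketch of a REINFORCE-style update for a linear-softmax policy over an N-best list. The feature representation, the binary reward, and the learning rate are placeholder assumptions, not the paper's actual model.

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def select_hypothesis(theta, feats):
            # Sample one of N hypotheses; feats is (N, d), theta is (d,).
            probs = softmax(feats @ theta)
            k = np.random.choice(len(probs), p=probs)
            return k, probs

        def reinforce_update(theta, feats, k, probs, reward, lr=0.1):
            # One policy-gradient step after observing user feedback
            # `reward` (e.g. +1 accepted, 0 rejected) for hypothesis k.
            # For a linear-softmax policy, grad log pi(k) = feats[k] - E[feats].
            grad = feats[k] - probs @ feats
            return theta + lr * reward * grad

    A recognizer would present hypothesis k to the user, collect the light feedback, and apply the update online, which is the loop the paper's framework generalizes.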

    Hardness of Bounded Distance Decoding on Lattices in ℓ_p Norms

    Get PDF
    Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) ∖ 2ℤ and constants C > 1, ε > 0, there is no 2^((1-ε)n/C)-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available. Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p_0 ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
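
    For reference, the decoding problem being shown hard can be stated as follows; this is the standard formulation, paraphrased rather than quoted from the paper.

        % BDD_{p,alpha}: given a basis B of a lattice L in R^n and a
        % target t in R^n satisfying the promise
        \[
          \operatorname{dist}_p(t, \mathcal{L}) \;\le\; \alpha \cdot \lambda_1^{(p)}(\mathcal{L}),
          \qquad
          \lambda_1^{(p)}(\mathcal{L}) \;=\; \min_{v \in \mathcal{L} \setminus \{0\}} \lVert v \rVert_p,
        \]
        % find a lattice vector v in L minimizing ||t - v||_p.  For
        % alpha < 1/2 the promise forces a unique answer, which is why
        % alpha -> 1/2 approaches the unique-decoding radius.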

    Predicting Many Properties of a Quantum System from Very Few Measurements

    Get PDF
    Predicting the properties of complex, large-scale quantum systems is essential for developing quantum technologies. We present an efficient method for constructing an approximate classical description of a quantum state using very few measurements of the state. This description, called a ‘classical shadow’, can be used to predict many different properties; order log(M) measurements suffice to accurately predict M different functions of the state with high success probability. The number of measurements is independent of the system size and saturates information-theoretic lower bounds. Moreover, target properties to predict can be selected after the measurements are completed. We support our theoretical findings with extensive numerical experiments. We apply classical shadows to predict quantum fidelities, entanglement entropies, two-point correlation functions, expectation values of local observables and the energy variance of many-body local Hamiltonians. The numerical results highlight the advantages of classical shadows relative to previously known methods.
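
    The measurement primitive is easy to demonstrate in code. The following single-qubit NumPy toy implements the random-Pauli version of the protocol, where each snapshot 3|v⟩⟨v| − I inverts the measurement channel in expectation; the state, observable, and snapshot count are illustrative assumptions.

        import numpy as np

        I2 = np.eye(2)
        PAULIS = {
            'X': np.array([[0, 1], [1, 0]], dtype=complex),
            'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
            'Z': np.array([[1, 0], [0, -1]], dtype=complex),
        }
        # Columns of each U diagonalize the corresponding Pauli.
        EIGVECS = {name: np.linalg.eigh(P)[1] for name, P in PAULIS.items()}

        def shadow_snapshot(rho):
            # One classical-shadow snapshot of a single-qubit state rho
            # under the random-Pauli measurement primitive.
            U = EIGVECS[np.random.choice(list(PAULIS))]
            probs = np.clip(np.real(np.diag(U.conj().T @ rho @ U)), 0, None)
            b = np.random.choice(2, p=probs / probs.sum())
            v = U[:, b:b + 1]                 # the observed eigenvector
            return 3 * (v @ v.conj().T) - I2  # inverse of the measurement channel

        def predict(rho, obs, n_snapshots=10_000):
            # Estimate tr(obs @ rho) by averaging over snapshots.
            return np.real(np.mean([np.trace(obs @ shadow_snapshot(rho))
                                    for _ in range(n_snapshots)]))

    For example, with rho = diag(1, 0) and obs = PAULIS['Z'], predict returns a value close to 1: the snapshot average converges to rho itself because E[3|v⟩⟨v| − I] = ρ over the random basis and Born outcomes.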

    Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes

    Get PDF
    We give a simple proof that the (approximate, decisional) Shortest Vector Problem is NP-hard under a randomized reduction. Specifically, we show that for any p ≥ 1 and any constant γ < 2^{1/p}, the γ-approximate problem in the ℓ_p norm (γ-GapSVP_p) is not in RP unless NP ⊆ RP. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of γ-GapSVP_p using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices. As in all known NP-hardness results for GapSVP_p with p < ∞, our reduction uses randomness. Indeed, it is a notorious open problem to prove NP-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Trans. Inf. Theory 2006). As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding n-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an O(√(log n)) factor of Minkowski's bound. This asymptotically matches the best known distance for decoding near Minkowski's bound, due to Mook and Peikert (IEEE Trans. Inf. Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
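
    For readers unfamiliar with the lifting used here, Construction A and the standard minimum-distance fact it supplies can be summarized as follows, with generic parameters rather than the paper's specific choices.

        % Construction A applied to a linear code C over F_q of length n:
        \[
          \mathcal{L}_A(C) \;=\; \{\, x \in \mathbb{Z}^n : x \bmod q \in C \,\} \;=\; C + q\mathbb{Z}^n .
        \]
        % A nonzero lattice vector either reduces to a nonzero codeword,
        % giving at least d_min(C) entries of magnitude >= 1, or lies in
        % q Z^n; hence, in the ell_p norm,
        \[
          \lambda_1^{(p)}\bigl(\mathcal{L}_A(C)\bigr)^{p} \;\ge\; \min\bigl( d_{\min}(C),\; q^{p} \bigr).
        \]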

    Lattice sparsification and the Approximate Closest Vector Problem

    Get PDF
    We give a deterministic algorithm for solving the (1+ε)-approximate Closest Vector Problem (CVP) on any n-dimensional lattice and in any near-symmetric norm in 2^{O(n)}(1+1/ε)^n time and 2^n·poly(n) space. Our algorithm builds on the lattice point enumeration techniques of Micciancio and Voulgaris (STOC 2010, SICOMP 2013) and Dadush, Peikert, and Vempala (FOCS 2011), and gives an elegant, deterministic alternative to the "AKS Sieve"-based algorithms for (1+ε)-CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). Furthermore, assuming the existence of a poly(n)-space and 2^{O(n)}-time algorithm for exact CVP in the ℓ_2 norm, the space complexity of our algorithm can be reduced to polynomial. Our main technical contribution is a method for "sparsifying" any input lattice while approximately maintaining its metric structure. To this end, we employ the idea of random sublattice restrictions, which was first employed by Khot (FOCS 2003, J. Comp. Syst. Sci. 2006) for the purpose of proving hardness for the Shortest Vector Problem (SVP) under ℓ_p norms. A preliminary version of this paper appeared in the Proc. 24th Annual ACM-SIAM Symp. on Discrete Algorithms (SODA '13) (http://dx.doi.org/10.1137/1.9781611973105.78).
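
    The sparsification step can be illustrated directly. The sketch below implements the random sublattice restriction in its simplest form, intersecting L(B) with a random index-p kernel lattice; the prime p and the kernel-basis construction are standard, but this is an illustrative sketch under those assumptions, not the paper's full algorithm.

        import numpy as np

        def sparsify(B, p, rng=None):
            # Random sublattice restriction: replace L(B) by the random
            # index-p sublattice { B y : <z, y> = 0 (mod p) } for uniform
            # z in Z_p^n, returned as a new basis (columns).
            rng = rng or np.random.default_rng()
            n = B.shape[1]
            z = rng.integers(0, p, size=n)
            if not z.any():                 # z = 0: restriction is trivial
                return B.copy()
            i = int(np.flatnonzero(z)[0])   # coordinate with z_i invertible
            zi_inv = pow(int(z[i]), -1, p)  # z_i^{-1} mod p (p prime)
            Y = np.eye(n, dtype=np.int64)
            Y[i, i] = p                     # p * e_i lies in the kernel
            for j in range(n):
                if j != i:                  # column e_j - (z_j / z_i) e_i mod p
                    Y[i, j] = -(int(z[j]) * zi_inv) % p
            return B @ Y

    Over the choice of z, any fixed lattice point B·v with v ≢ 0 (mod p) survives the restriction with probability exactly 1/p, which is the property the approximate-metric-preservation argument exploits.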