Learning with Quantization: Construction, Hardness, and Applications

Abstract

This paper generalizes the Learning With Rounding (LWR) problem, introduced by Banerjee, Peikert, and Rosen, from the perspective of vector quantization. In LWR, noise is induced by scalar quantization; the new variant, termed Learning With Quantization (LWQ), instead draws on high-dimensional, fast-decodable lattices with superior quantization properties, aiming to improve compression performance over scalar quantization. We identify polar lattices as exemplary structures: they effectively transform LWQ into a problem akin to Learning With Errors (LWE), with a quantization-error distribution that is statistically close to a discrete Gaussian. We present two applications of LWQ: Lily, a public-key encryption (PKE) scheme with smaller ciphertexts, and quancryption, a privacy-preserving secret-key encryption scheme. Lily achieves smaller ciphertext sizes without sacrificing security, while quancryption achieves a source-ciphertext ratio larger than 1.
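To make the contrast concrete, the following is a minimal Python sketch, not taken from the paper: it compares LWR's coordinate-wise rounding with a lattice quantizer, using the classic D4 nearest-point quantizer (Conway and Sloane) purely as a stand-in for the polar lattices the paper actually constructs. The parameters q, p, n and all names here are illustrative assumptions, far too small for any real security.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative only; not secure).
q, p, n = 3329, 1024, 4

s = rng.integers(0, q, size=n)        # secret vector
A = rng.integers(0, q, size=(4, n))   # public matrix (4 samples)

# LWR: deterministic noise from scalar rounding Z_q -> Z_p,
#   b_i = round((p/q) * <a_i, s>) mod p.
b_lwr = np.round(p / q * (A @ s % q)) % p

# LWQ (sketch): replace coordinate-wise rounding by a lattice quantizer.
# D4 = {v in Z^4 : sum(v) even} stands in for the paper's polar lattices,
# only to show the interface: b = Q_L(x), noise e = x - Q_L(x).
def quantize_D4(x):
    """Nearest-point quantizer for the D4 lattice."""
    y = np.rint(x)
    if int(y.sum()) % 2:                    # wrong coset: fix one coordinate
        i = int(np.argmax(np.abs(x - y)))   # coordinate with largest error
        y[i] += 1.0 if x[i] >= y[i] else -1.0
    return y

x = (A @ s % q) * (p / q)   # scaled inner products, one 4-dim block
b_lwq = quantize_D4(x)
e = x - b_lwq               # quantization error, playing the role of LWE noise
print(b_lwr, b_lwq, e)
```

The error e = x - Q_L(x) is what replaces the sampled noise of LWE; the better the lattice quantizer, the closer this error comes to a discrete Gaussian, which is the property the paper establishes for polar lattices.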
