14 research outputs found

    Better Lattice Quantizers Constructed from Complex Integers

    Real-valued lattices and complex-valued lattices are mutually convertible, so we can take advantage of algebraic integers to define good lattice quantizers in the real-valued domain. In this paper, we adopt complex integers to define generalized checkerboard lattices, especially $\mathcal{E}_{m}$ and $\mathcal{E}_{m}^+$ defined by Eisenstein integers. Using $\mathcal{E}_{m}^+$, we report the best lattice quantizers in dimensions 14, 18, 20, and 22. Their product lattices with the integers $\mathbb{Z}$ also yield better quantizers in dimensions 15, 19, 21, and 23. Conway-Sloane-type fast decoding algorithms for $\mathcal{E}_{m}$ and $\mathcal{E}_{m}^+$ are given. Comment: 7 pages
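    As a loose illustration of the real/complex convertibility this abstract relies on (a sketch of mine, not code from the paper), the snippet below embeds Eisenstein integers $a + b\omega$ into $\mathbb{R}^2$; stacking $m$ such coordinates turns a rank-$m$ lattice over the Eisenstein integers into a real lattice of dimension $2m$.

```python
import numpy as np

# A minimal sketch (illustrative, not the paper's code): embedding the
# Eisenstein integers Z[w], w = exp(2*pi*i/3), into the real plane.
# Each complex coordinate a + b*w (a, b integers) maps to R^2; stacking
# m such coordinates gives the 2m-dimensional real counterpart of a
# rank-m complex lattice.

W = np.exp(2j * np.pi / 3)  # primitive cube root of unity

def eisenstein_to_real(a, b):
    """Map the Eisenstein integer a + b*w to its 2-D real embedding."""
    z = a + b * W
    return np.array([z.real, z.imag])

# Example: the six Eisenstein units embed as the minimal vectors of the
# hexagonal lattice A2 (all of norm 1).
units = [(1, 0), (0, 1), (-1, -1), (-1, 0), (0, -1), (1, 1)]
for a, b in units:
    v = eisenstein_to_real(a, b)
    print(f"{a:+d} {b:+d}w -> {v}, norm {np.linalg.norm(v):.3f}")
```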

    Reversible Deep Neural Network Watermarking: Matching the Floating-point Weights

    Static deep neural network (DNN) watermarking embeds watermarks into the weights of a DNN model by irreversible methods, which causes permanent damage to the watermarked model and cannot meet the requirements of integrity authentication. For these reasons, reversible data hiding (RDH) is more attractive for the copyright protection of DNNs. This paper proposes a novel RDH-based static DNN watermarking method that improves on the non-reversible quantization index modulation (QIM). Targeting the floating-point weights of DNNs, the idea of our RDH method is to add a scaled quantization error back to the cover object. Two schemes are designed to realize the integrity protection and legitimate authentication of DNNs. Simulation results on training loss and classification accuracy justify the superior feasibility, effectiveness, and adaptability of the proposed method over histogram shifting (HS). Comment: 21 pages
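    A hedged sketch of the "scaled quantization error added back" idea follows; the step size DELTA and scale ALPHA are assumptions of mine, not the paper's parameters. Standard QIM keeps only the quantized value, which is irreversible; retaining a scaled copy of the quantization error keeps the embedded bit decodable while allowing exact restoration of the original weight.

```python
import numpy as np

# Illustrative sketch, not the paper's exact scheme. Standard QIM maps
# a weight x to the nearest point of a bit-dependent quantizer and
# discards the error e = x - Q(x). Adding a scaled copy of e back keeps
# the bit decodable (|ALPHA * e| stays inside the decision region) and
# makes the embedding invertible.

DELTA = 0.01   # quantization step (assumed, not from the paper)
ALPHA = 0.25   # error scale; here |ALPHA * e| <= DELTA/8 < DELTA/4

def embed(x, bit):
    """Embed one bit into weight x, reversibly."""
    q = DELTA * np.round((x - bit * DELTA / 2) / DELTA) + bit * DELTA / 2
    e = x - q                      # quantization error, |e| <= DELTA/2
    return q + ALPHA * e           # scaled error added back

def extract_and_restore(y):
    """Recover the bit and the original weight from y."""
    # Decide the bit from the nearer coset (offset 0 or DELTA/2).
    d0 = np.abs(y - DELTA * np.round(y / DELTA))
    d1 = np.abs(y - DELTA / 2 - DELTA * np.round((y - DELTA / 2) / DELTA))
    bit = int(d1 < d0)
    q = DELTA * np.round((y - bit * DELTA / 2) / DELTA) + bit * DELTA / 2
    e = (y - q) / ALPHA            # undo the scaling
    return bit, q + e              # original weight restored exactly

x = 0.123456
y = embed(x, 1)
bit, x_rec = extract_and_restore(y)
print(bit, np.isclose(x_rec, x))   # 1 True
```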

    Learning with Quantization: Construction, Hardness, and Applications

    This paper presents a generalization of the Learning With Rounding (LWR) problem, initially introduced by Banerjee, Peikert, and Rosen, from the perspective of vector quantization. In LWR, noise is induced by scalar quantization. By considering a new variant termed Learning With Quantization (LWQ), we explore large-dimensional fast-decodable lattices with superior quantization properties, aiming to improve compression performance over scalar quantization. We identify polar lattices as exemplary structures, effectively transforming LWQ into a problem akin to Learning With Errors (LWE), whose quantization-error distribution is statistically close to a discrete Gaussian. We present two applications of LWQ: Lily, a public-key encryption (PKE) scheme with smaller ciphertexts, and quancryption, a privacy-preserving secret-key encryption scheme. Lily achieves smaller ciphertext sizes without sacrificing security, while quancryption achieves a source-ciphertext ratio larger than 1.
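    The polar-lattice construction from the abstract is beyond a short example, but the LWR-to-LWQ substitution itself can be sketched with a toy lattice. Below, per-coordinate scalar rounding (LWR-style noise) is contrasted with nearest-point quantization in the checkerboard lattice $D_n$ (LWQ-style noise); the lattice choice and all parameters are mine, purely for illustration.

```python
import numpy as np

# Toy sketch of the LWR -> LWQ idea (illustrative parameters, not from
# the paper). In LWR the noise is the scalar rounding error of each
# coordinate of A @ s mod q; in LWQ the vector is instead quantized to
# the nearest point of a lattice, and the noise is the vector
# quantization error. Here the lattice is D_n = {z in Z^n : sum(z) even},
# whose nearest-point search is easy (Conway & Sloane).

rng = np.random.default_rng(0)
q, n = 3329, 8

def quantize_Dn(x):
    """Nearest point of D_n to x: round, then fix parity if needed."""
    f = np.round(x)
    if int(f.sum()) % 2 == 0:
        return f
    # re-round the coordinate with the largest rounding error the other way
    i = np.argmax(np.abs(x - f))
    f[i] += np.sign(x[i] - f[i]) if x[i] != f[i] else 1
    return f

A = rng.integers(0, q, (n, n))
s = rng.integers(0, q, n)
y = (A @ s) % q

b_lwr = np.round(y / 8)          # scalar rounding: LWR-style noise
b_lwq = quantize_Dn(y / 8)       # lattice quantization: LWQ-style noise
print("rounding error :", y / 8 - b_lwr)
print("lattice error  :", y / 8 - b_lwq)
```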

    Lattice Codes for Lattice-Based PKE

    Existing error correction mechanisms in lattice-based public key encryption (PKE) rely on either trivial modulation or its concatenation with error correction codes (ECC). This paper demonstrates that lattice coding, as a combined ECC and modulation technique, can replace trivial modulation in current lattice-based PKEs, resulting in improved error correction performance. We model the FrodoPKE protocol as a noisy point-to-point communication system, where the communication channel resembles an additive white Gaussian noise (AWGN) channel. To utilize lattice codes for this channel with hypercube shaping, we propose an efficient labeling function that converts binary information bits to lattice codewords and vice versa. The parameter sets of FrodoPKE are enhanced to achieve higher security levels or smaller ciphertext sizes. For instance, the proposed Frodo-1344-$E_8$ offers a 10-bit classical security improvement over Frodo-1344. The code for reproducing our main experiments is available at https://github.com/shx-lyu/lattice-codes-for-pke.
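    For a concrete sense of the decoding such a scheme relies on, here is a minimal sketch of the standard Conway-Sloane nearest-point decoder for $E_8$, written as $E_8 = D_8 \cup (D_8 + \tfrac{1}{2}\mathbf{1})$: decode to both cosets and keep the closer point. This is textbook material, not the paper's implementation; the paper's labeling function and hypercube shaping are scheme-specific and omitted here.

```python
import numpy as np

# Standard E8 nearest-point decoding (Conway & Sloane), sketched for
# illustration. E8 is the union of D8 and the coset D8 + (1/2, ..., 1/2),
# so we decode to both and keep whichever point is closer.

def closest_Dn(x):
    """Nearest point of D_n (integer vectors with even sum) to x."""
    f = np.round(x)
    if int(f.sum()) % 2 == 0:
        return f
    # re-round the worst coordinate the other way to fix the parity
    i = np.argmax(np.abs(x - f))
    f[i] += 1 if x[i] > f[i] else -1
    return f

def closest_E8(x):
    """Nearest point of E8 to x, via its two D8 cosets."""
    c0 = closest_Dn(x)
    c1 = closest_Dn(x - 0.5) + 0.5
    return c0 if np.sum((x - c0) ** 2) <= np.sum((x - c1) ** 2) else c1

x = np.random.default_rng(1).normal(size=8)
print(closest_E8(x))
```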