8 research outputs found

    Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption

    Designing privacy-preserving deep learning models is a major challenge within the deep learning community. Homomorphic Encryption (HE) has emerged as one of the most promising approaches in this realm, enabling the decoupling of knowledge between the model owner and the data owner. Despite extensive research and application of this technology, primarily to convolutional neural networks, incorporating HE into transformer models has been challenging because of the difficulty of converting these models into a polynomial form. We break new ground by introducing the first polynomial transformer, providing the first demonstration of secure inference over HE with transformers. This includes a transformer architecture tailored for HE, alongside a novel method for converting operators to their polynomial equivalents. This innovation enables us to perform secure inference on LMs with WikiText-103. It also allows us to perform image classification with CIFAR-100 and Tiny-ImageNet. Our models yield results comparable to traditional methods, bridging the performance gap with transformers of similar scale and underscoring the viability of HE for state-of-the-art applications. Finally, we assess the stability of our models and conduct a series of ablations to quantify the contribution of each model component.
    Comment: 6 figures
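    The core obstacle the abstract mentions is that HE supports only additions and multiplications, so non-polynomial operators (softmax, GELU, layer norm) must be replaced by polynomial surrogates. The sketch below illustrates the general idea with a least-squares polynomial fit of GELU over a bounded interval; the interval, degree, and fitting method are illustrative assumptions, not the paper's actual conversion procedure.

    ```python
    # Illustrative sketch: approximating a non-polynomial transformer operator
    # (GELU) by a polynomial over a bounded input range, so it can be evaluated
    # under HE using only additions and multiplications.
    # NOT the paper's method -- a generic least-squares fit for intuition only.
    import math
    import numpy as np

    def gelu(x: float) -> float:
        # Exact GELU: x * Phi(x), with Phi the standard normal CDF.
        return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

    # Fit on an interval where activations are assumed to lie (assumption).
    xs = np.linspace(-4.0, 4.0, 2001)
    ys = np.array([gelu(x) for x in xs])
    coeffs = np.polyfit(xs, ys, deg=8)   # degree-8 polynomial surrogate
    poly_gelu = np.poly1d(coeffs)

    max_err = float(np.max(np.abs(poly_gelu(xs) - ys)))
    print(f"max |gelu - poly| on [-4, 4]: {max_err:.4f}")
    ```

    In practice the degree trades accuracy against multiplicative depth of the HE circuit, which is why tailoring the architecture to HE (as the paper does) matters as much as the approximation itself.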

    E2E near-standard and practical authenticated transciphering

    Homomorphic encryption (HE) enables delegating computation to an untrusted third party while maintaining data confidentiality. Hybrid encryption (a.k.a. transciphering) reduces the number of ciphertexts and the storage size, which makes HE solutions practical for a variety of modern applications. Still, modern transciphering has two main drawbacks: 1) a lack of standardization, or poor performance of symmetric decryption under FHE; 2) a lack of input data integrity. In this paper, we discuss the concept of Authenticated Transciphering (AT), which, like Authenticated Encryption (AE), provides integrity guarantees for the transciphered data. To that end, we report on the first implementations of AES-GCM decryption and Ascon decryption under CKKS. Moreover, we report and demonstrate the first end-to-end process that uses transciphering for a real-world application, i.e., running deep neural network inference (ResNet50 over ImageNet) under encryption.
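    Transciphering, as described above, means the client uploads data under a cheap, compact symmetric cipher, and the server converts it to HE ciphertexts by evaluating the symmetric decryption homomorphically. The toy sketch below captures only that data flow: the "HE" scheme is an XOR-homomorphic one-time pad and the symmetric cipher is a hash-based stream cipher, both stand-ins bearing no resemblance to AES-GCM or Ascon under CKKS.

    ```python
    # Toy transciphering sketch. Assumptions: a toy XOR-homomorphic "HE" scheme
    # (Enc(m) = m XOR pad; XOR-ing ciphertexts XORs the plaintexts) and a
    # SHA-256-based stream cipher. Purely illustrative of the data flow.
    import hashlib
    import os

    def keystream(key: bytes, n: int) -> bytes:
        # Counter-mode keystream from a hash function (illustrative PRF).
        out = b""
        ctr = 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    he_pad = os.urandom(16)                       # toy HE secret
    he_enc = lambda m: xor(m, keystream(he_pad, len(m)))
    he_dec = he_enc                               # XOR is its own inverse

    sym_key = os.urandom(16)
    msg = b"secret inference"                     # 16-byte payload

    # Client side: compact symmetric ciphertext + HE-encrypted keystream.
    sym_ct = xor(msg, keystream(sym_key, len(msg)))
    he_keystream = he_enc(keystream(sym_key, len(msg)))

    # Server side: evaluate the symmetric *decryption* under HE by XOR-ing
    # the public sym_ct into the HE ciphertext of the keystream. The result
    # is an HE encryption of the original message, never seen in the clear.
    he_msg = xor(he_keystream, sym_ct)
    assert he_dec(he_msg) == msg
    ```

    The integrity gap the paper targets is visible even here: nothing stops the server (or a network attacker) from flipping bits in sym_ct, which is exactly what the authenticated variants (AES-GCM, Ascon) are meant to detect.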

    HeLayers: A Tile Tensors Framework for Large Neural Networks on Encrypted Data

    Privacy-preserving solutions enable companies to offload confidential data to third-party services while complying with government regulations. To accomplish this, they leverage various cryptographic techniques such as Homomorphic Encryption (HE), which allows performing computation on encrypted data. Most HE schemes work in a SIMD fashion, and the data packing method can dramatically affect the running time and memory costs. Finding a packing method that leads to an optimally performant implementation is a hard task. We present a simple and intuitive framework that abstracts the packing decision away from the user. We explain its underlying data structures and optimizer, and propose a novel algorithm for performing 2D convolution operations. We used this framework to implement an HE-friendly version of AlexNet, which runs in three minutes, several orders of magnitude faster than other state-of-the-art solutions that only use HE.
    Comment: 17 pages, 7 figures
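    The SIMD behavior the abstract refers to means one ciphertext holds a vector of "slots", arithmetic acts slot-wise, and reductions are done via cyclic rotations. The sketch below simulates that model in plain NumPy with a classic rotate-and-sum inner product; the slot count is an assumption, and this is not HeLayers' tile-tensor API, only the packing intuition behind it.

    ```python
    # Simulated HE SIMD slots: a "ciphertext" is a vector, operations are
    # slot-wise, and sums are computed with log2(SLOTS) cyclic rotations.
    # Plain NumPy stand-in for intuition; not an actual HE library.
    import numpy as np

    SLOTS = 8  # power-of-two slot count, as in real schemes

    def rotate(v: np.ndarray, k: int) -> np.ndarray:
        # Cyclic left rotation, the primitive HE schemes expose.
        return np.roll(v, -k)

    def slot_sum(v: np.ndarray) -> np.ndarray:
        # Rotate-and-add: after rotations by 1, 2, 4 every slot holds the total.
        for k in (1, 2, 4):
            v = v + rotate(v, k)
        return v

    a = np.arange(SLOTS, dtype=float)   # one packed "ciphertext"
    b = np.ones(SLOTS)
    dot = slot_sum(a * b)               # inner product, replicated in all slots
    print(dot[0])                       # 0+1+...+7 = 28.0
    ```

    How a tensor is laid out across such slots (and across multiple ciphertexts) determines how many multiplications and rotations an operation like convolution costs, which is precisely the packing decision the framework abstracts away.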