8 research outputs found
Intel HEXL: Accelerating Homomorphic Encryption with Intel AVX512-IFMA52
Modern implementations of homomorphic encryption (HE) rely heavily on
polynomial arithmetic over a finite field. This is particularly true of the
CKKS, BFV, and BGV HE schemes. Two of the biggest performance bottlenecks in HE
primitives and applications are polynomial modular multiplication and the
forward and inverse number-theoretic transform (NTT). Here, we introduce Intel
Homomorphic Encryption Acceleration Library (Intel HEXL), a C++ library which
provides optimized implementations of polynomial arithmetic for Intel
processors. Intel HEXL takes advantage of the recent Intel Advanced Vector
Extensions 512 (Intel AVX512) instruction set to provide state-of-the-art
implementations of the NTT and modular multiplication. On the forward and
inverse NTT, Intel HEXL provides up to 7.2x and 6.7x speedup, respectively,
over a native C++ implementation. Intel HEXL also provides up to 6.0x speedup
on the element-wise vector-vector modular multiplication, and 1.7x speedup on
the element-wise vector-scalar modular multiplication. Intel HEXL is available
open-source at https://github.com/intel/hexl under the Apache 2.0 license and
has been adopted by the Microsoft SEAL and PALISADE homomorphic encryption
libraries.
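The element-wise NTT-domain multiplication that the abstract describes can be illustrated with a schoolbook reference in plain Python. The parameters below (n = 8, q = 17, ω = 2) are toy values chosen for readability and are not drawn from Intel HEXL; they only need to satisfy q prime, q ≡ 1 (mod n), and ω a primitive n-th root of unity mod q:

```python
# Reference (schoolbook, O(n^2)) NTT over Z_q, illustrating the
# pointwise modular multiplication that libraries like Intel HEXL
# accelerate with O(n log n) butterflies and AVX512 vectorization.

def ntt(a, omega, q):
    n = len(a)
    return [sum(a[j] * pow(omega, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(A, omega, q):
    n = len(A)
    inv_n = pow(n, -1, q)          # n^{-1} mod q (Python 3.8+)
    inv_omega = pow(omega, -1, q)  # omega^{-1} mod q
    return [(inv_n * sum(A[j] * pow(inv_omega, i * j, q)
                         for j in range(n))) % q
            for i in range(n)]

q, n, omega = 17, 8, 2  # toy parameters: q prime, q ≡ 1 (mod n)

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]

# Element-wise vector-vector modular multiplication in the NTT domain...
A, B = ntt(a, omega, q), ntt(b, omega, q)
C = [(x * y) % q for x, y in zip(A, B)]
c = intt(C, omega, q)

# ...equals the cyclic convolution of a and b mod q.
expected = [sum(a[j] * b[(i - j) % n] for j in range(n)) % q
            for i in range(n)]
assert c == expected
```

The identity demonstrated here (pointwise products in the NTT domain correspond to polynomial convolution) is what makes the forward and inverse NTT the dominant cost in HE polynomial multiplication, which is why optimized libraries target exactly these two kernels.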
nGraph-HE: A Graph Compiler for Deep Learning on Homomorphically Encrypted Data
Homomorphic encryption (HE)---the ability to perform computation on encrypted data---is an attractive remedy to increasing concerns about data privacy in deep learning (DL). However, building DL models that operate on ciphertext is currently labor-intensive and requires simultaneous expertise in DL, cryptography, and software engineering. DL frameworks and recent advances in graph compilers have greatly accelerated the training and deployment of DL models to various computing platforms. We introduce nGraph-HE, an extension of nGraph, Intel's DL graph compiler, which enables deployment of trained models with popular frameworks such as TensorFlow while simply treating HE as another hardware target. Our graph-compiler approach enables HE-aware optimizations implemented at compile time, such as constant folding and HE-SIMD packing, and at run time, such as special-value plaintext bypass. Furthermore, nGraph-HE integrates with DL frameworks such as TensorFlow, enabling data scientists to benchmark DL models with minimal overhead.
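The HE-SIMD packing mentioned above can be sketched without any cryptography: model a ciphertext as a vector of plaintext slots on which every homomorphic operation acts element-wise, so one sequence of expensive HE operations serves a whole batch of inputs. The class below is a hypothetical toy model of the slot semantics, not the nGraph-HE API, and performs no actual encryption:

```python
# Toy model of HE-SIMD packing (batching): one "ciphertext" holds many
# plaintext slots, and a single homomorphic op acts on all slots at once.
# Purely illustrative; no encryption is performed.

class PackedValue:
    def __init__(self, slots):
        self.slots = list(slots)

    def __add__(self, other):
        # One homomorphic addition updates every slot.
        return PackedValue(x + y for x, y in zip(self.slots, other.slots))

    def __mul__(self, other):
        # One homomorphic multiplication updates every slot.
        return PackedValue(x * y for x, y in zip(self.slots, other.slots))

# A batch of 4 inference inputs packed into one "ciphertext":
xs = PackedValue([1.0, 2.0, 3.0, 4.0])
w = PackedValue([0.5] * 4)   # weight broadcast across slots
b = PackedValue([1.0] * 4)   # bias broadcast across slots

y = xs * w + b               # one multiply + one add serve all 4 inputs
assert y.slots == [1.5, 2.0, 2.5, 3.0]
```

Because the homomorphic operation count is independent of the batch size (up to the slot capacity), packing amortizes HE's heavy per-operation cost across many inputs, which is why it is a natural compile-time optimization for a graph compiler to apply.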
MP2ML: A Mixed-Protocol Machine Learning Framework for Private Inference (Contributed Talk)
MP2ML: A Mixed-Protocol Machine Learning Framework for Private Inference (Extended Abstract)
Enabling homomorphically encrypted inference for large DNN models
The proliferation of machine learning services in the last few years has raised data privacy concerns. Homomorphic encryption (HE) enables inference using encrypted data, but it incurs 100x-10,000x memory and runtime overheads. Secure deep neural network (DNN) inference using HE is currently limited by computing and memory resources, with frameworks requiring hundreds of gigabytes of DRAM to evaluate small models. To overcome these limitations, in this paper we explore the feasibility of leveraging hybrid memory systems composed of DRAM and persistent memory. In particular, we explore the recently released Intel Optane PMem technology and the Intel HE-Transformer nGraph to run large neural networks such as MobileNetV2 (in its largest variant) and ResNet-50 for the first time in the literature. We present an in-depth analysis of the efficiency of the executions with different hardware and software configurations. Our results conclude that DNN inference using HE exhibits friendly access patterns for this memory configuration, yielding efficient executions. We would like to thank Jesus Labarta from BSC and Steve Scargall from Intel for their insightful and productive comments.
Trustworthy AI Inference Systems: An Industry Research View
In this work, we provide an industry research view for approaching the
design, deployment, and operation of trustworthy Artificial Intelligence (AI)
inference systems. Such systems provide customers with timely, informed, and
customized inferences to aid their decision, while at the same time utilizing
appropriate security protection mechanisms for AI models. Additionally, such
systems should also use Privacy-Enhancing Technologies (PETs) to protect
customers' data at any time.
To approach the subject, we start by introducing trends in AI inference
systems. We continue by elaborating on the relationship between Intellectual
Property (IP) and private data protection in such systems. Regarding the
protection mechanisms, we survey the security and privacy building blocks
instrumental in designing, building, deploying, and operating private AI
inference systems. For example, we highlight opportunities and challenges in AI
systems using trusted execution environments combined with more recent advances
in cryptographic techniques to protect data in use. Finally, we outline areas
of further development that require the global collective attention of
industry, academia, and government researchers to sustain the operation of
trustworthy AI inference systems.