500 research outputs found
Logistic regression model training based on the approximate homomorphic encryption
Background: Security concerns have been raised since big data became a prominent tool in data analysis. For instance, many machine learning algorithms aim to generate prediction models from training data that contain sensitive information about individuals. The cryptography community considers secure computation a solution for privacy protection, and practical requirements have driven research on the efficiency of cryptographic primitives. Methods: This paper presents a method to train a logistic regression model without information leakage. We apply the homomorphic encryption scheme of Cheon et al. (ASIACRYPT 2017) for efficient arithmetic over real numbers, and devise a new encoding method that reduces the storage of the encrypted database. In addition, we adapt Nesterov's accelerated gradient method to reduce both the number of iterations and the computational cost while maintaining the quality of the output classifier. Results: Our method demonstrates state-of-the-art performance of a homomorphic encryption system in a real-world application. The submission based on this work was selected as the best solution of Track 3 at the iDASH privacy and security competition 2017. For example, it took about six minutes to obtain a logistic regression model from a dataset of 1579 samples, each with 18 features and a binary outcome variable. Conclusions: We present a practical solution for outsourcing analysis tools such as logistic regression analysis while preserving data confidentiality.
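The abstract's optimizer can be sketched on plaintext data. The following shows Nesterov's accelerated gradient for logistic regression with a low-degree polynomial in place of the sigmoid, since HE schemes like the one cited evaluate polynomials rather than exp(); the coefficients, learning rate, and clipping range here are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def poly_sigmoid(x):
    # Low-degree polynomial stand-in for the sigmoid; the exact
    # coefficients are illustrative (a rough fit on [-8, 8]),
    # since HE evaluates polynomials rather than exp().
    return 0.5 + 0.15 * x - 0.0015 * x ** 3

def train_nag(X, y, iters=30, lr=0.5, gamma=0.9):
    """Logistic regression trained with Nesterov's accelerated
    gradient (look-ahead momentum)."""
    n, d = X.shape
    w = np.zeros(d)   # model weights
    v = np.zeros(d)   # momentum term
    for _ in range(iters):
        look = w - gamma * v                    # look-ahead point
        z = np.clip(X @ look, -8.0, 8.0)        # stay in the poly's valid range
        grad = X.T @ (poly_sigmoid(z) - y) / n  # logistic-loss gradient
        v = gamma * v + lr * grad
        w = w - v
    return w
```

The look-ahead point is what lets NAG converge in fewer iterations than plain gradient descent, which matters under HE where every iteration consumes ciphertext depth.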
Encrypted statistical machine learning: new privacy preserving methods
We present two new statistical machine learning methods designed to learn on
fully homomorphic encrypted (FHE) data. The introduction of FHE schemes
following Gentry (2009) opens up the prospect of privacy preserving statistical
machine learning analysis and modelling of encrypted data without compromising
security constraints. We propose tailored algorithms for applying extremely
random forests, involving a new cryptographic stochastic fraction estimator,
and naïve Bayes, involving a semi-parametric model for the class decision
boundary, and show how they can be used to learn and predict from encrypted
data. We demonstrate that these techniques perform competitively on a variety
of classification data sets and provide detailed information about the
computational practicalities of these and other FHE methods.
A Study on Bootstrapping Techniques for Homomorphic Encryption
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Natural Sciences, Department of Mathematical Sciences, February 2019. Advisor: Jung Hee Cheon.
Abstract (in Korean, translated): Since Gentry first constructed fully homomorphic encryption in 2009, a variety of techniques and schemes have been designed to improve its efficiency. However, the bootstrapping procedure, which is essential for supporting an unlimited number of operations on ciphertexts, has long been judged too inefficient for practical use. This thesis proposes several techniques for accelerating bootstrapping and applies them to real applications.
First, we study bootstrapping for SEAL and HElib, the homomorphic encryption libraries developed by Microsoft Research and IBM respectively. The key step in this bootstrapping is homomorphically evaluating the decryption function on an encrypted ciphertext; we propose a new method for extracting the lowest bit of the plaintext in encrypted state, and succeed in reducing both the computational cost and the degree of the polynomials involved.
Second, we improve bootstrapping for HEAAN, a recently developed scheme for approximate arithmetic. Bootstrapping for this scheme was first proposed in 2018 via an approximation using trigonometric functions, but for ciphertexts packing many data values the pre- and post-processing steps dominate the computation. By expressing these steps as several levels of recursive functions, we reduce their cost to logarithmic in the data size. In addition, although they are less widely used than other schemes, we also improve bootstrapping for integer-based homomorphic encryption schemes, again reducing the computational cost to logarithmic.
Finally, to demonstrate the practicality and usability of bootstrapping, we apply it to machine learning, a field that requires real data privacy: we run regression analysis on encrypted data covering 400,000 financial records. In about 16 hours we obtain a meaningful model with over 80% accuracy and an AUROC of about 0.8.
After Gentry's blueprint for a homomorphic encryption (HE) scheme, various efficient schemes have been suggested. To support an unlimited number of operations on encrypted data, the bootstrapping process is necessary, yet only a few works address it because of its complexity and inefficiency. In this thesis, we propose various methods and techniques for improved bootstrapping algorithms, and we apply them to logistic regression on large-scale encrypted data.
The bootstrapping process depends on the underlying homomorphic encryption scheme, and we improve the bootstrapping algorithms for several schemes: BGV, BFV, HEAAN, and integer-based schemes. First, we improve bootstrapping for the BGV (HElib) and FV (SEAL) schemes, implemented by IBM and Microsoft Research respectively. The key step for bootstrapping in these two schemes is extracting the lower digits of the plaintext in encrypted state. We suggest a new polynomial that removes the lowest digit of its input and apply it to bootstrapping together with the previous method; as a result, both the complexity and the consumed depth are reduced. Second, bootstrapping for multiple data values requires a homomorphic linear transformation. The complexity of this part is O(n) for slot count n, which becomes a bottleneck for large n. We exploit the structure of the linear transformation used in bootstrapping and decompose the corresponding matrix; applying a recursive strategy reduces the complexity to O(log n). Furthermore, we suggest a new bootstrapping method for integer-based HE schemes, which are based on the approximate greatest common divisor problem. By using digit extraction instead of the previous bit-wise approach, the complexity of the bootstrapping algorithm is reduced from O(poly(lambda)) to O(log^2(lambda)). Our implementation runs this process in 6 seconds, where the previous approach took about 3 minutes.
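The digit-extraction idea above can be checked on plaintext integers. For base 2, the identity z^(2^(e-1)) ≡ (z mod 2) (mod 2^e) isolates the lowest bit using only squarings, the operation HE handles naturally; this toy sketch demonstrates that arithmetic fact only, not the homomorphic lower-digit-removal algorithm itself.

```python
def lowest_bit_by_squaring(z, e):
    """Lowest base-2 digit of z, computed mod 2**e using only
    squarings: z**(2**(e-1)) mod 2**e equals z mod 2, because odd
    values collapse to 1 and even values collapse to 0."""
    mod = 1 << e
    acc = z % mod
    for _ in range(e - 1):
        acc = acc * acc % mod
    return acc

def digits(z, e):
    """Peel off all e base-2 digits, least significant first,
    mirroring how bootstrapping removes digits one at a time."""
    out = []
    for i in range(e):
        b = lowest_bit_by_squaring(z, e - i)
        out.append(b)
        z = (z - b) // 2
    return out
```

Each extraction costs e - 1 squarings, i.e. logarithmic multiplicative depth in the modulus, which is the property the improved bootstrapping algorithms exploit.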
To show that bootstrapping can be used in a practical application, we implement logistic regression on large-scale encrypted data. Our target data has 400,000 samples, and each sample has 200 features. Because of the size of the data, direct application of a homomorphic encryption scheme is almost impossible, so we choose the encryption layout to maximize the effect of multi-threading and of the SIMD operations in the HE scheme. As a result, our homomorphic logistic regression takes about 16 hours on the target data, and the output model has 0.8 AUROC with about 80% accuracy. Another experiment on the MNIST dataset confirms the correctness of our implementation and method.
Abstract
1 Introduction
1.1 Homomorphic Encryption
1.2 Machine Learning on Encrypted Data
1.3 List of Papers
2 Background
2.1 Notation
2.2 Homomorphic Encryption
2.3 Ring Learning with Errors
2.4 Approximate GCD
3 Lower Digit Removal and Improved Bootstrapping
3.1 Basis of BGV and BFV scheme
3.2 Improved Digit Extraction Algorithm
3.3 Bootstrapping for BGV and BFV Scheme
3.3.1 Our modifications
3.4 Slim Bootstrapping Algorithm
3.5 Implementation Result
4 Faster Homomorphic DFT and Improved Bootstrapping
4.1 Basis of HEAAN scheme
4.2 Homomorphic DFT
4.2.1 Previous Approach
4.2.2 Our method
4.2.3 Hybrid method
4.2.4 Implementation Result
4.3 Improved Bootstrapping for HEAAN
4.3.1 Linear Transformation in Bootstrapping
4.3.2 Improved CoeffToSlot and SlotToCoeff
4.3.3 Implementation Result
5 Faster Bootstrapping for FHE over the integers
5.1 Basis of FHE over the integers
5.2 Decryption Function via Digit Extraction
5.2.1 Squashed Decryption Function
5.2.2 Digit extraction Technique
5.2.3 Homomorphic Digit Extraction in FHE over the integers
5.3 Bootstrapping for FHE over the integers
5.3.1 CLT scheme with M = Z_t
5.3.2 Homomorphic Operations with M = Z_t^a
5.3.3 Homomorphic Digit Extraction for CLT scheme
5.3.4 Our Method on the CLT scheme
5.3.5 Analysis of Proposed Bootstrapping Method
5.4 Implementation Result
6 Logistic Regression on Large Encrypted Data
6.1 Basis of Logistic Regression
6.2 Logistic Regression on Encrypted Data
6.2.1 HE-friendly Logistic Regression Algorithm
6.2.2 HE-Optimized Logistic Regression Algorithm
6.2.3 Further Optimization
6.3 Evaluation
6.3.1 Logistic Regression on Encrypted Financial Dataset
6.3.2 Logistic Regression on Encrypted MNIST Dataset
6.3.3 Discussion
7 Conclusions
Abstract (in Korean)
Homomorphic Encryption for Machine Learning in Medicine and Bioinformatics
Machine learning techniques are an excellent tool for the medical community to analyze large amounts of medical and genomic data. On the other hand, ethical concerns and privacy regulations prevent the free sharing of this data. Encryption methods such as fully homomorphic encryption (FHE) provide a way to evaluate functions over encrypted data. Using FHE, machine learning models such as deep learning, decision trees, and naive Bayes have been implemented for private prediction from medical data. FHE has also been shown to enable secure genomic algorithms, such as paternity testing and secure genome-wide association studies. This survey provides an overview of fully homomorphic encryption and its applications in medicine and bioinformatics. The high-level concepts behind FHE and its history are introduced; details on current open-source implementations are provided, as is the state of FHE for privacy-preserving techniques in machine learning and bioinformatics, along with future growth opportunities for FHE.
Privacy-Preserving CNN Training with Transfer Learning
Privacy-preserving neural network inference has been well studied, while homomorphic CNN training remains an open and challenging task. In this paper, we present a practical solution for privacy-preserving CNN training based solely on the Homomorphic Encryption (HE) technique. To the best of our knowledge, this is the first work to achieve this goal. Several techniques combine to make it possible: (1) with transfer learning, privacy-preserving CNN training can be reduced to homomorphic neural network training, or even multiclass logistic regression (MLR) training; (2) a faster gradient variant, an enhanced gradient method for MLR with state-of-the-art convergence speed, is applied in this work to achieve high performance; (3) we use a mathematical transformation to reduce approximating the softmax function in the encrypted domain to the well-studied approximation of the sigmoid function, and develop a new type of loss function to complement this change; and (4) we use a simple but flexible matrix-encoding method to manage the data flow in the ciphertexts, which is the key factor in completing the whole homomorphic CNN training. The complete, runnable C++ code implementing our work can be found at: https://github.com/petitioner/HE.CNNtraining.
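Technique (3) relies on the sigmoid having good low-degree polynomial approximations that HE can evaluate. A minimal sketch of such an approximation follows; the interval, degree, and least-squares fitting method are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def fit_sigmoid_poly(degree=7, lo=-8.0, hi=8.0, samples=2001):
    """Least-squares polynomial fit to the sigmoid on [lo, hi];
    HE can then evaluate the resulting polynomial with a handful
    of ciphertext multiplications. Returns (coefficients, max error)."""
    x = np.linspace(lo, hi, samples)
    sig = 1.0 / (1.0 + np.exp(-x))
    coeffs = np.polyfit(x, sig, degree)          # highest degree first
    max_err = np.max(np.abs(np.polyval(coeffs, x) - sig))
    return coeffs, max_err
```

The degree trades accuracy against multiplicative depth: a higher-degree polynomial tracks the sigmoid more closely but consumes more ciphertext levels per evaluation.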
We select a pre-trained model for transfer learning. We use the first 128 MNIST training images as training data and the whole MNIST testing dataset as the testing data. The client only needs to upload 6 ciphertexts to the cloud, and it takes minutes to perform 2 iterations on a cloud with 64 vCPUs, resulting in a precision of .
- β¦