8 research outputs found

    Multi-dimensional Packing for HEAAN for Approximate Matrix Arithmetics

    HEAAN is a homomorphic encryption (HE) scheme for approximate arithmetic. Its vector packing technique has proved its potential in cryptographic applications requiring approximate computations, including data analysis and machine learning. In this paper, we propose MHEAAN, a generalization of HEAAN to a tensor structure of plaintext slots. Our design retains the advantage of HEAAN that the precision loss during evaluation is limited by the depth of the circuit, exceeding unencrypted approximate arithmetic, such as floating-point operations, by no more than one bit. Due to the multi-dimensional structure of plaintext slots, along with rotations in various dimensions, MHEAAN is a more natural choice for applications involving matrices and tensors. We provide a concrete two-dimensional construction and show the efficiency of our scheme on several matrix operations, such as matrix multiplication, matrix transposition, and inversion. As an application, we implement a non-interactive Deep Neural Network (DNN) classification algorithm on encrypted data with an encrypted model. Thanks to our efficient bootstrapping, the implementation can easily be extended to DNN structures with an arbitrary number of hidden layers.
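A plaintext-level sketch of why a tensor slot structure is convenient, using numpy arrays as a stand-in for packed plaintext slots (no HE library involved; this only illustrates the rotation structure the abstract refers to):

```python
import numpy as np

# Hypothetical plaintext slots: a 4x4 matrix packed into a 2-D slot grid.
A = np.arange(16).reshape(4, 4)

# With 2-D packing, rotations are available independently per dimension,
# so shifting all rows (or all columns) is a single homomorphic rotation.
rot_rows = np.roll(A, -1, axis=0)  # rotate along the first dimension
rot_cols = np.roll(A, -1, axis=1)  # rotate along the second dimension

# With 1-D packing the same matrix is flattened, and the row rotation above
# becomes a stride-d slot rotation (column rotations also need masking).
flat = A.reshape(-1)
stride_rot = np.roll(flat, -4).reshape(4, 4)  # emulates rot_rows
```

Matrix transposition and multiplication are built from exactly such per-dimension rotations, which is why the 2-D construction suits them.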

    A New Framework for Fast Homomorphic Matrix Multiplication

    Homomorphic Encryption (HE) is one of the mainstream cryptographic tools used to enable secure outsourced computation. A typical task is secure matrix computation. Popular HE schemes are all based on the problem of Ring Learning with Errors (RLWE), where messages are encrypted in a ring. In general, the ring dimension should be large to ensure security, and it is often larger than the matrix size. Hence, exploiting the ring structure for fast homomorphic matrix computation has been an important topic in HE. In this paper, we present a new framework for encoding a matrix and performing multiplication on encrypted matrices. The new framework requires fewer basic homomorphic operations for matrix multiplication. Suppose that the ring dimension is $n$ and the matrix size is $d \times d$ with $d = n^{\rho}$. (1) In the compact case, where $\rho \leq \frac{1}{3}$, the multiplication of two encrypted matrices requires $\tilde{O}(1)$ basic homomorphic operations, which include plaintext-ciphertext multiplications, ciphertext-ciphertext multiplications, and homomorphic Galois automorphisms. (2) In the large-size case, where $\rho > \frac{1}{3}$, our new method requires $O\big(d^{(1 - \frac{1}{3\rho}) \log_2 7}\big)$ basic homomorphic operations, which is better than all existing methods. In addition, the new framework reduces the communication cost, since it requires fewer key-switching keys: their number is reduced from $O(d)$ to $O(\log d)$.
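To make the asymptotics concrete, a short script (my own illustration, not from the paper) evaluates the large-size exponent $(1 - \frac{1}{3\rho})\log_2 7$ for a few values of $\rho$: it vanishes at $\rho = 1/3$, where the compact case takes over, and approaches $\log_2 7 \approx 2.81$ (the Strassen exponent) as $\rho$ grows:

```python
import math

def exponent(rho):
    """Exponent of d in the large-size case: (1 - 1/(3*rho)) * log2(7)."""
    return (1 - 1 / (3 * rho)) * math.log2(7)

for rho in (1 / 3, 1 / 2, 1.0):
    print(f"rho = {rho:.3f} -> O(d^{exponent(rho):.3f}) operations")
```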

    Development of Privacy-Preserving Machine Learning Techniques for Protecting Sensitive Information

    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School, Department of Industrial Engineering, College of Engineering, August 2022. Advisor: Jaewook Lee.
    Recent development of artificial intelligence systems has been driven by various factors, such as the development of new algorithms and the explosive increase in the amount of available data.
In real-world scenarios, individuals or corporations benefit by providing data for training a machine learning model, or by providing the trained model itself. However, it has been revealed that sharing data or models can lead to invasions of personal privacy through the leakage of sensitive information. In this dissertation, we focus on developing privacy-preserving machine learning methods that protect sensitive information. Homomorphic encryption can protect the privacy of data and models because machine learning algorithms can be applied directly to encrypted data, but it requires much more computation time than conventional operations. For efficient computation, we take two approaches. The first is to reduce the amount of computation in the training phase. We present an efficient training algorithm that encrypts only a small amount of important information. Specifically, we develop a ridge regression algorithm that greatly reduces the amount of computation when one or two sensitive variables are encrypted. Furthermore, we extend the method to classification problems by developing a new logistic regression algorithm that maximally avoids the hyper-parameter search, which is ill-suited to machine learning with homomorphic encryption. The second approach is to apply homomorphic encryption only when the trained model is used for inference, which prevents direct exposure of the test data and the model information. We propose a homomorphic-encryption-friendly algorithm for the inference step of support vector clustering. Although homomorphic encryption can prevent various threats to data and model information, it cannot defend against secondary attacks through inference APIs. It has been reported that an adversary can extract information about the training data using only his or her input and the corresponding output of the model. For instance, the adversary can determine whether specific data is included in the training data or not.
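As a plaintext reference point for the ridge regression building block mentioned above (the generic closed-form solver, not the dissertation's encrypted algorithm), one can write:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy data: y depends linearly on two features with weights (2, -1).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0])
w = ridge_fit(X, y, lam=1e-6)
```

The encrypted variant in the dissertation restructures exactly this normal-equation computation so that only the columns of X corresponding to sensitive variables need to be processed homomorphically.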
Differential privacy is a mathematical concept that guarantees defense against such attacks by reducing the impact of any specific data sample on the trained model. Differential privacy has the advantage of quantitatively expressing the degree of privacy, but it reduces the utility of the model by adding randomness to the algorithm. Therefore, we propose a novel method that improves the utility of differentially private clustering algorithms while maintaining their privacy, by utilizing Morse theory. The privacy-preserving machine learning methods proposed in this dissertation complement each other to prevent attacks at different levels. We expect that our methods can be combined into an integrated system and applied to various domains where machine learning involves sensitive personal information.
    Chapter 1 Introduction
    1.1 Motivation of the Dissertation
    1.2 Aims of the Dissertation
    1.3 Organization of the Dissertation
    Chapter 2 Preliminaries
    2.1 Homomorphic Encryption
    2.2 Differential Privacy
    Chapter 3 Efficient Homomorphic Encryption Framework for Ridge Regression
    3.1 Problem Statement
    3.2 Framework
    3.3 Proposed Method
    3.3.1 Regression with One Encrypted Sensitive Variable
    3.3.2 Regression with Two Encrypted Sensitive Variables
    3.3.3 Adversarial Perturbation Against Attribute Inference Attack
    3.3.4 Algorithm for Ridge Regression
    3.3.5 Algorithm for Adversarial Perturbation
    3.4 Experiments
    3.4.1 Experimental Setting
    3.4.2 Experimental Results
    3.5 Chapter Summary
    Chapter 4 Parameter-free Homomorphic-encryption-friendly Logistic Regression
    4.1 Problem Statement
    4.2 Proposed Method
    4.2.1 Motivation
    4.2.2 Framework
    4.3 Theoretical Results
    4.4 Experiments
    4.4.1 Experimental Setting
    4.4.2 Experimental Results
    4.5 Chapter Summary
    Chapter 5 Homomorphic-encryption-friendly Evaluation for Support Vector Clustering
    5.1 Problem Statement
    5.2 Background
    5.2.1 CKKS Scheme
    5.2.2 SVC
    5.3 Proposed Method
    5.4 Experiments
    5.4.1 Experimental Setting
    5.4.2 Experimental Results
    5.5 Chapter Summary
    Chapter 6 Differentially Private Mixture of Gaussians Clustering with Morse Theory
    6.1 Problem Statement
    6.2 Background
    6.2.1 Mixture of Gaussians
    6.2.2 Morse Theory
    6.2.3 Dynamical System Perspective
    6.3 Proposed Method
    6.3.1 Differentially Private Clustering
    6.3.2 Transition Equilibrium Vectors and the Weighted Graph
    6.3.3 Hierarchical Merging of Sub-clusters
    6.4 Theoretical Results
    6.5 Experiments
    6.5.1 Experimental Setting
    6.5.2 Experimental Results
    6.6 Chapter Summary
    Chapter 7 Conclusion
    7.1 Conclusion
    7.2 Future Direction
    Bibliography
    Abstract (in Korean)
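The noise-for-privacy trade-off described above can be illustrated with the standard Laplace mechanism (a textbook construction, not the dissertation's Morse-theoretic method): noise is calibrated to the query's sensitivity divided by the privacy budget epsilon, so stronger privacy (smaller epsilon) means noisier, less useful answers.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise calibrated for epsilon-DP."""
    scale = sensitivity / epsilon  # smaller epsilon -> larger noise -> more privacy
    return true_value + rng.laplace(0.0, scale)

rng = np.random.default_rng(0)
count = 42  # e.g., number of records matching a predicate (sensitivity 1)
weak_privacy = laplace_mechanism(count, sensitivity=1.0, epsilon=10.0, rng=rng)
strong_privacy = laplace_mechanism(count, sensitivity=1.0, epsilon=0.1, rng=rng)
```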

    Towards Improved Homomorphic Encryption for Privacy-Preserving Deep Learning

    International Mention in the doctoral degree. Deep Learning (DL) has brought a remarkable transformation to many fields, heralded by some as a new technological revolution. The advent of large-scale models has increased the demand for data and computing platforms, for which cloud computing has become the go-to solution. However, the permeability of DL and cloud computing is reduced in privacy-enforcing areas that deal with sensitive data. These areas imperatively call for privacy-enhancing technologies that enable responsible, ethical, and privacy-compliant use of data in potentially hostile environments. To this end, the cryptography community has addressed these concerns with what are known as Privacy-Preserving Computation Techniques (PPCTs), a set of tools that enable privacy-enhancing protocols where cleartext access to information is no longer tenable. Of these techniques, Homomorphic Encryption (HE) stands out for its ability to perform operations over encrypted data without compromising data confidentiality or privacy. However, despite its promise, HE is still a relatively nascent solution with efficiency and usability limitations. Improving the efficiency of HE has been a longstanding challenge in the field of cryptography, and with the improvements, the complexity of the techniques has increased, especially for non-experts. In this thesis, we address the problem of the complexity of HE when applied to DL. We begin by systematizing existing knowledge in the field through an in-depth analysis of the state of the art in privacy-preserving deep learning, identifying key trends, research gaps, and issues associated with current approaches. One such gap lies in the necessity of using vectorized algorithms with Packed Homomorphic Encryption (PaHE), a state-of-the-art technique to reduce the overhead of HE in complex areas. This thesis comprehensively analyzes existing algorithms and proposes new ones for using DL with PaHE, presenting a formal analysis and usage guidelines for their implementation. Parameter selection for HE schemes is another recurring challenge in the literature, given that it plays a critical role in determining not only the security of the instantiation but also the precision and performance of the scheme. To address this challenge, this thesis proposes a novel system combining fuzzy logic with linear programming to produce secure parametrizations from high-level user input, without requiring low-level knowledge of the underlying primitives. Finally, this thesis describes HEFactory, a symbolic-execution compiler designed to streamline the production of HE code and its integration with Python. HEFactory implements the previous proposals of this thesis in an easy-to-use tool. It provides a unique architecture that layers the challenges associated with HE and produces simplified operations interpretable by low-level HE libraries. HEFactory significantly reduces the overall complexity of coding DL applications with HE, achieving an 80% reduction in code length relative to expert-written code while maintaining equivalent accuracy and efficiency.
    Doctoral Program in Computer Science and Technology, Universidad Carlos III de Madrid. Committee -- President: María Isabel González Vasco; Secretary: David Arroyo Guardeño; Member: Antonis Michala
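A rough flavor of table-driven HE parameter selection (my own toy lookup, not HEFactory's fuzzy-logic system; the per-dimension modulus bounds below are figures commonly quoted from the HomomorphicEncryption.org security standard for ~128-bit classical security and should be verified against the standard before any real use):

```python
# Assumed maximum log2(Q) per ring dimension N at ~128-bit classical security.
MAX_LOGQ_128 = {4096: 109, 8192: 218, 16384: 438, 32768: 881}

def pick_ring_dimension(required_logq):
    """Return the smallest tabulated ring dimension whose modulus budget
    covers the circuit's required log2(Q)."""
    for n in sorted(MAX_LOGQ_128):
        if MAX_LOGQ_128[n] >= required_logq:
            return n
    raise ValueError("circuit too deep for the tabulated parameter sets")

# A deeper circuit consumes more modulus, forcing a larger (slower) ring.
shallow = pick_ring_dimension(required_logq=150)  # -> 8192
deep = pick_ring_dimension(required_logq=400)     # -> 16384
```

The real problem is harder than a lookup, since precision, performance, and security interact; that interaction is what the thesis's fuzzy-logic/linear-programming system is designed to resolve.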

    ENNigma: A Framework for Private Neural Networks

    Increasing concerns about data privacy and the stringent enforcement of data protection laws are placing growing pressure on organizations to secure large datasets. The challenge of ensuring data privacy becomes even more complex in the domains of Artificial Intelligence and Machine Learning due to their requirement for large amounts of data. While approaches like differential privacy and secure multi-party computation allow data to be used with some privacy guarantees, they often trade off data integrity or accessibility. Encryption-based strategies avoid this tradeoff. While basic encryption only protects data during transmission and storage, Homomorphic Encryption (HE) preserves data privacy during its processing on a centralized server. Despite its advantages, the computational overhead HE introduces is notably challenging when integrated into Neural Networks (NNs), which are already computationally expensive. In this work, we present a framework called ENNigma, a Private Neural Network (PNN) that uses HE for data privacy preservation. Unlike some state-of-the-art approaches, ENNigma guarantees data security throughout every operation, and maintains this guarantee even if the server is compromised. The impact of this privacy-preservation layer on NN performance is minimal; the only major drawback is its computational cost. Several optimizations were implemented to maximize the efficiency of ENNigma, occasionally reducing computation time by more than 50%. In the Network Intrusion Detection System application domain, particularly the sub-domain of Distributed Denial of Service attack detection, several models were developed to assess ENNigma's performance in a real-world scenario. These models demonstrated performance comparable to non-private NNs while achieving the two-and-a-half-minute inference latency mark. This suggests that our framework is approaching a state where it can be effectively utilized in real-time applications. The key takeaway is that ENNigma represents a significant advancement in the field of PNNs, as it ensures data privacy with minimal impact on NN performance. While it is not yet ready for real-world deployment due to its computational complexity, this framework serves as a milestone toward fully private and efficient NNs.
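Private neural networks of this kind typically replace non-polynomial activations with low-degree polynomials, since HE schemes natively support only additions and multiplications. A generic least-squares sketch of that substitution (an illustration of the standard technique, not ENNigma's actual construction):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fit a degree-3 polynomial to the sigmoid on [-4, 4]; inside an HE circuit
# this polynomial can be evaluated with a handful of multiplications.
xs = np.linspace(-4, 4, 401)
coeffs = np.polyfit(xs, sigmoid(xs), deg=3)
poly = np.poly1d(coeffs)

max_err = np.max(np.abs(poly(xs) - sigmoid(xs)))
```

The fitting interval must cover the range of pre-activation values actually seen by the network, which is one reason HE-friendly models are usually trained with the polynomial activation in place.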

    Applying Fully Homomorphic Encryption: Practices and Problems

    Fully homomorphic encryption (FHE) has been regarded as the "holy grail" of cryptography for its versatility as a cryptographic primitive and its wide range of potential applications. Since Gentry published the first theoretically feasible FHE design in 2009, there have been many new discoveries and inventions in this field. New schemes significantly reduce the computational cost of FHE and bring practical deployment within reach. As a result, FHE schemes have come off the page and been explored and tested extensively in practice. However, FHE rests on many new problems and assumptions that are not yet well studied. In this thesis we present a comprehensive and intuitive overview of the current applied FHE landscape, from design to implementation, and draw attention to potential vulnerabilities both in theory and in practice. In more detail, we show how to use currently available FHE libraries for aggregation and how to select parameters to avoid weak FHE instances.

    Privacy-aware Security Applications in the Era of Internet of Things

    In this dissertation, we introduce several novel privacy-aware security applications. We split these contributions into three main categories. First, to strengthen current authentication mechanisms, we designed two novel privacy-aware alternative and complementary authentication mechanisms: Continuous Authentication (CA) and Multi-factor Authentication (MFA). Our first system is Wearable-Assisted Continuous Authentication (WACA), in which sensor data collected from a wrist-worn device is used to authenticate users continuously. We then improved WACA by integrating a noise-tolerant template matching technique called NTT-Sec to make it privacy-aware, as the collected data can be sensitive. We also designed a novel, lightweight, Privacy-aware Continuous Authentication (PACA) protocol. PACA is easily applicable to other biometric authentication mechanisms when feature vectors are represented as fixed-length real-valued vectors. In addition to CA, we introduced a privacy-aware multi-factor authentication method called PINTA. In PINTA, we used fuzzy hashing and homomorphic encryption to protect the users' sensitive profiles while providing privacy-preserving authentication. For the second contribution, we designed a multi-stage privacy attack on smart home users that exploits the wireless network traffic generated during device communication. The attack works even on encrypted data, as it uses only the metadata of the network traffic. We also designed a novel countermeasure based on the generation of spoofed traffic. Finally, we introduced two privacy-aware secure data exchange mechanisms, which allow data sharing among multiple parties (e.g., companies, hospitals) while preserving the privacy of the individuals in the dataset. These mechanisms were realized with a combination of Secure Multiparty Computation (SMC) and Differential Privacy (DP) techniques.
    In addition, we designed a policy language, called Curie Policy Language (CPL), to handle conflicting relationships among parties. The novel methods, attacks, and countermeasures in this dissertation were verified with theoretical analysis and extensive experiments with real devices and users. We believe that this research has far-reaching implications for privacy-aware alternative and complementary authentication methods, smart home user privacy research, and privacy-aware secure data exchange methods.

    Multivariate Homomorphic Encryption for Approximate Matrix Arithmetics

    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School, Department of Mathematical Sciences, College of Natural Sciences, August 2019. Advisor: Jung Hee Cheon.
    Homomorphic Encryption for Arithmetics of Approximate Numbers (HEAAN) is a homomorphic encryption (HE) scheme for approximate arithmetic introduced by Cheon et al. [CKKS17]. Its vector packing technique proved its potential in cryptographic applications requiring approximate computations, including data analysis and machine learning. Multivariate Homomorphic Encryption for Approximate Matrix Arithmetics (MHEAAN) is a generalization of HEAAN to the case of a tensor structure of plaintext slots. Our design retains the advantage of HEAAN that the precision losses during evaluation are limited by the depth of the circuit, exceeding unencrypted approximate arithmetic, such as floating-point operations, by no more than one bit. Due to the multi-dimensional structure of plaintext slots, along with rotations in various dimensions, MHEAAN is a more natural choice for applications involving matrices and tensors. The concrete two-dimensional construction shows the efficiency of the MHEAAN scheme on matrix operations, and it was applied to several machine learning algorithms on encrypted data and an encrypted model, such as the Logistic Regression (LR) training algorithm and Deep Neural Network (DNN) and Recurrent Neural Network (RNN) classification algorithms.
With the efficient bootstrapping, the implementation can easily be scaled to arbitrary LR, DNN, or RNN structures.
    Abstract
    1 Introduction
    1.1 Multidimensional Variant of HEAAN
    1.2 Applications to Machine Learning
    1.3 List of Papers
    2 Background Theory
    2.1 Basic Notations
    2.2 Machine Learning Algorithms
    2.2.1 Logistic Regression
    2.2.2 Deep Learning
    2.3 The Cyclotomic Ring and Canonical Embedding
    2.4 m-RLWE Problem
    2.5 HEAAN Scheme
    2.5.1 Bootstrapping for HEAAN
    3 MHEAAN Scheme
    3.1 MHEAAN Scheme
    3.1.1 Structure of MHEAAN
    3.1.2 Concrete Construction
    3.2 Bootstrapping for MHEAAN
    3.3 Homomorphic Evaluations of Matrix Operations
    3.3.1 Matrix-by-Vector Multiplication
    3.3.2 Matrix Multiplication
    3.3.3 Matrix Transposition
    3.3.4 Matrix Inverse
    4 Applications
    4.1 Sigmoid & Tanh Approximations
    4.2 Homomorphic LR Training Phase
    4.2.1 Database Encoding
    4.2.2 Homomorphic Evaluation of the GD
    4.2.3 Homomorphic Evaluation of NLGD
    5 Implementation Results
    4.3 Homomorphic DNN Classification
    4.4 Homomorphic RNN Classification
    5.1 Evaluation of NLGD Training
    5.2 Evaluation of DNN Classification
    5.3 Evaluation of RNN Classification
    6 Conclusions
    A Proofs
    Abstract (in Korean)
    Acknowledgement (in Korean)
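The matrix-by-vector step in the outline above is commonly realized with the diagonal (Halevi-Shoup style) encoding, which needs only slot-wise products and cyclic rotations, the two primitives a packed HE scheme provides. A plaintext simulation with numpy standing in for those primitives (an illustration of the general technique, not MHEAAN's exact algorithm):

```python
import numpy as np

def diag_matvec(A, v):
    """Compute A @ v using only slot-wise products and cyclic rotations."""
    d = len(v)
    acc = np.zeros(d)
    for i in range(d):
        # i-th generalized diagonal: diag_i[j] = A[j, (j + i) % d]
        diag_i = np.array([A[j, (j + i) % d] for j in range(d)])
        acc += diag_i * np.roll(v, -i)  # rot(v, i)[j] = v[(j + i) % d]
    return acc

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
v = rng.normal(size=4)
result = diag_matvec(A, v)
```

Each term contributes A[j, (j+i) % d] * v[(j+i) % d] to slot j, so summing over i recovers the full inner product; d rotations and d slot-wise products suffice, regardless of the packing dimension.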