4 research outputs found

    Privacy-Preserving Machine Learning with Fully Homomorphic Encryption for Deep Neural Network

    Fully homomorphic encryption (FHE) is one of the prospective tools for privacy-preserving machine learning (PPML), and several PPML models have been proposed based on various FHE schemes and approaches. Although FHE schemes are regarded as suitable tools for implementing PPML models, previous PPML models on FHE-encrypted data have been limited to simple, non-standard types of machine learning models, which have not been proven efficient and accurate on more practical and advanced datasets. Previous PPML schemes replace non-arithmetic activation functions with simple arithmetic functions instead of adopting approximation methods, and they do not use bootstrapping, which enables continuous homomorphic evaluation. Thus, they could neither use standard activation functions nor employ a large number of layers. Until now, the maximum classification accuracy of existing FHE-based PPML models on the CIFAR-10 dataset was only 77%. In this work, we implement the standard ResNet-20 model with the RNS-CKKS FHE scheme with bootstrapping and verify the implemented model with the CIFAR-10 dataset and the plaintext model parameters. Instead of replacing the non-arithmetic functions with simple arithmetic functions, we use state-of-the-art approximation methods to evaluate these non-arithmetic functions, such as ReLU, with sufficient precision [1]. Further, for the first time, we use the bootstrapping technique of the RNS-CKKS scheme in the proposed model, which enables us to evaluate a deep learning model on encrypted data. We numerically verify that, on the CIFAR-10 dataset, the proposed model produces results 98.67% identical to those of the original ResNet-20 model on unencrypted data. The classification accuracy of the proposed model is 90.67%, which is close to that of the original ResNet-20 CNN model. Comment: 12 pages, 4 figures
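The core constraint the abstract describes is that CKKS-style FHE can only evaluate additions and multiplications, so a non-arithmetic activation such as ReLU must be replaced by a polynomial. As a minimal illustration of that idea (the paper uses high-precision composite minimax approximations; the simple least-squares fit, degree, and interval below are assumptions for the sketch):

```python
import numpy as np

# Sketch: approximate ReLU by a polynomial so it becomes evaluable with only
# additions and multiplications, as required under CKKS-encrypted arithmetic.
# A plain least-squares fit on [-1, 1] stands in for the paper's minimax method.
xs = np.linspace(-1.0, 1.0, 1001)
relu = np.maximum(xs, 0.0)

coeffs = np.polyfit(xs, relu, deg=7)   # degree-7 polynomial fit (illustrative)
approx = np.polyval(coeffs, xs)

max_err = np.max(np.abs(approx - relu))
print(f"max |poly - ReLU| on [-1, 1]: {max_err:.4f}")
```

Higher degrees shrink the error but cost more homomorphic multiplications and consume more ciphertext levels, which is why bootstrapping becomes necessary for deep networks.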

    Efficiency and Productivity of Local Educational Administration in Korea Using the Malmquist Productivity Index

    As local governments around the world struggle to finance and deliver quality education under fiscal constraints, pressures mount to increase efficiency and productivity in order to obtain more output from the same or fewer resources. Focusing on the case of Korea, this study investigates the productivity of outputs in local offices of education (OEs) through the analysis of personnel and financial factors by year (2012–2016). Overall, the results indicate the efficient operation of the OEs in Korea. The Malmquist productivity index (MPI) mean decreased from 2012 to 2014, increased from 2014 to 2015, and decreased from 2015 to 2016. The rate of chronological change in each OE’s MPI showed the same pattern of change as the distribution ratio of school expenditures. Finally, the MPI had the same pattern as the Technical Change Index. Policy implications are provided.
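The MPI mentioned in the abstract is conventionally computed from distance-function values (obtained via DEA) in two adjacent periods, and it decomposes into efficiency change and technical change, which is the decomposition the abstract's last sentence refers to. A minimal sketch with illustrative numbers (the distance values below are assumptions, not figures from the study):

```python
import math

# Illustrative distance-function values D_a(x_b, y_b): period-a technology,
# period-b input/output bundle. These numbers are made up for the sketch.
d_t_t   = 0.80  # D_t(x_t, y_t)
d_t_t1  = 0.95  # D_t(x_{t+1}, y_{t+1})
d_t1_t  = 0.70  # D_{t+1}(x_t, y_t)
d_t1_t1 = 0.88  # D_{t+1}(x_{t+1}, y_{t+1})

# Malmquist productivity index: geometric mean over the two base technologies.
mpi = math.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))
ec = d_t1_t1 / d_t_t                                     # efficiency (catch-up) change
tc = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))    # technical (frontier) change

print(f"MPI={mpi:.3f}  EC={ec:.3f}  TC={tc:.3f}")        # MPI = EC * TC
```

An MPI above 1 indicates productivity growth between the two periods; tracking which factor (EC or TC) drives it is what links the MPI pattern to the Technical Change Index in the abstract.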


    SCE-LSTM: Sparse Critical Event-Driven LSTM Model with Selective Memorization for Agricultural Time-Series Prediction

    In the domain of agricultural product sales and consumption forecasting, the presence of infrequent yet impactful events such as livestock epidemics and mass media influences poses substantial challenges. These rare occurrences, termed Sparse Critical Events (SCEs), often lead to predictions converging towards average values due to their omission from input candidate vectors. To address this issue, we introduce a modified Long Short-Term Memory (LSTM) model designed to selectively attend to and memorize critical events, emulating the human memory’s ability to retain crucial information. In contrast to the conventional LSTM model, which struggles to learn sparse critical event sequences because of how its forget gates and input vectors act on the cell state, our proposed approach identifies and learns from sparse critical event sequences during training. This proposed method, referred to as sparse critical event-driven LSTM (SCE-LSTM), is applied to predict purchase quantities of agricultural and livestock products using sharply changing agricultural time-series data. For these predictions, we collected structured and unstructured data spanning the years 2010 to 2017 and developed the SCE-LSTM prediction model. Our model forecasts monetary expenditures for pork purchases over a one-month horizon. Notably, our results demonstrate that SCE-LSTM provides the closest predictions to actual daily pork purchase expenditures and exhibits the lowest error rates when compared to other prediction models. SCE-LSTM emerges as a promising solution to enhance agricultural product sales and consumption forecasts, particularly in the presence of rare critical events. Its superior performance and accuracy, as evidenced by our findings, underscore its potential significance in this domain.
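The mechanism the abstract describes, preventing the forget gate from washing out rare but important inputs, can be illustrated with a toy cell that overrides its gates when a step is flagged as critical. This is a sketch of the general idea, not the authors' exact formulation; the event flag, the override rule, and all names below are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sce_lstm_step(x, h, c, params, is_critical):
    """One toy LSTM step; a critical event forces full retention and write-in."""
    Wf, Wi, Wo, Wc = params          # each maps [h; x] -> hidden size
    z = np.concatenate([h, x])
    f = sigmoid(Wf @ z)              # forget gate
    i = sigmoid(Wi @ z)              # input gate
    o = sigmoid(Wo @ z)              # output gate
    g = np.tanh(Wc @ z)              # candidate cell update
    if is_critical:
        f = np.ones_like(f)          # keep the existing state intact
        i = np.ones_like(i)          # and write the event at full strength
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
H, X = 4, 3
params = [rng.standard_normal((H, H + X)) * 0.1 for _ in range(4)]
h, c = np.zeros(H), np.zeros(H)
h, c = sce_lstm_step(rng.standard_normal(X), h, c, params, is_critical=True)
print(h, c)
```

In an ordinary LSTM, a sparse spike surrounded by long stretches of average-valued data tends to be multiplied away by small forget-gate values over many steps; hard-coding retention for flagged steps is the simplest way to see why a selective-memorization gate changes that behavior.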