739 research outputs found

    Reproducibility, accuracy and performance of the Feltor code and library on parallel computer architectures

    Get PDF
    Feltor is a modular and free scientific software package. It allows developing platform-independent code that runs on a variety of parallel computer architectures, ranging from laptop CPUs to multi-GPU distributed-memory systems. Feltor consists of both a numerical library and a collection of application codes built on top of the library. It mainly targets two- and three-dimensional drift- and gyro-fluid simulations, with discontinuous Galerkin methods as the main numerical discretization technique. We observe that numerical simulations of a recently developed gyro-fluid model produce non-deterministic results in parallel computations. First, we show how we restore accuracy and bitwise reproducibility algorithmically and programmatically. In particular, we adopt an implementation of the exactly rounded dot product based on long accumulators, which avoids accuracy losses, especially in parallel applications. However, reproducibility and accuracy alone fail to indicate correct simulation behaviour. In fact, in the physical model slightly different initial conditions lead to vastly different end states. This behaviour translates to its numerical representation: pointwise convergence, even in principle, becomes impossible for long simulation times. In a second part, we explore important performance-tuning considerations. We identify latency and memory bandwidth as the main performance indicators of our routines. Based on these, we propose a parallel performance model that predicts the execution time of algorithms implemented in Feltor, and we test our model on a selection of parallel hardware architectures. We are able to predict the execution time with a relative error of less than 25% for problem sizes between 0.1 and 1000 MB. Finally, we find that the product of latency and bandwidth gives a minimum array size per compute node needed to achieve a scaling efficiency above 50% (both strong and weak).
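
    The closing rule of thumb lends itself to a small worked sketch. The C++ fragment below is not Feltor code; it merely illustrates, with assumed latency and bandwidth figures, a model of the form T(s) = t_lat + s/b and why parallel efficiency reaches 50% exactly when the array size per node equals the latency-bandwidth product.

        #include <cstdio>

        // Illustrative sketch (not Feltor code) of a latency-bandwidth model
        // as described in the abstract: T(s) = t_lat + s / b for s bytes.
        // The constants below are assumed example values, not measurements.
        double predicted_time(double s_bytes, double t_lat, double b) {
            return t_lat + s_bytes / b;
        }

        int main() {
            const double t_lat = 10e-6;  // assumed launch/communication latency: 10 us
            const double b     = 500e9;  // assumed memory bandwidth: 500 GB/s
            // Efficiency = (s/b) / T(s) equals 1/2 exactly when s = t_lat * b,
            // reproducing the paper's minimum-array-size rule of thumb.
            const double s_min = t_lat * b;  // 5 MB with these numbers
            std::printf("s_min = %.1f MB, T(s_min) = %.1f us\n",
                        s_min / 1e6, 1e6 * predicted_time(s_min, t_lat, b));
            return 0;
        }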

    Towards the AlexNet Moment for Homomorphic Encryption: HCNN, the First Homomorphic CNN on Encrypted Data with GPUs

    Get PDF
    Deep Learning as a Service (DLaaS) stands as a promising solution for cloud-based inference applications. In this setting, the cloud has a pre-learned model whereas the user has samples on which she wants to run the model. The biggest concern with DLaaS is user privacy when the input samples are sensitive data. We provide here an efficient privacy-preserving system by employing high-end technologies such as Fully Homomorphic Encryption (FHE), Convolutional Neural Networks (CNNs) and Graphics Processing Units (GPUs). FHE, with its widely known feature of computing on encrypted data, empowers a wide range of privacy-concerned applications. This comes at a high cost, as it requires enormous computing power. In this paper, we show how to accelerate the performance of running CNNs on encrypted data with GPUs. We evaluated two CNNs to classify the MNIST and CIFAR-10 datasets homomorphically. Our solution achieved a sufficient security level (> 80 bit) and reasonable classification accuracy (99% and 77.55% for MNIST and CIFAR-10, respectively). In terms of latency, we could classify an image in 5.16 seconds for MNIST and 304.43 seconds for CIFAR-10. Our system can also classify a batch of images (> 8,000) without extra overhead.
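
    As a quick sanity check on the batching claim, the short C++ program below amortizes the reported CIFAR-10 pass time over a hypothetical batch. Only the 304.43 s figure comes from the abstract; the batch size of 8192 is an assumption consistent with the "> 8,000 images" claim.

        #include <cstdio>

        // Back-of-the-envelope amortization of the reported latency over a
        // batch: if one homomorphic pass carries the whole batch "without
        // extra overhead", the per-image cost drops by the batch size.
        int main() {
            const double pass_seconds = 304.43;  // one CIFAR-10 pass (from the abstract)
            const int    batch        = 8192;    // assumed ciphertext batch size
            std::printf("amortized cost: %.1f ms per image\n",
                        1e3 * pass_seconds / batch);
            return 0;
        }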

    FloatX: A C++ Library for Customized Floating-Point Arithmetic

    Full text link
    "© ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Mathematical Software, {45, 4, (2019)} https://dl.acm.org/doi/10.1145/3368086"[EN] We present FloatX (Float eXtended), a C++ framework to investigate the effect of leveraging customized floating-point formats in numerical applications. FloatX formats are based on binary IEEE 754 with smaller significand and exponent bit counts specified by the user. Among other properties, FloatX facilitates an incremental transformation of the code, relies on hardware-supported floating-point types as back-end to preserve efficiency, and incurs no storage overhead. The article discusses in detail the design principles, programming interface, and datatype casting rules behind FloatX. Furthermore, it demonstrates FloatX's usage and benefits via several case studies from well-known numerical dense linear algebra libraries, such as BLAS and LAPACK; the Ginkgo library for sparse linear systems; and two neural network applications related with image processing and text recognition.This work was supported by the CICYT projects TIN2014-53495-R and TIN2017-82972-R of the MINECO and FEDER, and the EU H2020 project 732631 "OPRECOMP. Open Transprecision Computing."Flegar, G.; Scheidegger, F.; Novakovic, V.; Mariani, G.; Tomás Domínguez, AE.; Malossi, C.; Quintana-Ortí, ES. (2019). FloatX: A C++ Library for Customized Floating-Point Arithmetic. ACM Transactions on Mathematical Software. 45(4):1-23. https://doi.org/10.1145/3368086S123454Edward Anderson Zhaojun Bai L. Susan Blackford James Demmesl Jack J. Dongarra Jeremy Du Croz Sven Hammarling Anne Greenbaum Alan McKenney and Danny C. Sorensen. 1999. LAPACK Users’ Guide (3rd ed.). SIAM. Edward Anderson Zhaojun Bai L. Susan Blackford James Demmesl Jack J. Dongarra Jeremy Du Croz Sven Hammarling Anne Greenbaum Alan McKenney and Danny C. Sorensen. 1999. LAPACK Users’ Guide (3rd ed.). SIAM.Bekas, C., Curioni, A., & Fedulova, I. (2011). Low-cost data uncertainty quantification. Concurrency and Computation: Practice and Experience, 24(8), 908-920. doi:10.1002/cpe.1770Boldo, S., & Melquiond, G. (2008). Emulation of a FMA and Correctly Rounded Sums: Proved Algorithms Using Rounding to Odd. IEEE Transactions on Computers, 57(4), 462-471. doi:10.1109/tc.2007.70819Buttari, A., Dongarra, J., Langou, J., Langou, J., Luszczek, P., & Kurzak, J. (2007). Mixed Precision Iterative Refinement Techniques for the Solution of Dense Linear Systems. The International Journal of High Performance Computing Applications, 21(4), 457-466. doi:10.1177/1094342007084026Dongarra, J. J., Du Croz, J., Hammarling, S., & Duff, I. S. (1990). A set of level 3 basic linear algebra subprograms. ACM Transactions on Mathematical Software, 16(1), 1-17. doi:10.1145/77626.79170Figueroa, S. A. (1995). When is double rounding innocuous? ACM SIGNUM Newsletter, 30(3), 21-26. doi:10.1145/221332.221334Fousse, L., Hanrot, G., Lefèvre, V., Pélissier, P., & Zimmermann, P. (2007). MPFR. ACM Transactions on Mathematical Software, 33(2), 13. doi:10.1145/1236463.1236468Mark Gates Piotr Luszczek Ahmad Abdelfattah Jakub Kurzak Jack Dongarra Konstantin Arturov Cris Cecka and Chip Freitag. 2017. C++ API for BLAS and LAPACK. Technical Report 2 ICL-UT-17-03. Mark Gates Piotr Luszczek Ahmad Abdelfattah Jakub Kurzak Jack Dongarra Konstantin Arturov Cris Cecka and Chip Freitag. 2017. C++ API for BLAS and LAPACK. 
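
    To make the programming interface concrete, here is a minimal usage sketch along the lines described in the article. The flx::floatx<E, M> template name and the floatx.hpp header follow the project's public repository and are assumptions that may differ in detail from the published version.

        #include <cstdio>
        #include <floatx.hpp>  // header name taken from the FloatX repository (assumption)

        // Minimal sketch: floatx<E, M> emulates a format with E exponent bits
        // and M significand bits, backed by a hardware type for efficiency.
        int main() {
            flx::floatx<8, 23> x = 1.0 / 3.0;  // behaves like IEEE binary32
            flx::floatx<5, 10> y = 1.0 / 3.0;  // behaves like IEEE binary16
            std::printf("binary32-like: %.10f\nbinary16-like: %.10f\n",
                        double(x), double(y));
            return 0;
        }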

    First steps towards more numerical reproducibility

    Get PDF
    Numerical reproducibility failures have been reported in several sensitive applications. They mainly come from the finite precision of computer arithmetic: the result of a floating-point computation depends on the arithmetic precision and on the order of the arithmetic operations. Massively parallel HPC, which merges, for instance, many-core CPUs and GPUs, clearly modifies these two parameters, even from run to run on a given computing platform. How can we trust such computed results? This paper presents how three classic approaches in computer arithmetic may provide some first steps towards more numerical reproducibility.
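
    The core point, that the computed result depends on the order of arithmetic operations, can be seen in a few lines of C++: floating-point addition is not associative, so a parallel reduction that regroups its operands, as different thread or process counts naturally do, can change the value it returns.

        #include <cstdio>

        // Floating-point addition is not associative: regrouping a sum
        // changes the rounding and therefore, possibly, the result.
        int main() {
            const double a = 1e100, b = -1e100, c = 1.0;
            const double left  = (a + b) + c;  // 0.0 + 1.0 = 1.0
            const double right = a + (b + c);  // c is absorbed: b + c rounds to b
            std::printf("left = %.1f, right = %.1f\n", left, right);
            return 0;
        }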

    Proceedings of the 7th Conference on Real Numbers and Computers (RNC'7)

    Get PDF
    These are the proceedings of RNC7.