
    Convolution Operators for Deep Learning Inference on the Fujitsu A64FX Processor

    Paper presented at the 2022 IEEE 34th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), held in Bordeaux, France. The convolution operator is a crucial kernel for many computer vision and signal processing applications that rely on deep learning (DL) technologies. As such, the efficient implementation of this operator has received considerable attention in the past few years for a fair range of processor architectures. In this paper, we follow the technology trend toward integrating long SIMD (single instruction, multiple data) arithmetic units into high performance multicore processors to analyse the benefits of this type of hardware acceleration for latency-constrained DL workloads. For this purpose, we implement and optimise, for the Fujitsu A64FX processor, three distinct methods for the calculation of the convolution, namely the lowering approach, a blocked variant of the direct convolution algorithm, and the Winograd minimal filtering algorithm. Our experimental results include an extensive evaluation of the parallel scalability of these three methods and a comparison of their global performance using three popular DL models and a representative dataset.
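
    As a rough illustration of the first of these methods, the lowering (im2col + GEMM) approach reshapes input patches into a matrix so that the whole convolution reduces to a single matrix multiplication. The sketch below is a minimal NumPy rendition for one input image, stride 1 and no padding; it is not the paper's optimised A64FX code, and all names are illustrative.

```python
import numpy as np

def conv_by_lowering(x, w):
    """Convolution via lowering (im2col + GEMM); stride 1, no padding.
    x: input (C, H, W); w: filters (K, C, R, S). Illustrative only."""
    C, H, W = x.shape
    K, _, R, S = w.shape
    Ho, Wo = H - R + 1, W - S + 1
    # Lower input patches into a (C*R*S) x (Ho*Wo) matrix.
    cols = np.empty((C * R * S, Ho * Wo))
    i = 0
    for c in range(C):
        for r in range(R):
            for s in range(S):
                cols[i] = x[c, r:r + Ho, s:s + Wo].reshape(-1)
                i += 1
    # The whole convolution is now a single GEMM with reshaped filters.
    return (w.reshape(K, -1) @ cols).reshape(K, Ho, Wo)
```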

    Analyzing the impact of the MPI allreduce in distributed training of convolutional neural networks

    For many distributed applications, data communication poses a significant bottleneck in terms of both performance and energy consumption. As more cores are integrated per node, the global performance of the system generally increases, yet it eventually becomes limited by the interconnection network. This is the case for distributed data-parallel training of convolutional neural networks (CNNs), which usually proceeds on a cluster with a small to moderate number of nodes. In this paper, we analyze the performance of the Allreduce collective communication primitive, which is key to efficient data-parallel distributed training of CNNs. Our study targets the distinct realizations of this primitive in three high performance instances of the Message Passing Interface (MPI), namely MPICH, OpenMPI, and IntelMPI, and employs a cluster equipped with state-of-the-art processor and network technologies. In addition, we apply the insights gained from the experimental analysis to the optimization of the TensorFlow framework when running on top of Horovod. Our study reveals that a careful selection of the most convenient MPI library and Allreduce (ARD) realization accelerates the training throughput by a factor of 1.2× compared with the default algorithm in the same MPI library, and by up to 2.8× when comparing distinct MPI libraries in a number of relevant combinations of CNN model+dataset.
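
    The core of the data-parallel scheme being benchmarked is straightforward: each rank computes a gradient on its local batch, and the gradients are summed (then averaged) across ranks with Allreduce. A minimal sketch with mpi4py follows; the toy gradient and the averaging helper are illustrative, not code from the study.

```python
# Minimal mpi4py sketch of the gradient exchange benchmarked above.
# Run with e.g.: mpirun -np 4 python allreduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def average_gradients(local_grad):
    """Sum the gradient across all ranks, then divide by the world size."""
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    return global_grad / comm.Get_size()

# Each rank contributes a toy gradient; the result is identical everywhere.
grad = np.full(8, float(comm.Get_rank()))
avg = average_gradients(grad)
```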

    Efficient and portable Winograd convolutions for multi-core processors

    We take a step forward towards developing high-performance codes for the convolution operator, based on the Winograd algorithm, that are easy to customise for general-purpose processor architectures. In our approach, the portability of the solution is enhanced via the introduction of vector instructions from Intel SSE/AVX2/AVX512 and ARM NEON/SVE to exploit the single-instruction multiple-data capabilities of current processors, as well as OpenMP pragmas to exploit multi-threaded parallelism. While this comes at the cost of sacrificing a fraction of the computational performance, our experimental results on three distinct platforms, equipped with Intel Xeon Skylake, ARM Cortex-A57 and Fujitsu A64FX processors, show that the impact is affordable and still renders a Winograd-based solution that is competitive when compared with the lowering, GEMM-based convolution.
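
    For reference, the Winograd minimal filtering idea can be shown in its smallest instance, F(2,3), which produces two outputs of a 3-tap 1-D correlation with 4 multiplications instead of 6. The NumPy sketch below uses the standard transform matrices; the paper's vectorised SSE/AVX/NEON/SVE kernels are, of course, far more involved.

```python
import numpy as np

# Standard transform matrices for Winograd F(2,3).
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 correlation outputs."""
    return AT @ ((G @ g) * (BT @ d))  # elementwise product: 4 multiplies

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 1.0, 1.0])
# Matches the direct 3-tap correlation on the same data.
assert np.allclose(winograd_f23(d, g),
                   [np.dot(d[i:i + 3], g) for i in range(2)])
```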

    A simulator to assess energy saving strategies and policies in HPC workloads

    In recent years, the power consumption of high performance computing (HPC) clusters has become a growing problem due, for example, to the economic cost of electricity, the emission of carbon dioxide (with a negative impact on the environment), and the generation of heat (which reduces hardware reliability). In past work, we developed EnergySaving cluster, a software package that regulates the number of active nodes in an HPC facility to match the users' demands. In this paper, we extend this work by presenting a simulator for this tool that allows the evaluation and analysis of the benefits of applying different energy-saving strategies and policies, under realistic workloads, to different cluster configurations.
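
    To make the idea concrete, a toy version of such a node-count policy might look as follows: power on just enough nodes to cover the queued demand plus some headroom, and account for the resulting power draw per interval. All names and power figures below are assumptions for illustration, not the actual EnergySaving cluster API.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    total_nodes: int
    active_nodes: int
    idle_power_w: float = 90.0   # assumed per-node idle draw
    busy_power_w: float = 350.0  # assumed per-node busy draw

def step(cluster, nodes_demanded, headroom=2):
    """Track demand with a small headroom; return this interval's power."""
    cluster.active_nodes = min(cluster.total_nodes, nodes_demanded + headroom)
    busy = min(nodes_demanded, cluster.active_nodes)
    idle = cluster.active_nodes - busy
    return busy * cluster.busy_power_w + idle * cluster.idle_power_w

# Replay a toy demand trace (nodes requested per interval).
c = Cluster(total_nodes=64, active_nodes=64)
power_trace = [step(c, d) for d in [40, 10, 10, 55, 5]]
```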

    Performance modeling of the sparse matrix-vector product via convolutional neural networks

    Modeling the execution time of the sparse matrix-vector multiplication (SpMV) on a current CPU architecture is especially complex due to (i) irregular memory accesses; (ii) indirect memory referencing; and (iii) low arithmetic intensity. While analytical models may yield accurate estimates for the total number of cache hits/misses, they often fail to predict the total execution time accurately. In this paper, we depart from the analytic approach to instead leverage convolutional neural networks (CNNs) in order to provide an effective estimation of the performance of the SpMV operation. For this purpose, we present a high-level abstraction of the sparsity pattern of the problem matrix and propose a blockwise strategy to feed the CNN models by blocks of nonzero elements. The experimental evaluation on a representative subset of the matrices from the SuiteSparse Matrix Collection demonstrates the robustness of the CNN models for predicting the SpMV performance on an Intel Haswell core. Furthermore, we show how to generalize the network models to other target architectures to estimate the performance of SpMV on an ARM A57 core. This work was supported by project TIN2017-82972-R from the MINECO, Spain. Manuel F. Dolz was also supported by the Plan GenT project CDEIGENT/2018/014 from the Generalitat Valenciana, Spain. Maria Barreda was also supported by the POSDOC-A/2017/11 project from the Universitat Jaume I. Barreda, M.; Dolz, M. F.; Castaño Alvarez, M. A.; Alonso-Jordá, P.; Quintana-Ortí, E. S. (2020). Performance modeling of the sparse matrix-vector product via convolutional neural networks. The Journal of Supercomputing 76(11):8883-8900. https://doi.org/10.1007/s11227-020-03186-1
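
    One plausible reading of the blockwise abstraction is a fixed-size "density image" of the sparsity pattern that a CNN can consume regardless of the matrix dimensions. The sketch below tiles the matrix into a grid and records the nonzero density per tile; the paper's exact encoding may differ, and the function name is illustrative.

```python
import numpy as np
import scipy.sparse as sp

def block_density_image(A, blocks=32):
    """Fixed-size 'density image' of a sparsity pattern: tile the matrix
    into a blocks x blocks grid and record each tile's share of the
    nonzeros. Illustrative; the paper's encoding may differ."""
    A = sp.coo_matrix(A)
    img = np.zeros((blocks, blocks))
    rows = np.minimum(A.row * blocks // A.shape[0], blocks - 1)
    cols = np.minimum(A.col * blocks // A.shape[1], blocks - 1)
    np.add.at(img, (rows, cols), 1.0)   # count nonzeros per tile
    return img / max(A.nnz, 1)

# e.g. feed block_density_image(A)[None, :, :, None] to a Keras CNN.
```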

    Adapting concurrency throttling and voltage–frequency scaling for dense eigensolvers

    We analyze power dissipation and energy consumption during the execution of high-performance dense linear algebra kernels on multi-core processors. On top of this analysis, we propose and evaluate several strategies to adapt concurrency throttling and the voltage–frequency setting in order to obtain an energy-efficient execution of LAPACK's routine dsytrd. Our strategies take into account the differences between the memory-bound and CPU-bound kernels that govern this routine, and whether the problem data fits into the processor's last-level cache. This work was supported by the CICYT Project TIN2011-23283 of MINECO and FEDER, the EU Project FP7 318793 "EXA2GREEN", and the FPU program of the Ministerio de Educación, Cultura y Deporte.
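
    The gist of such an adaptive strategy can be sketched as a per-kernel lookup: memory-bound kernels tolerate fewer cores and a lower voltage–frequency setting with little performance loss, while CPU-bound kernels get the full core count and frequency. The thread counts, frequencies, and callbacks below are hypothetical placeholders, not the settings used in the paper.

```python
# Illustrative sketch only: per-kernel concurrency throttling plus DVFS.
# Thread counts, frequencies, and the set_threads/set_frequency callbacks
# are hypothetical placeholders, not the paper's measured settings.
CONFIG = {
    "cpu_bound":    (16, 3.0),  # all cores, nominal frequency (assumed)
    "memory_bound": (4, 1.2),   # few cores, low frequency (assumed)
}

def run_kernel(kernel, kernel_class, set_threads, set_frequency):
    """Apply the (threads, GHz) pair for this kernel class, then run it."""
    threads, ghz = CONFIG[kernel_class]
    set_threads(threads)     # concurrency throttling
    set_frequency(ghz)       # voltage-frequency scaling
    return kernel()

# Usage idea: alternate between the kernel classes that govern dsytrd.
result = run_kernel(lambda: sum(range(10**6)), "cpu_bound",
                    set_threads=lambda n: None, set_frequency=lambda f: None)
```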

    Energy-Aware High Performance Computing

    High performance computing centres consume substantial amounts of energy to power large-scale supercomputers and the necessary building and cooling infrastructure. Recently, considerable performance gains have resulted predominantly from developments in multi-core, many-core and accelerator technology. Computing centres rapidly adopted this hardware to serve the increasing demand for computational power. However, further performance increases in large-scale computing systems are limited by the aggregate energy budget required to operate them. Power consumption has become a major cost factor for computing centres. Furthermore, energy consumption results in carbon dioxide emissions, a hazard for the environment and public health, and in heat, which reduces the reliability and lifetime of hardware components. Energy efficiency is therefore crucial in high performance computing.

    Anti-retinal IgG antibodies in patients with early and advanced type 2 macular telangiectasia

    Type 2 idiopathic macular telangiectasia (MacTel-2) is a progressive adult-onset macular disease associated with bilateral perifoveal vascular changes, Müller cell degeneration and increased blood-retinal barrier permeability. The pathophysiological mechanisms of MacTel-2 remain unclear; however, it was previously reported that anti-retinal antibodies in MacTel-2 patients are a significant feature of the disease. In this study, we aimed to compare the prevalence of anti-retinal antibodies in patients with MacTel-2, healthy controls and patients with other retinal diseases. MacTel-2 patients diagnosed with multimodal imaging were enrolled and their disease severities were graded using spectral-domain optical coherence tomography. For comparison, patients with age-related macular degeneration (AMD), inherited retinal diseases (IRDs) or no retinal disease (healthy controls) were recruited as controls. Blood serum samples were screened for immunoglobulin G anti-retinal antibodies by western blotting, followed by densitometry analysis. Odds ratios (OR) with 95% confidence intervals (CI) were calculated, and p < 0.05 was considered statistically significant. Overall, anti-retinal antibody-positive cases were older (64 ± 15 vs 53 ± 17 years, p < 0.001) and females were more likely to develop anti-retinal antibodies (OR: 2.41, CI: 1.12–5.18). The frequency of anti-retinal antibody detection in MacTel-2 patients (n = 42, 36%) was not significantly different from that in healthy controls (n = 52, 25%) or IRD patients (n = 18, 25%), and the majority of MacTel-2 patients had no anti-retinal antibodies. In contrast, the frequency of anti-retinal antibody detection was significantly higher in patients with AMD (n = 15, 73%, p < 0.001). The lack of a greater anti-retinal antibody frequency or specificity in the MacTel-2 cohort suggests that antibody-mediated immunological mechanisms may play a less significant role in MacTel-2 disease pathogenesis. © 2022 The Author
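
    For readers unfamiliar with the statistics quoted above, an odds ratio and its Wald 95% confidence interval can be computed from a 2x2 contingency table with the standard textbook formula, as in the sketch below; the paper does not state which variant of the calculation it employed.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = antibody-positive cases/controls, c/d = antibody-negative.
    Standard textbook formula, shown here only for illustration."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(odds_ratio) - z * se_log_or)
    hi = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, (lo, hi)
```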