Optimizing Xeon Phi for Interactive Data Analysis
The Intel Xeon Phi manycore processor is designed to provide high performance
matrix computations of the type often performed in data analysis. Common data
analysis environments include Matlab, GNU Octave, Julia, Python, and R.
Achieving optimal performance of matrix operations within data analysis
environments requires tuning the Xeon Phi OpenMP settings, process pinning, and
memory modes. This paper describes matrix multiplication performance results
for Matlab and GNU Octave over a variety of combinations of process counts and
OpenMP threads and Xeon Phi memory modes. These results indicate that using
KMP_AFFINITY=granularity=fine, taskset pinning, and all2all cache memory mode
allows both Matlab and GNU Octave to achieve 66% of the practical peak
performance for process counts ranging from 1 to 64 and OpenMP threads ranging
from 1 to 64. These settings have resulted in generally improved performance
across a range of applications and have enabled our Xeon Phi system to deliver
significant results in a number of real-world applications.
Comment: 6 pages, 5 figures, accepted in IEEE High Performance Extreme
Computing (HPEC) conference 201
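A launch wrapper following the tuned settings above might look like the sketch below. The KMP_AFFINITY value comes from the abstract; the thread count, process-to-core layout, and program name are illustrative assumptions, since the paper does not spell out the exact mapping.

```python
import os

# Affinity setting reported in the abstract; the thread count is a
# hypothetical threads-per-process choice, not a value from the paper.
TUNED_ENV = {
    "KMP_AFFINITY": "granularity=fine",
    "OMP_NUM_THREADS": "16",
}

def taskset_cpu_list(proc_id: int, threads_per_proc: int) -> str:
    """CPU range for one process, packing processes onto consecutive cores
    (a hypothetical layout chosen for illustration)."""
    first = proc_id * threads_per_proc
    return f"{first}-{first + threads_per_proc - 1}"

def launch_command(proc_id: int, threads_per_proc: int, prog: str) -> str:
    """Build the taskset-pinned command line for one worker process."""
    return f"taskset -c {taskset_cpu_list(proc_id, threads_per_proc)} {prog}"

# Example: the third of four Octave processes on a 64-core Xeon Phi.
cmd = launch_command(2, 16, "octave")
```

Running the command would also require exporting the environment settings (e.g. `os.environ.update(TUNED_ENV)`) before spawning the process.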
Toward Reliable and Efficient Message Passing Software for HPC Systems: Fault Tolerance and Vector Extension
As the scale of High-Performance Computing (HPC) systems continues to grow, researchers devote themselves to achieving the best performance when running long computing jobs on these systems. My research focuses on the reliability and efficiency of HPC software.
First, as systems become larger, the mean time to failure (MTTF) of these HPC systems tends to decrease. Handling system failures therefore becomes a prime challenge. My research presents a general design and implementation of an efficient runtime-level failure detection and propagation strategy, targeting large-scale, dynamic systems, that is able to detect both node and process failures. The design uses multiple overlapping topologies to optimize detection and propagation, minimizing the incurred overhead and guaranteeing the scalability of the entire framework. Results from different machines and benchmarks show that my design and implementation outperforms non-HPC solutions significantly, and is competitive with specialized HPC solutions that can manage only MPI applications.
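As a rough illustration of the overlapping-topology idea, the sketch below has each rank observe its next k successors on a ring, so a failed rank is still detected even when one of its observers has failed too. This is a simplified model for exposition, not the runtime's actual protocol.

```python
def ring_observers(rank: int, size: int, k: int = 2) -> list:
    """Ranks observed by `rank`: its next k successors on the ring.
    With k >= 2 the observation sets overlap, so one dead observer
    cannot hide another failure from the rest of the system."""
    return [(rank + i) % size for i in range(1, k + 1)]

def detect_failures(size: int, alive: list, k: int = 2) -> set:
    """Return the ranks reported dead by at least one live observer."""
    dead = set()
    for rank in range(size):
        if not alive[rank]:
            continue  # dead ranks cannot report anything
        for nb in ring_observers(rank, size, k):
            if not alive[nb]:
                dead.add(nb)
    return dead

# Example: ranks 3 and 4 fail together. Rank 4's nearest observer (3) is
# also dead, but the overlap lets rank 2 still observe and report rank 4.
alive = [True] * 8
alive[3] = alive[4] = False
failed = detect_failures(8, alive)
```

A real implementation would propagate the `failed` set through the same topologies rather than scanning global state, which is what keeps the overhead low at scale.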
Second, I endeavor to exploit instruction-level parallelism to achieve optimal performance. Modern processors support long vector extensions, which enable researchers to exploit the potential peak performance of target architectures. Intel introduced Advanced Vector Extension (AVX-512 and AVX2) instructions for the x86 Instruction Set Architecture (ISA), and Arm introduced the Scalable Vector Extension (SVE) with a new set of A64 instructions. Both enable greater parallelism. My research uses long vector reduction instructions to improve the performance of MPI reduction operations, and gather and scatter instructions to speed up the packing and unpacking operations in MPI. The evaluation of the resulting software stack under different scenarios demonstrates that the approach is both efficient and generalizable to many vector architectures.
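The gather/scatter packing idea can be sketched with NumPy fancy indexing standing in for the AVX-512/SVE gather and scatter instructions. The strided layout and the `pack`/`unpack` names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pack(buf: np.ndarray, index: np.ndarray) -> np.ndarray:
    """Gather non-contiguous elements into a contiguous send buffer."""
    return buf[index]  # plays the role of a hardware gather

def unpack(packed: np.ndarray, index: np.ndarray, out: np.ndarray) -> None:
    """Scatter a contiguous receive buffer back to strided positions."""
    out[index] = packed  # plays the role of a hardware scatter

# A strided MPI-like datatype: every 4th element of a 16-element buffer.
idx = np.arange(0, 16, 4)
src = np.arange(16.0)
sendbuf = pack(src, idx)    # contiguous buffer, ready to hand to MPI_Send
dst = np.zeros(16)
unpack(sendbuf, idx, dst)   # restore the strided layout on the receive side
```

The design point is that one gather instruction replaces a scalar copy loop over the strided elements, which is where the packing speedup comes from.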