Towards an Achievable Performance for the Loop Nests
Numerous code optimization techniques, including loop nest optimizations,
have been developed over the last four decades. Loop optimization techniques
transform loop nests to improve the performance of the code on a target
architecture, including exposing parallelism. Finding and evaluating an
optimal, semantics-preserving sequence of transformations is a complex problem.
The search is guided by heuristics and/or analytical models, and there is no way
of knowing how close the resulting sequence gets to optimal performance or
whether any headroom for improvement remains. This paper makes two contributions.
First, it uses a comparative analysis of loop optimizations/transformations across
multiple compilers to determine how much headroom may exist for each compiler.
Second, it presents an approach that characterizes loop nests by their hardware
performance counter values, together with a Machine Learning approach that
predicts which compiler will generate the fastest code for a loop nest. The
prediction is made for both auto-vectorized, serial compilation and for
auto-parallelization. The results show that the headroom for state-of-the-art
compilers ranges from 1.10x to 1.42x for the serial code and from 1.30x to
1.71x for the auto-parallelized code. These results are based on the Machine
Learning predictions.
Comment: Accepted at the 31st International Workshop on Languages and Compilers for Parallel Computing (LCPC 2018).
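As a minimal illustration of the prediction step only, the sketch below uses a 1-nearest-neighbour classifier over a hypothetical vector of normalized hardware-counter values. The feature layout, the training samples, and the choice of classifier are placeholders for exposition, not the paper's actual model.

/* Minimal sketch (not the paper's model): a 1-nearest-neighbour predictor
 * that picks the compiler whose training loop nest has the closest
 * hardware-counter signature.  Feature layout and training data are
 * hypothetical placeholders. */
#include <stdio.h>

#define NFEATURES 4   /* e.g. normalized cache misses, branch misses, ... */

struct sample {
    double features[NFEATURES];
    int fastest_compiler;         /* label: index of the best compiler */
};

static double dist2(const double *a, const double *b)
{
    double d = 0.0;
    for (int i = 0; i < NFEATURES; ++i)
        d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

/* Return the label of the nearest training sample. */
static int predict_compiler(const struct sample *train, int n, const double *query)
{
    int best = train[0].fastest_compiler;
    double bestd = dist2(train[0].features, query);
    for (int i = 1; i < n; ++i) {
        double d = dist2(train[i].features, query);
        if (d < bestd) { bestd = d; best = train[i].fastest_compiler; }
    }
    return best;
}

int main(void)
{
    /* Hypothetical training set: counter signatures of loop nests labelled
     * with the compiler that produced the fastest code for them. */
    struct sample train[] = {
        { {0.82, 0.10, 0.30, 0.55}, 0 },
        { {0.15, 0.60, 0.70, 0.20}, 1 },
        { {0.40, 0.45, 0.10, 0.90}, 2 },
    };
    double query[NFEATURES] = {0.20, 0.55, 0.65, 0.25};
    printf("predicted fastest compiler: %d\n", predict_compiler(train, 3, query));
    return 0;
}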
Architecture-Aware Configuration and Scheduling of Matrix Multiplication on Asymmetric Multicore Processors
Asymmetric multicore processors (AMPs) have recently emerged as an appealing
technology for severely energy-constrained environments, especially in mobile
appliances where heterogeneity in applications is mainstream. In addition,
given the growing interest in low-power high-performance computing, this type
of architecture is also being investigated as a means to improve the
throughput-per-Watt of complex scientific applications.
In this paper, we design and embed several architecture-aware optimizations
into a multi-threaded general matrix multiplication (gemm), a key operation of
the BLAS, in order to obtain a high performance implementation for ARM
big.LITTLE AMPs. Our solution is based on the reference implementation of gemm
in the BLIS library, and integrates a cache-aware configuration as well as
asymmetric static and dynamic scheduling strategies that carefully tune and
distribute the operation's micro-kernels among the big and LITTLE cores of the
target processor. The experimental results on a Samsung Exynos 5422, a
system-on-chip with ARM Cortex-A15 and Cortex-A7 clusters that implements the
big.LITTLE model, show that our cache-aware versions of gemm with asymmetric
scheduling attain significant performance gains over their
architecture-oblivious counterparts, while exploiting all the resources of the
AMP to deliver considerable energy efficiency.
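The following sketch conveys the flavour of the asymmetric-static strategy: a blocked GEMM whose column range is split among threads in proportion to an assumed big/LITTLE speed ratio. The 4+4 core layout, the 2:1 speed ratio, and the naive stand-in micro-kernel are illustrative assumptions, not the BLIS-based implementation described in the paper.

/* Sketch of asymmetric-static work partitioning for a blocked GEMM on a
 * big.LITTLE CPU.  The core counts, the 2:1 big/LITTLE speed ratio and the
 * naive micro-kernel are illustrative assumptions only. */
#include <omp.h>

#define N_BIG        4
#define N_LITTLE     4
#define BIG_SHARE    2      /* assume a big core is roughly 2x faster */
#define LITTLE_SHARE 1

/* Naive stand-in for the architecture-specific micro-kernel:
 * rank-k update of columns [jc, jc+nb) of C (column-major storage). */
static void microkernel(int m, int nb, int k,
                        const double *A, const double *B, double *C,
                        int ldc, int jc)
{
    for (int j = jc; j < jc + nb; ++j)
        for (int p = 0; p < k; ++p)
            for (int i = 0; i < m; ++i)
                C[i + j * ldc] += A[i + p * m] * B[p + j * k];
}

void gemm_asymmetric(int m, int n, int k,
                     const double *A, const double *B, double *C)
{
    int nthreads = N_BIG + N_LITTLE;
    int total_share = N_BIG * BIG_SHARE + N_LITTLE * LITTLE_SHARE;

    #pragma omp parallel num_threads(nthreads)
    {
        int t = omp_get_thread_num();
        /* Work shares assigned to all threads before this one. */
        int before = (t < N_BIG)
                   ? t * BIG_SHARE
                   : N_BIG * BIG_SHARE + (t - N_BIG) * LITTLE_SHARE;
        int mine   = (t < N_BIG) ? BIG_SHARE : LITTLE_SHARE;

        /* Static column partition proportional to assumed core speed. */
        int j0 = (int)((long)n * before / total_share);
        int j1 = (int)((long)n * (before + mine) / total_share);

        if (j1 > j0)
            microkernel(m, j1 - j0, k, A, B, C, m, j0);
    }
}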
Automation of Determination of Optimal Intra-Compute Node Parallelism
Maximizing the productivity of modern multicore and manycore chips requires optimizing parallelism at the compute node level. This is, however, a complex multi-step process: it is an iterative method that requires determining the optimal degree of parallel scalability and optimizing memory access behavior. Further, there are multiple cases to consider: programs which use only MPI or only OpenMP, and hybrid (MPI+OpenMP) programs. This paper presents a set of three coordinated workflows for determining the optimal parallelism at the program level for MPI programs and at the loop level for hybrid (MPI+OpenMP) cases. The paper also details mostly automated implementations of these workflows using the PerfExpert infrastructure. Finally, the paper presents case studies demonstrating both the applicability and the effectiveness of optimizing parallelism at the compute node level. The results shown in the paper provide valuable information for further advancing the full automation of the workflows. The software implementing the parallelism scalability optimization is open source and available for download.
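A minimal sketch of the scalability-determination step is shown below: time a representative OpenMP kernel at increasing thread counts and stop when parallel efficiency drops below a cut-off. The kernel, the doubling schedule, and the 0.7 efficiency threshold are placeholder choices for illustration, not the PerfExpert workflows themselves.

/* Sketch of the "find the optimal degree of parallelism" step: time a
 * representative OpenMP kernel at increasing thread counts and stop when
 * parallel efficiency falls below a threshold.  Kernel, schedule and the
 * 0.7 cut-off are placeholder choices. */
#include <omp.h>
#include <stdio.h>

#define N (1 << 22)

static double kernel(int nthreads)
{
    static double x[N];
    double t0 = omp_get_wtime();
    #pragma omp parallel for num_threads(nthreads)
    for (long i = 0; i < N; ++i)
        x[i] = 2.5 * x[i] + 1.0;            /* memory-bound sweep */
    return omp_get_wtime() - t0;
}

int main(void)
{
    int max_threads = omp_get_max_threads();
    double t1 = kernel(1);                  /* serial baseline */
    int best = 1;

    for (int p = 2; p <= max_threads; p *= 2) {
        double tp = kernel(p);
        double efficiency = t1 / (tp * p);  /* speedup / thread count */
        printf("%2d threads: %.4fs, efficiency %.2f\n", p, tp, efficiency);
        if (efficiency < 0.7)               /* scalability knee reached */
            break;
        best = p;
    }
    printf("selected degree of parallelism: %d\n", best);
    return 0;
}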
Large scale ab initio calculations based on three levels of parallelization
We suggest and implement a parallelization scheme based on an efficient
multiband eigenvalue solver, called the locally optimal block preconditioned
conjugate gradient (LOBPCG) method, and using an optimized three-dimensional
(3D) fast Fourier transform (FFT) in the ab initio plane-wave code ABINIT. In
addition to the standard data partitioning over processors corresponding to
different k-points, we introduce data partitioning with respect to blocks of
bands as well as spatial partitioning in the Fourier space of coefficients over
the plane-wave basis set used in ABINIT. This k-points-multiband-FFT
parallelization avoids any collective communications on the whole set of
processors relying instead on one-dimensional communications only. For a single
k-point, super-linear scaling is achieved for up to 100 processors due to an
extensive use of hardware optimized BLAS, LAPACK, and SCALAPACK routines,
mainly in the LOBPCG routine. We observe good performance up to 200 processors.
With 10 k-points our three-way data partitioning results in linear scaling up
to 1000 processors for a practical system used for testing.
Comment: 8 pages, 5 figures. Accepted to Computational Materials Science.
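The communicator layout behind such a three-way distribution can be sketched as follows: a 3D process grid over the (k-point, band, FFT) directions, with one MPI sub-communicator per direction so that collectives remain one-dimensional. The 2x2x2 grid shape and the split keys below are illustrative assumptions, not ABINIT's actual implementation.

/* Sketch of a three-way data distribution: a 3D process grid over
 * (k-points, band blocks, FFT/plane-wave coefficients), with one
 * sub-communicator per direction so collectives stay one-dimensional.
 * The 2x2x2 grid is an illustrative assumption. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Grid extents: np_kpt * np_band * np_fft must equal 'size'. */
    int np_kpt = 2, np_band = 2, np_fft = 2;
    if (size != np_kpt * np_band * np_fft) {
        if (rank == 0)
            fprintf(stderr, "run with %d ranks\n", np_kpt * np_band * np_fft);
        MPI_Finalize();
        return 1;
    }

    /* Coordinates of this rank in the (kpt, band, fft) grid. */
    int my_kpt  =  rank / (np_band * np_fft);
    int my_band = (rank /  np_fft) % np_band;
    int my_fft  =  rank %  np_fft;

    /* One communicator per direction: ranks sharing the other two
     * coordinates land in the same 1D communicator, so e.g. the
     * distributed FFT only communicates inside comm_fft. */
    MPI_Comm comm_fft, comm_band, comm_kpt;
    MPI_Comm_split(MPI_COMM_WORLD, my_kpt * np_band + my_band, my_fft,  &comm_fft);
    MPI_Comm_split(MPI_COMM_WORLD, my_kpt * np_fft  + my_fft,  my_band, &comm_band);
    MPI_Comm_split(MPI_COMM_WORLD, my_band * np_fft + my_fft,  my_kpt,  &comm_kpt);

    printf("rank %d -> (kpt %d, band %d, fft %d)\n", rank, my_kpt, my_band, my_fft);

    MPI_Comm_free(&comm_fft);
    MPI_Comm_free(&comm_band);
    MPI_Comm_free(&comm_kpt);
    MPI_Finalize();
    return 0;
}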
Parallelization Strategies for Density Matrix Renormalization Group Algorithms on Shared-Memory Systems
Shared-memory parallelization (SMP) strategies for density matrix
renormalization group (DMRG) algorithms enable the treatment of complex systems
in solid state physics. We present two different approaches by which
parallelization of the standard DMRG algorithm can be accomplished in an
efficient way. The methods are illustrated with DMRG calculations of the
two-dimensional Hubbard model and the one-dimensional Holstein-Hubbard model on
contemporary SMP architectures. The parallelized code shows good scalability up
to at least eight processors and allows us to solve problems which exceed the
capability of sequential DMRG calculations.
Comment: 18 pages, 9 figures.
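As a rough illustration of one shared-memory strategy of this kind, the sketch below parallelizes a superblock product of the form y += sum_k A_k psi B_k^T over the operator pairs, using thread-private accumulators to avoid races. The dense matrices, the dynamic schedule, and the term-level decomposition are illustrative assumptions; a real DMRG code would exploit quantum-number block sparsity.

/* Sketch of one SMP strategy for a DMRG superblock multiplication
 * y += sum_k (A_k x B_k) psi, with the wavefunction stored as an
 * ml x mr matrix: each OpenMP thread processes a subset of the operator
 * pairs into a private buffer, and the buffers are summed at the end.
 * Dense row-major matrices are an illustrative simplification. */
#include <omp.h>
#include <stdlib.h>

/* out += A * psi * B^T  (A: ml x ml, B: mr x mr, psi/out: ml x mr). */
static void apply_term(int ml, int mr, const double *A, const double *B,
                       const double *psi, double *out)
{
    for (int i = 0; i < ml; ++i)
        for (int j = 0; j < mr; ++j) {
            double s = 0.0;
            for (int p = 0; p < ml; ++p)
                for (int q = 0; q < mr; ++q)
                    s += A[i * ml + p] * psi[p * mr + q] * B[j * mr + q];
            out[i * mr + j] += s;
        }
}

void superblock_apply(int nterms, int ml, int mr,
                      const double **A, const double **B,
                      const double *psi, double *y)
{
    #pragma omp parallel
    {
        /* Thread-private accumulator avoids races on y. */
        double *local = calloc((size_t)ml * mr, sizeof *local);

        #pragma omp for schedule(dynamic)
        for (int k = 0; k < nterms; ++k)
            apply_term(ml, mr, A[k], B[k], psi, local);

        /* Combine the per-thread partial results. */
        #pragma omp critical
        for (int i = 0; i < ml * mr; ++i)
            y[i] += local[i];

        free(local);
    }
}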