MuMax: a new high-performance micromagnetic simulation tool
We present MuMax, a general-purpose micromagnetic simulation tool running on
Graphical Processing Units (GPUs). MuMax is designed for high performance
computations and specifically targets large simulations, for which speedups of
more than a factor of 100 over the CPU-based OOMMF program developed at NIST
can easily be obtained. MuMax aims to be general and broadly applicable. It
solves the classical Landau-Lifshitz equation taking into account the
magnetostatic, exchange and anisotropy interactions, thermal effects and
spin-transfer torque. Periodic boundary conditions can optionally be imposed. A
spatial discretization using finite differences in 2 or 3 dimensions can be
employed. MuMax is publicly available as open-source software and can thus be
freely used and extended by the community. Due to its high computational
performance, MuMax should open up the possibility of running extensive
simulations that would be nearly inaccessible with typical CPU-based
simulators.
Comment: To be published in JMM
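
For reference, the classical Landau-Lifshitz equation that MuMax integrates can
be written, in one common convention with gyromagnetic ratio \gamma, damping
constant \alpha, and saturation magnetization M_s (the abstract does not spell
out the exact form used, so the prefactors here are an assumption), as

    \frac{\partial \mathbf{M}}{\partial t}
      = -\frac{\gamma}{1+\alpha^{2}}\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}
        -\frac{\alpha\gamma}{M_{s}(1+\alpha^{2})}\,
         \mathbf{M}\times\bigl(\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}\bigr),

where the effective field \mathbf{H}_{\mathrm{eff}} collects the magnetostatic,
exchange, and anisotropy contributions listed above, plus a stochastic thermal
field and the spin-transfer-torque terms.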
Optimizing Memory Efficiency for Convolution Kernels on Kepler GPUs
Convolution is a fundamental operation in many applications, such as computer
vision, natural language processing, and image processing. Recent successes of
convolutional neural networks in various deep learning applications put even
higher demand on fast convolution. The high computation throughput and memory
bandwidth of graphics processing units (GPUs) make GPUs a natural choice for
accelerating convolution operations. However, maximally exploiting the
available memory bandwidth of GPUs for convolution is a challenging task. This
paper introduces a general model to address the mismatch between the memory
bank width of GPUs and computation data width of threads. Based on this model,
we develop two convolution kernels, one for the general case and the other for
a special case with one input channel. By carefully optimizing memory access
patterns and computation patterns, we design a communication-optimized kernel
for the special case and a communication-reduced kernel for the general case.
Experimental data based on implementations on Kepler GPUs show that our kernels
achieve 5.16X and 35.5% average performance improvement over the latest cuDNN
library, for the special case and the general case, respectively.
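
The paper's kernels themselves are not reproduced here, but the underlying
theme of matching memory-access patterns to the hardware can be illustrated
with a generic shared-memory-tiled 1-D convolution in CUDA. This is a minimal
sketch under assumed parameters (BLOCK, RADIUS, and the constant-memory filter
d_filter are illustrative names, not the authors' code): staging a tile plus
its halo through shared memory turns scattered per-thread reads into coalesced
full-width global transactions, whereas the paper's communication-optimized
and communication-reduced kernels go further by explicitly modeling the
mismatch between memory bank width and per-thread data width.

    #include <cuda_runtime.h>

    #define BLOCK  256                 // threads per block (illustrative)
    #define RADIUS 3                   // filter radius -> filter length 7

    __constant__ float d_filter[2 * RADIUS + 1];  // filter taps

    // Tiled 1-D convolution: each block stages BLOCK + 2*RADIUS input
    // elements through shared memory so that global loads coalesce into
    // full-width transactions and each element is read from DRAM only
    // once per block.
    __global__ void conv1d_tiled(const float *in, float *out, int n)
    {
        __shared__ float tile[BLOCK + 2 * RADIUS];

        int gid = blockIdx.x * BLOCK + threadIdx.x;  // global element index
        int lid = threadIdx.x + RADIUS;              // position in the tile

        // Main body of the tile (zero-padded at the array ends).
        tile[lid] = (gid < n) ? in[gid] : 0.0f;

        // The first RADIUS threads also fetch the left and right halos.
        if (threadIdx.x < RADIUS) {
            int left  = gid - RADIUS;
            int right = gid + BLOCK;
            tile[lid - RADIUS] = (left  >= 0) ? in[left]  : 0.0f;
            tile[lid + BLOCK]  = (right <  n) ? in[right] : 0.0f;
        }
        __syncthreads();

        // Dot product of the filter with the window around this element.
        if (gid < n) {
            float acc = 0.0f;
            #pragma unroll
            for (int k = -RADIUS; k <= RADIUS; ++k)
                acc += d_filter[k + RADIUS] * tile[lid + k];
            out[gid] = acc;
        }
    }

Launched as conv1d_tiled<<<(n + BLOCK - 1) / BLOCK, BLOCK>>>(d_in, d_out, n)
after copying the taps with cudaMemcpyToSymbol, each input element crosses the
DRAM bus once per block instead of once per filter tap.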