Compression of volume-surface integral equation matrices via Tucker decomposition for magnetic resonance applications
In this work, we propose a method for the compression of the coupling matrix in volume-surface integral equation (VSIE) formulations. VSIE methods are used for electromagnetic analysis in magnetic resonance imaging (MRI) applications, for which the coupling matrix models the interactions between the coil and the body. We show that these effects can be represented as independent interactions between remote elements in 3D tensor formats, and subsequently decomposed with the Tucker model. Our method can work in tandem with the adaptive cross approximation technique to provide fast solutions of VSIE problems. We demonstrate that our compression approach enables the use of VSIE matrices with otherwise prohibitive memory requirements, by allowing the effective use of modern graphics processing units (GPUs) to accelerate the arising matrix-vector products. This is critical for performing numerical MRI simulations at clinical voxel resolutions in feasible computation times. In this paper, we demonstrate that the VSIE matrix-vector products needed to calculate the electromagnetic field produced by an MRI coil inside a numerical body model with mm voxel resolution can be performed in seconds on a GPU, after compressing the associated coupling matrix from the TB to the MB scale.
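As a rough illustration of the compression principle (and not the paper's actual VSIE implementation), the NumPy sketch below computes a truncated higher-order SVD, one standard way to obtain a Tucker decomposition of a 3D tensor; the smooth test tensor, sizes, and tolerance are hypothetical stand-ins.

```python
import numpy as np

def tucker_hosvd(T, eps=1e-8):
    """Compress a 3D tensor into a Tucker core G and factor matrices U1..U3."""
    factors = []
    for mode in range(3):
        # Unfold T along `mode` and take a truncated SVD of the unfolding.
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, s, _ = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))  # truncated multilinear rank
        factors.append(U[:, :r])
    # Core tensor: contract T with each factor's transpose on its mode.
    G = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
    return G, factors

def tucker_reconstruct(G, factors):
    return np.einsum('abc,ia,jb,kc->ijk', G, *factors)

# Usage: a smooth interaction-like tensor is numerically low-rank.
x = np.linspace(0.0, 1.0, 60)
T = np.exp(-np.add.outer(np.add.outer(x, x), x))   # shape (60, 60, 60)
G, Us = tucker_hosvd(T, eps=1e-12)
err = np.linalg.norm(tucker_reconstruct(G, Us) - T) / np.linalg.norm(T)
ratio = T.size / (G.size + sum(U.size for U in Us))
print(f"relative error {err:.1e}, compression ratio {ratio:.0f}x")
```

Because interactions between remote elements vary smoothly, the multilinear ranks stay small, which is what lets a core-plus-factors representation shrink the stored coupling data by orders of magnitude.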
On the Compression of Translation Operator Tensors in FMM-FFT-Accelerated SIE Simulators via Tensor Decompositions
Tensor decomposition methodologies are proposed to reduce the memory
requirement of translation operator tensors arising in the fast multipole
method-fast Fourier transform (FMM-FFT)-accelerated surface integral equation
(SIE) simulators. These methodologies leverage Tucker, hierarchical Tucker
(H-Tucker), and tensor train (TT) decompositions to compress the FFT'ed
translation operator tensors stored in three-dimensional (3D) and
four-dimensional (4D) array formats. Extensive numerical tests are performed to demonstrate the memory savings achieved by, and the computational overhead introduced by, these methodologies for different simulation parameters. Numerical results show that the H-Tucker-based methodology for the 4D array format yields the maximum memory saving, while the Tucker-based methodology for the 3D array format introduces the minimum computational overhead. For many practical scenarios, all methodologies yield a significant reduction in the memory requirement of the translation operator tensors while imposing negligible or acceptable computational overhead.
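For concreteness, the following NumPy sketch implements one of the three formats compared above, the tensor train (TT), via the standard TT-SVD procedure; the smooth 4D test tensor and truncation tolerance are illustrative assumptions, not the simulator's actual FFT'ed translation operators.

```python
import numpy as np

def tt_svd(T, eps=1e-8):
    """Decompose a d-way tensor into tensor-train cores via sequential SVDs."""
    dims, d = T.shape, T.ndim
    cores, r, C = [], 1, T.reshape(1, -1)
    for k in range(d - 1):
        C = C.reshape(r * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))  # truncated TT rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        C = s[:rk, None] * Vt[:rk]                # carry the remainder forward
        r = rk
    cores.append(C.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# Usage: a 4D tensor sampled from a smooth kernel has small TT ranks.
g = np.linspace(1.0, 2.0, 12)
T = 1.0 / (g[:, None, None, None] + g[None, :, None, None]
           + g[None, None, :, None] + g[None, None, None, :])
cores = tt_svd(T, eps=1e-10)
err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
print(f"TT ranks {[c.shape[2] for c in cores[:-1]]}, relative error {err:.1e}")
```

The storage trade-off the abstract measures follows directly from this structure: the cores replace the full d-way array, at the cost of the sequential contractions needed whenever the operator is applied.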
FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
Convolution models with long filters have demonstrated state-of-the-art
reasoning abilities in many long-sequence tasks but lag behind the most
optimized Transformers in wall-clock time. A major bottleneck is the Fast
Fourier Transform (FFT)--which allows long convolutions to run in O(N log N) time in sequence length but has poor hardware utilization. In this paper,
we study how to optimize the FFT convolution. We find two key bottlenecks: the
FFT does not effectively use specialized matrix multiply units, and it incurs
expensive I/O between layers of the memory hierarchy. In response, we propose
FlashFFTConv. FlashFFTConv uses a matrix decomposition that computes the FFT
using matrix multiply units and enables kernel fusion for long sequences,
reducing I/O. We also present two sparse convolution algorithms--1) partial
convolutions and 2) frequency-sparse convolutions--which can be implemented
simply by skipping blocks in the matrix decomposition, enabling further
opportunities for memory and compute savings. FlashFFTConv speeds up exact FFT convolutions by up to 7.93× over PyTorch and achieves up to 4.4× speedup end-to-end. Given the same compute budget, FlashFFTConv allows
Hyena-GPT-s to achieve 2.3 points better perplexity on the PILE and
M2-BERT-base to achieve 3.3 points higher GLUE score--matching models with
twice the parameter count. FlashFFTConv also achieves 96.1% accuracy on
Path-512, a high-resolution vision task where no model had previously achieved
better than 50%. Furthermore, partial convolutions enable longer-sequence
models--yielding the first DNA model that can process the longest human genes
(2.3M base pairs)--and frequency-sparse convolutions speed up pretrained models
while maintaining or improving model quality.
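The matrix decomposition underlying this approach is in the Cooley-Tukey family: a length-N FFT with N = N1·N2 factors into batches of small dense DFT matrix multiplies plus a pointwise twiddle correction, which is exactly the workload matrix multiply units accelerate. The NumPy sketch below shows the classical four-step form of this factorization; it is illustrative only, not FlashFFTConv's fused CUDA kernel, and the sizes are arbitrary.

```python
import numpy as np

def dft_matrix(n):
    """Dense DFT matrix F[j, k] = exp(-2*pi*1j*j*k/n)."""
    j = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(j, j) / n)

def fft_as_matmul(x, n1, n2):
    """Length-(n1*n2) FFT computed as two dense matrix multiplies."""
    N = n1 * n2
    X = x.reshape(n2, n1).T                       # X[a, b] = x[a + n1*b]
    A = X @ dft_matrix(n2)                        # n1 DFTs of length n2
    twiddle = np.exp(-2j * np.pi
                     * np.outer(np.arange(n1), np.arange(n2)) / N)
    Y = dft_matrix(n1) @ (twiddle * A)            # n2 DFTs of length n1
    return Y.reshape(-1)                          # y[k2 + n2*k1] = Y[k1, k2]

# Usage: the factorization reproduces the direct FFT.
x = np.random.default_rng(0).standard_normal(1024)
assert np.allclose(fft_as_matmul(x, 32, 32), np.fft.fft(x))
```

Skipping blocks of these matrix multiplies is what makes the partial and frequency-sparse convolution variants described above straightforward to implement.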
A review of nonlinear FFT-based computational homogenization methods
Since their inception, computational homogenization methods based on the fast Fourier transform (FFT) have grown in popularity, establishing themselves as a powerful tool applicable to complex, digitized microstructures. At the same time, the understanding of the underlying principles has grown, in terms of both discretization schemes and solution methods, leading to improvements of the original approach and extending the applications. This article provides a condensed overview of results scattered throughout the literature and guides the reader to the current state of the art in nonlinear computational homogenization methods using the FFT.
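To make the core idea concrete, the sketch below implements the original fixed-point ("basic") scheme of Moulinec and Suquet for the simplest setting, scalar thermal conduction on a periodic 2D pixel grid; the conductivity field, reference medium, and tolerances are illustrative choices, and the review covers far more refined discretizations and solvers.

```python
import numpy as np

def ms_basic_scheme(k, E, k0=None, tol=1e-8, max_iter=500):
    """Moulinec-Suquet fixed-point iteration for periodic thermal conduction.

    k : (n, n) conductivity field; E : (2,) prescribed mean temperature gradient.
    Returns the local gradient field e of shape (2, n, n).
    """
    n = k.shape[0]
    if k0 is None:
        k0 = 0.5 * (k.min() + k.max())                   # reference medium
    xi = np.fft.fftfreq(n)
    XI = np.stack(np.meshgrid(xi, xi, indexing='ij'))    # (2, n, n) frequencies
    xi2 = (XI ** 2).sum(axis=0)
    xi2[0, 0] = 1.0                                      # avoid division by zero
    e = np.broadcast_to(E[:, None, None], (2, n, n)).copy()
    for _ in range(max_iter):
        tau = (k - k0) * e                               # polarization field
        tau_hat = np.fft.fft2(tau)
        # Green operator of the reference medium, applied in Fourier space:
        # e_hat <- -xi (xi . tau_hat) / (k0 |xi|^2), nonzero frequencies only.
        proj = (XI * tau_hat).sum(axis=0) / (k0 * xi2)
        e_new_hat = -XI * proj
        e_new_hat[:, 0, 0] = E * n * n                   # enforce the mean gradient
        e_new = np.fft.ifft2(e_new_hat).real
        if np.linalg.norm(e_new - e) < tol * np.linalg.norm(e):
            return e_new
        e = e_new
    return e

# Usage: circular inclusion in a matrix, unit mean gradient in the x direction.
n = 64
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
k = np.where((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2, 10.0, 1.0)
e = ms_basic_scheme(k, np.array([1.0, 0.0]))
print(f"apparent k_xx ~ {(k * e[0]).mean():.3f}")       # effective conductivity
```

On this example the iteration converges because the material contrast is moderate; the accelerated solvers and improved discretizations surveyed in the review exist largely because the basic scheme degrades as the contrast grows.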