Large-Scale Discrete Fourier Transform on TPUs
In this work, we present two parallel algorithms for the large-scale discrete
Fourier transform (DFT) on Tensor Processing Unit (TPU) clusters. The two
parallel algorithms are associated with two formulations of DFT: one is based
on the Kronecker product, to be specific, dense matrix multiplications between
the input data and the Vandermonde matrix, denoted as KDFT in this work; the
other is based on the famous Cooley-Tukey algorithm and phase adjustment,
denoted as FFT in this work. Both KDFT and FFT formulations take full advantage
of TPU's strength in matrix multiplications. The KDFT formulation allows direct
use of nonuniform inputs without an additional step. In the two parallel
algorithms, the same strategy of data decomposition is applied to the input
data. Through this decomposition, the dense matrix multiplications in KDFT
and FFT are kept local to each TPU core and can be performed entirely in
parallel. Communication among TPU cores is achieved through a one-shuffle
scheme in both parallel algorithms, in which sending and receiving data take
place simultaneously between neighboring cores and along the same direction
on the interconnect network. The one-shuffle scheme is designed for the
interconnect topology of TPU clusters, minimizing the time required by the
communication among TPU cores. Both KDFT and FFT are implemented in TensorFlow.
The three-dimensional complex DFT is performed on an example problem with a
full TPU Pod: the run time of KDFT is 12.66 seconds and that of FFT is 8.3
seconds. Scaling analysis is provided to demonstrate the high parallel
efficiency of the two DFT implementations on TPUs.
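The matmul-based formulation the abstract calls KDFT can be illustrated compactly: the DFT of a length-N signal is multiplication by the Vandermonde matrix F[j, k] = exp(-2*pi*i*j*k/N), which maps directly onto matrix-multiply hardware. The sketch below (function and variable names are illustrative, not from the paper's implementation) shows the idea in NumPy:

```python
import numpy as np

def dft_matmul(x):
    """Compute the 1-D DFT as a dense matrix multiplication.

    Sketch of the Vandermonde-matrix (KDFT-style) formulation: the
    transform is a single dense matmul, the operation TPUs accelerate.
    O(N^2) work versus O(N log N) for an FFT, but trivially expressible
    on matrix-multiply units.
    """
    n = len(x)
    j = np.arange(n)
    # Vandermonde (DFT) matrix: F[j, k] = exp(-2*pi*i*j*k / n)
    fourier = np.exp(-2j * np.pi * np.outer(j, j) / n)
    return fourier @ x

x = np.random.rand(64)
# agrees with the library FFT up to floating-point tolerance
assert np.allclose(dft_matmul(x), np.fft.fft(x))
```

Because the transform is just a matrix product, nonuniform sample positions can be handled by building the matrix from the actual sample coordinates, which is the property the abstract notes for nonuniform inputs.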
FNet: Mixing Tokens with Fourier Transforms
We show that Transformer encoder architectures can be massively sped up, with
limited accuracy costs, by replacing the self-attention sublayers with simple
linear transformations that "mix" input tokens. These linear transformations,
along with standard nonlinearities in feed-forward layers, prove competent at
modeling semantic relationships in several text classification tasks. Most
surprisingly, we find that replacing the self-attention sublayer in a
Transformer encoder with a standard, unparameterized Fourier Transform achieves
92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains
nearly seven times faster on GPUs and twice as fast on TPUs. The resulting
model, FNet, also scales very efficiently to long inputs. Specifically, when
compared to the "efficient" Transformers on the Long Range Arena benchmark,
FNet matches the accuracy of the most accurate models, but is faster than the
fastest models across all sequence lengths on GPUs (and across relatively
shorter lengths on TPUs). Finally, FNet has a light memory footprint and is
particularly efficient at smaller model sizes: for a fixed speed and accuracy
budget, small FNet models outperform Transformer counterparts.
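The core replacement FNet makes is simple enough to sketch in a few lines: the self-attention sublayer is swapped for a 2-D discrete Fourier transform over the sequence and hidden dimensions, keeping only the real part. A minimal illustration (the real model wraps this in residual connections, layer normalization, and feed-forward sublayers, which are omitted here):

```python
import numpy as np

def fourier_mix(embeddings):
    """FNet-style token mixing: a 2-D FFT over the sequence and hidden
    dimensions, retaining the real part.

    The operation is unparameterized, so it needs no learned weights and
    no quadratic attention matrix, which is the source of the speedup
    the abstract reports.
    """
    return np.fft.fft2(embeddings).real

# toy input: (sequence_length, hidden_dim)
tokens = np.random.rand(128, 64)
mixed = fourier_mix(tokens)
assert mixed.shape == tokens.shape
```

The mixing step costs O(n log n) in the sequence length instead of attention's O(n^2), which is why the gap widens on long inputs.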
TensorFlow Doing HPC
TensorFlow is a popular open-source programming framework that supports the
execution of distributed applications on heterogeneous hardware. Although
TensorFlow was initially designed for developing Machine Learning (ML)
applications, it aims to support a much broader range of applications outside
the ML domain, potentially including HPC applications. However, very few
experiments have been
conducted to evaluate TensorFlow performance when running HPC workloads on
supercomputers. This work addresses that gap by implementing four traditional
HPC benchmarks: STREAM, matrix-matrix multiply, a Conjugate Gradient (CG)
solver, and the Fast Fourier Transform (FFT). We analyze their performance on two
supercomputers with accelerators and evaluate the potential of TensorFlow for
developing HPC applications. Our tests show that TensorFlow can fully take
advantage of high performance networks and accelerators on supercomputers.
Running our TensorFlow STREAM benchmark, we obtain over 50% of theoretical
communication bandwidth on our testing platform. We find an approximately 2x,
1.7x and 1.8x performance improvement when increasing the number of GPUs from
two to four in the matrix-matrix multiply, CG and FFT applications
respectively. All our performance results demonstrate that TensorFlow has high
potential to emerge also as an HPC programming framework for heterogeneous
supercomputers.

Comment: Accepted for publication at The Ninth International Workshop on
Accelerators and Hybrid Exascale Systems (AsHES'19).
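Of the four benchmarks listed, the CG solver is a good example of why such workloads map well onto accelerator-backed frameworks: every iteration reduces to one matrix-vector product plus a few vector updates. Below is a plain CG kernel of the kind benchmarked, sketched in NumPy rather than TensorFlow for brevity; the paper's actual implementation and API usage are not reproduced here.

```python
import numpy as np

def conjugate_gradient(a_mat, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient for a symmetric positive-definite system A x = b.

    Each iteration is dominated by one matvec (a_mat @ p), the operation
    that frameworks like TensorFlow offload to GPUs/accelerators.
    """
    x = np.zeros_like(b)
    r = b - a_mat @ x          # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        ap = a_mat @ p
        alpha = rs_old / (p @ ap)
        x += alpha * p         # update solution estimate
        r -= alpha * ap        # update residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # new conjugate direction
        rs_old = rs_new
    return x

# well-conditioned SPD test system
rng = np.random.default_rng(0)
m = rng.standard_normal((50, 50))
a = m @ m.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(a, b)
assert np.allclose(a @ x, b, atol=1e-6)
```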
A Computational Model for Tensor Core Units
To respond to the need of efficient training and inference of deep neural
networks, a plethora of domain-specific hardware architectures have been
introduced, such as Google Tensor Processing Units and NVIDIA Tensor Cores. A
common feature of these architectures is a hardware circuit for efficiently
computing a dense matrix multiplication of a given small size. In order to
broaden the class of algorithms that exploit these systems, we propose a
computational model, named the TCU model, that captures the ability to natively
multiply small matrices. We then use the TCU model for designing fast
algorithms for several problems, including matrix operations (dense and sparse
multiplication, Gaussian Elimination), graph algorithms (transitive closure,
all pairs shortest distances), Discrete Fourier Transform, stencil
computations, integer multiplication, and polynomial evaluation. Finally, we
highlight a relation between the TCU model and the external memory model.
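The TCU model's core primitive, natively multiplying small fixed-size matrices, can be illustrated by a blocked matrix multiply in which every inner step is one tile-by-tile product. In the sketch below, `tile` stands in for the hardware's native operand size (e.g. 16x16); the tile size, function name, and the assumption that dimensions divide evenly are all simplifications for illustration:

```python
import numpy as np

def tcu_matmul(a, b, tile=16):
    """Blocked matrix multiply built from fixed-size tile products.

    Each innermost `@` is a (tile x tile)-sized multiply-accumulate,
    modeling the single native operation a tensor-core-style unit
    provides; the outer loops decompose the large product into such
    calls. Dimensions are assumed to be multiples of `tile`.
    """
    n, k = a.shape
    k2, m = b.shape
    assert k == k2 and n % tile == 0 and k % tile == 0 and m % tile == 0
    c = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # one "native" tile multiply-accumulate
                c[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return c

a = np.random.rand(32, 48)
b = np.random.rand(48, 64)
assert np.allclose(tcu_matmul(a, b), a @ b)
```

Counting only the tile multiplications, rather than scalar operations, is what gives the TCU model its distinct cost measure and connects it to the external memory model mentioned in the abstract.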
THE ROLE OF CHAIN CONFIGURATION IN GOVERNING THE RATIONAL DESIGN OF POLYMERS FOR ADHESION
ABSTRACT
SEPTEMBER 2017
ONYENKACHI C. WAMUO, B.Eng., FEDERAL UNIVERSITY OF TECHNOLOGY, OWERRI (FUTO), NIGERIA; M.S., UNIVERSITY OF MASSACHUSETTS AMHERST; Ph.D., UNIVERSITY OF MASSACHUSETTS AMHERST
Directed by: Professor Shaw Ling Hsu
Configurational control of the polymer chains used in adhesion can serve as a means of tuning the cohesive properties of hot melt adhesives (HMAs). These cohesive properties control solidification, strength, and setting speed. Propylene-ethylene copolymers (PP-co-PE) and thermoplastic polyurethanes (TPUs) were studied. In the first project, we analyzed the effect of sequence distribution on the crystallization behavior of two PP-co-PE copolymers, with propylene as the dominant component. The average lengths of the crystallizable propylene sequences in these copolymers differed, although the ethylene content was virtually identical. In one case, the chain configuration was completely random, with crystallizable propylene sequences following Bernoullian statistics; in the other, the crystallizable sequences had a bimodal distribution. The crystallization kinetics were faster, and the crystallization temperature and degree of crystallinity significantly higher, for the latter sample than for the former. When crystallizing from the melt, the longest crystallizable propylene sequences crystallized first at any supercooling, thus controlling the segmental mobility of the other segments in the distribution. This is especially evident in copolymers with the bimodal segmental distribution. The distribution of crystallizable polypropylene sequences also controls the size distribution and thermal stability of the crystallites formed. Elucidating the crystallization behavior of these copolymers is crucial for defining the application-driven setting speeds of hot melt adhesives, the principal application of interest in our laboratory. Owing to their polarity and thermoplastic structure, TPUs can be used advantageously for binding a variety of substrates.
The challenge with current polyurethanes based on conventional 1,4-butanediol is the long time their morphology and properties take to set. Such slow dynamics are unfavorable in HMAs, where fast setting speeds are necessary and are responsible for their widespread use in packaging applications. We hypothesize that the increased mobility and flexibility of the traditional 1,4-butanediol system underlies the slow morphology development in traditional TPUs. We therefore changed the mobility of the chain extender by using a 1,2-propanediol chain extender, which incorporates a methyl pendant group into the TPU structure. The pendant group adds rigidity to the chain extender, so that the hard segments (HS) formed from it lack the mobility to move away from the soft-segment (SS) matrix. We have shown this to be vital in creating stable domains whose properties do not change over time. Using DSC and low-field NMR (LFNMR), we established mobility differences between the symmetric and asymmetric chain extenders. Temporal DSC and FTIR were used to show the stable, time-independent morphologies associated with the 1,2-propanediol chain extender. In this study, we achieved chain configurational control by changing the architecture of the chain extender. This concept of configurational control through chain extenders is highly useful in controlling the setting speed of HMAs.