Enabling Massive Deep Neural Networks with the GraphBLAS
Deep Neural Networks (DNNs) have emerged as a core tool for machine learning.
The computations performed during DNN training and inference are dominated by
operations on the weight matrices describing the DNN. As DNNs incorporate more
stages and more nodes per stage, these weight matrices may be required to be
sparse because of memory limitations. The GraphBLAS.org math library standard
was developed to provide high performance manipulation of sparse weight
matrices and input/output vectors. For sufficiently sparse matrices, a sparse
matrix library requires significantly less memory than the corresponding dense
matrix implementation. This paper provides a brief description of the
mathematics underlying the GraphBLAS. In addition, the equations of a typical
DNN are rewritten in a form designed to use the GraphBLAS. An implementation of
the DNN is given using a preliminary GraphBLAS C library. The performance of
the GraphBLAS implementation is measured relative to a standard dense linear
algebra library implementation. For various sizes of DNN weight matrices, it is
shown that the GraphBLAS sparse implementation outperforms a BLAS dense
implementation as the weight matrix becomes sparser.
Comment: 10 pages, 7 figures, to appear in the 2017 IEEE High Performance
Extreme Computing (HPEC) conference.
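The abstract above describes each DNN layer as a sparse matrix-vector product followed by a bias and a rectified-linear nonlinearity, implemented with a GraphBLAS C library. As an illustrative sketch only (not the paper's implementation), the same layer equation, y = max(Wx + b, 0), can be written with SciPy's sparse matrices; the sizes, density, and the `relu_layer` helper here are invented for the example:

```python
import numpy as np
from scipy.sparse import random as sparse_random

def relu_layer(W, x, b):
    # One feed-forward step: y = max(W x + b, 0).
    # W is a sparse weight matrix; x and b are dense vectors.
    return np.maximum(W @ x + b, 0)

rng = np.random.default_rng(0)
# A 5%-dense weight matrix stored in compressed sparse row format.
W = sparse_random(64, 64, density=0.05, format="csr", random_state=0)
x = rng.random(64)
b = np.zeros(64)
y = relu_layer(W, x, b)
```

At 5% density, the CSR representation stores only the nonzero entries plus index arrays, which is the memory saving the abstract attributes to sufficiently sparse matrices.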
Training Behavior of Sparse Neural Network Topologies
Improvements in the performance of deep neural networks have often come
through the design of larger and more complex networks. As a result, fast
memory is a significant limiting factor in our ability to improve network
performance. One approach to overcoming this limit is the design of sparse
neural networks, which can be both very large and efficiently trained. In this
paper we experiment with training sparse neural network topologies. We test
pruning-based topologies, which are derived from an initially dense network
whose connections are pruned, as well as RadiX-Nets, a class of network
topologies with proven connectivity and sparsity properties. Results show that
sparse networks obtain accuracies comparable to dense networks, but extreme
levels of sparsity cause instability in training, which merits further study.
Comment: 6 pages. Presented at the 2019 IEEE High Performance Extreme
Computing (HPEC) Conference. Received "Best Paper" award.
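The pruning-based topologies described above are derived by removing connections from an initially dense network. A common baseline for this (a hedged sketch, not necessarily the paper's exact procedure) is magnitude pruning, which zeros out the smallest-magnitude weights until a target sparsity is reached:

```python
import numpy as np

def magnitude_prune(W, sparsity):
    # Zero the smallest-magnitude entries so that roughly a
    # `sparsity` fraction of the weights are removed.
    k = int(sparsity * W.size)
    if k == 0:
        return W.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    pruned = W.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

rng = np.random.default_rng(1)
W = rng.standard_normal((32, 32))
Wp = magnitude_prune(W, 0.9)   # keep only the largest 10% of weights
```

The resulting zero pattern defines a sparse topology that can then be trained further, as in the experiments the abstract reports.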
RadiX-Net: Structured Sparse Matrices for Deep Neural Networks
The sizes of deep neural networks (DNNs) are rapidly outgrowing the capacity
of hardware to store and train them. Research over the past few decades has
explored the prospect of sparsifying DNNs before, during, and after training by
pruning edges from the underlying topology. The resulting neural network is
known as a sparse neural network. More recent work has demonstrated the
remarkable result that certain sparse DNNs can train to the same precision as
dense DNNs at lower runtime and storage cost. An intriguing class of these
sparse DNNs is the X-Nets, which are initialized and trained upon a sparse
topology with neither reference to a parent dense DNN nor subsequent pruning.
We present an algorithm that deterministically generates RadiX-Nets: sparse DNN
topologies that, as a whole, are much more diverse than X-Net topologies, while
preserving X-Nets' desired characteristics. We further present a
functional-analytic conjecture based on the longstanding observation that
sparse neural network topologies can attain the same expressive power as dense
counterparts.
Comment: 7 pages, 8 figures, accepted at IEEE IPDPS 2019 GrAPL workshop. arXiv
admin note: substantial text overlap with arXiv:1809.0524
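RadiX-Nets are generated deterministically from mixed-radix structure rather than by pruning a dense parent. The toy mask below is only a loose illustration of radix-style structured sparsity (a butterfly-like pattern, as in FFT dataflow); it is not the published RadiX-Net construction, and the function name and parameters are invented for the example:

```python
import numpy as np

def radix_layer_mask(n, radix, level):
    # Boolean n-by-n connectivity mask in which input j feeds output i
    # iff i and j differ only in 'digit' number `level` of their
    # base-`radix` representation. Requires n divisible by
    # radix ** (level + 1). Toy sketch, not the RadiX-Net algorithm.
    stride = radix ** level
    mask = np.zeros((n, n), dtype=bool)
    for j in range(n):
        # Zero out digit `level` of j to find the group base.
        base = j - ((j // stride) % radix) * stride
        for d in range(radix):
            mask[base + d * stride, j] = True
    return mask

M = radix_layer_mask(8, 2, 1)
```

Each column (and each row) of the mask has exactly `radix` nonzeros, so the density is radix/n regardless of layer width: the sparsity is fixed by construction, with no reference to a dense parent network.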
Streaming 1.9 Billion Hypersparse Network Updates per Second with D4M
The Dynamic Distributed Dimensional Data Model (D4M) library implements
associative arrays in a variety of languages (Python, Julia, and Matlab/Octave)
and provides a lightweight in-memory database implementation of hypersparse
arrays that are ideal for analyzing many types of network data. D4M relies on
associative arrays which combine properties of spreadsheets, databases,
matrices, graphs, and networks, while providing rigorous mathematical
guarantees, such as linearity. Streaming updates of D4M associative arrays put
enormous pressure on the memory hierarchy. This work describes the design and
performance optimization of an implementation of hierarchical associative
arrays that reduces memory pressure and dramatically increases the update rate
into an associative array. The parameters of hierarchical associative arrays
rely on controlling the number of entries in each level in the hierarchy before
an update is cascaded. The parameters are easily tunable to achieve optimal
performance for a variety of applications. Hierarchical arrays achieve over
40,000 updates per second in a single instance. Scaling to 34,000 instances of
hierarchical D4M associative arrays on 1,100 server nodes on the MIT SuperCloud
achieved a sustained update rate of 1,900,000,000 updates per second. This
capability allows the MIT SuperCloud to analyze extremely large streaming
network data sets.
Comment: 6 pages; 6 figures; accepted to IEEE High Performance Extreme
Computing (HPEC) Conference 2019. arXiv admin note: text overlap with
arXiv:1807.05308, arXiv:1902.0084
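The key mechanism described above is cascading: each level of the hierarchy holds updates until its entry count crosses a tunable cutoff, at which point it is merged into the next, larger level. A minimal two-level sketch (the class and names are illustrative, not the D4M implementation) looks like this:

```python
from collections import Counter

class HierarchicalCounter:
    """Two-level hierarchical associative array sketch: updates land
    in a small fast level and cascade into the large slow level once
    the fast level exceeds a cutoff number of entries."""

    def __init__(self, cutoff=4):
        self.cutoff = cutoff
        self.fast = Counter()   # small level, absorbs bursts of updates
        self.slow = Counter()   # large level, touched only on cascade

    def update(self, key, value=1):
        self.fast[key] += value
        if len(self.fast) > self.cutoff:
            # Cascade: merge the fast level down and clear it.
            self.slow.update(self.fast)
            self.fast.clear()

    def get(self, key):
        # A query must combine both levels.
        return self.fast[key] + self.slow[key]

h = HierarchicalCounter(cutoff=2)
for k in ["a", "b", "a", "c", "d", "a"]:
    h.update(k)
```

Keeping the fast level small means most updates touch only a structure that fits in cache, which is how the hierarchy reduces pressure on the memory hierarchy; the cutoff is the tunable parameter the abstract refers to.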
GraphChallenge.org: Raising the Bar on Graph Analytic Performance
The rise of graph analytic systems has created a need for new ways to measure
and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE
Graph Challenge has been developed to provide a well-defined community venue
for stimulating research and highlighting innovations in graph analysis
software, hardware, algorithms, and systems. GraphChallenge.org provides a wide
range of pre-parsed graph data sets, graph generators, mathematically defined
graph algorithms, example serial implementations in a variety of languages, and
specific metrics for measuring performance. Graph Challenge 2017 received 22
submissions by 111 authors from 36 organizations. The submissions highlighted
graph analytic innovations in hardware, software, algorithms, systems, and
visualization. These submissions produced many comparable performance
measurements that can be used for assessing the current state of the art of the
field. There were numerous submissions that implemented the triangle counting
challenge and resulted in over 350 distinct measurements. Analysis of these
submissions shows that their execution time is a strong function of the number
of edges in the graph and typically grows polynomially with edge count for
large graphs. Combining the model fits of the submissions presents a picture of
the current state of the art of graph analysis in terms of edges processed per
second as a function of graph size. These results are orders of magnitude
faster than serial implementations commonly used by many graph
analysts and underscore the importance of making these performance benefits
available to the broader community. Graph Challenge provides a clear picture of
current graph analysis systems and underscores the need for new innovations to
achieve high performance on very large graphs.
Comment: 7 pages, 6 figures; submitted to IEEE HPEC Graph Challenge. arXiv
admin note: text overlap with arXiv:1708.0686
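The triangle counting challenge mentioned above has a standard linear-algebraic formulation: for an undirected adjacency matrix A, each triangle contributes six entries to the elementwise product of A with A squared. A small dense sketch (submissions typically use sparse matrices, but the identity is the same):

```python
import numpy as np

def count_triangles(A):
    # Each triangle {i, j, k} is counted six times in
    # sum(A .* (A @ A)): once per ordered pair of its vertices.
    A = np.asarray(A)
    return int((A * (A @ A)).sum() // 6)

# Adjacency matrix of K4, the complete graph on 4 vertices,
# which contains C(4, 3) = 4 triangles.
A = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)
```

For sparse graphs the same expression is evaluated with sparse matrix products, which is where the edge-count scaling measured by the challenge comes from.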
Fast Mapping onto Census Blocks
Pandemic measures such as social distancing and contact tracing can be
enhanced by rapidly integrating dynamic location data and demographic data.
Projecting billions of longitude and latitude locations onto hundreds of
thousands of highly irregular demographic census block polygons is
computationally challenging in both research and deployment contexts. This
paper describes two approaches labeled "simple" and "fast". The simple approach
can be implemented in any scripting language (Matlab/Octave, Python, Julia, R)
and is easily integrated and customized to a variety of research goals. This
simple approach uses a novel combination of hierarchy, sparse bounding boxes,
polygon crossing-number, vectorization, and parallel processing to achieve
100,000,000+ projections per second on 100 servers. The simple approach is
compact, does not increase data storage requirements, and is applicable to any
country or region. The fast approach exploits the thread, vector, and memory
optimizations that are possible using a low-level language (C++) and achieves
similar performance on a single server. This paper details these approaches
with the goal of enabling the broader community to quickly integrate location
and demographic data.
Comment: 8 pages, 7 figures, 55 references; accepted to IEEE HPEC 202
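The core of the "simple" approach described above is a crossing-number (ray casting) point-in-polygon test guarded by a cheap bounding-box rejection, so most points never reach the per-edge loop. A minimal sketch of that combination (illustrative only, not the paper's code):

```python
def point_in_polygon(px, py, poly):
    # poly is a list of (x, y) vertices of a simple polygon.
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    # Bounding-box prefilter: reject cheaply before the edge loop.
    if not (min(xs) <= px <= max(xs) and min(ys) <= py <= max(ys)):
        return False
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edges crossed by a horizontal ray going right
        # from the point; an odd crossing count means inside.
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

In the paper's setting the bounding boxes are additionally organized hierarchically and stored sparsely, so that billions of points can be screened against hundreds of thousands of irregular census block polygons before any exact crossing-number test runs.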