With machine learning applications now spanning a wide variety of computational
tasks, multi-user shared computing facilities are devoting a rapidly increasing
fraction of their resources to these workloads. Graph neural networks (GNNs),
for example, have provided astounding improvements in extracting complex
signatures from data and are now widely used in a variety of applications, such
as particle jet classification in high energy physics (HEP). However, GNNs also
come with an enormous computational penalty that requires the use of GPUs to
maintain reasonable throughput. At shared computing facilities, such as those
used by physicists at Fermi National Accelerator Laboratory (Fermilab),
methodical resource allocation and high throughput at the many-user scale are
key to ensuring that resources are being used as efficiently as possible. These
facilities, however, primarily provide CPU-only nodes, which proves detrimental
to time-to-insight and computational throughput for workflows that include
machine learning inference. In this work, we describe how a shared computing
facility can use the NVIDIA Triton Inference Server to optimize its resource
allocation and computing structure, recovering high throughput while scaling
out to multiple users by massively parallelizing their machine learning
inference. To demonstrate the effectiveness of this system in a realistic
multi-user environment, we use the Fermilab Elastic Analysis Facility augmented
with the Triton Inference Server to provide scalable, high-throughput access
to a HEP-specific GNN and report on the outcome.
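As a concrete illustration of the inference-as-a-service pattern the abstract describes, the following Python sketch shows how a CPU-only client job might offload a batch of inputs to a remote GPU-backed Triton server using NVIDIA's tritonclient library. The server URL, model name ("jet_gnn"), and tensor names ("INPUT__0", "OUTPUT__0") are hypothetical placeholders, not the paper's actual configuration.

    import numpy as np
    import tritonclient.http as httpclient

    # Connect to a (hypothetical) Triton endpoint exposed by the facility.
    client = httpclient.InferenceServerClient(url="triton.example.org:8000")

    # A batch of 128 dummy feature vectors; real jet-classification inputs
    # would be graph-derived tensors prepared by the analysis workflow.
    batch = np.random.rand(128, 16).astype(np.float32)

    # Describe the input tensor the model expects (name/shape are assumptions).
    inputs = [httpclient.InferInput("INPUT__0", batch.shape, "FP32")]
    inputs[0].set_data_from_numpy(batch)

    # Request the output tensor by name (also an assumption).
    outputs = [httpclient.InferRequestedOutput("OUTPUT__0")]

    # The CPU-only client offloads the GNN inference to the GPU-backed server.
    result = client.infer(model_name="jet_gnn", inputs=inputs, outputs=outputs)
    scores = result.as_numpy("OUTPUT__0")
    print(scores.shape)

Because the server batches and schedules requests from many such clients across its GPUs, throughput scales with the number of concurrent users rather than being bounded by any single CPU-only worker node.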