Mitigating Inter-Job Interference via Process-Level Quality-of-Service
Jobs on most high-performance computing (HPC) systems share the network with other concurrently executing jobs. This sharing creates contention that can severely degrade performance. This article investigates the use of Quality-of-Service (QoS) mechanisms to reduce the negative impacts of network contention. QoS allows users to manage resource sharing between network flows and to provide bandwidth guarantees to specific flows. Our results show that careful use of QoS reduces the impact of network contention for specific jobs, yielding up to a 40% performance improvement; in some cases it completely eliminates the impact of contention. These improvements come with limited negative impact on other jobs: any job that experiences a performance loss typically degrades by less than 5%, and often much less. Our approach can help ensure that HPC machines maintain high throughput as per-node compute power continues to increase faster than network bandwidth.
Lawrence Livermore National Laboratory
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at [email protected]
Mitigating Inter-Job Interference via Process-Level Quality-of-Service
Jobs on most high-performance computing (HPC) systems share the network with other concurrently executing jobs. This sharing creates contention that can severely degrade performance. We investigate the use of Quality-of-Service (QoS) mechanisms to reduce the negative impacts of network contention. Our results show that careful use of QoS reduces the impact of contention for specific jobs, yielding up to a 27% performance improvement; in some cases the impact of contention is completely eliminated. These improvements are achieved with limited negative impact on other jobs: any job that experiences a performance loss typically degrades by less than 5%, and often much less. Our approach can help ensure that HPC machines maintain high throughput as per-node compute power continues to increase faster than network bandwidth.