Distributed artificial intelligence (AI) has recently accomplished tremendous
breakthroughs in various communication services, ranging from fault-tolerant
factory automation to smart cities. When distributed learning runs over a set
of wirelessly connected devices, random channel fluctuations and the incumbent
services simultaneously running on the same network affect the performance of
distributed learning. In this paper, we investigate the interplay between
distributed AI workflow and ultra-reliable low latency communication (URLLC)
services running concurrently over a network. Using 3GPP compliant simulations
in a factory automation use case, we show the impact of various distributed AI
settings (e.g., model size and the number of participating devices) on the
convergence time of distributed AI and the application layer performance of
URLLC. Our simulation results show that, unless the traffic of the two
services is separated using the existing 5G-NR quality of service handling
mechanisms, distributed AI significantly degrades the availability of the
URLLC devices. Moreover, with a proper configuration of distributed AI (e.g.,
proper user selection), we can substantially reduce network resource
utilization, leading to lower latency for distributed AI and higher
availability for the URLLC users. Our results provide important insights for
future 6G and AI standardization.

Comment: Accepted in 2022 IEEE Global Communications Conference (GLOBECOM).