In cooperative multi-agent reinforcement learning, centralized training and
decentralized execution (CTDE) has achieved remarkable success. Individual-Global-Max
(IGM) decomposition, a key ingredient of CTDE, requires consistency between
the local greedy policies and the joint greedy policy.
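For reference, the IGM condition as commonly stated in the value-decomposition
literature (generic notation, not taken from this paper) requires the joint
greedy action to coincide with the per-agent greedy actions:

\[
\operatorname*{arg\,max}_{\mathbf{a}} Q_{\mathrm{tot}}(\boldsymbol{\tau}, \mathbf{a})
= \Big( \operatorname*{arg\,max}_{a_1} Q_1(\tau_1, a_1),\ \ldots,\ \operatorname*{arg\,max}_{a_n} Q_n(\tau_n, a_n) \Big),
\]

where \(\boldsymbol{\tau}\) is the joint action-observation history, \(Q_{\mathrm{tot}}\)
the joint action-value, and \(Q_i\) the individual agent utilities.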
Most IGM-based research focuses on how to establish this consistency, but
little attention has been paid to IGM's potential flaws. In this work, we
reveal that the IGM condition is a lossy decomposition and that the error of
this lossy decomposition accumulates in hypernetwork-based methods. To address
this issue, we propose an imitation learning strategy that separates the lossy
decomposition from the Bellman iterations, thereby avoiding error accumulation.
The proposed strategy is proved theoretically and verified empirically on the
StarCraft Multi-Agent Challenge (SMAC) benchmark under a zero sight view. The
results further confirm that the proposed method outperforms state-of-the-art
IGM-based approaches.
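The abstract does not spell out the mechanism, so the following is only a
minimal sketch of one plausible reading: "separating the lossy decomposition
from Bellman iterations" is taken to mean running Bellman updates on an
unfactored joint Q-function and then distilling its greedy policy into
IGM-factored per-agent utilities by imitation. All names (JointQ, LocalQs,
bellman_step, imitation_step) and the two-stage split itself are illustrative
assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    n_agents, obs_dim, n_actions = 2, 8, 4
    gamma = 0.99

    class JointQ(nn.Module):
        """Unfactored joint Q over the joint action space (n_actions ** n_agents)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_agents * obs_dim, 64), nn.ReLU(),
                nn.Linear(64, n_actions ** n_agents))

        def forward(self, obs):              # obs: (batch, n_agents * obs_dim)
            return self.net(obs)             # (batch, n_actions ** n_agents)

    class LocalQs(nn.Module):
        """IGM-factored per-agent utilities Q_i(o_i, a_i)."""
        def __init__(self):
            super().__init__()
            self.nets = nn.ModuleList(
                nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                              nn.Linear(64, n_actions))
                for _ in range(n_agents))

        def forward(self, obs):              # obs: (batch, n_agents, obs_dim)
            return torch.stack([net(obs[:, i]) for i, net in enumerate(self.nets)], dim=1)

    joint_q, local_qs = JointQ(), LocalQs()
    opt_joint = torch.optim.Adam(joint_q.parameters(), lr=1e-3)
    opt_local = torch.optim.Adam(local_qs.parameters(), lr=1e-3)

    def bellman_step(obs, joint_act, rew, next_obs, done):
        """Stage 1: ordinary Bellman iteration on the joint Q.
        No factorization is involved, so no decomposition error enters the target."""
        q = joint_q(obs).gather(1, joint_act.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = rew + gamma * (1.0 - done) * joint_q(next_obs).max(dim=1).values
        loss = F.mse_loss(q, target)
        opt_joint.zero_grad(); loss.backward(); opt_joint.step()

    def imitation_step(obs_flat, obs_per_agent):
        """Stage 2: per-agent utilities imitate the joint greedy action, so the
        lossy factorization is fit once, outside the bootstrapped iteration."""
        with torch.no_grad():
            joint_greedy = joint_q(obs_flat).argmax(dim=1)       # flat joint-action index
            # Decode the flat index into per-agent actions (base-n_actions digits).
            targets = torch.stack([(joint_greedy // n_actions ** i) % n_actions
                                   for i in range(n_agents)], dim=1)
        logits = local_qs(obs_per_agent)                         # (batch, n_agents, n_actions)
        loss = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
        opt_local.zero_grad(); loss.backward(); opt_local.step()

    # Smoke test on random data.
    B = 32
    obs_flat = torch.randn(B, n_agents * obs_dim)
    bellman_step(obs_flat, torch.randint(n_actions ** n_agents, (B,)),
                 torch.randn(B), torch.randn(B, n_agents * obs_dim), torch.zeros(B))
    imitation_step(obs_flat, obs_flat.view(B, n_agents, obs_dim))

The point of this split, under the stated assumptions, is that the
factorization error is incurred once at distillation time rather than being
fed back through bootstrapped targets, mirroring the abstract's claim about
avoiding error accumulation.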