Due to the limited representation capacity of the joint Q value function,
multi-agent reinforcement learning methods with linear value decomposition
(LVD) or monotonic value decomposition (MVD) suffer from relative
overgeneralization. As a result, they cannot ensure optimal consistency (i.e.,
the correspondence between individual greedy actions and the maximal true Q
value). In this paper, we derive the expression of the joint Q value function
of LVD and MVD. According to the expression, we draw a transition diagram,
where each self-transition node (STN) is a possible point of convergence. To ensure
optimal consistency, the optimal node is required to be the unique STN.
Therefore, we propose the greedy-based value representation (GVR), which turns
the optimal node into an STN via inferior target shaping and further eliminates
the non-optimal STNs via superior experience replay. In addition, GVR achieves
an adaptive trade-off between optimality and stability. Our method outperforms
state-of-the-art baselines in experiments on various benchmarks. Theoretical
proofs and empirical results on matrix games demonstrate that GVR ensures
optimal consistency under sufficient exploration.
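
As a rough sketch of the property at stake (the notation here is assumed for illustration and is not taken verbatim from the paper): if each agent $i$ selects its greedy action from an individual utility $Q_i$ and $Q^{*}$ denotes the true joint Q value, optimal consistency requires that the joint action formed by the individual greedy actions attains the maximal true Q value,
\[
\big(\arg\max_{a_1} Q_1(\tau_1, a_1), \dots, \arg\max_{a_n} Q_n(\tau_n, a_n)\big) \in \arg\max_{\mathbf{a}} Q^{*}(s, \mathbf{a}),
\]
where $\tau_i$ is agent $i$'s local observation history and $\mathbf{a} = (a_1, \dots, a_n)$ is the joint action.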