Real-world multi-agent tasks usually involve dynamic team composition with the emergence of roles, which should likewise be key to efficient cooperation in multi-agent reinforcement learning (MARL). Drawing inspiration from the correlation between roles and agents' behavior patterns, we propose a novel framework of **A**ttention-guided **CO**ntrastive **R**ole representation learning for **M**ARL (**ACORM**) to promote behavior heterogeneity, knowledge transfer, and skillful coordination across agents. First, we introduce mutual information maximization to formalize role representation learning, derive a contrastive learning objective, and concisely approximate the distribution of negative pairs.
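As a rough illustration of this kind of contrastive objective, the sketch below implements a generic InfoNCE-style loss over role embeddings in PyTorch; the function name, tensor shapes, and temperature are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive objective over role embeddings (sketch).

    anchor:    (batch, dim)    role embedding of an agent
    positive:  (batch, dim)    embedding of a same-role (positive) agent
    negatives: (batch, k, dim) embeddings of different-role (negative) agents,
               e.g. sampled from an approximated negative-pair distribution
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Cosine similarity with the positive pair: (batch, 1)
    pos_sim = (anchor * positive).sum(-1, keepdim=True)
    # Cosine similarities with the k negative pairs: (batch, k)
    neg_sim = torch.einsum('bd,bkd->bk', anchor, negatives)

    logits = torch.cat([pos_sim, neg_sim], dim=-1) / temperature
    # The positive sits at index 0; cross-entropy over the (1 + k) candidates
    # maximizes a lower bound on the role-behavior mutual information.
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```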
Second, we leverage an attention mechanism to prompt the global state to attend to learned role representations in value decomposition, implicitly guiding agent coordination in a skillful role space to yield more expressive credit assignment.
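The following minimal sketch shows one plausible form of this mechanism, with the global state as the query and per-agent role embeddings as keys and values; the class name, projection layers, and dimensions are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RoleAttention(nn.Module):
    """Sketch: the global state (query) attends to per-agent role
    embeddings (keys/values), producing a role-aware context vector
    that can condition a value-decomposition mixing network."""

    def __init__(self, state_dim, role_dim, embed_dim=64, n_heads=4):
        super().__init__()
        self.q_proj = nn.Linear(state_dim, embed_dim)
        self.kv_proj = nn.Linear(role_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

    def forward(self, state, roles):
        # state: (batch, state_dim); roles: (batch, n_agents, role_dim)
        q = self.q_proj(state).unsqueeze(1)      # (batch, 1, embed_dim)
        kv = self.kv_proj(roles)                 # (batch, n_agents, embed_dim)
        context, weights = self.attn(q, kv, kv)  # attend over agents' roles
        # context could feed the hypernetworks that generate mixing weights,
        # yielding credit assignment that is aware of the learned role space.
        return context.squeeze(1), weights
```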
Experiments on challenging StarCraft II micromanagement and Google Research Football tasks demonstrate the state-of-the-art performance of our method and its advantages over existing approaches. Our code is available at [https://github.com/NJU-RL/ACORM](https://github.com/NJU-RL/ACORM).