Most agents that learn task policies through reinforcement learning (RL) lack the ability to communicate with people, which makes human-agent collaboration challenging. We believe that, for RL agents to comprehend utterances from human colleagues, they must infer the mental states that people attribute to them, because people sometimes infer an interlocutor's mental states and communicate on the basis of that inference. This paper proposes the PublicSelf model, a model of a person who infers how their own behavior appears to their colleagues. We implemented the PublicSelf model for an RL agent in a simulated environment and evaluated the model's inferences against human judgments. The results showed that, in scenes where people could perceive clear intentionality in the agent's behavior, the model correctly inferred the intention that people attributed to the agent's movement.