This paper examines the challenges assistive robots face in cooperating effectively with humans: the robot must anticipate human behavior, predict the impact of its own actions, and produce actions that are understandable to the user. The study
focuses on a use-case involving a user with limited mobility needing assistance
with pouring a beverage, where tasks like unscrewing a cap or reaching for
objects demand coordinated support from the robot. However, the user may find it
difficult to anticipate the robot's intentions, which can hinder effective
collaboration. To address this issue, we propose a solution that
utilizes Augmented Reality (AR) to communicate the robot's intentions and
expected movements to the user, fostering seamless and intuitive interaction.