
    Using Learning to Control Artificial Avatars in Human Motor Coordination Tasks

    Designing artificial avatars able to interact with humans in a safe, smart, and natural way is a current open problem in control. Solving this problem would allow the design of cyber-agents capable of cooperatively interacting with people to fulfil common joint tasks in a multitude of different applications. This is particularly relevant in the context of healthcare. Indeed, artificial agents able to interact and coordinate their movements with those of patients suffering from social or motor disorders have been proposed for use in rehabilitation. Moreover, it has also been shown that the level of motor coordination between the avatar and the human patient is enhanced if the kinematic properties of the avatar's motion are similar to those of the individual it is interacting with. In this article, we first discuss a new method based on Markov chains to confer 'human motor characteristics' on the motion of a virtual agent, so that it can coordinate its motion with that of a target individual while exhibiting specific kinematic properties. We then embed this synthetic model in a novel control architecture based on reinforcement learning to synthesize a cyber-agent able to mimic the behavior of a specific human performing a joint motor task with one or more individuals.
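    To make the Markov-chain idea concrete, the sketch below fits a first-order transition matrix over discretised velocity bins from a recorded human motion trace and then samples a velocity sequence with similar statistics for a virtual agent. This is a minimal illustration under assumed choices (one-dimensional velocities, uniform binning, the function names, and the synthetic input signal are all hypothetical), not the actual model or architecture proposed in the article.

```python
# Hypothetical sketch: a first-order Markov chain over discretised velocity
# bins, fitted to a target individual's motion trace and sampled to generate
# "human-like" velocity profiles for an avatar. Names and binning are
# illustrative assumptions, not the paper's actual method.
import numpy as np

def fit_markov_chain(velocities, n_bins=20):
    """Estimate a transition matrix over velocity bins from an observed trace."""
    bins = np.linspace(velocities.min(), velocities.max(), n_bins + 1)
    states = np.clip(np.digitize(velocities, bins) - 1, 0, n_bins - 1)
    counts = np.ones((n_bins, n_bins))  # Laplace smoothing avoids zero rows
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    transition = counts / counts.sum(axis=1, keepdims=True)
    centers = 0.5 * (bins[:-1] + bins[1:])  # representative velocity per bin
    return transition, centers

def sample_trajectory(transition, centers, start_state, length, rng):
    """Sample a velocity sequence whose statistics mimic the fitted chain."""
    state = start_state
    trajectory = []
    for _ in range(length):
        state = rng.choice(len(centers), p=transition[state])
        trajectory.append(centers[state])
    return np.array(trajectory)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for recorded human velocities (e.g. from a joint motor task).
    human_velocities = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
    P, centers = fit_markov_chain(human_velocities)
    avatar_velocities = sample_trajectory(P, centers, start_state=10, length=200, rng=rng)
    print(avatar_velocities[:10])
```

    In the article's setting, a generative model of this kind would supply the "human motor characteristics", while a reinforcement learning controller (not sketched here) would use it to coordinate the avatar's motion with the human partner.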