Institute of Electrical and Electronics Engineers (IEEE)
Abstract
This study explores optimising human-robot trust using reinforcement learning (RL) in simulated environments. Establishing trust in human-robot interaction (HRI) is crucial for effective collaboration, but misaligned trust levels can hinder successful task completion. Current RL approaches mainly prioritise performance metrics without directly addressing trust management. To bridge this gap, we integrated a validated mathematical trust model into an RL framework and conducted experiments in two simulated environments: Frozen Lake and Battleship. The results showed that the RL model facilitated trust calibration by dynamically adjusting trust based on task outcomes, enhancing task performance and reducing the risks of insufficient or extreme trust. Our findings highlight the potential of RL to enhance human-robot collaboration (HRC) and trust calibration in different experimental HRI settings.
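The abstract does not specify the trust model or the RL algorithm used, so the following is a minimal illustrative sketch of the general idea: a tabular Q-learning agent on the Frozen Lake environment whose reward is shaped by a trust state that rises on task success and falls on failure. The trust update rule, the rates `BETA_S` and `BETA_F`, the calibration target `T_TARGET`, and the trust-deviation penalty are all assumptions standing in for the validated trust model referenced above, not the authors' actual formulation.

```python
import numpy as np
import gymnasium as gym

# Hypothetical hyperparameters -- the paper's actual trust model and
# RL settings are not given in the abstract.
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # Q-learning hyperparameters
BETA_S, BETA_F = 0.05, 0.10              # trust gain/loss rates (assumed)
T_TARGET = 0.7                           # calibrated trust target (assumed)

env = gym.make("FrozenLake-v1", is_slippery=True)
q = np.zeros((env.observation_space.n, env.action_space.n))
trust = 0.5  # trust in [0, 1], initialised at a neutral level

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.rand() < EPSILON:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated

        # Illustrative performance-based trust update: trust rises on
        # task success and decays on failure (a stand-in for the
        # validated mathematical trust model used in the paper).
        if done:
            if reward > 0:
                trust = min(1.0, trust + BETA_S * (1.0 - trust))
            else:
                trust = max(0.0, trust - BETA_F * trust)

        # Shaped reward penalising deviation from the calibration
        # target, steering the agent away from under- and over-trust.
        shaped = reward - abs(trust - T_TARGET)

        # Standard Q-learning update on the shaped reward.
        q[state, action] += ALPHA * (
            shaped + GAMMA * np.max(q[next_state]) - q[state, action]
        )
        state = next_state
```

Coupling trust into the reward signal, rather than optimising task success alone, is one plausible way to realise the abstract's claim that the agent adjusts trust dynamically based on task outcomes while avoiding insufficient or extreme trust.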