Safe and Efficient Robot Action Choice Using Human Intent Prediction in Physically-Shared Space Environments.

Abstract

Emerging robotic systems are capable of autonomously planning and executing well-defined tasks, particularly when the environment can be accurately modeled. Robots supporting human space exploration must be able to interact safely with human astronaut companions during intravehicular and extravehicular activities. In a shared workspace, efficiency can be gained by giving the robot awareness of its human companion. This dissertation presents a modular architecture that allows a human and a robotic manipulator to efficiently complete independent sets of tasks in a shared physical workspace without the robot requiring oversight or situational awareness from its human companion. We propose that a robot requires four capabilities to act safely and optimally with awareness of its companion: sense the environment and the human within it; translate sensor data into a form useful for decision-making; use these data to predict the human’s future intent; and use this prediction, together with the robot’s own goals and safety constraints, to inform its choice of action. We first present a series of human subject experiments demonstrating that human intent can help a robot predict and avoid conflict, and that sharing the workspace need not degrade human performance so long as the manipulator does not distract the human or introduce conflict. We then describe an architecture that relies on Markov Decision Processes (MDPs) to support robot decision-making. A key contribution of the architecture is its decomposition of the decision problem into two parts: human intent prediction (HIP) and robot action choice (RAC). This decomposition is made possible by the assumption that the robot’s actions do not influence human intent. Presuming an observer that can report human actions in real time, we leverage the well-known space environment and the task scripts astronauts rehearse in advance to devise models for human intent prediction and robot action choice. We describe a series of case studies for HIP and RAC using a minimal set of state attributes, including an abbreviated action history. MDP policies are evaluated in terms of model fitness and safety/efficiency performance tradeoffs. Simulation results indicate that incorporating both observed and predicted human actions improves robot action choice. Future work could extend this approach to more general human-robot interaction.

PhD, Aerospace Engineering
University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/107160/1/cmcghan_1.pd

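The HIP/RAC decomposition summarized in the abstract can be illustrated with a minimal sketch, shown below. This is a hypothetical, simplified illustration of the two-stage decision loop (predict the human's next action from an abbreviated action history, then look up a robot action from a precomputed, MDP-style policy indexed by the robot's state and the predicted human action). All function names, state and action sets, and the toy policy are invented for illustration and are not taken from the dissertation.

```python
# Illustrative sketch only: a two-stage loop separating human intent
# prediction (HIP) from robot action choice (RAC). The names below
# (predict_intent, choose_action, the toy state/action sets, and the toy
# policy) are hypothetical, not the dissertation's implementation.
from collections import Counter
from typing import Dict, List, Tuple

HUMAN_ACTIONS = ["fetch_tool", "fasten_bolt", "idle"]
ROBOT_ACTIONS = ["wait", "stow_tool", "hand_over_tool"]


def predict_intent(action_history: List[str]) -> Dict[str, float]:
    """HIP: estimate a distribution over the human's next action from an
    abbreviated action history (here, a simple frequency model)."""
    counts = Counter(action_history) if action_history else Counter(HUMAN_ACTIONS)
    total = sum(counts.values())
    return {a: counts.get(a, 0) / total for a in HUMAN_ACTIONS}


def choose_action(robot_state: str,
                  intent: Dict[str, float],
                  policy: Dict[Tuple[str, str], str]) -> str:
    """RAC: look up the robot action from a precomputed policy indexed by the
    robot's state and the most likely predicted human action."""
    likely_human_action = max(intent, key=intent.get)
    return policy.get((robot_state, likely_human_action), "wait")


if __name__ == "__main__":
    # Toy policy: yield the shared workspace when the human is predicted to
    # reach for a tool; otherwise continue with the robot's own task.
    toy_policy = {
        ("holding_tool", "fetch_tool"): "hand_over_tool",
        ("holding_tool", "fasten_bolt"): "wait",
        ("free", "idle"): "stow_tool",
    }
    history = ["fetch_tool", "fasten_bolt", "fetch_tool"]
    intent = predict_intent(history)
    print(choose_action("holding_tool", intent, toy_policy))
```

Because the abstract assumes the robot's actions do not influence human intent, the prediction step can run independently of the action-choice step, which is what this two-function structure reflects.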