Integrating Recognition and Decision Making to Close the Interaction Loop for Autonomous Systems

Abstract

Intelligent systems are becoming increasingly ubiquitous in daily life. Mobile devices provide machine-generated support to users, robots are coming out of their cages in manufacturing to interact with co-workers, and cars with varying degrees of self-driving capability operate among pedestrians and their drivers. However, the effectiveness of these interactive intelligent systems depends on their ability to understand and recognize human activities and goals, as well as to respond to people in a timely manner. The average person does not follow instructions step-by-step or act in a formulaic manner, but instead varies the order and timing of actions when performing a given task. People explore their surroundings, make mistakes, and may interrupt an activity to handle more urgent matters. The decisions that an autonomous intelligent system makes should account for such noise and variance regardless of the form of interaction, which includes adapting its action choices and possibly its own goals.

While most people take these aspects of interaction for granted, they are complex and involve many specific tasks that have primarily been studied independently within artificial intelligence. This results in open-loop interactive experiences where the user must perform a fixed input command or the intelligent system performs a hard-coded output response---one of the components of the interaction cannot adapt with respect to the other for longer-term back-and-forth interactions. This dissertation explores how developments in plan recognition, activity recognition, intent recognition, and autonomous planning can work together to create more adaptive interactive experiences between autonomous intelligent systems and the people around them. In particular, we consider a unifying perspective of recognition algorithms that provides sufficient information to dynamically produce short-term automated planning problems, and we present ways to run these algorithms faster to meet the real-time needs of interaction. This exploration leads to the introduction of the Planning and Recognition Together Close the Interaction Loop (PReTCIL) framework, which serves as a first step toward addressing the problem of closing the interaction loop, in addition to raising new questions that need to be considered.