    Multi-objective Optimal Control for Proactive Decision Making with Temporal Logic Models

    The operation of today’s robots entails interactions with humans, in settings ranging from autonomous driving amidst human-driven vehicles to collaborative manufacturing. To do so effectively, robots must proactively decode the intent or plan of humans and concurrently leverage that knowledge for safe, cooperative task satisfaction—a problem we refer to as proactive decision making. However, the problem of proactive intent decoding coupled with robotic control is computationally intractable, because a robot must reason over several possible human behavioral models and the resulting high-dimensional state trajectories. In this paper, we address the proactive decision making problem using a novel combination of algorithmic and data mining techniques. First, we distill high-dimensional state trajectories of human-robot interaction into concise, symbolic behavioral summaries that can be learned from data. Second, we leverage formal methods to model high-level agent goals, safe interaction, and information-seeking behavior with temporal logic formulae. Finally, we design a novel decision-making scheme that maintains a belief distribution over high-level, symbolic models of human behavior and proactively plans informative control actions. Leveraging a rich dataset of real human driving data in crowded merging scenarios, we generate temporal logic models and use them to synthesize control strategies using tree-based value iteration and reinforcement learning (RL). Results from cooperative and adversarial simulated self-driving car scenarios demonstrate that our data-driven control strategies enable safe interaction, correct model identification, and significant dimensionality reduction.
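    The paper's own algorithms are not reproduced here, but the core of the decision-making scheme described in the abstract—maintaining a belief distribution over candidate symbolic models of human behavior and updating it from observed symbolic behavior summaries—can be illustrated with a minimal sketch. All names below (BehaviorModel, update_belief, the merging symbols) are hypothetical illustrations under assumed semantics, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): a Bayesian belief update
# over candidate symbolic models of human behavior, driven by discrete
# behavioral summaries observed during interaction.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class BehaviorModel:
    """Hypothetical symbolic model: maps an interaction state to a
    distribution over the next symbolic behavior summary."""
    name: str
    transition: Dict[str, Dict[str, float]]  # state -> {symbol: prob}

    def likelihood(self, state: str, observed_symbol: str) -> float:
        # Small floor avoids eliminating a model on a single surprising symbol.
        return self.transition.get(state, {}).get(observed_symbol, 1e-6)


def update_belief(belief: Dict[str, float],
                  models: List[BehaviorModel],
                  state: str,
                  observed_symbol: str) -> Dict[str, float]:
    """Bayesian update: P(model | obs) is proportional to P(obs | model) * P(model)."""
    posterior = {m.name: belief[m.name] * m.likelihood(state, observed_symbol)
                 for m in models}
    total = sum(posterior.values())
    return {name: weight / total for name, weight in posterior.items()}


# Example: two candidate driver models in a crowded merging scenario.
cooperative = BehaviorModel(
    "cooperative", {"merge_gap_open": {"yield": 0.8, "accelerate": 0.2}})
adversarial = BehaviorModel(
    "adversarial", {"merge_gap_open": {"yield": 0.1, "accelerate": 0.9}})

belief = {"cooperative": 0.5, "adversarial": 0.5}
belief = update_belief(belief, [cooperative, adversarial],
                       "merge_gap_open", "yield")
print(belief)  # belief shifts toward the cooperative model
```

    In the paper's setting, such a posterior over symbolic models would then drive the selection of informative, safety-constrained control actions (via tree-based value iteration or RL); that synthesis step is omitted from this sketch.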