8 research outputs found

    On Legible and Predictable Robot Navigation in Multi-Agent Environments

    Legibility has recently become an important property to consider in the design of social navigation planners. Legible motion is intent-expressive: when employed during social robot navigation, it allows others to quickly infer the intended avoidance strategy. Predictability, although less commonly studied for social navigation, is in a sense the dual notion of legibility and should also be accounted for in order to promote efficient motion. Predictable motion matches an observer's expectations, which, during navigation, allows others to confidently carry out the interaction. In this work, we present a navigation framework capable of reasoning about its legibility and predictability with respect to dynamic interactions, e.g., a passing side. Our approach generalizes the previously formalized notions of legibility and predictability by allowing dynamic goal regions, enabling navigation in dynamic environments. This generalization also allows us to quantitatively evaluate the legibility and predictability of trajectories with respect to navigation interactions. Our approach is shown to promote legible behavior in ambiguous scenarios and predictable behavior in unambiguous ones. We also provide an adaptation to the multi-agent case, allowing the robot to reason about its legibility and predictability with respect to multiple interactions simultaneously. This adaptation promotes behaviors that are not illegible to other agents in the environment. In simulation, this is shown to resolve high-complexity scenarios efficiently. Furthermore, our approach yields an increase in safety while remaining competitive in terms of goal efficiency when compared to other robot navigation planners in randomly generated multi-agent environments.
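    The quantitative evaluation mentioned above builds on the cost-based goal-inference view of legibility: a partial trajectory is legible to the extent that it makes one candidate goal much more probable than the others. A minimal sketch under assumed straight-line costs and a Boltzmann-rational observer model (the function names and cost model are illustrative, not the authors' implementation):

    ```python
    import numpy as np

    def goal_posterior(start, current, goals, beta=1.0):
        """P(G | trajectory so far) under a Boltzmann-rational cost model:
        a goal is likely if going start -> current -> goal is nearly as
        cheap as going start -> goal directly."""
        start, current = np.asarray(start, float), np.asarray(current, float)
        scores = []
        for g in goals:
            g = np.asarray(g, float)
            cost_so_far = np.linalg.norm(current - start)
            cost_to_go = np.linalg.norm(g - current)
            cost_direct = np.linalg.norm(g - start)
            # Ratio of "cost if heading to g via current" vs "optimal cost to g".
            scores.append(np.exp(-beta * (cost_so_far + cost_to_go))
                          / np.exp(-beta * cost_direct))
        scores = np.array(scores)
        return scores / scores.sum()

    # An exaggerated move toward the left goal makes the left goal more probable.
    post = goal_posterior(start=(0, 0), current=(-1.0, 1.0),
                          goals=[(-2, 4), (2, 4)])
    ```

    Replacing the static goal points with time-varying goal regions (e.g., a passing side that moves with the other agent) is what the generalization in the abstract refers to.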

    Foundations of Human-Aware Planning -- A Tale of Three Models

    A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e., the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model in the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter is in addition to traditional notions of human-aware planning, which typically use the human task model alone, and thus enables a new suite of capabilities of a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop.
    Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201

    Attributed Intelligence

    Human beings quickly and confidently attribute more or less intelligence to one another. What is meant by intelligence when they do so? And what are the surface features of human behaviour that determine their judgements? Because the judges of success or failure in the quest for `artificial intelligence' will be human, the answers to such questions are an essential part of cognitive science. This thesis studies such questions in the context of a maze world, complex enough to require non-trivial answers, and simple enough to analyse the answers in terms of decision-making algorithms. According to Theory-theory, humans comprehend the actions of themselves and of others in terms of beliefs, desires and goals, following rational principles of utility. If so, attributing intelligence may result from an evaluation of the agent's efficiency -- how closely its behaviour approximates the expected rational course of action. Alternatively, attributed intelligence could result from observing outcomes: billionaires and presidents are, by definition, intelligent. I applied Bayesian models of planning under uncertainty to data from five behavioural experiments. The results show that while most humans attribute intelligence to efficiency, a minority attributes intelligence to outcome. Understanding of differences in attributed intelligence comes from a study of how people plan. Most participants can optimally plan 1-5 decisions in advance. Individually they vary in sensitivity to decision value and in planning depth. Comparing planning performance and attributed intelligence shows that observers' ability to attribute intelligence depends on their ability to plan. People attribute intelligence to efficiency in proportion to their planning ability. The less skilled planners are more likely to attribute intelligence to outcome.
    Moreover, model-based metrics of planning performance correlate with independent measures of cognitive performance, such as the Cognitive Reflection Test and pupil size. Eyetracking analysis of spatial planning in real-time shows that participants who score highly on independent measures of cognitive ability also plan further ahead. Taken together, these results converge on a theory of attributed intelligence as an evaluation of how efficiently an agent plans, an evaluation that depends on the observer's cognitive abilities to carry it out.
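    The efficiency-based attribution described above can be made concrete in a maze world: compare the length of the observed path against the optimal length, found here with breadth-first search. This is a hypothetical simplification -- the thesis applies Bayesian models of planning under uncertainty, not this bare ratio:

    ```python
    from collections import deque

    def shortest_steps(maze, start, goal):
        """BFS shortest-path length in a grid maze (0 = free, 1 = wall)."""
        rows, cols = len(maze), len(maze[0])
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            (r, c), dist = queue.popleft()
            if (r, c) == goal:
                return dist
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and maze[nr][nc] == 0 and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append(((nr, nc), dist + 1))
        return None  # goal unreachable

    def efficiency(observed_steps, maze, start, goal):
        """Efficiency score in (0, 1]: 1.0 means a perfectly rational path."""
        return shortest_steps(maze, start, goal) / observed_steps

    maze = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    # Optimal route from (0, 0) to (2, 2) takes 4 steps; an agent
    # observed taking 6 steps scores 4/6.
    score = efficiency(6, maze, (0, 0), (2, 2))
    ```

    An "efficiency" observer would attribute intelligence in proportion to this score, while an "outcome" observer would only check whether the goal was reached at all.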

    Dynamic Coverage Control and Estimation in Collaborative Networks of Human-Aerial/Space Co-Robots

    In this dissertation, the author presents a set of control, estimation, and decision making strategies to enable small unmanned aircraft systems and free-flying space robots to act as intelligent mobile wireless sensor networks. These agents are primarily tasked with gathering information from their environments in order to increase the situational awareness of both the network as well as human collaborators. This information is gathered through an abstract sensing model, a forward-facing anisotropic spherical sector, which can be generalized to various sensing models through adjustment of its tuning parameters. First, a hybrid control strategy is derived whereby a team of unmanned aerial vehicles can dynamically cover (i.e., sweep their sensing footprints through all points of a domain over time) a designated airspace. These vehicles are assumed to have finite power resources; therefore, an agent deployment and scheduling protocol is proposed that allows agents to return periodically to a charging station while covering the environment. Rules are also prescribed with respect to energy-aware domain partitioning and agent waypoint selection so as to distribute the coverage load across the network, with increased priority on those agents whose remaining power supply is larger. This work is extended to consider the coverage of 2D manifolds embedded in 3D space that are subject to collision by stochastic intruders. Formal guarantees are provided with respect to collision avoidance, timely convergence upon charging stations, and timely interception of intruders by friendly agents. This chapter concludes with a case study in which a human acts as a dynamic coverage supervisor, i.e., uses hand gestures to direct the selection of regions which ought to be surveyed by the robot. Second, the concept of situational awareness is extended to networks consisting of humans working in close proximity with aerial or space robots.
    In this work, the robot acts as an assistant to a human attempting to complete a set of interdependent and spatially separated multitasking objectives. The human wears an augmented reality display, and the robot must learn the human's task locations online and broadcast camera views of these tasks to the human. The locations of tasks are learned using a parallel implementation of expectation maximization of Gaussian mixture models. The selection of tasks from this learned set is executed by a Markov decision process, which is trained by the human using Q-learning. This method for robot task selection is compared against a supervised method in IRB-approved (HUM00145810) experimental trials with 24 human subjects. The dissertation concludes by discussing an additional case study, by the author, in Bayesian inferred path planning. In addition, open problems in dynamic coverage and human-robot interaction are discussed so as to present an avenue forward for future work.
    PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/155147/1/wbentz_1.pd
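    The Q-learning-based task selection can be sketched as a standard tabular update in which the human's feedback supplies the reward signal. The dictionary-based table, the task names, and the toy reward schedule below are assumptions for illustration, not the dissertation's implementation:

    ```python
    def q_update(Q, state, action, reward, next_state, actions,
                 alpha=0.5, gamma=0.9):
        """One tabular Q-learning update: nudge Q(s, a) toward
        reward + gamma * max_a' Q(s', a')."""
        best_next = max(Q.get((next_state, a), 0.0) for a in actions)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    # Toy episode: the human consistently rewards broadcasting a view of the
    # "solder" task, so its Q-value rises above the unrewarded alternative.
    Q, tasks = {}, ["solder", "inspect"]
    for _ in range(20):
        q_update(Q, "idle", "solder", reward=1.0, next_state="idle", actions=tasks)
        q_update(Q, "idle", "inspect", reward=0.0, next_state="idle", actions=tasks)
    ```

    At run time the robot would pick the task whose Q-value is highest in the current state, while the EM-fitted Gaussian mixture supplies where in the workspace that task's camera view should point.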