2 research outputs found

    An Adjustable Autonomy Paradigm For Adapting To Expert-Novice Differences

    Multi-robot manipulation tasks are difficult for robots to complete fully autonomously because of the perceptual and cognitive demands of grasp planning, necessitating specialized user interfaces. Yet even for humans the task is complex enough that performance varies widely between a novice's and an expert's ability to teleoperate the robots in the tightly coupled fashion needed to manipulate objects without dropping them. The ultimate success of the task therefore rests on the human operator's skill in managing and coordinating the robot team. Although most systems concentrate on forging a unified connection between the robots and the operator, less attention has been paid to identifying and adapting to the operator's skill level. In this paper, we present a method for modeling the human operator and adjusting the robots' autonomy levels based on the operator's skill level. This added functionality serves as a crucial mechanism for making operators of any skill level a vital asset to the team, even when their teleoperation performance is uneven. © 2013 IEEE
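    The abstract does not spell out the operator model or the autonomy mapping. As a rough illustration only, the Python sketch below (every name here is invented for the example, not taken from the paper) estimates skill from recent grasp outcomes and maps it to a discrete autonomy level:

```python
# Hypothetical sketch of adjustable autonomy; names and thresholds
# are illustrative assumptions, not the paper's method.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class OperatorModel:
    # Rolling record of grasp outcomes: 1.0 = placed, 0.0 = dropped.
    outcomes: list = field(default_factory=list)
    window: int = 20

    def record(self, success: bool) -> None:
        self.outcomes.append(1.0 if success else 0.0)
        self.outcomes = self.outcomes[-self.window:]

    def skill(self) -> float:
        # No data yet: assume a novice so the robots start more autonomous.
        return mean(self.outcomes) if self.outcomes else 0.0

def autonomy_level(skill: float) -> str:
    # Skilled operators keep direct control; novices hand more of the
    # tightly coupled manipulation over to the robots.
    if skill >= 0.8:
        return "full_teleop"
    if skill >= 0.4:
        return "shared_control"
    return "full_autonomy"

model = OperatorModel()
for success in [False, True, True, False, True]:
    model.record(success)
print(autonomy_level(model.skill()))  # skill = 0.6 -> "shared_control"
```

    A rolling window keeps the estimate responsive, so a novice who improves mid-session is gradually handed back more direct control.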

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists at controlling and collaborating on manipulation task behaviors. This remains a significant challenge, however, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has garnered momentum in recent years: robots now plan in partially observable environments while maintaining geometric and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, since autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning toward recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques allowing non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six-degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries, (2) task specification as an unordered list of goal predicates, and (3) guided task recovery using implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show our interface was strongly preferred over the control condition, demonstrating the high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
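    As a rough illustration of feature (2), the Python sketch below (the predicate names, state format, and helper function are hypothetical, not the paper's interface) represents a Blocks World pick-and-place task as an unordered set of goal predicates and checks whether a world state satisfies them:

```python
# Hypothetical sketch: a task is an unordered set of goal predicates
# over grounded 'Object' and 'Location' symbols; the planner is free
# to satisfy them in any order.
goal = {
    ("at", "red_block", "bin_A"),
    ("at", "blue_block", "bin_B"),
    ("clear", "table_center"),
}

def satisfied(state: set, goal: set) -> bool:
    # The task is complete once every goal predicate holds,
    # regardless of the order in which the robot achieved them.
    return goal <= state

# A world state observed after execution; extra facts are harmless.
state = {
    ("at", "red_block", "bin_A"),
    ("at", "blue_block", "bin_B"),
    ("clear", "table_center"),
    ("gripper_empty",),
}
print(satisfied(state, goal))  # -> True
```

    Leaving the goal list unordered is what lets a non-technical user state *what* should be true without scripting *how* or *when*, which is the burden WIMP-style tools tend to impose.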