
    Collaborative Motion Planning

    Planning motion is an essential component of any autonomous robotic system. An intelligent agent must be able to efficiently plan collision-free paths in order to move through its world. Despite its importance, this problem is PSPACE-hard, which means that even planning motions for simple robots is computationally difficult. State-of-the-art approaches trade completeness (always finding a solution if one exists, or reporting that none exists) for probabilistic completeness (finding a solution, with probability approaching one, if one exists, but unable to report when none exists) and improved efficiency. These methods use sampling-based techniques to design a sequence of motions for the robot. However, because these methods are random in nature, the probability of their success is directly related to the expansiveness, or openness, of the underlying planning space. In other words, narrow passages, complex systems, and various constraints make planning with these methods difficult. On the other hand, humans can often determine approximate solutions to these difficult problems quickly. In this research, we explore user-guided planning, in which a human operator works together with a sampling-based motion planner. With a human-in-the-loop, the operator can steer a sampling-based planner towards a solution. This strategy can benefit many applications, such as computer-aided design and virtual prototyping. We begin by classifying and creating simple models of common user-guided and heuristic-guided motion planning methods. Our models encompass three forms of user input: configuration-based, path-based, and region-based input. We compare and contrast these approaches and motivate our choice of a region-based collaborative framework. Through this analysis, we gain insight into user-guided planning and further motivate methods that have low interface complexity and work entirely in the workspace, which is most natural to a human operator. Further, we extend the theory of expansiveness to analyze the various types of user input. Our novel region-based collaboration framework takes advantage of human intuition by allowing a user to define regions in the workspace that bias and/or constrain the search space of a sampling-based motion planner. This approach allows a user to bias a high-dimensional search with low-dimensional input, supports intermittent user hints, and empowers a user to customize motion solutions. Finally, we extend region steering to both non-holonomic robotic systems and a human-inspired approach to motion planning. Our results show that this region-based framework can aid many variants of sampling-based planning, reduce computation time, support solution customization, and serve as the basis for advanced heuristic methods for solving motion planning problems. We provide experiments that exemplify our approach in planning motions for complex robotic applications such as mobile manipulators and car-like and free-flying robots.
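
    To make the core mechanic concrete, the following is a minimal sketch of region-biased sampling in an RRT-style planner, assuming a 2D point robot in an obstacle-free workspace; the Box helper, bias probability, and step size are illustrative choices, not the dissertation's implementation.

        import math
        import random

        class Box:
            # Axis-aligned region; also serves as the workspace bounding box.
            def __init__(self, xmin, xmax, ymin, ymax):
                self.xmin, self.xmax = xmin, xmax
                self.ymin, self.ymax = ymin, ymax

            def sample(self):
                return (random.uniform(self.xmin, self.xmax),
                        random.uniform(self.ymin, self.ymax))

        def biased_sample(regions, bounds, bias=0.5):
            # With probability `bias`, sample inside a user-drawn region;
            # otherwise fall back to uniform sampling over the workspace.
            if regions and random.random() < bias:
                return random.choice(regions).sample()
            return bounds.sample()

        def rrt(start, goal, regions, bounds, step=0.5, iters=2000, tol=0.5):
            # Bare-bones RRT with no obstacle checking: grow the tree
            # toward each biased sample until the goal is within `tol`.
            parent = {start: None}
            for _ in range(iters):
                q_rand = biased_sample(regions, bounds)
                q_near = min(parent, key=lambda q: math.dist(q, q_rand))
                d = math.dist(q_near, q_rand)
                if d < 1e-9:
                    continue
                q_new = tuple(a + step * (b - a) / d
                              for a, b in zip(q_near, q_rand))
                parent[q_new] = q_near
                if math.dist(q_new, goal) <= tol:
                    parent[goal] = q_new   # goal connected: success
                    return parent
            return parent                  # sampling budget exhausted

        bounds = Box(0.0, 10.0, 0.0, 10.0)
        hint = Box(4.5, 5.5, 4.0, 6.0)  # e.g., a narrow passage the user circles
        tree = rrt((1.0, 1.0), (9.0, 9.0), [hint], bounds)
        print(len(tree), "tree nodes")

    Raising the bias concentrates samples in the hinted region, which is how intermittent, low-dimensional user input can steer an otherwise uniform high-dimensional search.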

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists at controlling and collaborating on manipulation task behaviors. This remains a significant challenge, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has nonetheless garnered momentum in recent years: robots now plan in partially observable environments while maintaining geometric and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous planning agents that exploit this knowledge. However, since autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries of the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates; and (3) guided task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was much preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
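
    Feature (2) above, specifying a task as an unordered list of goal predicates, can be sketched briefly; the tuple encoding and helper names below are hypothetical stand-ins, not the dissertation's actual interface.

        def on(block, location):
            # Goal predicate: `block` should end up on `location`.
            return ("on", block, location)

        # An unordered pick-and-place goal: the order in which predicates
        # are achieved is left entirely to the planner.
        goal = {on("red_block", "tray"), on("blue_block", "red_block")}

        def satisfied(state, goal):
            # The task is done once every goal predicate holds in the
            # current world state, regardless of the order they were met.
            return goal <= state

        def next_subgoal(state, goal):
            # Hand the planner any unmet predicate; None means finished.
            return next(iter(goal - state), None)

        state = {on("red_block", "tray")}
        print(satisfied(state, goal))     # False: one predicate unmet
        print(next_subgoal(state, goal))  # ('on', 'blue_block', 'red_block')
        state.add(on("blue_block", "red_block"))
        print(satisfied(state, goal))     # True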