244 research outputs found

    Affordance-based control of a variable-autonomy telerobot

    Get PDF
    Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. "September 2012." Includes bibliographical references (pages 37-38).
    Most robot platforms operate in one of two modes: full autonomy, usually in the lab; or low-level teleoperation, usually in the field. Full autonomy is currently realizable only in narrow domains of robotics, such as mapping an environment. Tedious low-level teleoperation (joystick control) is typical in military applications, such as complex manipulation and navigation with bomb-disposal robots. This thesis describes a robot "surrogate" with an intermediate and variable level of autonomy. The robot surrogate accomplishes manipulation tasks by taking guidance and planning suggestions from a human "supervisor." The surrogate does not engage in high-level reasoning, but only in intermediate-level planning and low-level control. The human supervisor supplies the high-level reasoning and some intermediate control, leaving execution details to the surrogate. The supervisor supplies world knowledge and planning suggestions by "drawing" on a 3D view of the world constructed from sensor data. The surrogate conveys its own model of the world to the supervisor, to enable mental-model sharing between supervisor and surrogate. The contributions of this thesis are: (1) a novel partitioning of the manipulation task load between supervisor and surrogate, which side-steps hard problems in autonomous robotics by replacing them with problems in interfaces, perception, planning, control, and human-robot trust; and (2) the algorithms and software designed and built for mental-model sharing and supervisor-assisted manipulation. Using this system, we are able to command the PR2 to manipulate simple objects incorporating either a single revolute or prismatic joint.
    by Michael Fleder. M. Eng.
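The distinction between a single revolute and a single prismatic joint, as in the manipulation tasks above, can be illustrated with a small model-selection sketch: fit both a line (prismatic motion) and a circle (revolute motion) to a traced handle trajectory and keep the better model. This is a hypothetical illustration under assumed 2D inputs, not code from the thesis; the function name `classify_joint` and the fitting choices are mine.

```python
import numpy as np

def classify_joint(points):
    """Classify a traced handle trajectory as 'revolute' or 'prismatic'.

    points: (N, 2) array of positions sampled while the object moves.
    Compares the residual of a straight-line fit (prismatic motion)
    against a circle fit (revolute motion) and picks the better model.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)

    # Line fit: RMS spread orthogonal to the principal axis of the points.
    svals = np.linalg.svd(centered, full_matrices=False, compute_uv=False)
    line_residual = svals[-1] / np.sqrt(len(pts))

    # Circle fit (Kasa method): solve x^2 + y^2 = a*x + b*y + c by least squares.
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = np.array([a / 2.0, b / 2.0])
    radius = np.sqrt(max(c + center @ center, 0.0))
    dists = np.linalg.norm(pts - center, axis=1)
    circle_residual = np.sqrt(np.mean((dists - radius) ** 2))

    # Ties go to the simpler (prismatic) model.
    return "prismatic" if line_residual <= circle_residual else "revolute"
```

A drawer handle traces a near-straight path and classifies as prismatic; a door handle sweeps an arc and classifies as revolute.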

    Approaching the Symbol Grounding Problem with Probabilistic Graphical Models

    Get PDF
    In order for robots to engage in dialog with human teammates, they must have the ability to map between words in the language and aspects of the external world. A solution to this symbol grounding problem (Harnad, 1990) would enable a robot to interpret commands such as "Drive over to receiving and pick up the tire pallet." In this article we describe several of our results that use probabilistic inference to address the symbol grounding problem. Our specific approach is to develop models that factor according to the linguistic structure of a command. We first describe an early result, a generative model that factors according to the sequential structure of language, and then discuss our new framework, generalized grounding graphs (G3). The G3 framework dynamically instantiates a probabilistic graphical model for a natural language input, enabling a mapping between words in language and concrete objects, places, paths, and events in the external world. We report on corpus-based experiments in which the robot is able to learn and use word meanings in three real-world tasks: indoor navigation, spatial language video retrieval, and mobile manipulation.
    U.S. Army Research Laboratory Collaborative Technology Alliance Program (Cooperative Agreement W911NF-10-2-0016); United States Office of Naval Research (MURI N00014-07-1-0749)
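The core idea of factoring a grounding model according to linguistic structure can be sketched in miniature: score each (phrase, candidate grounding) pair with its own factor and take the jointly best assignment. This is a toy illustration only; in the real G3 framework the factors are learned log-linear models over a parsed command, whereas the world model `WORLD` and the factor table `PHI` below are invented for the example.

```python
from itertools import product

# Toy world model: candidate groundings for each phrase type (invented).
WORLD = {
    "object": ["tire_pallet", "box_pallet"],
    "place": ["receiving", "storage"],
}

# Hypothetical per-factor scores phi(phrase, grounding); learned in practice.
PHI = {
    ("the tire pallet", "tire_pallet"): 0.9,
    ("the tire pallet", "box_pallet"): 0.1,
    ("receiving", "receiving"): 0.8,
    ("receiving", "storage"): 0.2,
}

def ground(parsed_command):
    """parsed_command: list of (phrase, phrase_type) pairs from a parse.

    Returns the jointly most probable grounding assignment, where the
    joint score factors as a product of one factor per phrase.
    """
    phrases = [phrase for phrase, _ in parsed_command]
    candidates = [WORLD[ptype] for _, ptype in parsed_command]
    best, best_score = None, -1.0
    for assignment in product(*candidates):
        score = 1.0
        for phrase, grounding in zip(phrases, assignment):
            score *= PHI.get((phrase, grounding), 0.01)  # small default factor
        if score > best_score:
            best, best_score = dict(zip(phrases, assignment)), score
    return best
```

For the command "Drive over to receiving and pick up the tire pallet," the two phrases ground independently through their factors, yet the result is a single joint assignment over the world.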

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    Get PDF
    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. This remains a significant challenge, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has garnered momentum in recent years: robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, because autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning toward recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates; and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction.
    Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating the high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and control the manipulator teleoperatively.
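Specifying a task as an unordered list of goal predicates, as in feature (2) above, can be sketched with plain sets: a goal holds when every predicate is true in the current world state, and the unmet predicates are exactly what a recovery step must re-establish. This is a minimal Blocks-World-style sketch under my own predicate encoding, not the interface's actual representation.

```python
# A predicate is a tuple like ("on", "red_block", "blue_block") or
# ("at", "green_block", "bin"). Order of predicates never matters.

def satisfied(goal, state):
    """True when every goal predicate holds in the current world state."""
    return set(goal) <= set(state)

def unmet(goal, state):
    """Predicates a recovery plan still has to re-establish."""
    return set(goal) - set(state)
```

After a failed grasp drops a block, `unmet` names the predicates to restore, so recovery assistance can target only the broken part of the goal rather than replanning the whole task.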

    Drawing on the World: sketch in context

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 109-111).
    This thesis introduces the idea that combining sketch recognition with contextual data (information about what is being drawn on) can improve the recognition of meaning in sketches and enrich the user interaction experience. I created a language called StepStool that facilitates describing the relationship between digital ink and contextual data, and wrote the corresponding interpreter that enables my system to distinguish between gestural commands issued to an autonomous forklift. A user study compared the correctness of a sketch interface with and without context on the canvas. This thesis coins the phrase "Drawing on the World" to mean contextual sketch recognition, describes the implementation and methodology behind it, describes the forklift's interface, and discusses other possible uses for a contextual gesture recognizer. Sample code is provided that describes the specifics of the StepStool engine's implementation and the implementation of the forklift's interface.
    by Andrew Correa. S.M.
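The central idea that the same ink means different things depending on what lies under it can be shown with a tiny rule table keyed on (gesture shape, context object). These rules and names are hypothetical, written in the spirit of StepStool rather than taken from it; the actual StepStool language describes ink-context relationships far more expressively.

```python
# Hypothetical contextual gesture rules: the same stroke shape maps to a
# different forklift command depending on the object under the ink.
RULES = {
    ("circle", "pallet"): "pick_up_pallet",
    ("circle", "ground"): "go_to_location",
    ("x", "pallet"): "cancel_pallet_task",
    ("line", "ground"): "follow_path",
}

def interpret(stroke_shape, context_object):
    """Resolve a recognized stroke against the context it was drawn on."""
    return RULES.get((stroke_shape, context_object), "unrecognized")
```

Without the context lookup, a circle is just a circle; with it, circling a pallet and circling empty ground become two distinct commands, which is the gain the thesis's user study measures.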

    Transportation Management

    Get PDF

    The ILIAD Safety Stack: Human-Aware Infrastructure-Free Navigation of Industrial Mobile Robots

    Get PDF
    Safe yet efficient operation of professional service robots in human-robot shared environments, such as logistics or production, requires a flexible human-aware navigation stack. In this manuscript, we propose the ILIAD safety stack, comprising software and hardware designed to achieve safe and efficient motion specifically for industrial vehicles with nontrivial kinematics. The stack integrates five interconnected layers for autonomous motion planning and control to enable short- and long-term reasoning. The use-case scenario tested requires an autonomous industrial forklift to safely navigate among pick-and-place locations during normal daily activities involving human workers. Our real-world test bed is a three-day experiment in a food distribution warehouse. The evaluation is extended in simulation with an ablation study of the impact of the different layers, to show both the practical and the performance-related impact. The experimental results show a safer and more legible robot when humans are nearby, with a trade-off in task efficiency, and that not all layers have the same degree of impact on the system.
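The safety-versus-efficiency trade-off reported above often comes down to proximity-based speed modulation in a low-level safety layer. The sketch below is a generic, hypothetical rule of that kind (stop inside a hard radius, ramp linearly back to nominal speed), not the actual ILIAD layer or its parameters.

```python
def speed_limit(nominal, human_distance, stop_dist=1.0, slow_dist=4.0):
    """Scale the vehicle's nominal speed by proximity to the nearest human.

    Hypothetical safety-layer rule: full stop at or inside stop_dist,
    linear ramp back up to the nominal speed at slow_dist and beyond.
    Distances in meters, speeds in m/s.
    """
    if human_distance <= stop_dist:
        return 0.0
    if human_distance >= slow_dist:
        return nominal
    return nominal * (human_distance - stop_dist) / (slow_dist - stop_dist)
```

A rule like this makes the robot visibly slow down as a worker approaches, which improves legibility and safety at the cost of task time, exactly the trade-off the experiment observes.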