
    Control of free-flying space robot manipulator systems

    New control techniques for self-contained, autonomous free-flying space robots were developed and tested experimentally. Free-flying robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require human extravehicular activity (EVA). A set of research projects was developed and carried out using laboratory models of satellite robots and a flexible manipulator. The second-generation space robot models use air cushion vehicle (ACV) technology to simulate, in two dimensions, the drag-free, zero-g conditions of space. The current work is divided into five major projects: Global Navigation and Control of a Free-Floating Robot, Cooperative Manipulation from a Free-Flying Robot, Multiple Robot Cooperation, Thrusterless Robotic Locomotion, and Dynamic Payload Manipulation. These projects are examined in detail.

    Perception Framework for Activities of Daily Living Manipulation Tasks

    There is increasing concern about the problems faced by the elderly community and physically locked-in people, who experience difficulty with self-care while wishing to lead an independent life. The need for service robots that can help people with mobility impairments is therefore essential. Developing a control framework for shared human-robot autonomy will allow locked-in individuals to perform the Activities of Daily Living (ADL) in a flexible way. The relevant ADL scenarios were identified as handling objects, self-feeding, and opening doors for indoor navigation assistance. Multiple experiments were conducted, demonstrating that the robot executes these daily living tasks reliably without requiring adjustments to the environment. Indoor manipulation tasks pose the challenge of dealing with a wide range of unknown objects. This thesis presents a framework for grasping that does not require a priori knowledge of the objects being manipulated. A successful manipulation task requires combining environment modeling, object detection with pose estimation, grasp planning, and motion planning, followed by efficient grasp execution; the framework is validated on a 6+2 degree-of-freedom robotic manipulator.
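
    The pipeline described in this abstract can be illustrated schematically. The following is a minimal Python sketch of the stages (object detection with pose estimation, grasp planning, and selection of a grasp to execute); all function names, the centroid-based pose estimate, and the top-down grasp heuristic are hypothetical placeholders, not the thesis's implementation.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class GraspCandidate:
        position: Tuple[float, float, float]   # grasp point in the robot base frame
        approach: Tuple[float, float, float]   # approach direction (unit vector)
        score: float                           # heuristic quality score

    def detect_object(point_cloud: List[Tuple[float, float, float]]):
        """Hypothetical stand-in for object detection and pose estimation:
        it simply returns the centroid of the sensed points."""
        n = len(point_cloud)
        return (sum(p[0] for p in point_cloud) / n,
                sum(p[1] for p in point_cloud) / n,
                sum(p[2] for p in point_cloud) / n)

    def plan_grasps(object_pose) -> List[GraspCandidate]:
        """Generate a few top-down grasp candidates around the estimated pose."""
        x, y, z = object_pose
        return [GraspCandidate((x + dx, y, z), (0.0, 0.0, -1.0), score=1.0 - abs(dx))
                for dx in (-0.02, 0.0, 0.02)]

    def execute_best_grasp(candidates: List[GraspCandidate]) -> GraspCandidate:
        """Pick the highest-scoring candidate; a real system would invoke a
        motion planner and the gripper driver here."""
        return max(candidates, key=lambda g: g.score)

    if __name__ == "__main__":
        cloud = [(0.50, 0.10, 0.05), (0.52, 0.11, 0.06), (0.51, 0.09, 0.04)]
        grasp = execute_best_grasp(plan_grasps(detect_object(cloud)))
        print("executing grasp at", grasp.position)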

    Reuleaux: Robot Base Placement by Reachability Analysis

    Before beginning any robot task, users must position the robot's base, a step that currently depends entirely on user intuition. While slight misplacement is tolerable for robots with movable bases, it must be corrected for fixed-base robots if essential parts of the task are out of reach. For mobile manipulation robots, a specific base position must be chosen before manipulation begins. This paper presents Reuleaux, an open-source library for robot reachability analysis and base placement. It reduces the amount of extra repositioning and removes the manual work of identifying potential base locations. Based on the reachability map, base placement locations for a whole robot or for the arm alone can be determined efficiently. This applies both to statically mounted robots, where the relative position of robot and workpiece should maximize the amount of work performed, and to mobile robots, where the reachable workspace should be maximized. Solutions are not limited to vertically constrained placements, since complicated robotic tasks require the base to be placed at unique poses dictated by the task. All Reuleaux library methods were tested on robots of different specifications and evaluated on tasks in simulation and in real-world environments. Evaluation results indicate that Reuleaux significantly outperforms prior methods in time efficiency and range of applicability.
    Comment: Submitted to International Conference of Robotic Computing 201
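
    The core idea of scoring candidate base locations by reachability can be sketched in a few lines. The toy Python example below uses a planar 2-link arm and a coarse grid of candidate bases; the link lengths, the annulus reachability test, and the coverage score are illustrative assumptions and do not reflect the Reuleaux library's actual API or its 6-D reachability maps.

    import numpy as np

    LINK1, LINK2 = 0.4, 0.3    # assumed link lengths of a planar 2-link arm (m)

    def reachable(base, target):
        """A point is reachable iff its distance from the base lies inside the
        annulus [|LINK1 - LINK2|, LINK1 + LINK2] swept by the 2-link arm."""
        d = np.linalg.norm(np.asarray(target) - np.asarray(base))
        return abs(LINK1 - LINK2) <= d <= LINK1 + LINK2

    def coverage(base, task_points):
        """Score a candidate base by how many task points it can reach."""
        return sum(reachable(base, p) for p in task_points)

    def best_base(candidates, task_points):
        """Return the candidate base location with the highest coverage."""
        return max(candidates, key=lambda b: coverage(b, task_points))

    if __name__ == "__main__":
        task = [(0.5, 0.2), (0.6, -0.1), (0.3, 0.4)]
        grid = [(x, y) for x in np.linspace(-0.5, 0.5, 11)
                       for y in np.linspace(-0.5, 0.5, 11)]
        base = best_base(grid, task)
        print("best base:", base, "covers", coverage(base, task), "task points")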

    Autonomous task-based grasping for mobile manipulators

    A fully integrated grasping system for a mobile manipulator to grasp an unknown object of interest (OI) in an unknown environment is presented. The system autonomously scans its environment, models the OI, and plans and executes a grasp, while accounting for base pose uncertainty and obstacles on its way to the object. Due to inherent line-of-sight limitations in sensing, a single scan of the OI often does not reveal enough information to complete grasp analysis; as a result, our system autonomously builds a model of an object via multiple scans from different locations until a grasp can be performed. A volumetric next-best-view (NBV) algorithm is used to model an arbitrary object and terminates modelling when grasp poses are discovered on a partially observed object. Two key sets of experiments are presented: i) modelling and registration error in the OI point cloud model is reduced by selecting viewpoints with more scan overlap, and ii) model construction and grasps are successfully achieved under base pose uncertainty. A generalized algorithm is presented to discover grasp pose solutions for multiple grasp types for a multi-fingered mechanical gripper using sensed point clouds. The algorithm introduces two key ideas: 1) a histogram of finger contact normals is used to represent a grasp “shape” and guide a gripper orientation search in a histogram of object surface normals, and 2) voxel grid representations of the gripper and object(s) are cross-correlated to match finger contact points, i.e. grasp “size”, to discover a grasp pose. Constraints, such as collisions with neighbouring objects, are incorporated in the cross-correlation computation. Simulations and preliminary experiments show that 1) grasp poses for three grasp types are found in near real time, 2) grasp pose solutions are consistent with respect to voxel resolution changes for both partial and complete point cloud scans, 3) a planned grasp pose is executed with a mechanical gripper, and 4) grasp overlap is presented as a feature to identify regions on a partial object model ideal for object transfer or securing an object.
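
    The voxel cross-correlation idea from the second part of the abstract can be shown with a toy 2-D example: a binary finger-contact template is slid over a binary object occupancy grid, and the offset with the highest overlap is taken as the best contact match. The grid contents, template shape, and scoring below are invented for illustration; the thesis's 3-D formulation, grasp-type histograms, and neighbouring-object collision constraints are not reproduced here.

    import numpy as np

    def correlate_2d(obj, template):
        """Slide the finger-contact template over the occupancy grid and
        return the offset (row, col) with the largest overlap, plus its score."""
        H, W = obj.shape
        h, w = template.shape
        best_score, best_offset = -1, (0, 0)
        for r in range(H - h + 1):
            for c in range(W - w + 1):
                score = int(np.sum(obj[r:r + h, c:c + w] * template))
                if score > best_score:
                    best_score, best_offset = score, (r, c)
        return best_offset, best_score

    if __name__ == "__main__":
        obj = np.zeros((8, 8), dtype=int)
        obj[3:6, 2:7] = 1             # a small box-shaped object, 3 x 5 cells
        gripper = np.zeros((3, 5), dtype=int)
        gripper[:, 0] = 1             # left finger contact column
        gripper[:, 4] = 1             # right finger contact column (5-cell jaw opening)
        offset, score = correlate_2d(obj, gripper)
        print("best finger-contact offset:", offset, "overlap score:", score)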

    Motion planning and assembly for microassembly workstation

    In general, mechatronic systems have no standard operating system that can be used for planning and control while these complex devices are running. The goal of this paper is to formulate a work platform that can serve as a method for obtaining precision in the manipulation of micro-entities using micro-scale manipulation tools for microsystem applications. The paper provides the groundwork for motion planning and assembly with the Micro-Assembly Workstation (MAW) manipulation system. To demonstrate the feasibility of the idea, the paper implements several motion planning algorithms and investigates the performance of the conventional Euclidean distance algorithm (EDA), the artificial potential field algorithm, and the A* algorithm when implemented in a virtual space.
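
    Of the algorithms compared in the paper, A* is the most standard; the short Python sketch below shows a 4-connected grid version with a Manhattan heuristic. The occupancy grid and heuristic choice are illustrative assumptions, not the paper's MAW virtual-space setup.

    import heapq

    def a_star(grid, start, goal):
        """Shortest 4-connected path on a 0/1 occupancy grid (1 = obstacle)."""
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
        frontier = [(h(start), 0, start, [start])]               # (f, g, node, path)
        visited = set()
        while frontier:
            _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            r, c = node
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                              (nr, nc), path + [(nr, nc)]))
        return None  # no collision-free path exists

    if __name__ == "__main__":
        grid = [[0, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 0, 0]]
        print(a_star(grid, (0, 0), (2, 0)))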

    NASREN: Standard reference model for telerobot control

    A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
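
    The three-hierarchy structure described above (task decomposition, world model, sensory processing) can be sketched as a simple control skeleton. Class and method names in the Python sketch below are illustrative only; the NASREN reference model defines many more levels and interfaces than shown here.

    class WorldModel:
        """Shared best estimate of the state of the physical world."""
        def __init__(self):
            self.state = {}

        def update(self, observations):
            self.state.update(observations)      # keep model in registration with the world

    class SensoryProcessing:
        """Turns raw sensor data into world-model updates."""
        def process(self, raw):
            return {"object_pose": raw}          # placeholder feature extraction

    class TaskDecomposition:
        """Splits a goal into simpler commands for the next lower level."""
        def decompose(self, goal, world):
            pose = world.state.get("object_pose")
            return [("move_to", pose), ("grasp", goal)]

    if __name__ == "__main__":
        world, senses, tasks = WorldModel(), SensoryProcessing(), TaskDecomposition()
        world.update(senses.process(raw=(0.4, 0.1, 0.2)))        # sensing -> world model
        for command in tasks.decompose("inspect_panel", world):  # goal -> subcommands
            print("issue to lower level:", command)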