
    Force/torque and tactile sensors for sensor-based manipulator control

    The autonomy of manipulators, in space and in industrial environments, can be dramatically enhanced by the use of force/torque and tactile sensors. The development and future use of a six-component force/torque sensor for the Hermes Robot Arm (HERA) Basic End-Effector (BEE) are discussed. A multifunctional gripper system based on tactile sensors is then described; the basic transducing element of the sensor is a sheet of pressure-sensitive polymer. Tactile image processing algorithms for slip detection, object position estimation, and object recognition are described.
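
    A minimal sketch of the slip-detection idea, assuming the pressure-sensitive polymer sheet is sampled as a 2-D array of tactel readings; the function name and threshold are illustrative assumptions, not taken from the paper:

        import numpy as np

        def detect_slip(prev_frame, curr_frame, slip_threshold=0.5):
            """Flag slip when the tactile contact centroid moves between frames."""
            # Normalize each frame so the test is insensitive to overall grip force.
            p = prev_frame / (prev_frame.sum() + 1e-9)
            c = curr_frame / (curr_frame.sum() + 1e-9)
            rows, cols = np.indices(p.shape)
            centroid_prev = np.array([(rows * p).sum(), (cols * p).sum()])
            centroid_curr = np.array([(rows * c).sum(), (cols * c).sum()])
            # A centroid displacement (in tactel units) above threshold means slip.
            return np.linalg.norm(centroid_curr - centroid_prev) > slip_threshold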

    Bilevel shared control for teleoperators

    A shared control system for robots is described that integrates human and autonomous input modalities. Autonomously planned motion trajectories are modified by a teleoperator to track unmodelled target motions, while nominal teleoperator motions are modified through compliance so that geometric errors are accommodated autonomously. The system intelligently shares control over a remote robot between the autonomous and teleoperative portions of the overall control system. The architecture is hierarchical and consists of two levels: the top level is the task level, the bottom the execution level. Selection/mixing matrices are provided whose entries reflect how each input modality's signals are weighted. In space applications, the performance of pure teleoperation systems depends significantly on the communication time delays between the local and remote sites; the shared control minimizes the detrimental effects of these delays between Earth and space.
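
    A hedged sketch of the selection/mixing idea, assuming diagonal per-axis weights; the function name and example weights are illustrative, not the paper's values:

        import numpy as np

        def shared_command(u_auto, u_teleop, w_auto, w_teleop):
            """Blend 6-DOF autonomous and teleoperated commands per axis."""
            S_a = np.diag(w_auto)    # selection/mixing matrix, autonomous input
            S_t = np.diag(w_teleop)  # selection/mixing matrix, teleoperator input
            return S_a @ u_auto + S_t @ u_teleop

        # Example: autonomy commands translation, the operator keeps rotation.
        u = shared_command(np.ones(6), np.full(6, 0.2),
                           w_auto=np.array([1, 1, 1, 0, 0, 0]),
                           w_teleop=np.array([0, 0, 0, 1, 1, 1]))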

    A framework for compliant physical interaction: the grasp meets the task

    Although the grasp-task interplay in our daily life is unquestionable, very little research has addressed this problem in robotics. In order to fill the gap between the grasp and the task, we adopt the most successful approaches to grasp and task specification and extend them with additional elements that allow a grasp-task link to be defined. We propose a global sensor-based framework for the specification and robust control of physical interaction tasks, where the grasp and the task are jointly considered on the basis of the task frame formalism and the knowledge-based approach to grasping. A physical interaction task planner is also presented, based on the new concept of task-oriented hand pre-shapes. The planner focuses on the manipulation of articulated parts in home environments and is able to automatically specify all the elements of a physical interaction task required by the proposed framework. Finally, several applications are described, showing the versatility of the proposed approach and its suitability for the fast implementation of robust physical interaction tasks in very different robotic systems.
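
    A sketch of how a task might be specified under the task frame formalism, with each frame axis controlled either in velocity or in force; the structure and axis names are assumptions for illustration, not the paper's interface:

        from dataclasses import dataclass

        @dataclass
        class AxisSpec:
            mode: str        # "velocity" or "force"
            setpoint: float  # m/s or rad/s for velocity; N or Nm for force

        @dataclass
        class TaskFrameSpec:
            x: AxisSpec
            y: AxisSpec
            z: AxisSpec
            rx: AxisSpec
            ry: AxisSpec
            rz: AxisSpec

        # Example: sliding a drawer open along x while all other axes comply
        # with zero desired force, so geometric uncertainty is absorbed.
        drawer_task = TaskFrameSpec(
            x=AxisSpec("velocity", 0.05),
            y=AxisSpec("force", 0.0), z=AxisSpec("force", 0.0),
            rx=AxisSpec("force", 0.0), ry=AxisSpec("force", 0.0),
            rz=AxisSpec("force", 0.0))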

    Interactive Force Control Based on Multimodal Robot Skin for Physical Human-Robot Collaboration

    This work proposes and realizes a control architecture that can support the deployment of a large-scale robot skin in a Human-Robot Collaboration scenario. It is shown how whole-body tactile feedback can extend the capabilities of robots during dynamic interactions by providing information about multiple contacts across the robot's surface. Specifically, an uncalibrated skin system is used to implement stable force control while simultaneously handling the multi-contact interactions of a user. The system formulates control tasks for force control, tactile guidance, collision avoidance, and compliance, and fuses them with a multi-priority redundancy resolution strategy. The approach is evaluated on an omnidirectional mobile manipulator with dual arms covered with robot skin. Results are assessed under dynamic conditions, showing that multi-modal tactile information enables robust force control while remaining responsive to a user's interactions.
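
    A minimal sketch of a two-level multi-priority redundancy resolution, assuming the classical null-space projection scheme; the Jacobians and task velocities here are placeholders, not the paper's formulation:

        import numpy as np

        def prioritized_velocities(J1, dx1, J2, dx2):
            """Execute task 2 only in the null space of higher-priority task 1."""
            J1_pinv = np.linalg.pinv(J1)
            N1 = np.eye(J1.shape[1]) - J1_pinv @ J1  # null-space projector of task 1
            dq = J1_pinv @ dx1                       # e.g. the force-control task
            # The lower-priority task (e.g. tactile guidance) cannot disturb task 1,
            # because its correction is confined to the null space of J1.
            dq = dq + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)
            return dq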

    The UJI librarian robot

    This paper describes the UJI Librarian Robot, a mobile manipulator that is able to autonomously locate a book in an ordinary library and grasp it from a bookshelf using eye-in-hand stereo vision and force sensing. The robot is provided only with the book code, a library map, and some knowledge about the library's logical structure. It takes advantage of the spatio-temporal constraints and regularities of the environment by applying disparate techniques such as stereo vision, visual tracking, probabilistic matching, motion estimation, multisensor-based grasping, visual servoing, and hybrid control, so that it exhibits robust and dependable performance. The system has been tested, and experimental results show how it is able to robustly locate and grasp a book in a reasonable time without human intervention.
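
    A hedged sketch of the classical image-based visual servoing law commonly used in eye-in-hand setups such as this one, for example to center a book spine in the camera image; the interaction matrix L and the gain are assumptions, not taken from the paper:

        import numpy as np

        def ibvs_velocity(s, s_star, L, lam=0.5):
            """Camera velocity screw that exponentially reduces feature error."""
            # v = -lambda * L^+ * (s - s*), the standard IBVS control law.
            return -lam * np.linalg.pinv(L) @ (s - s_star)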

    Reaction Null Space of a multibody system with applications in robotics

    This paper provides an overview of implementation examples based on the Reaction Null Space formalism, developed initially to tackle the problem of satellite-base disturbance of a free-floating space robot when the robot arm is activated. The method has been applied over the years to other unfixed-base systems, e.g. flexible-base and macro/mini robot systems, as well as to the balance control problem of humanoid robots. The paper also includes the most recent results on complete dynamical decoupling of the end-link of a fixed-base robot, wherein the end-link is regarded as the unfixed base. This interpretation is shown to be useful with regard to motion/force control scenarios. The respective implementation results are provided.
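
    A minimal sketch of the Reaction Null Space idea: joint velocities lying in the null space of the base-manipulator coupling inertia matrix produce no base reaction. The matrix name H_bm follows common usage in the RNS literature; the projection helper is an illustrative assumption:

        import numpy as np
        from scipy.linalg import null_space

        def reactionless_projection(H_bm, dq_des):
            """Project a desired joint velocity onto the reaction null space."""
            N = null_space(H_bm)       # orthonormal basis of null(H_bm)
            return N @ (N.T @ dq_des)  # component of dq_des causing no reaction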