
    Sim2Real View Invariant Visual Servoing by Recurrent Control

    Humans are remarkably proficient at controlling their limbs and tools from a wide range of viewpoints and angles, even in the presence of optical distortions. In robotics, this ability is referred to as visual servoing: moving a tool or end-point to a desired location using primarily visual feedback. In this paper, we study how viewpoint-invariant visual servoing skills can be learned automatically in a robotic manipulation scenario. To this end, we train a deep recurrent controller that can automatically determine which actions move the end-point of a robotic arm to a desired object. The problem that must be solved by this controller is fundamentally ambiguous: under severe variation in viewpoint, it may be impossible to determine the actions in a single feedforward operation. Instead, our visual servoing system must use its memory of past movements to understand how the actions affect the robot motion from the current viewpoint, correcting mistakes and gradually moving closer to the target. This ability is in stark contrast to most visual servoing methods, which either assume known dynamics or require a calibration phase. We show how we can learn this recurrent controller using simulated data and a reinforcement learning objective. We then describe how the resulting model can be transferred to a real-world robot by disentangling perception from control and only adapting the visual layers. The adapted model can servo to previously unseen objects from novel viewpoints on a real-world Kuka IIWA robotic arm. For supplementary videos, see: https://fsadeghi.github.io/Sim2RealViewInvariantServo
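
    As an illustration of the controller the abstract describes, the sketch below pairs a small convolutional encoder with an LSTM so that a history of frames and past actions, rather than a single feedforward pass, determines the next end-effector command. This is a hedged PyTorch sketch, not the authors' released code: the layer sizes, the 64x64 input resolution, and the 3-D Cartesian action space are all illustrative assumptions.

        # A minimal recurrent visual-servoing policy (illustrative assumptions
        # throughout; not the paper's implementation).
        import torch
        import torch.nn as nn

        class RecurrentServoPolicy(nn.Module):
            def __init__(self, action_dim=3, hidden_dim=128):
                super().__init__()
                # Perception: small conv encoder over 64x64 RGB frames.
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.Flatten(),
                    nn.LazyLinear(128), nn.ReLU(),
                )
                # Memory: LSTM over [image features, previous action], so past
                # movements inform how commands map to motion from this viewpoint.
                self.rnn = nn.LSTM(128 + action_dim, hidden_dim, batch_first=True)
                self.head = nn.Linear(hidden_dim, action_dim)  # end-effector command

            def forward(self, frames, prev_actions, state=None):
                # frames: (B, T, 3, 64, 64); prev_actions: (B, T, action_dim)
                b, t = frames.shape[:2]
                feats = self.encoder(frames.reshape(b * t, *frames.shape[2:]))
                out, state = self.rnn(
                    torch.cat([feats.reshape(b, t, -1), prev_actions], dim=-1), state)
                return self.head(out), state

        # One closed-loop step: feed the newest frame and the last action,
        # carrying the LSTM state so earlier movements shape the next command.
        policy = RecurrentServoPolicy()
        action, state = policy(torch.zeros(1, 1, 3, 64, 64), torch.zeros(1, 1, 3))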

    DESIGN, DEVELOPMENT, AND EVALUATION OF AN MRI-GUIDED NEUROSURGICAL INTRACRANIAL ROBOT

    Brain tumors are among the most feared complications of cancer. Their treatment is challenging because of the lack of a good imaging modality and the inability to remove the complete tumor. To overcome this limitation, we propose to develop a Magnetic Resonance Imaging (MRI)-compatible neurosurgical robot. The robot can be operated under continuous MRI, and the Magnetic Resonance (MR) images can be used to supplement physicians' visual capabilities, resulting in precise tumor removal. We have developed two prototypes of the Minimally Invasive Neurosurgical Intracranial Robot (MINIR) using MRI-compatible materials and shape memory alloy (SMA) actuators. The major difference between the two robots is that one uses SMA wire actuators and the other uses SMA spring actuators combined with a tendon-sheath mechanism. Due to space limitations inside the robot body and the strong magnetic field in the MRI scanner, most sensors cannot be used inside the robot body. Hence, one possible approach is to rely on image feedback to control the motion of the robot. In this research, as a preliminary approach, we relied on image feedback from a camera to control the motion of the robot. Since the image tracking algorithm may fail in some situations, we also developed a temperature feedback control scheme that serves as a backup controller for the robot. Experimental results demonstrated that both image feedback and temperature feedback can be used reliably to control the joint motion of the robots. A series of MRI compatibility tests was performed to evaluate the MRI compatibility of the robots and to assess the degradation in image quality. The experimental results demonstrated that the robots are MRI compatible and create no significant distortion in the MR images during actuation. The accomplishments presented in this dissertation represent a significant step in using SMA actuators to drive MRI-compatible robots. It is anticipated that, in the future, continuous MR imaging can be used reliably to control the motion of the robot. We hope that the robot design and the SMA actuator control methods developed in this research can be utilized in practical applications.
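
    The temperature-feedback backup scheme lends itself to a short sketch. The loop below is our hedged illustration of the idea, not the dissertation's controller: because an SMA element's contraction tracks its temperature, a joint can be regulated indirectly by servoing the heating current toward a target temperature. The hardware stubs read_temperature and set_heating_current, and the PI gains, are hypothetical placeholders.

        # Hypothetical PI temperature loop for one SMA actuator (sketch only).
        import time

        def read_temperature():
            """Placeholder for a temperature sensor on the SMA element (deg C)."""
            return 40.0

        def set_heating_current(amps):
            """Placeholder for the amplifier command heating the SMA wire."""
            pass

        def servo_temperature(target_c, kp=0.08, ki=0.01, dt=0.05, steps=100):
            """Drive SMA temperature (and hence joint angle) toward target_c."""
            integral = 0.0
            for _ in range(steps):
                error = target_c - read_temperature()
                integral = max(-5.0, min(5.0, integral + error * dt))  # anti-windup
                # SMA can only be heated actively; cooling is passive, so the
                # commanded current is clamped at zero.
                set_heating_current(max(0.0, kp * error + ki * integral))
                time.sleep(dt)  # fixed loop period

        servo_temperature(target_c=70.0)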

    A mosaic of eyes

    Autonomous navigation is a traditional research topic in intelligent robotics and vehicles: a robot must perceive its environment through onboard sensors such as cameras or laser scanners so that it can drive to its goal. Most research to date has focused on developing a large, smart brain to give robots autonomous capability. There are three fundamental questions an autonomous mobile robot must answer: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires a massive spatial memory and considerable computational resources to accomplish perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for real-life applications such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly. They may encounter the following difficulties
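
    Of the three questions, "How do I get there?" is the most algorithmic. As a minimal sketch (our illustration, not this paper's method), the snippet below runs A* search over a 2-D occupancy grid; the toy grid stands in for a map that a real robot would build from its cameras or laser scanners.

        # A* path planning on a toy occupancy grid (illustrative sketch only).
        import heapq

        def astar(grid, start, goal):
            """grid: rows of 0 (free) / 1 (occupied); start, goal: (row, col)."""
            rows, cols = len(grid), len(grid[0])
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
            frontier = [(h(start), 0, start, [start])]  # (f, cost, node, path)
            seen = set()
            while frontier:
                _, cost, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path
                if node in seen:
                    continue
                seen.add(node)
                r, c = node
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                        heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                                  (nr, nc), path + [(nr, nc)]))
            return None  # no route to the goal

        grid = [[0, 0, 0],
                [1, 1, 0],
                [0, 0, 0]]
        print(astar(grid, (0, 0), (2, 0)))  # detours around the occupied row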

    "Sticky Hands": learning and generalization for cooperative physical interactions with a humanoid robot

    "Sticky Hands" is a physical game for two people involving gentle contact with the hands. The aim is to develop relaxed and elegant motion together, achieve physical sensitivity-improving reactions, and experience an interaction at an intimate yet comfortable level for spiritual development and physical relaxation. We developed a control system for a humanoid robot allowing it to play Sticky Hands with a human partner. We present a real implementation including a physical system, robot control, and a motion learning algorithm based on a generalizable intelligent system capable itself of generalizing observed trajectories' translation, orientation, scale and velocity to new data, operating with scalable speed and storage efficiency bounds, and coping with contact trajectories that evolve over time. Our robot control is capable of physical cooperation in a force domain, using minimal sensor input. We analyze robot-human interaction and relate characteristics of our motion learning algorithm with recorded motion profiles. We discuss our results in the context of realistic motion generation and present a theoretical discussion of stylistic and affective motion generation based on, and motivating cross-disciplinary research in computer graphics, human motion production and motion perception