
    Reliable vision-guided grasping

    Automated assembly of truss structures in space requires vision-guided servoing to grasp a strut whose position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems; it differs from other eye-in-hand visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction, which exploits region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image, so at a higher level these assumptions are verified using slower, more reliable methods. The hierarchy provides robust error recovery: when a lower-level routine fails, the next-higher-level routine is invoked, and so on. A working system is described that visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the robot's end effector and requires only crude calibration parameters. The grasping procedure is fast and reliable, with multi-level error recovery.
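    The escalation logic of such a hierarchy is easy to picture in code. Below is a minimal sketch, assuming a two-level split into a fast region-of-interest search and a slower full-frame search; the function names and placeholder bodies are illustrative, not the paper's implementation.

```python
# Hypothetical sketch: fast, assumption-laden vision routines run
# first, and each failure hands control to a slower, more reliable
# level. The two-level split and all names are illustrative.

def locate_in_roi(image, predicted_roi):
    """Fast path: search a small region-of-interest window placed by
    feature motion prediction. Returns feature coordinates, or None
    when its assumptions about the image do not hold."""
    return None  # placeholder body

def locate_full_frame(image):
    """Slow path: search the whole image plane under fewer assumptions."""
    return None  # placeholder body

def find_strut_features(image, predicted_roi):
    levels = [
        lambda: locate_in_roi(image, predicted_roi),  # fastest first
        lambda: locate_full_frame(image),             # reliable fallback
    ]
    for level in levels:
        features = level()
        if features is not None:
            return features  # success at this level; no escalation needed
    raise RuntimeError("all vision levels failed; abort the grasp")
```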

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Automating endoscopic camera motion for teleoperated minimally invasive surgery using inverse reinforcement learning

    During laparoscopic surgery, an endoscopic camera provides visual feedback of the procedure to the surgeon and is controlled by a skilled assisting surgeon or a nurse. In robot-assisted teleoperated systems such as the da Vinci surgical system, however, that control lies with the operating surgeon. This adds the task of constantly changing the endoscope viewpoint, which can be disruptive and increases the surgeon's cognitive load. The work presented in this thesis aims to provide intelligent camera control for such systems using machine learning algorithms. A pick-and-place task was selected to demonstrate the approach. To add a layer of intelligence to the endoscope, the task was divided into subtasks representing the intent of the user. Neural networks with long short-term memory (LSTM) cells were trained to classify the motion of the instruments into these subtasks, and a policy was computed for each subtask using inverse reinforcement learning (IRL). Since current surgical robots do not allow the camera and instruments to move simultaneously, no expert data set was available for training the models. Hence, a user study was conducted in which participants picked up a ring and placed it on a peg in a 3-D immersive simulation environment built with the CHAI libraries. A virtual reality headset, the Oculus Rift, tracked the users' head movements to obtain their viewpoints while they performed the task. These viewpoints were treated as expert data and used to train the algorithm that automates the endoscope motion. Classification of the task into 4 subtasks reached 71.3% accuracy, and inverse reinforcement learning produced an automated endoscope trajectory that was 94.7% similar to the collected human trajectories, demonstrating that the approach presented in this thesis can automate endoscope motion in the manner of a skilled assisting surgeon.
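    As a rough illustration of the classification stage, the sketch below wires LSTM cells to a linear layer that scores 4 subtasks, in the spirit of the thesis but not its actual code. The 7-value-per-time-step instrument-motion feature vector and all layer sizes are assumptions made for the example.

```python
# Minimal LSTM subtask classifier over instrument-motion sequences.
# Feature dimensionality (7 per time step) and layer sizes are assumed.
import torch
import torch.nn as nn

class SubtaskClassifier(nn.Module):
    def __init__(self, n_features=7, hidden=64, n_subtasks=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_subtasks)

    def forward(self, x):            # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden)
        return self.head(h_n[-1])    # logits over the subtasks

# Example: classify one 50-step window of instrument motion.
model = SubtaskClassifier()
logits = model(torch.randn(1, 50, 7))
subtask = logits.argmax(dim=-1)      # predicted subtask index
```

    In a pipeline like the one described, the predicted subtask would select which IRL-derived policy drives the endoscope for the next window.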

    Intelligent strategies for mobile robotics in laboratory automation

    In this thesis a new intelligent framework is presented for mobile robots in laboratory automation. It includes: a new multi-floor indoor navigation method and an intelligent multi-floor path-planning approach; a new signal-filtering method that lets the robots forecast their indoor coordinates; a new human-feature-based strategy for smart robot-human collision avoidance; a new robot power-forecasting method for deciding distributed transportation tasks; and a new blind approach to arm manipulation for the robots.
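    Multi-floor path planning of the kind mentioned above is commonly cast as shortest-path search over a graph whose nodes are (floor, waypoint) pairs, with inter-floor edges modelling elevators. The sketch below shows that formulation with Dijkstra's algorithm; the graph, costs, and waypoint names are invented for illustration, and the thesis's actual planner may differ.

```python
# Dijkstra over a multi-floor graph: nodes are (floor, waypoint)
# pairs, and the (1, "lift") -> (3, "lift") edge models an elevator.
import heapq

def plan(graph, start, goal):
    """Shortest path over a dict: node -> list of (neighbour, cost)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return None  # goal unreachable

graph = {
    (1, "lab"):      [((1, "lift"), 12.0)],
    (1, "lift"):     [((3, "lift"), 30.0)],   # elevator ride
    (3, "lift"):     [((3, "analysis"), 8.0)],
    (3, "analysis"): [],
}
print(plan(graph, (1, "lab"), (3, "analysis")))  # cost 50.0 and the path
```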

    Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction

    We present a novel approach for mobile manipulator self-calibration using contact information. Our method, based on point cloud registration, is applied to estimate the extrinsic transform between a fixed vision sensor mounted on a mobile base and an end effector. Beyond sensor calibration, we demonstrate that the method can be extended to include manipulator kinematic model parameters, which involves a non-rigid registration process. Our procedure uses on-board sensing exclusively and does not rely on any external measurement devices, fiducial markers, or calibration rigs. Further, it is fully automatic in the general case. We experimentally validate the proposed method on a custom mobile manipulator platform and demonstrate centimetre-level post-calibration accuracy in positioning of the end effector using visual guidance only. We also discuss the stability properties of the registration algorithm, in order to determine the conditions under which calibration is possible.
    Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane, Australia, May 21-25, 2018
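    At the core of contact-based calibration like this is a registration step that aligns contact points predicted by the kinematic model with the same points observed by the vision sensor. The sketch below shows the standard SVD (Kabsch) solution for the rigid case with known correspondences; it is a simplified stand-in, not the paper's full pipeline, and the synthetic check data are invented.

```python
# Rigid registration with known correspondences: recover R, t such
# that Q ~ R @ P + t in the least-squares sense. P holds contact
# points from kinematics, Q the same points in the sensor frame.
import numpy as np

def rigid_registration(P, Q):
    """P, Q are (N, 3) arrays of corresponding points."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                # proper rotation (det = +1)
    t = q_bar - R @ p_bar
    return R, t

# Synthetic check: recover a known translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
t_true = np.array([0.10, -0.20, 0.05])
R, t = rigid_registration(P, P + t_true)
print(np.allclose(R, np.eye(3)), np.allclose(t, t_true))  # True True
```

    Extending this to kinematic model parameters, as the paper does, turns the problem into a non-rigid registration, since the point set from kinematics itself deforms as the parameters change.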