A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts
This paper presents a multi-robot system for manufacturing personalized
medical stent grafts. The proposed system adopts a modular design, which
includes: a (personalized) mandrel module, a bimanual sewing module, and a
vision module. The mandrel module incorporates the personalized geometry of
patients, while the bimanual sewing module adopts a learning-by-demonstration
approach to transfer human hand-sewing skills to the robots. The human
demonstrations were first observed by the vision module and then encoded
using a statistical model to generate the reference motion trajectories. During
autonomous robot sewing, the vision module coordinates the multi-robot
collaboration. Experimental results show that the robots can adapt to
generalized stent designs. The proposed system can also be used for other
manipulation tasks, especially for flexible production of customized products
and where bimanual or multi-robot cooperation is required.
Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial
Informatics. Keywords: modularity, medical device customization, multi-robot
system, robot learning, visual servoing, robot sewing
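The abstract does not name the statistical model used to encode the demonstrations; a minimal sketch of the idea, assuming a simple per-timestep Gaussian model over time-aligned demonstrations (the function name and the data are hypothetical, for illustration only), is:

```python
import math

def encode_demonstrations(demos):
    """Encode time-aligned demonstrations as a per-step mean and
    standard deviation; the mean serves as the reference trajectory.

    demos: list of demonstrations, each a list of floats (one DOF),
    all of equal length. Returns (means, stds)."""
    n = len(demos)
    length = len(demos[0])
    means, stds = [], []
    for t in range(length):
        samples = [d[t] for d in demos]
        mu = sum(samples) / n
        var = sum((s - mu) ** 2 for s in samples) / n
        means.append(mu)
        stds.append(math.sqrt(var))
    return means, stds

# Three hand-sewing demonstrations of the same stroke (synthetic data):
demos = [[0.0, 0.5, 1.0], [0.1, 0.6, 1.1], [-0.1, 0.4, 0.9]]
reference, spread = encode_demonstrations(demos)
```

The per-step spread can then be used to weight how strictly the robot tracks each part of the reference motion.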
Markerless visual servoing on unknown objects for humanoid robot platforms
To precisely reach for an object with a humanoid robot, it is of central
importance to have good knowledge of both the end-effector pose and the
object's pose and shape.
In this work we propose a framework for markerless visual servoing on unknown
objects, which is divided in four main parts: I) a least-squares minimization
problem is formulated to find the volume of the object graspable by the robot's
hand using its stereo vision; II) a recursive Bayesian filtering technique,
based on Sequential Monte Carlo (SMC) filtering, estimates the 6D pose
(position and orientation) of the robot's end-effector without the use of
markers; III) a nonlinear constrained optimization problem is formulated to
compute the desired graspable pose about the object; IV) an image-based visual
servo control commands the robot's end-effector toward the desired pose. We
demonstrate effectiveness and robustness of our approach with extensive
experiments on the iCub humanoid robot platform, achieving real-time
computation, smooth trajectories, and sub-pixel precision.
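Part II is a Sequential Monte Carlo (particle) filter. A minimal one-dimensional sketch of the predict-weight-resample cycle, with noise models and all parameter values chosen for illustration rather than taken from the paper (the 6D pose case is structurally analogous), is:

```python
import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.05, meas_noise=0.1):
    """One predict-update-resample cycle of an SMC (particle) filter.

    particles: list of scalar states (a 1D stand-in for the 6D pose);
    control: commanded displacement; measurement: observed state."""
    # Predict: propagate each particle through the motion model plus noise.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # Update: weight each particle by a Gaussian measurement likelihood.
    w = [math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2) for p in moved]
    total = sum(w) or 1.0
    w = [x / total for x in w]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=w, k=len(moved))

random.seed(0)
particles = [random.uniform(-1.0, 1.0) for _ in range(500)]
for _ in range(20):
    particles = particle_filter_step(particles, control=0.0, measurement=0.7)
estimate = sum(particles) / len(particles)  # posterior mean near 0.7
```

The posterior mean of the particle set is the pose estimate; in the markerless setting the measurement likelihood would come from comparing a rendered hand model against the stereo images.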
High-precision grasping and placing for mobile robots
This work presents a manipulation system for handling multiple labware items in life science laboratories using H20 mobile robots. The H20 robot is equipped with a Kinect V2 sensor to identify the required labware on the workbench and estimate its position. Local feature recognition based on the SURF algorithm is used; the recognition process is performed both for the labware to be grasped and for the workbench holder. Different grippers and labware containers are designed to manipulate labware of different weights and to ensure safe transportation.
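The abstract does not detail the matching stage that follows SURF feature extraction. A hedged sketch of the usual approach, descriptor matching by nearest neighbour with Lowe's ratio test, on hypothetical toy descriptors, is:

```python
import math

def match_descriptors(query, train, ratio=0.75):
    """Match feature descriptors by nearest neighbour with Lowe's
    ratio test, as typically done after SURF/SIFT extraction.

    query, train: lists of descriptor vectors (lists of floats).
    Returns (query_index, train_index) pairs of accepted matches."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    matches = []
    for qi, q in enumerate(query):
        ranked = sorted(range(len(train)), key=lambda ti: dist(q, train[ti]))
        best, second = ranked[0], ranked[1]
        # Accept only if the best match is clearly better than the runner-up.
        if dist(q, train[best]) < ratio * dist(q, train[second]):
            matches.append((qi, best))
    return matches

# Toy 2D descriptors: each query vector matches one train vector unambiguously.
query = [[1.0, 0.0], [0.5, 0.5]]
train = [[0.0, 1.0], [1.0, 0.05], [0.45, 0.55]]
matches = match_descriptors(query, train)
```

In practice the descriptors would be the 64- or 128-dimensional SURF vectors extracted from the Kinect image, and the accepted matches would feed the pose estimate of the labware.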
Improving Rigid 3-D Calibration for Robotic Surgery
Autonomy is the next frontier of research in robotic surgery, and its aim is to improve the quality of surgical procedures in the near future. One fundamental requirement for autonomy is advanced perception capability through vision sensors. In this article, we propose a novel calibration technique for a surgical scenario with a da Vinci Research Kit (dVRK) robot. Calibration of the camera and the robotic arms is necessary for precise positioning and for emulating an expert surgeon. The novel calibration technique is tailored for RGB-D cameras. Different tests performed on relevant use cases show that we significantly improve precision and accuracy with respect to state-of-the-art solutions for similar devices on surgical-size setups. Moreover, our calibration method can easily be extended to the standard surgical endoscopes used in real surgical scenarios.
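The article's own calibration equations are not given in the abstract; as a rough illustration of the underlying least-squares rigid registration between camera and robot frames, here is the closed-form planar (2D) analogue with synthetic correspondences (the 3D case uses an SVD instead of the scalar angle):

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rigid (rotation + translation) fit in 2D, the
    planar analogue of camera-to-robot extrinsic calibration.

    src, dst: lists of corresponding (x, y) points.
    Returns (theta, tx, ty) mapping src onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Accumulate the cross-covariance of the centred point sets.
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy
        bx, by = u - cdx, v - cdy
        sxx += ax * bx + ay * by   # "dot" part  -> cos(theta)
        sxy += ax * by - ay * bx   # "cross" part -> sin(theta)
    theta = math.atan2(sxy, sxx)
    tx = cdx - (math.cos(theta) * csx - math.sin(theta) * csy)
    ty = cdy - (math.sin(theta) * csx + math.cos(theta) * csy)
    return theta, tx, ty

# Synthetic check: points rotated by 90 degrees and shifted by (1, 2).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(1.0, 2.0), (1.0, 3.0), (0.0, 2.0)]
theta, tx, ty = fit_rigid_2d(src, dst)
```

The recovered transform maps camera-frame measurements into the robot frame; calibration quality is then assessed by the residual error on held-out correspondences.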
Near-Minimum Time Visual Servo Control Of An Underactuated Robotic Arm
In industrial robotics, grasping an object is expected to happen quickly when the position and orientation of the object are known a priori. However, if such information is unavailable and objects are spread randomly on a conveyor, it may be challenging to maintain the dexterity and speed at which the task is carried out. Nowadays, vision sensors are used to compute the position and orientation of an object and to reposition the robotic system accordingly. This technology has indirectly introduced a disparity in time that varies according to the nature of the control technique.
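A classical visual servo drives the image-feature error to zero with a proportional law, which converges exponentially rather than in minimum time; a minimal scalar sketch of that baseline (gain and values are illustrative, not from the paper) is:

```python
def visual_servo(feature, target, gain=0.5, steps=20):
    """Proportional image-based visual servo: at each control cycle,
    move the feature with velocity v = -gain * error.

    Returns the trace of feature values over the servo loop."""
    trace = [feature]
    for _ in range(steps):
        error = feature - target
        feature = feature + (-gain * error)  # one control cycle
        trace.append(feature)
    return trace

# Drive an image feature from pixel coordinate 100 toward 40.
trace = visual_servo(feature=100.0, target=40.0)
```

With this law the error shrinks by a factor of (1 - gain) per cycle; near-minimum-time control instead saturates the actuators for most of the motion, which is what introduces the timing disparity discussed above.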
Medical SLAM in an autonomous robotic system
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, initially examining the technology needed to analyze the scene: vision sensors. The first part of this thesis presents a novel endoscope for autonomous surgical task execution, which combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing 3D surfaces at a greater distance than traditional endoscopes. The problem of hand-eye calibration is then tackled, which unites the vision system and the robot in a single reference system and increases the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy. Starting from the ORB-SLAM algorithm, we modified the architecture to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from SLAM.
Once the SLAM algorithm was shown to be usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
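The thesis does not publish its segmentation interface in this abstract; a schematic sketch of the final step, discarding keypoints whose semantic label marks them as dynamic before SLAM tracking (the function name, coordinates, and labels are hypothetical), is:

```python
def filter_dynamic_features(features, dynamic_labels):
    """Drop keypoints lying on semantically dynamic objects (e.g.
    surgical instruments) before SLAM tracking; keep keypoints on
    static anatomy so the map stays consistent.

    features: list of (x, y, label) keypoints;
    dynamic_labels: set of class names considered dynamic."""
    return [(x, y, lbl) for (x, y, lbl) in features
            if lbl not in dynamic_labels]

# Keypoints labelled by a semantic segmentation pass (toy data):
features = [(10, 20, "tissue"), (30, 40, "instrument"), (50, 60, "tissue")]
static = filter_dynamic_features(features, {"instrument"})
```

Only the surviving static keypoints are passed to the ORB-SLAM front end, so moving instruments no longer corrupt the camera-pose estimate or the tissue map.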