4,266 research outputs found
MoSS: Monocular Shape Sensing for Continuum Robots
Continuum robots are promising candidates for interactive tasks in medical
and industrial applications due to their unique shape, compliance, and
miniaturization capability. Accurate and real-time shape sensing is essential
for such tasks yet remains a challenge. Embedded shape sensing has high
hardware complexity and cost, while vision-based methods require stereo setup
and struggle to achieve real-time performance. This paper proposes the first
eye-to-hand monocular approach to continuum robot shape sensing. Utilizing a
deep encoder-decoder network, our method, MoSSNet, eliminates the computation
cost of stereo matching and reduces requirements on sensing hardware. In
particular, MoSSNet comprises an encoder and three parallel decoders to uncover
spatial, length, and contour information from a single RGB image, and then
obtains the 3D shape through curve fitting. A two-segment tendon-driven
continuum robot is used for data collection and testing, demonstrating accurate
(mean shape error of 0.91 mm, or 0.36% of robot length) and real-time (70 fps)
shape sensing on real-world data. Additionally, the method is optimized
end-to-end and does not require fiducial markers, manual segmentation, or
camera calibration. Code and datasets will be made available at
https://github.com/ContinuumRoboticsLab/MoSSNet.
Comment: 8 pages, 6 figures, submitted to RA-
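MoSSNet itself is a learned encoder-decoder, but its final step, recovering a smooth 3D shape by curve fitting, can be sketched independently. Below is a minimal NumPy illustration, assuming the network outputs an ordered set of 3D backbone points; the function name and the polynomial parameterization are illustrative, not taken from the paper:

```python
import numpy as np

def fit_backbone_curve(points, degree=3, n_samples=100):
    """Fit a 3D polynomial curve s -> (x, y, z) to ordered backbone points.

    points: (N, 3) array of 3D points ordered from base to tip.
    Returns n_samples points resampled densely along the fitted curve.
    """
    points = np.asarray(points, dtype=float)
    # Parameterize by normalized cumulative chord length (arc-length proxy).
    deltas = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(deltas)])
    s /= s[-1]
    # Independent least-squares polynomial fit per coordinate.
    coeffs = [np.polyfit(s, points[:, k], degree) for k in range(3)]
    s_dense = np.linspace(0.0, 1.0, n_samples)
    return np.stack([np.polyval(c, s_dense) for c in coeffs], axis=1)
```

Fitting a low-degree curve in this way both denoises the per-point predictions and yields a continuous centerline that can be queried at any arc length.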
Assistance strategies for robotized laparoscopy
Robotizing laparoscopic surgery not only allows achieving better
accuracy to operate when a scale factor is applied between master and slave or thanks to the use of tools with 3 DoF, which cannot be used in conventional manual surgery, but also due to additional informatic support. Relying on computer assistance different strategies that facilitate the task of the surgeon can be incorporated, either in the form of autonomous navigation or cooperative guidance, providing sensory or visual feedback, or introducing certain limitations of movements. This paper describes different ways of assistance aimed at improving the work capacity of the surgeon and achieving more safety for the patient, and the results obtained with the prototype developed at UPC.Peer ReviewedPostprint (author's final draft
Robot Autonomy for Surgery
Autonomous surgery involves having surgical tasks performed by a robot
operating under its own will, with partial or no human involvement. There are
several important advantages of automation in surgery, which include increasing
precision of care due to sub-millimeter robot control, real-time utilization of
biosignals for interventional care, improvements to surgical efficiency and
execution, and computer-aided guidance under various medical imaging and
sensing modalities. While these methods may displace some tasks of surgical
teams and individual surgeons, they also present new capabilities in
interventions that are too difficult or go beyond the skills of a human. In
this chapter, we provide an overview of robot autonomy in commercial use and in
research, and present some of the challenges faced in developing autonomous
surgical robots.
ViSE: Vision-Based 3D Online Shape Estimation of Continuously Deformable Robots
The precise control of soft and continuum robots requires knowledge of their
shape. The shape of these robots has, in contrast to classical rigid robots,
infinite degrees of freedom. To partially reconstruct the shape, proprioceptive
techniques use built-in sensors resulting in inaccurate results and increased
fabrication complexity. Exteroceptive methods so far rely on placing reflective
markers on all tracked components and triangulating their position using
multiple motion-tracking cameras. Tracking systems are expensive and infeasible
for deformable robots interacting with the environment due to marker occlusion
and damage. Here, we present a regression approach for 3D shape estimation
using a convolutional neural network. The proposed approach takes advantage of
data-driven supervised learning and is capable of real-time marker-less shape
estimation during inference. Two images of a robotic system are taken
simultaneously at 25 Hz from two different perspectives, and are fed to the
network, which returns for each pair the parameterized shape. The proposed
approach outperforms marker-less state-of-the-art methods by a maximum of 4.4%
in estimation accuracy while at the same time being more robust and requiring
no prior knowledge of the shape. The approach can be easily implemented due to
only requiring two color cameras without depth and not needing an explicit
calibration of the extrinsic parameters. Evaluations on two types of soft
robotic arms and a soft robotic fish demonstrate our method's accuracy and
versatility on highly deformable systems in real-time. The robust performance
of the approach against different scene modifications (camera alignment and
brightness) suggests its generalizability to a wider range of experimental
setups, which will benefit downstream tasks such as robotic grasping and
manipulation.
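ViSE trains a convolutional network on synchronized image pairs to regress a parameterized shape. As a much simpler stand-in that captures the same formulation (two concatenated views in, shape parameters out), here is a ridge-regularized linear regressor in NumPy; the function names and the linear model are illustrative assumptions, not the paper's CNN:

```python
import numpy as np

def train_shape_regressor(views_a, views_b, shapes, reg=1e-3):
    """Least-squares baseline for two-view shape regression.

    views_a, views_b: (N, D) flattened image pairs.
    shapes: (N, P) target shape parameters.
    Returns a weight matrix mapping concatenated views (plus bias) to parameters.
    """
    X = np.hstack([views_a, views_b])              # concatenate the two views
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # bias term
    # Ridge-regularized normal equations.
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ shapes)

def predict_shape(W, view_a, view_b):
    """Predict shape parameters for one new image pair."""
    x = np.concatenate([view_a, view_b, [1.0]])
    return x @ W
```

The same input/output contract applies to the learned model: replacing the linear map with a CNN changes capacity, not the regression formulation.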
Visual shape and position sensing algorithm for a continuum robot
Continuum robots represent an actively developing and fast-growing technology in robotics. To successfully implement control and path planning of continuum robots, it is important to develop an accurate three-dimensional shape and position sensing algorithm. In this paper, we propose an algorithm for the three-dimensional reconstruction of the continuum robot shape. The algorithm proceeds in several steps. Initially, images from two cameras are processed by applying pre-processing and segmentation techniques. Then, the gradient descent method is applied to compare two-dimensional skeleton points of both masks. Having matched these points, the algorithm recovers the robot's skeleton in three-dimensional form. Additionally, the proposed algorithm is able to define key points using the distance from the robot base along the center line. The latter allows controlling the position of points of interest defined by a user. As a result, the developed algorithm achieved a relatively high level of accuracy and speed.
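Once 2D skeleton points have been matched between the two camera masks, lifting them to 3D is a standard two-view triangulation. A minimal sketch using the linear (DLT) method, assuming calibrated projection matrices are available; this illustrates the geometric step, not the paper's specific matching procedure:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: (3, 4) camera projection matrices.
    x1, x2: (2,) pixel coordinates of the matched skeleton point.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector minimizes |A X|
    return X[:3] / X[3]        # dehomogenize

def reconstruct_skeleton(P1, P2, pts1, pts2):
    """Triangulate matched 2D skeleton points into a 3D centerline."""
    return np.array([triangulate_point(P1, P2, a, b) for a, b in zip(pts1, pts2)])
```

Applying this along the full matched skeleton yields the 3D centerline from which base-relative arc-length key points can be measured.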
A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation
The application of laser technologies in surgical interventions has been accepted in the clinical
domain due to their atraumatic properties. In addition to manual application of fibre-guided
lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours
has been prevailed in ENT surgery. However, TLM requires many years of surgical training
for tumour resection in order to preserve the function of adjacent organs and thus preserve the
patient’s quality of life. The positioning of the microscopic laser applicator outside the patient
can also impede a direct line-of-sight to the target area due to anatomical variability and limit
the working space. Further clinical challenges include positioning the laser focus on the tissue
surface, imaging, planning and performing laser ablation, and motion of the target area during
surgery. This dissertation aims to address the limitations of TLM through robotic approaches and
intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no
highly integrated platform for endoscopic delivery of focused laser radiation is available to date.
Likewise, there are no known devices that incorporate scene information from endoscopic imaging
into ablation planning and execution. For focusing of the laser beam close to the target tissue, this
work first presents miniaturised focusing optics that can be integrated into endoscopic systems.
Experimental trials characterise the optical properties and the ablation performance. A robotic
platform is realised for manipulation of the focusing optics. This is based on a variable-length
continuum manipulator. The latter enables movements of the endoscopic end effector in five
degrees of freedom with a mechatronic actuation unit. The kinematic modelling and control of the
robot are integrated into a modular framework that is evaluated experimentally. The manipulation
of focused laser radiation also requires precise adjustment of the focal position on the tissue. For
this purpose, visual, haptic and visual-haptic assistance functions are presented. These support
the operator during teleoperation to set an optimal working distance. Advantages of visual-haptic
assistance are demonstrated in a user study. The system performance and usability of the overall
robotic system are assessed in an additional user study. Analogous to a clinical scenario, the
subjects follow predefined target patterns with a laser spot. The mean positioning accuracy of the
spot is 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser
ablation. Experiments confirm a positive effect of proposed automation concepts on non-contact
laser surgery.
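The abstract does not detail the dissertation's kinematic model of the variable-length continuum manipulator; a common starting point for such robots is the constant-curvature assumption. A minimal illustrative sketch of the forward kinematics of one segment (not the dissertation's actual model):

```python
import numpy as np

def cc_tip_position(kappa, phi, length):
    """Tip position of a single constant-curvature continuum segment.

    kappa:  curvature (1/m); phi: bending-plane angle (rad); length: arc length (m).
    Assumes the segment base is at the origin with its tangent along +z.
    """
    if abs(kappa) < 1e-9:            # straight-segment limit
        return np.array([0.0, 0.0, length])
    r = 1.0 / kappa                  # bending radius
    theta = kappa * length           # total bending angle
    x = r * (1.0 - np.cos(theta))    # in-plane lateral displacement
    z = r * np.sin(theta)            # displacement along the base tangent
    return np.array([x * np.cos(phi), x * np.sin(phi), z])
```

With a variable segment length, as in the dissertation's manipulator, `length` becomes an actuated variable alongside curvature and bending-plane angle.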