Towards retrieving force feedback in robotic-assisted surgery: a supervised neuro-recurrent-vision approach
Robotic-assisted minimally invasive surgeries have gained a lot of popularity over conventional procedures, as they offer many benefits to both surgeons and patients. Nonetheless, they still suffer from some limitations that affect their outcome. One of them is the lack of force feedback, which restricts the surgeon's sense of touch and might reduce precision during a procedure. To overcome this limitation, we propose a novel force estimation approach that combines a vision-based solution with supervised learning to estimate the applied force and provide the surgeon with a suitable representation of it. The proposed solution starts with extracting the geometry of motion of the heart's surface by minimizing an energy functional to recover its 3D deformable structure. A deep network, based on an LSTM-RNN architecture, is then used to learn the relationship between the extracted visual-geometric information and the applied force, and to find an accurate mapping between the two. Our proposed force estimation solution avoids the drawbacks usually associated with force-sensing devices, such as biocompatibility and integration issues. We evaluate our approach on phantom and realistic tissues, on which we report an average root-mean-square error of 0.02 N.
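As a rough illustration of the second stage described above — a recurrent network mapping per-frame visual-geometric features to a scalar force estimate — the sketch below runs a single-layer LSTM with random weights over a stand-in feature sequence. The dimensions, weights, and feature contents are illustrative assumptions, not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper):
# each frame yields a small visual-geometric feature vector, and the
# network regresses one force value per sequence.
FEAT, HIDDEN, STEPS = 8, 16, 20

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMRegressor:
    """Single-layer LSTM followed by a linear head (random weights)."""

    def __init__(self):
        s = 0.1
        self.W = rng.normal(0, s, (4 * HIDDEN, FEAT + HIDDEN))  # all gate weights stacked
        self.b = np.zeros(4 * HIDDEN)
        self.w_out = rng.normal(0, s, HIDDEN)                   # linear regression head

    def forward(self, seq):
        h = np.zeros(HIDDEN)
        c = np.zeros(HIDDEN)
        for x in seq:                       # one visual-geometric frame at a time
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o, g = np.split(z, 4)
            i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
            c = f * c + i * g               # cell state carries temporal context
            h = o * np.tanh(c)
        return float(self.w_out @ h)        # estimated force (newtons)

seq = rng.normal(size=(STEPS, FEAT))        # stand-in feature sequence
force = TinyLSTMRegressor().forward(seq)
print(force)
```

In the paper's setting the weights would be learned by supervised regression against ground-truth force readings rather than drawn at random.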
A Robotic CAD System using a Bayesian Framework
We present in this paper a Bayesian CAD system for robotic applications. We address the problem of the propagation of geometric uncertainties and how to take this propagation into account when solving inverse problems. We describe the methodology we use to represent and handle uncertainties using probability distributions on the system's parameters and sensor measurements. It may be seen as a generalization of constraint-based approaches, where we express a constraint as a probability distribution instead of a simple equality or inequality. Appropriate numerical algorithms used to apply this methodology are also described. Using an example, we show how to apply our approach by providing simulation results using our CAD system.
Assistance strategies for robotized laparoscopy
Robotizing laparoscopic surgery not only allows better accuracy when operating — thanks to the scale factor that can be applied between master and slave, or to the use of tools with 3 DoF that cannot be used in conventional manual surgery — but also thanks to additional computer-based support. Relying on computer assistance, different strategies that facilitate the surgeon's task can be incorporated, either in the form of autonomous navigation or cooperative guidance, providing sensory or visual feedback, or introducing certain limits on movement. This paper describes different forms of assistance aimed at improving the surgeon's work capacity and achieving more safety for the patient, together with the results obtained with the prototype developed at UPC.
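One of the strategies mentioned above — introducing limits on movement — is commonly realized as a "virtual fixture". The sketch below clips commanded tool positions to a safe spherical region; the centre, radius, and units are illustrative assumptions, not the actual constraint implemented in the UPC prototype.

```python
import numpy as np

# Hypothetical safe region for the tool tip (metres); both values are
# illustrative, not taken from the paper.
SAFE_CENTER = np.array([0.0, 0.0, 0.1])
SAFE_RADIUS = 0.05

def constrain(cmd):
    """Project a commanded tool position back onto the safe sphere."""
    d = cmd - SAFE_CENTER
    dist = np.linalg.norm(d)
    if dist <= SAFE_RADIUS:
        return cmd                                  # inside the fixture: pass through
    return SAFE_CENTER + d * (SAFE_RADIUS / dist)   # outside: clip to the boundary

print(constrain(np.array([0.0, 0.0, 0.2])))  # clipped back toward the safe region
print(constrain(np.array([0.0, 0.0, 0.12]))) # already safe, passed through
```

The same pattern generalizes to forbidden regions or guidance corridors by changing the projection rule.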
The Design and Implementation of a Bayesian CAD Modeler for Robotic Applications
We present a Bayesian CAD modeler for robotic applications. We address the problem of taking into account the propagation of geometric uncertainties when solving inverse geometric problems. The proposed method may be seen as a generalization of constraint-based approaches in which we explicitly model geometric uncertainties. Using our methodology, a geometric constraint is expressed as a probability distribution on the system parameters and the sensor measurements, instead of a simple equality or inequality. To solve geometric problems in this framework, we propose an original resolution method able to adapt to problem complexity.
Using two examples, we show how to apply our approach by providing simulation results using our modeler.
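The core idea shared by both Bayesian CAD abstracts — replacing a hard equality constraint with a probability distribution — can be illustrated on a toy inverse problem. The one-link geometry, Gaussian sensor model, and grid search below are illustrative assumptions, not the resolution method proposed in the paper.

```python
import numpy as np

# A hard constraint f(theta) = d keeps only exact solutions; the Bayesian
# view replaces it with a distribution over the mismatch. Toy setup: one
# revolute link of length L and a noisy measurement of the end-effector
# x coordinate (all numbers are illustrative).
L = 1.0
measured_x = 0.5
sigma = 0.05                      # assumed sensor uncertainty

def forward_x(theta):
    return L * np.cos(theta)

def constraint_likelihood(theta):
    # Gaussian "soft equality": large where forward_x(theta) ~ measured_x
    r = forward_x(theta) - measured_x
    return np.exp(-0.5 * (r / sigma) ** 2)

# Inverse problem: pick the most probable joint angle on a simple grid
thetas = np.linspace(0.0, np.pi, 10_001)
best = thetas[np.argmax(constraint_likelihood(thetas))]
print(best)   # close to arccos(0.5) ~ 1.047 rad
```

With several uncertain constraints, their likelihoods multiply, which is what makes this view a generalization of equality/inequality constraint systems.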
Vision-based interface applied to assistive robots
This paper presents two vision-based interfaces for disabled people to command a mobile robot for personal assistance. The developed interfaces can be subdivided according to the image-processing algorithm implemented for the detection and tracking of two different body regions. The first interface detects and tracks movements of the user's head, and these movements are transformed into linear and angular velocities in order to command a mobile robot. The second interface detects and tracks movements of the user's hand, and these movements are similarly transformed. In addition, this paper also presents the control laws for the robot. The experimental results demonstrate good performance and a good balance between complexity and feasibility for real-time applications.
Authors: Pérez Berenguer, María Elisa; Soria, Carlos Miguel; López Celani, Natalia Martina; Nasisi, Oscar Herminio; Mut, Vicente Antonio (Universidad Nacional de San Juan / CONICET, Argentina)
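A minimal sketch of how tracked movements might be turned into velocity commands, assuming a dead-zone plus proportional gains with saturation. The gains, thresholds, and axis conventions are hypothetical, not the control laws reported in the paper.

```python
# Assumed convention: vertical image displacement drives linear velocity,
# horizontal displacement drives angular velocity. All constants are
# illustrative.
DEAD_ZONE = 5.0              # pixels ignored as involuntary motion
K_LIN, K_ANG = 0.004, 0.01   # gains: pixels -> m/s and rad/s
V_MAX, W_MAX = 0.4, 0.8      # saturation limits

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def to_velocity(dx, dy):
    """Map a tracked head/hand displacement (pixels) to (v, w) commands."""
    v = 0.0 if abs(dy) < DEAD_ZONE else clamp(-K_LIN * dy, -V_MAX, V_MAX)
    w = 0.0 if abs(dx) < DEAD_ZONE else clamp(-K_ANG * dx, -W_MAX, W_MAX)
    return v, w

print(to_velocity(0.0, -50.0))  # head moved up: forward motion, no turn
print(to_velocity(30.0, 2.0))   # head moved right: turn; dy is inside the dead-zone
```

The dead-zone keeps small involuntary movements from producing robot motion, and the saturation keeps the commands inside the robot's velocity envelope.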
OLT: A Toolkit for Object Labeling Applied to Robotic RGB-D Datasets
In this work we present the Object Labeling Toolkit
(OLT), a set of software components publicly available for
helping in the management and labeling of sequential RGB-D
observations collected by a mobile robot. Such a robot can be
equipped with an arbitrary number of RGB-D devices, possibly
integrating other sensors (e.g. odometry, 2D laser scanners,
etc.). OLT first merges the robot observations to generate a
3D reconstruction of the scene from which object segmentation
and labeling is conveniently accomplished. The annotated labels
are automatically propagated by the toolkit to each RGB-D
observation in the collected sequence, providing a dense labeling
of both intensity and depth images. The resulting objects’ labels
can be exploited for many robotics-oriented applications, including
high-level decision making, semantic mapping, or contextual
object recognition. Software components within OLT are highly
customizable and expandable, facilitating the integration of
already-developed algorithms. To illustrate the toolkit suitability,
we describe its application to robotic RGB-D sequences taken in
a home environment.
This work was supported by Universidad de Málaga (Campus de Excelencia Internacional Andalucía Tech), the Spanish grant program FPU-MICINN 2010, and the Spanish projects TAROTH: New developments toward a Robot at Home (DPI2011-25483) and PROMOVE: Advances in mobile robotics for promoting independent life of elders (DPI2014-55826-R).
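The label-propagation step described in the OLT abstract can be sketched as a pinhole projection of labeled 3D points into each observation. The intrinsics and the toy scene below are assumptions; OLT's actual pipeline works on registered sensor poses and a dense 3D reconstruction rather than a handful of points.

```python
import numpy as np

# Hypothetical pinhole intrinsics for a small RGB-D frame.
FX = FY = 300.0
CX, CY = 160.0, 120.0
W, H = 320, 240

def propagate_labels(points_cam, labels):
    """points_cam: (N, 3) points already in the camera frame.
    Returns a per-pixel label image (0 = unlabeled)."""
    out = np.zeros((H, W), dtype=np.int32)
    z = points_cam[:, 2]
    ok = z > 0                                   # drop points behind the camera
    u = np.round(FX * points_cam[ok, 0] / z[ok] + CX).astype(int)
    v = np.round(FY * points_cam[ok, 1] / z[ok] + CY).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out[v[inside], u[inside]] = labels[ok][inside]
    return out

# Toy scene: two labeled points in front of the camera, one behind it.
pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.0, 0.0, -1.0]])
labs = np.array([1, 2, 3])
img = propagate_labels(pts, labs)
print(img[120, 160], img[120, 190])  # the two visible points land at these pixels
```

Repeating this projection for every frame in the sequence is what yields the dense per-image labeling of intensity and depth data that the toolkit provides.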