3,019 research outputs found
ViSE: Vision-Based 3D Online Shape Estimation of Continuously Deformable Robots
The precise control of soft and continuum robots requires knowledge of their
shape. The shape of these robots has, in contrast to classical rigid robots,
infinite degrees of freedom. To partially reconstruct the shape, proprioceptive techniques use built-in sensors, which yield inaccurate estimates and increase fabrication complexity. Exteroceptive methods have so far relied on placing reflective
markers on all tracked components and triangulating their position using
multiple motion-tracking cameras. Such tracking systems are expensive, and marker occlusion and damage make them infeasible for deformable robots interacting with the environment. Here, we present a regression approach for 3D shape estimation
using a convolutional neural network. The proposed approach takes advantage of
data-driven supervised learning and is capable of real-time marker-less shape
estimation during inference. Two images of a robotic system are taken
simultaneously at 25 Hz from two different perspectives, and are fed to the
network, which returns for each pair the parameterized shape. The proposed
approach outperforms state-of-the-art marker-less methods by up to 4.4%
in estimation accuracy while at the same time being more robust and requiring
no prior knowledge of the shape. The approach is easy to implement, since it requires only two color cameras without depth sensing and no explicit calibration of the extrinsic parameters. Evaluations on two types of soft
robotic arms and a soft robotic fish demonstrate our method's accuracy and
versatility on highly deformable systems in real time. The robust performance
of the approach against different scene modifications (camera alignment and
brightness) suggests its generalizability to a wider range of experimental
setups, which will benefit downstream tasks such as robotic grasping and
manipulation.
A learning algorithm for visual pose estimation of continuum robots
Continuum robots offer significant advantages for surgical intervention due to their down-scalability, dexterity, and structural flexibility. While structural compliance offers a passive way to guard against trauma, it necessitates robust methods for online estimation of the robot configuration in order to enable precise position and manipulation control. In this paper, we address the pose estimation problem by applying a novel mapping of the robot configuration to a feature descriptor space using stereo vision. We generate a mapping of known features through a supervised learning algorithm that relates the feature descriptor to known ground truth. Features are represented in a reduced sub-space, which we call eigen-features. The descriptor provides some robustness to occlusions, which are inherent to surgical environments, and the methodology that we describe can be applied to multi-segment continuum robots for closed-loop control. Experimental validation on a single-segment continuum robot demonstrates the robustness and efficacy of the algorithm for configuration estimation. Results show that the errors are in the range of 1°.
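A minimal numerical sketch of the eigen-feature idea (a PCA projection followed by a supervised least-squares map; the descriptor, dimensions, and noise model below are invented, not the paper's):

```python
import numpy as np

# Toy version of the eigen-feature pipeline: project high-dimensional
# visual descriptors onto their leading principal components, then fit a
# supervised map from that reduced sub-space to the robot configuration
# (a single bending angle here).
rng = np.random.default_rng(1)

angles = rng.uniform(-30.0, 30.0, size=100)       # ground-truth angles (deg)
direction = rng.normal(size=64)                   # latent descriptor direction
desc = np.outer(angles, direction) + 0.01 * rng.normal(size=(100, 64))

# "Eigen-features": coordinates in the leading principal sub-space.
mean = desc.mean(axis=0)
_, _, Vt = np.linalg.svd(desc - mean, full_matrices=False)
eigen_feats = (desc - mean) @ Vt[:3].T            # keep 3 components

# Supervised map from eigen-features to configuration (with a bias term).
A = np.hstack([eigen_feats, np.ones((100, 1))])
coef, *_ = np.linalg.lstsq(A, angles, rcond=None)
max_err = float(np.abs(A @ coef - angles).max())
print(max_err)
```

Because the descriptors here vary along a single latent direction, three components are more than enough; the paper's stereo descriptors are higher-dimensional but follow the same reduce-then-regress structure.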
On Model Adaptation for Sensorimotor Control of Robots
In this expository article, we address the problem of computing adaptive models that can be used for guiding the motion of robotic systems with uncertain action-to-perception relations. The formulation of the uncalibrated sensor-based control problem is first presented; then, various methods for building adaptive sensorimotor models are derived and analysed. Finally, the proposed methodology is exemplified with two case studies.
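One standard member of this family of adaptive sensorimotor models (not necessarily among the methods derived in the article) estimates the unknown action-to-perception Jacobian online with Broyden's rank-one update, as in uncalibrated visual servoing. A toy sketch with a simulated linear sensor map and made-up numbers:

```python
import numpy as np

# Broyden rank-one update of an unknown action-to-perception Jacobian,
# a classic adaptive model for uncalibrated sensor-based control.
# The "robot" here is a simulated linear sensor map with made-up values.
rng = np.random.default_rng(2)

J_true = np.array([[2.0, 0.5],
                   [-0.3, 1.5]])   # unknown true sensor Jacobian
J_hat = np.eye(2)                  # crude initial model

def sense(q):
    """Simulated perception of the robot at configuration q."""
    return J_true @ q

q = np.zeros(2)
s = sense(q)
for _ in range(50):
    dq = rng.normal(scale=0.1, size=2)   # small exploratory action
    s_new = sense(q + dq)
    ds = s_new - s
    # Rank-one correction: make J_hat consistent with the observed move.
    J_hat += np.outer(ds - J_hat @ dq, dq) / (dq @ dq)
    q, s = q + dq, s_new

print(np.round(J_hat, 3))
```

Each update makes the model exact along the direction actually moved, so under persistent excitation the estimate converges without any explicit calibration step.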
Active haptic perception in robots: a review
In the past few years a new scenario for robot-based applications has emerged. Service
and mobile robots have opened new market niches. Also, new frameworks for shop-floor
robot applications have been developed. In all these contexts, robots are required to perform tasks under open-ended, possibly dynamically varying conditions. These new requirements also call for a change of paradigm in robot design: on-line and safe
feedback motion control becomes the core of modern robot systems. Future robots will
learn autonomously, interact safely and possess qualities like self-maintenance. Attaining
these features would be relatively easy if a complete model of the environment were available, and if the robot actuators could execute motion commands perfectly with respect to this model. Unfortunately, no complete world model is available, and robots have to plan and execute tasks in the presence of environmental uncertainties, which makes sensing an essential component of new-generation robots. For this reason, today's new-generation robots are equipped with more and more sensing components,
and consequently they are ready to actively deal with the high complexity of the real
world. Complex sensorimotor tasks such as exploration require coordination between the
motor system and the sensory feedback. For robot control purposes, sensory feedback
should be adequately organized in terms of relevant features and the associated data
representation. In this paper, we propose an overall functional picture linking sensing
to action in closed-loop sensorimotor control of robots for touch (hands, fingers). Basic
qualities of haptic perception in humans inspire the models and categories comprising the
proposed classification. The objective is to provide a reasoned, principled perspective on the connections between different taxonomies used in the robotics and human-haptics literature. The specific case of active exploration is chosen to ground interesting use
cases. Two reasons motivate this choice. First, in the literature on haptics, exploration has
been treated only to a limited extent compared to grasping and manipulation. Second,
exploration involves specific robot behaviors that exploit distributed and heterogeneous
sensory data
Combining Differential Kinematics and Optical Flow for Automatic Labeling of Continuum Robots in Minimally Invasive Surgery
The segmentation of continuum robots in medical images can be of interest for analyzing surgical procedures or for controlling the robots. However, the automatic segmentation of continuous and flexible shapes is not an easy task. On the one hand, conventional approaches are not adapted to the specificities of these instruments, such as imprecise kinematic models; on the other hand, techniques based on deep learning have shown interesting capabilities but need many manually labeled images. In this article we propose a novel approach for segmenting continuum robots in endoscopic images, which requires no prior on the instrument's visual appearance and no manual annotation of images. The method relies on the combination of kinematic and differential kinematic models of the robot with an analysis of optical flow in the images. A cost function aggregating information from the acquired image, from optical flow, and from robot encoders is optimized using particle swarm optimization, and provides estimated parameters of the pose of the continuum instrument and a mask defining the instrument in the image. In addition, temporal consistency is assessed in order to improve the stochastic optimization and reject outliers. The proposed approach has been tested on the robotic instruments of a flexible endoscopy platform, both for benchtop acquisitions and for an in vivo video. The results show the ability of the technique to correctly segment the instruments without a prior and in challenging conditions. The obtained segmentation can be used for several applications, for instance for providing automatic labels for machine-learning techniques.
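The particle swarm step can be pictured with a minimal generic PSO. The toy quadratic below stands in for the authors' aggregated image/optical-flow/encoder cost, and the target values and swarm coefficients are illustrative defaults, not taken from the paper:

```python
import numpy as np

# Minimal particle swarm optimization over 2 pose parameters. The cost
# is a toy quadratic with a known optimum, standing in for the paper's
# aggregated image / optical-flow / encoder cost.
rng = np.random.default_rng(3)
TARGET = np.array([1.5, -0.7])           # made-up "true" pose parameters

def cost(p):
    return np.sum((p - TARGET) ** 2, axis=-1)

n, dim, iters = 30, 2, 100
pos = rng.uniform(-5.0, 5.0, size=(n, dim))
vel = np.zeros((n, dim))
best_p = pos.copy()                      # per-particle best positions
best_c = cost(pos)
g = best_p[best_c.argmin()].copy()       # swarm-wide best position

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    # Inertia plus attraction to personal and global bests (standard PSO).
    vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    c = cost(pos)
    improved = c < best_c
    best_p[improved], best_c[improved] = pos[improved], c[improved]
    g = best_p[best_c.argmin()].copy()

print(np.round(g, 2))
```

PSO needs only cost evaluations, no gradients, which is what makes it suitable for a cost assembled from image masks, optical flow, and encoder readings.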
Computing Pressure-Deformation Maps for Braided Continuum Robots
This paper presents a method for computing sensorimotor maps of braided continuum robots driven by pneumatic actuators. The method automatically creates a lattice-like representation of the sensorimotor map that preserves the topology of the input space by arranging its nodes into clusters of related data. Deformation trajectories can be simply represented with adjacent nodes whose values smoothly change along the lattice curve; this facilitates the computation of controls and the prediction of deformations in systems with unknown mechanical properties. The proposed model has an adaptive structure that can recalibrate to cope with changes in the mechanism or actuators. An experimental study with a robotic prototype is conducted to validate the proposed method
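A toy version of such a topology-preserving lattice is a one-dimensional self-organizing map over (pressure, deformation) pairs; the linear pressure-to-bending response and all constants below are invented for illustration, not taken from the paper:

```python
import numpy as np

# 1-D self-organizing map as a toy sensorimotor lattice: each node stores
# a (pressure, deformation) pair, and neighborhood updates arrange the
# nodes so adjacent ones hold smoothly related data.
rng = np.random.default_rng(4)

pressures = rng.uniform(0.0, 100.0, size=500)      # toy actuation range (kPa)
deforms = 0.3 * pressures + 2.0                    # toy bending response (deg)
samples = np.stack([pressures, deforms], axis=1)

n_nodes = 10
idx = np.arange(n_nodes)
nodes = samples[rng.choice(len(samples), n_nodes, replace=False)].copy()

for t, x in enumerate(rng.permutation(samples)):
    lr = 0.5 * (1.0 - t / len(samples))               # decaying learning rate
    sigma = max(2.0 * (1.0 - t / len(samples)), 0.5)  # shrinking neighborhood
    win = int(np.argmin(np.linalg.norm(nodes - x, axis=1)))
    h = np.exp(-((idx - win) ** 2) / (2.0 * sigma**2))
    nodes += lr * h[:, None] * (x - nodes)            # pull neighbors toward x

# The learned lattice should lie on the pressure-deformation relation.
nodes = nodes[np.argsort(nodes[:, 0])]
resid = nodes[:, 1] - (0.3 * nodes[:, 0] + 2.0)
print(float(np.abs(resid).max()))
```

Prediction then amounts to reading the deformation stored at the node nearest a commanded pressure, and recalibration after a mechanical change is simply continued training on new samples, mirroring the adaptive structure described in the abstract.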
Soft Morphological Computation
Soft robotics is a relatively new area of research in which progress in material science has powered the next generation of robots, exhibiting biological-like properties such as soft, elastic tissues, compliance, and resilience. One of the issues when employing soft-robotics technologies is the soft nature of the interactions arising between the robot and its environment. These interactions are complex, and their dynamics are non-linear and hard to capture with known models. In this thesis we argue that complex soft interactions can actually be beneficial to the robot and give rise to rich stimuli which can be used for the resolution of robot tasks. We further argue that the usefulness of these interactions depends on statistical regularities, or structure, that appear in the stimuli. To this end, robots should appropriately employ their morphology and their actions to influence the system-environment interactions such that structure can arise in the stimuli. In this thesis we show that learning processes can be used to perform such a task. Following this rationale, this thesis proposes and supports the theory of Soft Morphological Computation (SoMComp), by which a soft robot should appropriately condition, or ‘affect’, the soft interactions to improve the quality of the physical stimuli arising from them. SoMComp is composed of four main principles: Soft Proprioception, Soft Sensing, Soft Morphology and Soft Actuation. Each of these principles is explored in the context of haptic object recognition or object handling in soft robots. Finally, this thesis provides an overview of this research and its future directions. AHDB CP17