Vision-Guided State Estimation and Control of Robotic Manipulators Which Lack Proprioceptive Sensors
This paper presents a vision-based approach for estimating the configuration of, and providing control signals for, an under-sensored robot manipulator using a single monocular camera. Some remote manipulators, used for decommissioning tasks in the nuclear industry, lack proprioceptive sensors because electronics are vulnerable to radiation. Additionally, even if proprioceptive joint sensors could be retrofitted, such heavy-duty manipulators are often deployed on mobile vehicle platforms, which are significantly and erratically perturbed when powerful hydraulic drilling or cutting tools are deployed at the end-effector. In these scenarios, it would be beneficial to use external sensory information, e.g. vision, for estimating the robot configuration with respect to the scene or task. Conventional visual servoing methods typically rely on joint encoder values for controlling the robot. In contrast, our framework assumes that no joint encoders are available; it estimates the robot configuration by visually tracking several parts of the robot and then enforcing equality between a set of transformation matrices which relate the frames of the camera, world and tracked robot parts. To accomplish this, we propose two alternative methods based on optimisation. We evaluate the performance of our developed framework by visually tracking the pose of a conventional robot arm, where the joint encoders are used to provide ground-truth for evaluating the precision of the vision system. Additionally, we evaluate the precision with which visual feedback can be used to control the robot's end-effector to follow a desired trajectory.
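The core idea, estimating joint angles by forcing forward-kinematics frames to agree with visually tracked frames, can be sketched as a small optimisation. The planar two-link arm, link lengths, and tracker outputs below are illustrative assumptions, not the paper's actual setup:

```python
# Minimal sketch (not the paper's exact formulation): estimate joint angles q
# of an encoder-less arm by making the forward-kinematics position of each
# tracked link agree with the position observed by a monocular tracker.
import numpy as np
from scipy.optimize import least_squares

LINK_LENGTHS = [0.5, 0.4]  # assumed planar 2-link arm, metres

def fk_link_positions(q):
    """Camera-frame (x, y) position of each link's distal frame."""
    positions, angle, origin = [], 0.0, np.zeros(2)
    for theta, length in zip(q, LINK_LENGTHS):
        angle += theta
        origin = origin + length * np.array([np.cos(angle), np.sin(angle)])
        positions.append(origin.copy())
    return np.concatenate(positions)

def estimate_configuration(observed_positions, q0):
    """Solve min_q || fk(q) - observed ||^2 with a local least-squares step."""
    residual = lambda q: fk_link_positions(q) - observed_positions
    return least_squares(residual, q0).x

# Example: recover q from noiseless synthetic "tracker" observations.
q_true = np.array([0.6, -0.3])
q_est = estimate_configuration(fk_link_positions(q_true), np.zeros(2))
```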
Towards Advanced Robotic Manipulations for Nuclear Decommissioning
Despite enormous remote handling requirements, remarkably few robots are used by the nuclear industry. Most remote handling tasks are still performed manually, using conventional mechanical master-slave devices. The few robotic manipulators deployed are directly tele-operated in rudimentary ways, with almost no autonomy or even pre-programmed motion. In addition, the majority of these robots are under-sensored (i.e. they have no proprioception), which prevents their use for automated tasks. In this context, this chapter primarily discusses human operator performance in accomplishing heavy-duty remote handling tasks in hazardous environments such as nuclear decommissioning. Multiple factors are evaluated to analyse the human operators' performance and workload, and direct human tele-operation is compared against human-supervised semi-autonomous control exploiting computer vision. Secondarily, a vision-guided solution for enabling advanced control and automation of under-sensored robots is presented. To maintain coherence with real nuclear scenarios, the experiments are conducted in a lab environment and the results are discussed.
A neural network-based exploratory learning and motor planning system for co-robots
Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system, we used the 11-degree-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
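The babble-then-learn loop described above can be illustrated with a toy plant: issue random motor commands, record the sensory outcomes, then fit an inverse model that proposes a command for a desired outcome. The 2-D displacement model and network size are assumptions standing in for the Calliope's real sensorimotor mapping:

```python
# Hedged sketch of exploratory "motor babbling": random commands on a toy
# plant, then an inverse model from desired sensory outcome to motor command.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def plant(command):
    """Toy forward model: hand displacement caused by a 3-D motor command."""
    mixing = np.array([[1.0, 0.2, 0.0],
                       [0.0, 0.8, 0.3]])
    return mixing @ command

# 1) Babbling phase: random commands and their observed displacements.
commands = rng.uniform(-1.0, 1.0, size=(2000, 3))
outcomes = np.array([plant(c) for c in commands])

# 2) Learn the inverse model outcome -> command.
inverse_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                             random_state=0).fit(outcomes, commands)

# 3) Goal-directed use: ask for a command that moves the hand toward a target.
goal_displacement = np.array([[0.4, -0.1]])
proposed_command = inverse_model.predict(goal_displacement)[0]
```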
Autonomous vision-guided bi-manual grasping and manipulation
This paper describes the implementation, demonstration and evaluation of a variety of autonomous, vision-guided manipulation capabilities, using a dual-arm Baxter robot. Initially, symmetric coordinated bi-manual manipulation based on a kinematic tracking algorithm was implemented on the robot to enable a master-slave manipulation system. We demonstrate the efficacy of this approach with a human-robot collaboration experiment, where a human operator moves the master arm along arbitrary trajectories and the slave arm automatically follows the master arm while maintaining a constant relative pose between the two end-effectors. Next, this concept was extended to perform dual-arm manipulation without human intervention. To this end, an image-based visual servoing scheme was developed to control the motion of the arms and position them at desired grasp locations. We then combine this with a dynamic position controller to move the grasped object along a prescribed trajectory using both arms. The presented approach has been validated by performing numerous symmetric and asymmetric bi-manual manipulations under different conditions. Our experiments demonstrated an 80% success rate in performing symmetric dual-arm manipulation tasks and a 73% success rate in performing asymmetric dual-arm manipulation tasks.
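The master-slave behaviour, keeping a constant relative pose between the two end-effectors, reduces to composing homogeneous transforms. A minimal sketch follows, with translation-only poses assumed for brevity (the paper's actual poses would include orientation):

```python
# Hedged sketch of master-slave coordination: capture the relative transform
# between end-effectors at task start, then recompute the slave target from
# the tracked master pose each control cycle. Poses here are 4x4 homogeneous
# matrices; the translation-only example values are assumptions.
import numpy as np

def pose(x, y, z):
    """Pure-translation homogeneous transform (orientation omitted)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def relative_transform(T_master, T_slave):
    """Fixed offset T_rel such that T_slave = T_master @ T_rel."""
    return np.linalg.inv(T_master) @ T_slave

def slave_target(T_master_now, T_rel):
    """Slave end-effector pose that preserves the captured relative pose."""
    return T_master_now @ T_rel

# At task start, capture the offset between the two end-effectors...
T_rel = relative_transform(pose(0.3, 0.0, 0.5), pose(0.3, 0.4, 0.5))
# ...then, as the master moves, servo the slave toward this target.
T_slave_cmd = slave_target(pose(0.4, 0.1, 0.5), T_rel)
```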
Soft Robot-Assisted Minimally Invasive Surgery and Interventions: Advances and Outlook
Since the emergence of soft robotics around two decades ago, research interest in the field has escalated apace. It is fuelled by the industry's appreciation of the wide range of soft materials available that can be used to create highly dexterous robots with adaptability characteristics far beyond those achievable with rigid-component devices. The inherent ability of soft robots to compliantly adapt to the environment has sparked significant interest from the surgical robotics community. This article provides an in-depth overview of recent progress and outlines the remaining challenges in the development of soft robotics for minimally invasive surgery.
Body models in humans, animals, and robots: mechanisms and plasticity
Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth or failures, and using tools. These capabilities are also highly desirable in robots, and machines display them to some extent; yet, as is so often the case, the artificial creatures lag behind. The key foundation is an internal representation of the body that the agent, whether human, animal, or robot, has developed. In the biological realm, evidence has been accumulated by diverse disciplines, giving rise to the concepts of body image, body schema, and others. In robotics, a model of the robot is an indispensable component that enables control of the machine. In this article I compare the character of body representations in biology with their robotic counterparts and relate that to the differences in performance that we observe. I put forth a number of axes regarding the nature of such body models: fixed vs. plastic, amodal vs. modal, explicit vs. implicit, serial vs. parallel, modular vs. holistic, and centralized vs. distributed. An interesting trend emerges: on many of the axes, there is a sequence from robot body models, through body image and body schema, to the body representation in lower animals like the octopus. In some sense, robots have a lot in common with Ian Waterman, "the man who lost his body", in that they rely on an explicit, veridical body model (body image taken to the extreme) and lack any implicit, multimodal representation (like the body schema) of their bodies. I then detail how robots can inform the biological sciences dealing with body representations and, finally, study which features of the "body in the brain" should be transferred to robots, giving rise to more adaptive and resilient, self-calibrating machines.
On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation
Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state-of-the-art will encourage and guide these communities to continue to interact and make progress in these important areas.
Vision-based trajectory control of unsensored robots to increase functionality, without robot hardware modification
In nuclear decommissioning operations, very rugged remote manipulators are used, which lack proprioceptive joint angle sensors. Hence these machines are simply tele-operated, where a human operator controls each joint of the robot individually using a teach pendant or a set of switches. Moreover, decommissioning tasks often involve forceful interactions between the environment and powerful tools at the robot's end-effector. Such interactions can result in complex dynamics and large torques at the robot's joints, and can also lead to erratic movements of a mobile manipulator's base frame with respect to the task space. This Thesis seeks to address these problems by, firstly, showing how the configuration of such robots can be tracked in real time by a vision system and fed back into a trajectory control scheme. Secondly, the Thesis investigates the dynamics of robot-environment contacts and proposes several control schemes for detecting, coping with, and also exploiting such contacts. Several contributions are advanced in this Thesis. Specifically, a control framework is presented which exploits the constraints arising at contact points to effectively reduce the torques commanded to perform tasks; methods are advanced to estimate the constraints arising from contacts in a number of situations, using only kinematic quantities; a framework is proposed to estimate the configuration of a manipulator using a single monocular camera; and finally, a general control framework is described which uses all of the above contributions to servo a manipulator. The results of a number of experiments are presented which demonstrate the feasibility of the proposed methods.
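One standard way such contact constraints can reduce commanded torques, which may differ from the Thesis's exact formulation, is to project the task torque into the null space of the contact Jacobian, letting the environment supply the remainder:

```python
# Hedged sketch: with a contact constraint J_c(q) @ qdot = 0, torque
# components acting against the constraint are resisted by the environment,
# so only the constraint-consistent projection of the task torque needs to
# be commanded. This projector is a common construction, not necessarily the
# Thesis's; J_c and tau_task below are illustrative numbers.
import numpy as np

def constraint_consistent_torque(tau_task, J_c):
    """Project a task torque into the null space of the contact Jacobian."""
    P = np.eye(J_c.shape[1]) - np.linalg.pinv(J_c) @ J_c
    return P @ tau_task

J_c = np.array([[1.0, 0.5, 0.0]])        # assumed 1-D contact, 3-joint arm
tau_task = np.array([2.0, -1.0, 0.5])    # torque demanded by the task
tau_cmd = constraint_consistent_torque(tau_task, J_c)
# Being an orthogonal projection, norm(tau_cmd) <= norm(tau_task):
# the contact forces absorb the rest.
```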