Two-Stage Transfer Learning for Heterogeneous Robot Detection and 3D Joint Position Estimation in a 2D Camera Image using CNN
Collaborative robots are becoming more common on factory floors as well as in
everyday environments; however, their safety is still not a fully solved issue.
Collision detection does not always perform as expected, and collision
avoidance is still an active research area. Collision avoidance works well for
fixed robot-camera setups; however, once these are moved, the Eye-to-Hand
calibration becomes invalid, making it difficult to accurately run many of the
existing collision avoidance algorithms. We approach the problem by presenting
a stand-alone system capable of detecting the robot and estimating its
position, including the individual joints, using a simple 2D colour image as
input, so that no Eye-to-Hand calibration is needed. As an extension of previous
work, a two-stage transfer learning approach is used to re-train a
multi-objective convolutional neural network (CNN) to allow it to be used with
heterogeneous robot arms. Our method is capable of detecting the robot in
real time, and new robot types can be added with significantly smaller
training datasets than a fully trained network would require. We present the
data collection approach, the structure of the multi-objective CNN, the
two-stage transfer learning training procedure, and test results using real
robots from Universal Robots, KUKA, and Franka Emika. Finally, we analyse
possible application areas of our method together with possible improvements.
Comment: 6+n pages, ICRA 2019 submission
Robust visual servoing in 3D reaching tasks
This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments are presented to show the applicability of the approach in real 3D applications.
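The closed-loop principle behind this robustness (continuously re-measure the error in the image and correct the trajectory, so a calibration error only rescales the correction instead of breaking it) can be sketched as follows. The scalar gain, the miscalibrated image-to-motor map, and all numbers are illustrative assumptions, not the paper's actual controller:

```python
import numpy as np

# Illustrative visual servoing loop: the end-effector is driven by the
# continuously re-measured image-space error, so even a badly wrong
# image-to-motor calibration (here, a 2.5x scale error) still converges.
target = np.array([0.8, -0.3])    # target position in image coordinates
effector = np.array([0.0, 0.0])   # current end-effector image position

true_map = 1.0      # actual image-to-motor scaling (unknown to controller)
assumed_map = 2.5   # miscalibrated estimate used by the controller
gain = 0.3

for _ in range(100):
    error = target - effector                 # measured directly in the image
    command = gain * error / assumed_map      # controller uses the wrong map
    effector = effector + true_map * command  # plant responds with the true map

final_error = float(np.linalg.norm(target - effector))
```

Each iteration contracts the error by a constant factor as long as the sign and rough scale of the map are right, which is why virtually no calibration is needed.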
Camera Placement Planning Avoiding Occlusion: Test Results Using a Robotic Hand/Eye System
Camera placement experiments are presented that demonstrate the effectiveness of a viewpoint planning algorithm that avoids occlusion of a visual target. A CCD camera mounted on a robot in a hand-eye configuration is placed at planned unobstructed viewpoints to observe a target on a real object. The validity of the method is tested by placing the camera inside the viewing region that is constructed using the proposed sensor placement planning algorithm and observing whether the target is truly visible. The accuracy of the boundary of the constructed viewing region is tested by placing the camera at critical locations on the viewing region boundary and confirming that the target is barely visible. The corresponding scenes from the candidate viewpoints are shown, demonstrating that occlusions are properly avoided.
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers that process and analyze this sensory information are usually based on three types of sensors (visual, force/torque, and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control, and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques, and applications developed by Spanish researchers to implement these mono-sensor controllers and the multi-sensor controllers which combine several sensors.
An implementation of a versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV camera and lenses
This thesis studies and implements a versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, developed by Roger Tsai [1]. The technique builds a unique relationship between the world coordinate system and the computer image coordinate system of the calibration points by using a radial alignment constraint. It has advantages in terms of accuracy, speed, and versatility over existing techniques.
The fundamental knowledge needed to use this technique is presented first, followed by an overview of existing calibration techniques and a detailed description of the new technique. The implementation is then presented step by step in an algorithm-oriented manner. Finally, the experimental results using real data are reported.
A precise calibration pattern, a CCD camera with zoom lens and a DADACUBE image acquisition system are used for the implementation of the calibration technique.
This thesis supplies the calibrated parameters for researchers who will use the CCD camera in their research, and may pave the way for future research in camera calibration.
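The radial alignment constraint at the heart of Tsai's technique rests on the observation that purely radial lens distortion moves an image point only along the ray from the image center, so the distorted image point stays radially aligned with the (Xc, Yc) direction of the 3D point in camera coordinates. A minimal numerical check of this property, with an illustrative focal length and a simple one-coefficient distortion model of my own choosing:

```python
import numpy as np

f = 35.0    # focal length (arbitrary units, illustrative)
k1 = -0.08  # radial distortion coefficient (illustrative)

def project(p_cam):
    """Pinhole projection followed by purely radial distortion."""
    Xc, Yc, Zc = p_cam
    x, y = f * Xc / Zc, f * Yc / Zc      # ideal pinhole projection
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 / (f * f)      # radial scaling about the image center
    return x * scale, y * scale          # distorted image coordinates

p = np.array([0.4, -0.7, 5.0])           # a point in camera coordinates
xd, yd = project(p)

# Radial alignment constraint: (Xc, Yc) is parallel to (xd, yd),
# so their 2D cross product vanishes regardless of k1.
cross = p[0] * yd - p[1] * xd
```

Because the constraint holds independently of the radial distortion coefficients and the focal length, Tsai's method can first solve for the extrinsic orientation from this constraint alone and recover the distortion afterwards.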
Body models in humans, animals, and robots: mechanisms and plasticity
Humans and animals excel in combining information from multiple sensory
modalities, controlling their complex bodies, adapting to growth, failures, or
using tools. These capabilities are also highly desirable in robots. They are
displayed by machines to some extent - yet, as is so often the case, the
artificial creatures are lagging behind. The key foundation is an internal
representation of the body that the agent - human, animal, or robot - has
developed. In the biological realm, evidence has been accumulated by diverse
disciplines giving rise to the concepts of body image, body schema, and others.
In robotics, a model of the robot is an indispensable component that makes it
possible to control the machine. In this article I compare the character of body
representations in biology with their robotic counterparts and relate that to
the differences in performance that we observe. I put forth a number of axes
regarding the nature of such body models: fixed vs. plastic, amodal vs. modal,
explicit vs. implicit, serial vs. parallel, modular vs. holistic, and
centralized vs. distributed. An interesting trend emerges: on many of the axes,
there is a sequence from robot body models, over body image, body schema, to
the body representation in lower animals like the octopus. In some sense,
robots have a lot in common with Ian Waterman - "the man who lost his body" -
in that they rely on an explicit, veridical body model (body image taken to the
extreme) and lack any implicit, multimodal representation (like the body
schema) of their bodies. I will then detail how robots can inform the
biological sciences dealing with body representations and, finally, study
which of the features of the "body in the brain" should be transferred to
robots, giving rise to more adaptive, resilient, self-calibrating machines.
Comment: 27 pages, 8 figures
Reconstruction of Specular Reflective Surfaces using Auto-Calibrating Deflectometry
This thesis discusses deflectometry as a reconstruction method for highly reflective surfaces. It focuses on deflectometry alone and does not use other reconstruction techniques to supplement it with additional data. It explains the measurement process and principle and provides a crash course in an efficient mathematical representation of the principles involved. Using this, it reformulates existing three-dimensional reconstruction methods, expands upon them, and develops new ones.