Off-the-shelf bin picking workcell with visual pose estimation: A case study on the world robot summit 2018 kitting task
The World Robot Summit 2018 Assembly Challenge included four different tasks.
The kitting task, which required bin-picking, was the task in which the fewest
points were obtained. However, bin-picking is a vital skill that can
significantly increase the flexibility of robotic set-ups, and is, therefore,
an important research field. In recent years, advances in sensor technology
and pose estimation algorithms have enabled better performance in visual pose
estimation.
This paper shows that, by utilizing new vision sensors and pose estimation
algorithms, pose estimation in bins can be performed successfully. We also
implement a workcell for bin picking, along with a force-based grasping
approach, to perform the complete bin-picking task. Our set-up is tested on the
World Robot Summit 2018 Assembly Challenge and obtains a higher score than all
teams at the competition. This demonstrates that current technology can perform
bin-picking at a much higher level than previous results.
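The abstract does not detail the force-based grasping approach; a minimal sketch of one common pattern, closing the gripper in small increments until a measured contact force crosses a threshold, might look as follows. The sensor and gripper interfaces here are hypothetical placeholders, not the paper's actual API.

```python
# Minimal sketch of a force-thresholded grasp loop (illustrative only).
# read_force and close_step are hypothetical hardware interfaces.

def force_based_grasp(read_force, close_step, force_limit=5.0, max_steps=100):
    """Close the gripper incrementally until the measured force exceeds
    force_limit, indicating firm contact with the object."""
    for _ in range(max_steps):
        if read_force() >= force_limit:
            return True  # object grasped with sufficient force
        close_step()     # close the gripper by a small increment
    return False         # never reached the force limit

# Toy usage: simulate force rising once the jaws touch a 6.0-wide object.
state = {"width": 10.0}

def read_force():
    return max(0.0, (6.0 - state["width"]) * 4.0)

def close_step():
    state["width"] -= 0.5

grasped = force_based_grasp(read_force, close_step)
```

Thresholding on force rather than position lets the same routine grasp objects of unknown size, which matters when parts lie in arbitrary poses in a bin.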
Supervised and unsupervised learning in vision-guided robotic bin picking applications for mixed-model assembly
Mixed-model assembly usually involves numerous component variants that require effective materials supply. Picking activities are often performed manually, but robotic bin picking has the potential to improve quality while reducing man-hour consumption. Robots can use vision systems to learn how to perform their tasks. This paper aims to understand the differences between two learning approaches: supervised learning and unsupervised learning. An experiment measuring engineering preparation time (EPT) and recognition quality (RQ) is performed. The findings show improved RQ but longer EPT with the supervised approach compared with the unsupervised one.
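The trade-off the paper measures can be illustrated with a toy sketch (not the paper's method): a supervised recognizer needs labelled examples per component variant (the labelling effort is the EPT cost), while an unsupervised one must discover the groups itself. All data and functions below are illustrative.

```python
# Toy 1-D "features" for two component variants, A and B.
labeled = [(1.0, "A"), (1.2, "A"), (0.9, "A"),
           (5.0, "B"), (5.3, "B"), (4.8, "B")]

# Supervised: one centroid per known label (requires labelling effort,
# i.e. longer engineering preparation time, but labels are meaningful).
def supervised_centroids(data):
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def classify(x, centroids):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Unsupervised: simple 2-means on the same features, labels unknown.
def two_means(xs, iters=10):
    c0, c1 = min(xs), max(xs)
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1

centroids = supervised_centroids(labeled)
pred = classify(1.1, centroids)               # supervised prediction
c0, c1 = two_means([x for x, _ in labeled])   # clusters found without labels
```

The unsupervised variant skips the labelling step entirely, but its clusters carry no variant names, which mirrors the EPT-versus-RQ tension the experiment quantifies.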
Learning to Dexterously Pick or Separate Tangled-Prone Objects for Industrial Bin Picking
Industrial bin picking for tangled-prone objects requires the robot to either
pick up untangled objects or perform separation manipulation when the bin
contains no isolated objects. The robot must be able to flexibly perform
appropriate actions based on the current observation. It is challenging due to
high occlusion in the clutter, elusive entanglement phenomena, and the need for
skilled manipulation planning. In this paper, we propose an autonomous,
effective and general approach for picking up tangled-prone objects for
industrial bin picking. First, we learn PickNet - a network that maps the
visual observation to pixel-wise possibilities of picking isolated objects or
separating tangled objects and infers the corresponding grasp. Then, we propose
two effective separation strategies: Dropping the entangled objects into a
buffer bin to reduce the degree of entanglement; Pulling to separate the
entangled objects in the buffer bin planned by PullNet - a network that
predicts position and direction for pulling from visual input. To efficiently
collect data for training PickNet and PullNet, we embrace the self-supervised
learning paradigm using an algorithmic supervisor in a physics simulator.
Real-world experiments show that our policy can dexterously pick up
tangled-prone objects with a success rate of 90%. We further demonstrate the
generalization of our policy by picking a set of unseen objects. Supplementary
material, code, and videos can be found at https://xinyiz0931.github.io/tangle.
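The high-level decision flow described in the abstract can be sketched as follows. The score lists stand in for PickNet/PullNet outputs; the function and thresholds are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the pick-or-separate decision flow: pick an isolated object
# when one is confidently detected, otherwise drop an entangled pile into
# the buffer bin, and pull objects apart once they are in the buffer.
# pick_scores / sep_scores are stand-ins for PickNet-style model outputs.

def select_action(pick_scores, sep_scores, in_buffer, pick_threshold=0.5):
    """Return an (action, index) pair for the best-scoring candidate."""
    best_pick = max(range(len(pick_scores)), key=lambda i: pick_scores[i])
    if pick_scores[best_pick] >= pick_threshold:
        return ("pick", best_pick)           # an isolated object is graspable
    best_sep = max(range(len(sep_scores)), key=lambda i: sep_scores[i])
    if not in_buffer:
        return ("drop_to_buffer", best_sep)  # reduce entanglement first
    return ("pull", best_sep)                # PullNet-style separation

action = select_action([0.2, 0.9, 0.1], [0.3, 0.4, 0.8], in_buffer=False)
```

Splitting separation into a drop stage and a pull stage, as the abstract describes, keeps each learned model focused on one simpler sub-problem.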
Semi-Autonomous Behaviour Tree-Based Framework for Sorting Electric Vehicle Batteries Components
The process of recycling electric vehicle (EV) batteries currently represents a significant challenge to the waste-management automation industry. One example is the need to remove and sort dismantled components from an EV battery pack. This paper proposes a novel framework to semi-automate the process of removing and sorting different objects from an EV battery pack using a mobile manipulator. The work exploits the Behaviour Trees model for cognitive task execution and monitoring, which links different robot capabilities, such as navigation, object tracking and motion planning, in a modular fashion. The framework was tested in simulation, in both static and dynamic environments, and was evaluated on task time and the number of objects the robot successfully placed in the respective containers. Results suggest that the robot's success rate in sorting the battery components was 95% in static environments and 82% in dynamic environments.
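The modular linking of capabilities the abstract describes rests on the standard behaviour-tree composites, Sequence and Selector; a minimal sketch follows. The node names and leaf actions are illustrative, not the paper's implementation.

```python
# Minimal behaviour-tree sketch: Sequence runs children in order and fails
# fast; Selector tries children in order as fallbacks.

SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Run children in order; fail on the first child failure."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for c in self.children:
            if c.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Try children in order; succeed on the first child success."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for c in self.children:
            if c.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

class Action:
    """Leaf node wrapping a capability call that returns True/False."""
    def __init__(self, fn): self.fn = fn
    def tick(self): return SUCCESS if self.fn() else FAILURE

log = []
def make(name, ok=True):
    def fn():
        log.append(name)
        return ok
    return Action(fn)

# Illustrative task: navigate, then either track-and-grasp or fall back
# to requesting operator help (the semi-autonomous element).
tree = Sequence(make("navigate"),
                Selector(make("track_and_grasp", ok=False),
                         make("request_operator_help")))
result = tree.tick()
```

Because each capability is a self-contained node, swapping a grasping strategy or adding a fallback changes only the tree layout, not the other nodes, which is the modularity the framework exploits.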
The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety
Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing-algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions that depend on automated machine vision. A set of tools for single and multiple focal-plane sensor image processing and understanding has been demonstrated which utilizes object-recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between the human operator and machine vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotic manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.