
Computer Vision for Robotics: Feature Matching, Pose Estimation and Safe Human-Robot Collaboration

Abstract

This thesis studies computer vision and its applications in robotics. The contributions fall into three main categories: 1) object class matching, 2) 6D pose estimation and 3) Human-Robot Collaboration (HRC). For decades, 2D local image features have been used to find robust matches between two images of the same scene or object. In the first part of the thesis, this setting is extended to class-level matching, where the goal is to find correct matches between object instances from the same class (e.g. a Harley-Davidson and a scooter from the motorcycle class). An existing benchmark is adapted to the class matching setting, and state-of-the-art detectors and descriptors are evaluated on multiple image datasets. The main finding from the experiments is that the performance of 2D local features in the class matching setting is poor and specialized approaches are needed. In the second part, local features are extended to 6D pose estimation, where 3D feature correspondences are used to fully localize the target object from the sensor input, i.e. to recover its 3D position and 3D orientation. To find reliable correspondences, two robustifying methods are proposed that exploit the object's surface geometry and remove unreliable surface regions. In the experiments, these relatively simple algorithms improved the accuracy of several pose estimation methods. As a second study in the pose estimation category, the existing metrics for evaluating the quality of an estimated pose are assessed. As a result, a novel evaluation metric is proposed which extends current practice from geometrical verification to a statistical formulation of the task success probability given an estimated object pose. The metric was found to be more realistic than prior art for validating an estimated pose for a given manipulation task. The final contributions relate to HRC, which is part of the next industrial revolution, Industry 4.0. This shift breaks with existing safety practices in industrial manufacturing: the safety fences around the robot are removed and the human operator works in close proximity to the robot. Novel safety solutions are therefore required that prevent collisions between the co-workers while still allowing flexible collaboration. To address these requirements, a safety model for HRC is proposed and experimentally evaluated on two different assembly tasks. The results verify the potential of human-robot teams to be a more efficient solution for industrial manufacturing than current working methods. As a final study, the usefulness and readiness level of augmented reality-based (AR-based) techniques as a user interface medium in manufacturing tasks is evaluated. The results indicate that AR-based interaction can support and instruct the operator, making them feel more comfortable and productive during complex manufacturing tasks.
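To make the class-level matching setting concrete, the following Python sketch shows the conventional local-feature pipeline that such an evaluation builds on: detect keypoints, compute descriptors, and match them between two images showing different instances of the same class. It is a minimal illustration, not the thesis benchmark; the file names and the SIFT/ratio-test choices are assumptions, and any detector/descriptor pair could be substituted.

# Minimal sketch (not the thesis benchmark): matching 2D local features between
# two images that show different instances of the same object class.
# Assumes OpenCV (cv2) and two hypothetical input files instance_a.jpg / instance_b.jpg.
import cv2

img_a = cv2.imread("instance_a.jpg", cv2.IMREAD_GRAYSCALE)   # e.g. a Harley-Davidson
img_b = cv2.imread("instance_b.jpg", cv2.IMREAD_GRAYSCALE)   # e.g. a scooter

sift = cv2.SIFT_create()
kp_a, desc_a = sift.detectAndCompute(img_a, None)
kp_b, desc_b = sift.detectAndCompute(img_b, None)

# Two-nearest-neighbour matching with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(desc_a, desc_b, k=2)
good = [m for m, n in knn_matches if m.distance < 0.8 * n.distance]

print(f"{len(good)} putative class-level matches out of {len(kp_a)} keypoints")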
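The 6D pose estimation part relies on 3D feature correspondences between a model and the sensor input. As a minimal illustration of how a full 6D pose (3D rotation plus 3D translation) can be recovered from such correspondences, the sketch below uses a standard least-squares SVD (Kabsch) alignment; this is a generic baseline under the assumption of already established correspondences, not the robustified methods proposed in the thesis.

# Minimal sketch: recover the 6D pose (rotation R and translation t) that aligns
# model points to scene points, given 3D-3D feature correspondences.
import numpy as np

def pose_from_correspondences(model_pts, scene_pts):
    """model_pts, scene_pts: (N, 3) arrays of corresponding 3D points."""
    mu_m, mu_s = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_m
    return R, t                                                   # scene ≈ R @ model + t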
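For the evaluation-metric study, the sketch below illustrates the kind of geometric verification that current practice relies on: an ADD-style mean point distance with a fixed acceptance threshold. The thesis's proposed metric replaces this binary geometric criterion with a statistical task success probability; the code shows only the baseline being extended, and the 10% object-diameter threshold is a common convention, not a value taken from the thesis.

# Minimal sketch of conventional geometric verification (ADD-style mean point
# distance) between a ground-truth pose (R_gt, t_gt) and an estimate (R_est, t_est).
import numpy as np

def add_error(model_pts, R_gt, t_gt, R_est, t_est):
    """Mean distance between model points under ground-truth and estimated poses."""
    gt = model_pts @ R_gt.T + t_gt
    est = model_pts @ R_est.T + t_est
    return np.linalg.norm(gt - est, axis=1).mean()

def pose_accepted(error, object_diameter, threshold=0.1):
    # Conventional binary criterion: accept if the error is below a fraction of
    # the object diameter (commonly 10%). The thesis's metric instead formulates
    # a task-conditioned success probability for the estimated pose.
    return error < threshold * object_diameter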
