3 research outputs found

    Computational Vision and Active Perception

    For service robots operating in domestic environments, it is not enough to consider only control-level robustness; it is equally important to consider how the image information that serves as input to the control process can be used to achieve robust and efficient control. This paper presents an effort towards the development of robust visual techniques used to guide robots in various tasks. Given a task at hand, we argue that different levels of complexity should be considered, which also determines the choice of the visual technique used to provide the necessary feedback information. We concentrate on visual feedback estimation, where we investigate both two- and three-dimensional techniques. In the former case, we are interested in providing coarse information about the object position/velocity in the image plane. In particular, a set of simple visual features (cues) is employed in an integrated framework where voting is used to fuse the responses from the individual cues. The experimental evaluation shows the system performance for three different camera-robot configurations most common in robotic systems. For cases where the robot is supposed to grasp the object, a two-dimensional position estimate is often not enough and the complete pose (position and orientation) of the object may be required. Therefore, we present a model-based system in which a wire-frame model of the object is used to estimate its pose. Since a number of similar systems have been proposed in the literature, we concentrate on the part of the system that is usually neglected: automatic pose initialization. Finally, we show how a number of existing approaches can successfully be integrated in a system that is able to recognize and grasp fairly textured, everyday objects. One of the examples presented in the experimental section shows a mobile robot performing tasks in a real-world environment, a living room. *This research has been sponsored by the Swedish Foundation for Strategic Research through the Centre for Autonomous Systems. The funding is gratefully acknowledged.
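    The voting-based cue integration mentioned in this abstract can be illustrated with a short sketch. The code below is a minimal illustration, not the paper's implementation: it assumes each cue already produces a response map over the image plane, normalizes each map, and accumulates weighted votes to obtain a coarse 2D position estimate. The fuse_cues helper and the synthetic cue maps are hypothetical.

```python
import numpy as np

def fuse_cues(response_maps, weights=None):
    """Fuse per-cue response maps by weighted voting and return the
    image-plane position that receives the most votes."""
    if weights is None:
        weights = np.ones(len(response_maps))
    votes = np.zeros_like(np.asarray(response_maps[0], dtype=float))
    for w, r in zip(weights, response_maps):
        r = np.asarray(r, dtype=float)
        span = r.max() - r.min()
        if span > 0:                      # normalize so no single cue dominates
            r = (r - r.min()) / span
        votes += w * r
    y, x = np.unravel_index(np.argmax(votes), votes.shape)
    return (x, y), votes

# Synthetic stand-ins for cue responses (e.g. color, motion, edges).
rng = np.random.default_rng(0)
cues = [rng.random((120, 160)) for _ in range(3)]
cues[0][40:60, 70:90] += 2.0              # pretend one cue fires on the object
(px, py), vote_map = fuse_cues(cues)
print("coarse object position (x, y):", px, py)
```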

    Object Recognition and Pose Estimation using Color Cooccurrence Histograms and Geometric Modeling

    Robust techniques for object recognition and pose estimation are essential for robotic manipulation and object grasping. In this paper, a novel approach to object recognition and pose estimation based on color cooccurrence histograms and geometric model-based techniques is presented. The particular problems addressed are: i) robust recognition of objects in natural scenes, ii) estimation of partial pose using an appearance-based approach, and iii) complete 6DOF model-based pose estimation and tracking. Our recognition scheme is based on color cooccurrence histograms embedded in a classical learning framework that uses a “winner-takes-all” strategy across different scales. The hypotheses generated in the recognition stage provide the basis for estimating the orientation of the object around the vertical axis. This prior, incomplete pose information is subsequently made precise by a technique that uses a geometric model of the object to estimate and continuously track the complete 6DOF pose of the object. The major contributions of the proposed system are its ability to automatically initiate the tracking process, its robustness and invariance to scaling and translation, and its computational efficiency, since both recognition and pose estimation rely on the same representation of the object. The performance of the system is evaluated in a domestic environment (living room) with changing lighting and background conditions on a set of everyday objects.
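    As a rough illustration of the representation used for recognition, the sketch below computes a color cooccurrence histogram over quantized colors for pixel pairs within a fixed distance and compares two histograms by intersection. The quantization (4 bins per channel), the offset sampling, and the function names are illustrative assumptions, not the parameters reported in the paper; a winner-takes-all search over scales would keep the window and scale with the highest intersection score.

```python
import numpy as np

def quantize(img, bins_per_channel=4):
    """Map an HxWx3 uint8 image to one quantized color index per pixel."""
    q = (img.astype(int) * bins_per_channel) // 256
    return (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]

def cooccurrence_histogram(img, d=5, bins_per_channel=4):
    """Count pairs of quantized colors occurring within distance d
    (approximated here by four fixed offsets) and normalize the counts."""
    labels = quantize(img, bins_per_channel)
    n = bins_per_channel ** 3
    hist = np.zeros((n, n), dtype=float)
    H, W = labels.shape
    for dy, dx in [(0, d), (d, 0), (d, d), (d, -d)]:
        y0, y1 = max(0, -dy), H - max(0, dy)
        x0, x1 = max(0, -dx), W - max(0, dx)
        a = labels[y0:y1, x0:x1]                      # first pixel of each pair
        b = labels[y0 + dy:y1 + dy, x0 + dx:x1 + dx]  # its offset partner
        np.add.at(hist, (a.ravel(), b.ravel()), 1.0)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; the window/scale with the highest score wins."""
    return np.minimum(h1, h2).sum()

# Compare a stored object patch against an image window of the same size.
rng = np.random.default_rng(0)
model_patch = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
scene_patch = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
score = histogram_intersection(cooccurrence_histogram(model_patch),
                               cooccurrence_histogram(scene_patch))
print("match score:", round(float(score), 3))
```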

    Model Based Techniques for Robotic Servoing and Grasping

    Robotic manipulation of objects typically involves object detection/recognition, servoing to the object, alignment, and grasping. To perform fine alignment and finally grasping, it is usually necessary to estimate the position and orientation (pose) of the object. In this paper we present a model-based tracking system used to estimate and continuously update the pose of the object to be manipulated. Here, a wire-frame model is used to find and track features in consecutive images. One of the important parts of the system is the ability to automatically initiate the tracking process. The strength of the system is its ability to operate in a domestic environment (living room) with changing lighting and background conditions.
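    A minimal sketch of the 6DOF pose estimation step is given below, assuming correspondences between 3D wire-frame model points and their 2D image locations are already available; the edge-based feature tracking and the automatic initialization described in the abstract are not reproduced here. It relies on OpenCV's solvePnP, and the estimate_pose helper is an illustrative stand-in rather than the paper's method.

```python
import numpy as np
import cv2  # OpenCV

def estimate_pose(model_points, image_points, camera_matrix,
                  dist_coeffs=None, rvec=None, tvec=None):
    """Estimate (or, given the previous frame's pose, refine) the rotation
    and translation mapping wire-frame model coordinates to the camera."""
    model_points = np.asarray(model_points, dtype=np.float64)
    image_points = np.asarray(image_points, dtype=np.float64)
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    if rvec is not None and tvec is not None:
        # Continuous tracking: seed the solver with the previous pose.
        ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                                      camera_matrix, dist_coeffs,
                                      rvec, tvec, useExtrinsicGuess=True)
    else:
        # Initialization: solve from scratch (the paper's automatic
        # initialization would supply these first correspondences).
        ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                                      camera_matrix, dist_coeffs)
    return ok, rvec, tvec

# In a tracking loop, the model edges would be re-projected with
# cv2.projectPoints(model_points, rvec, tvec, camera_matrix, dist_coeffs)
# to predict where the tracked features should appear in the next image.
```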