
    The Resolvability Ellipsoid for Sensor Based Manipulation

    This paper presents a new sensor placement measure called resolvability. This measure provides a technique for estimating the relative ability of various sensor systems, including single camera systems, stereo pairs, multi-baseline stereo systems, and 3D rangefinders, to accurately control visually manipulated objects. The measure also indicates the capability of a visual sensor to provide spatially accurate data on objects of interest. The term resolvability refers to the ability of a visual sensor to resolve object positions and orientations. Our main interest in resolvability is in determining the accuracy with which a manipulator being observed by a camera can visually servo an object to a goal position and orientation. The resolvability ellipsoid is introduced to illustrate the directional nature of resolvability, and can be used to direct camera motion and adjust camera intrinsic parameters in real time so that the servoing accuracy of the visual servoing system improves with camera-lens motion. The Jacobian mapping from task space to sensor space is derived for a single camera system, a stereo pair with parallel optical axes, and a stereo pair with perpendicular optical axes. Resolvability ellipsoids based on these mappings for various sensor configurations are presented. Visual servoing experiments demonstrate that resolvability can be used to direct camera-lens motion in order to increase the ability of a visually servoed manipulator to precisely servo objects.
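
    As a rough illustration of the idea, the sketch below (not the paper's implementation) builds a resolvability-style ellipsoid from the image Jacobian of a few point features seen by a single camera; the pinhole Jacobian form, focal length, depth, and feature locations are all assumed values.

```python
# A minimal sketch, assuming a single camera observing point features and
# translational motion only, of how a resolvability ellipsoid could be
# obtained from the task-space-to-sensor-space Jacobian.
import numpy as np

def point_jacobian(x, y, Z, f):
    """Image Jacobian of one point feature w.r.t. camera-frame translation."""
    return np.array([[-f / Z, 0.0, x / Z],
                     [0.0, -f / Z, y / Z]])

# Stack the Jacobians of several tracked features (all values illustrative).
f, Z = 800.0, 1.0                       # focal length [px], depth [m]
features = [(50.0, 20.0), (-30.0, 60.0), (10.0, -40.0)]
J = np.vstack([point_jacobian(x, y, Z, f) for x, y in features])

# The right singular vectors give the principal axes of the resolvability
# ellipsoid in task space; the singular values indicate how well motion
# along each axis is resolved in the image.
U, s, Vt = np.linalg.svd(J)
for sigma, axis in zip(s, Vt):
    print(f"axis {np.round(axis, 2)}  resolvability {sigma:.1f}")
```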

    Increasing the Tracking Region of an Eye-in-Hand System by Singularity and Joint Limit Avoidance

    A new control strategy is presented which visually tracks objects using a manipulator/camera system while simultaneously avoiding kinematic singularities and joint limits by moving in directions along which the tracking task space is unconstrained or redundant. A manipulability measure is introduced into the visual servoing objective function in order to derive the control law. The algorithms developed have been experimentally verified on an eye-in-hand system. Results demonstrate the effectiveness of the method by showing that the tracking region of a manipulator tracking objects with planar motion can be greatly increased.
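
    For reference, a minimal sketch of the kind of manipulability measure referred to above (Yoshikawa's w = sqrt(det(J J^T))), evaluated for a planar two-link arm; the arm model, link lengths, and joint angles are assumptions, and the paper's actual objective function and control law are not reproduced.

```python
# A minimal sketch: manipulability of a planar two-link arm, which drops
# toward zero as the arm approaches the outstretched singularity.
import numpy as np

def planar_2link_jacobian(q1, q2, l1=0.4, l2=0.3):
    """Geometric Jacobian (end-effector x, y velocities) of a planar 2R arm."""
    return np.array([
        [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
        [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
    ])

def manipulability(J):
    """Yoshikawa manipulability; approaches zero near kinematic singularities."""
    return np.sqrt(np.linalg.det(J @ J.T))

for q2 in (np.pi/2, np.pi/6, 0.01):        # 0.01 rad: nearly outstretched
    J = planar_2link_jacobian(0.3, q2)
    print(f"q2={q2:.2f} rad  w={manipulability(J):.4f}")
```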

    An Extendable Framework for Visual Servoing Using Environment Models

    Visual servoing is a manipulation control strategy that precisely positions objects using imprecisely calibrated camera-lens-manipulator systems. In order to quickly and easily integrate sensor-based manipulation strategies such as visual servoing into robotic systems, a system framework and a task representation must exist which facilitate this integration. The framework must also be extendable so that obsolete sensor systems can be easily replaced or extended as new technologies become available. In this paper, we present a framework for expectation-based visual servoing which visually guides tasks based on the expected visual appearance of the task. The appearance of the task is generated by a model of the environment that uses texture-mapped geometric models to represent objects. A system structure which facilitates the integration of various configurations of visual servoing systems is presented, as well as a hardware implementation of the proposed system and experimental results using a stereo camera system.
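
    To make the framework idea concrete, here is a minimal sketch of a common sensor interface that a servo loop could be written against, so that sensor configurations can be swapped or extended; the class names, toy feature values, and simple proportional step are assumptions for illustration, not the paper's framework.

```python
# A minimal sketch, assuming feature-based servoing: each sensor system hides
# behind one interface, so the servo loop does not change when sensors do.
import numpy as np

class FeatureSensor:
    """Common interface: current and expected (model-predicted) feature vectors."""
    def observe(self):  raise NotImplementedError
    def expected(self): raise NotImplementedError

class MonoCameraSensor(FeatureSensor):
    def observe(self):  return np.array([102.0, 51.0])            # measured (u, v)
    def expected(self): return np.array([100.0, 50.0])            # from model render

class StereoCameraSensor(FeatureSensor):
    def observe(self):  return np.array([102.0, 51.0, 98.0, 49.0])
    def expected(self): return np.array([100.0, 50.0, 97.0, 50.0])

def servo_step(sensor: FeatureSensor, gain=0.5):
    """One step: drive the observed features toward their expected appearance."""
    return gain * (sensor.expected() - sensor.observe())

for s in (MonoCameraSensor(), StereoCameraSensor()):
    print(type(s).__name__, servo_step(s))
```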

    Increasing the Tracking Region of an Eye-in-Hand System Using Controlled Active Vision

    This paper presents a control strategy which allows a manipulator/camera system to track objects with planar motion while simultaneously avoiding kinematic singularities by moving in directions along which the tracking task space is unconstrained. The projection of a Cartesian manipulability gradient onto these directions of motion is used to determine the magnitude and direction of the required motion for singularity avoidance, while the controlled active vision paradigm is used for tracking. The algorithms developed have been experimentally verified on an eye-in-hand system. Results demonstrate the effectiveness of the method by showing that the tracking region of a manipulator tracking objects with planar motion can be increased by a factor of two to three. The overall accuracy of the tracking system is also improved.
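
    A minimal sketch of the projection step described above: the Cartesian manipulability gradient is projected onto the directions the planar tracking task leaves unconstrained, and only that component is added to the commanded motion. The three-axis task space, the choice of depth as the free direction, the gradient value, and the gain are all illustrative assumptions.

```python
import numpy as np

# Assumed Cartesian manipulability gradient (change in w per unit
# end-effector motion along x, y, z); illustrative, not computed from a real arm.
grad_w = np.array([0.2, -0.1, 0.5])

# Basis of the directions left unconstrained by the planar tracking task
# (assume the image fixes x and y, leaving motion along the optical axis free).
N = np.array([[0.0], [0.0], [1.0]])

# Project the gradient onto that subspace and scale it to obtain the
# singularity-avoidance component of the commanded velocity.
P = N @ N.T
avoidance_velocity = 0.8 * (P @ grad_w)
print(avoidance_velocity)          # nonzero only along the free direction
```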

    Integrating Sensor Placement and Visual Tracking Strategies

    Real-time visual feedback is an important capability that many robotic systems must possess if these systems are to operate successfully in dynamically varying and/or uncalibrated environments. An eye-in-hand system is a common technique for providing camera motion to increase the working region of a visual sensor. Although eye-in-hand robotic systems have been well-studied, several deficiencies in proposed systems make them inadequate for actual use. Typically, the systems fail if manipulators pass through singularities or joint limits. Objects being tracked can be lost if the objects become defocused, occluded, or if features on the objects lie outside the field of view of the camera. In this paper, a technique is introduced for integrating a visual tracking strategy with dynamically determined sensor placement criteria. This allows the system to automatically determine, in real-time, proper camera motion for tracking objects successfully while accounting for the undesirable, but often unavoidable, characteristics of camera-lens and manipulator systems. The sensor placement criteria considered include focus, field-of-view, spatial resolution, manipulator configuration, and a newly introduced measure called resolvability. Experimental results are presented.
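
    A minimal sketch of how such criteria might be folded into a single placement objective that ranks candidate camera positions; the individual criterion functions, weights, and candidate placements are assumptions for illustration and are not the paper's formulation.

```python
# A minimal sketch: score candidate camera placements against several
# sensor placement criteria and keep the best one.
import numpy as np

def focus_score(depth, focus_dist=1.0, dof=0.3):
    return np.exp(-((depth - focus_dist) / dof) ** 2)   # 1 when perfectly focused

def field_of_view_score(offset, half_fov=0.5):
    return 1.0 if abs(offset) < half_fov else 0.0       # target inside the image

def resolution_score(depth):
    return 1.0 / depth                                   # closer camera, finer resolution

def placement_objective(depth, offset, weights=(1.0, 2.0, 0.5)):
    w_focus, w_fov, w_res = weights
    return (w_focus * focus_score(depth)
            + w_fov * field_of_view_score(offset)
            + w_res * resolution_score(depth))

# Candidate placements as (depth to target, lateral offset); all assumed values.
candidates = [(0.8, 0.1), (1.2, 0.2), (2.0, 0.0)]
best = max(candidates, key=lambda c: placement_objective(*c))
print("best (depth, lateral offset):", best)
```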

    Strategies for Increasing the Tracking Region of an Eye-in-Hand System by Singularity and Joint Limit Avoidance

    An eye-in-hand system visually tracking objects can fail when the manipulator encounters a kinematic singularity or a joint limit. A solution to this problem is presented in which objects are visually tracked while the manipulator simultaneously avoids kinematic singularities and manipulator joint limits by moving in directions along which the tracking task space is unconstrained or redundant. A manipulability measure is introduced into the visual tracking objective function, providing an elegant and robust technique for deriving a control law that visually tracks objects while accounting for the configuration of the manipulator. Two different tracking strategies, one using a standard visual tracking strategy and the other using the newly proposed strategy, are experimentally compared on an actual hand/eye system. The experimental results demonstrate the effectiveness of the new method by showing that the tracking region of a manipulator tracking objects with planar motion can be greatly increased.

    Force and Vision Resolvability for Assimilating Disparate Sensory Feedback

    Force and vision sensors provide complementary information, yet they are fundamentally different sensing modalities. This implies that traditional sensor integration techniques that require common data representations are not appropriate for combining the feedback from these two disparate sensors. In this paper, we introduce the concept of vision and force sensor resolvability as a means of comparing the ability of the two sensing modes to provide useful information during robotic manipulation tasks. By monitoring the resolvability of the two sensing modes with respect to the task, the information provided by the disparate sensors can be seamlessly assimilated during task execution. A nonlinear force/vision servoing algorithm that uses force and vision resolvability to switch between sensing modes is proposed. The advantages of the assimilation technique are demonstrated during contact transitions between a stiff manipulator and a rigid environment, a system configuration that easily becomes unstable when force control alone is used. Experimental results show that the proposed nonlinear controller makes robust contact transitions while simultaneously satisfying the conflicting task requirements of fast approach velocities, stability, minimized impact forces, and suppressed bounce between contact surfaces.
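
    A minimal sketch of the switching idea: compare crude proxies for how well vision and force currently resolve the task, and let whichever is more informative generate the command. The proxies, thresholds, gains, and one-dimensional task are assumptions for illustration, not the paper's nonlinear controller.

```python
# A minimal sketch: switch the controller between a vision-driven mode and a
# force-driven mode based on which sensor currently resolves the task better.
def vision_command(position_error, gain=1.0):
    return gain * position_error                  # free-space approach motion

def force_command(force_error, gain=0.02):
    return gain * force_error                     # compliant contact motion

def servo_step(position_error, measured_force, desired_force=5.0):
    # Crude resolvability proxies: vision resolves large free-space errors,
    # force resolves the task once contact forces become measurable.
    force_resolvability = abs(measured_force)
    if force_resolvability > 1.0:                 # contact detected
        return force_command(desired_force - measured_force)
    return vision_command(position_error)

print(servo_step(position_error=0.05, measured_force=0.0))    # vision mode
print(servo_step(position_error=0.001, measured_force=8.0))   # force mode
```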

    Improved Force Control Through Visual Servoing

    Force-controlled manipulation is a common technique for compliantly contacting and manipulating uncertain environments. Visual servoing is effective for reducing alignment uncertainties between objects using imprecisely calibrated camera-lens-manipulator systems. These two types of manipulator feedback, force and vision, represent complementary sensing modalities; visual feedback provides information over a relatively large area of the workspace without requiring contact with the environment, and force feedback provides highly localized and precise information upon contact. This paper presents three different strategies that combine force and vision within the feedback loop of a manipulator: traded control, hybrid control, and shared control. A discussion of the types of tasks that benefit from each strategy is included, as well as experimental results which show that the use of visual servoing to stably guide a manipulator simplifies the force control problem by allowing the effective use of low-gain force control with relatively large stability margins.
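
    To make the distinction between the three strategies concrete, here is a minimal sketch in a single translational task frame; the selection matrix, example velocity commands, contact flag, and blending weights are illustrative assumptions rather than the paper's control laws.

```python
import numpy as np

v_vision = np.array([0.02, -0.01, 0.000])    # velocity command from visual servoing
v_force  = np.array([0.00,  0.00, -0.005])   # velocity command from force control
in_contact = False                           # task phase flag (assumed)

# Traded control: one sensor drives the motion at a time, switched by task phase.
traded = v_force if in_contact else v_vision

# Hybrid control: a selection matrix assigns each direction to one sensor
# (here x and y to vision, z to force).
S = np.diag([1.0, 1.0, 0.0])
hybrid = S @ v_vision + (np.eye(3) - S) @ v_force

# Shared control: both sensors contribute along every direction.
shared = 0.5 * v_vision + 0.5 * v_force

print("traded:", traded, "\nhybrid:", hybrid, "\nshared:", shared)
```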

    Vision and Force Driven Sensorimotor Primitives for Robotic Assembly Skills

    Integrating sensors into robot systems is an important step towards increasing the flexibility of robotic manufacturing systems. Current sensor integration is largely task-specific, which hinders flexibility. We are developing a sensorimotor command layer that encapsulates useful combinations of sensing and action which can be applied to many tasks within a domain. The sensorimotor commands provide a higher level at which to terminate task strategy plans, which eases the development of sensor-driven robot programs. This paper reports on the development of both force- and vision-driven commands, which are successfully applied to two different connector insertion experiments.
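
    A minimal sketch of the sensorimotor-command idea: small, reusable sensing-plus-action units with sensor-based termination conditions that a task-level plan strings together into an insertion skill. The primitive names, the stand-in sensor updates, and the thresholds are assumptions for illustration, not the paper's command set.

```python
# A minimal sketch: sensor-terminated primitives composed into an insertion skill.
def visual_align(target_offset, tol=0.001):
    """Servo until the visually measured offset falls within tolerance."""
    while abs(target_offset) > tol:
        target_offset *= 0.5                 # stand-in for one visual servo step
    return "aligned"

def guarded_move(force, threshold=2.0):
    """Advance until the sensed contact force exceeds a threshold."""
    while force < threshold:
        force += 0.5                         # stand-in for advancing and re-sensing
    return "contact"

def insert_connector():
    # A task plan expressed in terms of sensor-terminated primitives.
    return [visual_align(0.02), guarded_move(0.0), "inserted"]

print(insert_connector())
```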