
    Generic decoupled image-based visual servoing for cameras obeying the unified projection model

    In this paper, a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The proposed decoupled scheme is based on the surface of object projections onto the unit sphere. Such features are invariant to rotational motions, which allows the translational motion to be controlled independently of the rotational motion. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robot platform.
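    To make the projection model concrete, the following minimal Python sketch (an editorial illustration, not code from the paper) applies the standard closed-form lifting of a pixel onto the unit sphere under the unified projection model; the intrinsic matrix K and the mirror parameter xi are placeholder values.

    ```python
    # Illustrative sketch: lift an image point onto the unit sphere under the
    # unified projection model. Calibration values below are made-up placeholders.
    import numpy as np

    def lift_to_sphere(u, v, K, xi):
        """Back-project pixel (u, v) onto the unit sphere for a camera with
        intrinsic matrix K and unified-model mirror parameter xi."""
        # Pixel -> normalized image coordinates.
        x, y, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])
        r2 = x * x + y * y
        # Standard closed-form inversion of the unified projection model.
        eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
        s = np.array([eta * x, eta * y, eta - xi])
        return s / np.linalg.norm(s)   # numerically enforce ||s|| = 1

    # Placeholder calibration: xi = 0 would reduce to a perspective camera.
    K = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0, 0.0, 1.0]])
    s = lift_to_sphere(350.0, 260.0, K, xi=0.8)
    print(s, np.linalg.norm(s))        # unit-norm spherical projection
    ```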

    Solution to the problem of designing a safe configuration of a human upper limb robotic prosthesis

    The development of methods for monitoring the positioning of robotic manipulators with computer vision systems (CVS) remains a relevant problem, with the aim of ensuring the safety of patients and medical staff when working with robotic medical rehabilitation devices. The goal of this study was to develop a method for improving the safety of robotic medical rehabilitation devices by designing and testing an algorithm for computing the angular positions of robotic manipulators or robotic prostheses used in rehabilitation therapy, allowing the natural trajectory of human arm movement to be reproduced under CVS control. The paper describes the robotic manipulator used in the study, reviews existing approaches to computing the angular positions of the actuators, and presents the proposed algorithm. Comparative results of the proposed algorithm and existing methods for computing the actuator angular positions of robotic manipulators (robotic prostheses) are given, together with planned directions for further refinement.
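    The abstract summarizes the study without spelling out the algorithm itself. Purely as a generic illustration of computing actuator angles for a desired hand position, the sketch below solves the textbook two-link planar inverse-kinematics problem; the link lengths and target point are arbitrary placeholders, and this is not the paper's method.

    ```python
    # Generic two-link planar inverse kinematics (illustrative only): compute
    # shoulder and elbow angles that place the "hand" at (x, y).
    import math

    def two_link_ik(x, y, l1, l2):
        """Return (shoulder, elbow) angles in radians for a planar 2-link arm,
        elbow-down solution. Raises ValueError if the target is unreachable."""
        d2 = x * x + y * y
        c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)   # law of cosines
        if abs(c2) > 1.0:
            raise ValueError("target outside the arm's workspace")
        elbow = math.acos(c2)                              # elbow-down branch
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        return shoulder, elbow

    # Placeholder link lengths (m) and target hand position (m).
    print(two_link_ik(0.35, 0.20, 0.30, 0.25))
    ```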

    Distance-based and Orientation-based Visual Servoing from Three Points

    This paper is concerned with the use of a spherical-projection model for visual servoing from three points. We propose a new set of six features to control a 6-degree-of-freedom (DOF) robotic system with good decoupling properties. The first part of the set consists of three invariants to camera rotations. These invariants are built using the Cartesian distances between the spherical projections of the three points. The second part of the set corresponds to the angle-axis representation of a rotation matrix measured from the image of two points. Compared with the classical perspective coordinates of points, the new set does not introduce additional singularities. In addition, using the new set inside its nonsingular domain, a classical control law is proven to be optimal for pure rotational motions. The theoretical results and the robustness of the new control scheme to errors in the point range are validated through simulations and experiments on a 6-DOF robot arm.
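    As an illustration of such a feature set, the sketch below assumes the spherical projections are already available and computes the three inter-point distances on the unit sphere together with the standard angle-axis vector of a rotation matrix; how the rotation matrix is measured from two image points is specific to the paper and is not reproduced here, so R is just an example rotation.

    ```python
    import numpy as np

    def distance_invariants(s1, s2, s3):
        """Cartesian distances between the spherical projections of three points.
        A camera rotation applies the same R to every s_i, leaving these unchanged."""
        return np.array([np.linalg.norm(s1 - s2),
                         np.linalg.norm(s2 - s3),
                         np.linalg.norm(s3 - s1)])

    def angle_axis(R):
        """theta*u vector of a rotation matrix (standard formula; ignores the
        theta = pi corner case for brevity)."""
        theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
        if np.isclose(theta, 0.0):
            return np.zeros(3)
        u = np.array([R[2, 1] - R[1, 2],
                      R[0, 2] - R[2, 0],
                      R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
        return theta * u

    # Placeholder spherical projections (unit vectors) and an example rotation.
    s = [np.array(v) / np.linalg.norm(v) for v in ([0.1, 0.2, 1.0],
                                                   [-0.2, 0.1, 1.0],
                                                   [0.0, -0.3, 1.0])]
    a = 0.4
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    print(distance_invariants(*s))                        # three rotation invariants
    print(distance_invariants(*[Rz @ si for si in s]))    # unchanged after rotation
    print(angle_axis(Rz))                                 # ~[0, 0, 0.4]
    ```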

    Decoupled image-based visual servoing for cameras obeying the unified projection model

    This paper proposes a generic decoupled image-based control scheme for cameras obeying the unified projection model. The scheme is based on the spherical projection model. Invariants to rotational motion are computed from this projection and used to control the translational degrees of freedom. Importantly, we form invariants that decrease the sensitivity of the interaction matrix to object-depth variation. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robotic platform.
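    As a small numerical check of the decoupling idea, and only as a hedged sketch with made-up 3D points, the snippet below shows that a distance between spherical projections is unchanged by a camera rotation but does change under a camera translation, which is what allows such invariants to drive the translational DOFs alone.

    ```python
    import numpy as np

    def sphere_proj(P):
        """Spherical projection of a 3D point expressed in the camera frame."""
        return P / np.linalg.norm(P)

    # Two made-up 3D points in the camera frame and a distance-based feature.
    P1, P2 = np.array([0.1, 0.05, 1.0]), np.array([-0.08, 0.12, 1.2])
    feat = lambda a, b: np.linalg.norm(sphere_proj(a) - sphere_proj(b))

    # Rotating the camera by R re-expresses each point as R.T @ P: feature unchanged.
    a = 0.3
    R = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
    print(feat(P1, P2), feat(R.T @ P1, R.T @ P2))    # equal up to rounding

    # Translating the camera by t re-expresses each point as P - t: feature changes.
    t = np.array([0.0, 0.0, 0.2])
    print(feat(P1 - t, P2 - t))                       # different value
    ```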
