13 research outputs found

    Solution to the problem of designing a safe configuration of a human upper limb robotic prosthesis

    Get PDF
    The development of methods for monitoring the positioning of robotic manipulators using computer vision systems (CVS) remains relevant today, with the goal of ensuring the safety of patients and medical personnel working with robotic medical rehabilitation devices. The aim of this study was to improve the safety of robotic medical rehabilitation devices by developing and testing an algorithm that computes the angular positions of robotic manipulators or robotic prostheses used in rehabilitation treatment, allowing them to reproduce the natural trajectory of the human arm under CVS control. The paper describes the robotic manipulator used in the study, surveys existing approaches to computing the angular positions of the drives, and presents the proposed algorithm. Comparative results of the proposed algorithm against existing methods for computing the angular positions of the drives of robotic manipulators (robotic prostheses) are given, along with intended directions for its further refinement.
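The abstract's algorithm for computing drive angles is not given in detail; as a minimal illustration of the kind of angular-position computation involved, the sketch below solves closed-form inverse kinematics for a hypothetical planar two-link arm (link lengths, function name, and elbow convention are all assumptions, not the paper's method).

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form inverse kinematics for a planar two-link arm.

    Returns shoulder and elbow angles (radians) that place the
    end effector at (x, y), or None if the point is unreachable.
    """
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target outside the reachable annulus
    s2 = math.sqrt(1.0 - c2 * c2)
    if not elbow_up:
        s2 = -s2  # pick the other of the two IK solutions
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

# Example: reach the point (1.0, 1.0) with two unit-length links
angles = two_link_ik(1.0, 1.0, 1.0, 1.0)
```

A trajectory-reproduction scheme of the kind described would call such a solver at each sampled point of the recorded hand path, with the vision system supplying the target coordinates.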

    Image noise induced errors in camera positioning

    Get PDF
    The problem of evaluating worst-case camera positioning error induced by unknown-but-bounded (UBB) image noise for a given object-camera configuration is considered. Specifically, it is shown that upper bounds to the rotation and translation worst-case error for a certain image noise intensity can be obtained through convex optimizations. These upper bounds, contrary to lower bounds provided by standard optimization tools, allow one to design robust visual servo systems. © 2007 IEEE.
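The paper's convex-optimization bounds are not reproduced here; the toy sketch below only illustrates the upper-bound-vs-lower-bound distinction on a 1D pinhole relation x = Z·u/f, where bounded pixel noise propagates to an exact worst-case position error, while random sampling can only ever produce a lower bound (all names and values are illustrative assumptions).

```python
import random

def position_error_bound(Z, f, eps):
    """Exact worst-case translation error for the linear pinhole
    relation x = Z * u / f under pixel noise |du| <= eps."""
    return Z * eps / f

def sampled_worst_case(Z, f, eps, n=1000, seed=0):
    """Monte Carlo estimate: only a LOWER bound on the worst case,
    since random samples need not hit the extreme noise value."""
    rng = random.Random(seed)
    return max(abs(Z * rng.uniform(-eps, eps) / f) for _ in range(n))

Z, f, eps = 2.0, 800.0, 1.0           # metres, focal length and noise in pixels
ub = position_error_bound(Z, f, eps)  # guaranteed worst-case bound
lb = sampled_worst_case(Z, f, eps)    # sampled estimate, always <= ub
```

In the nonlinear rotation/translation case treated by the paper, such a guaranteed upper bound is exactly what sampling-based tools cannot provide, which is the motivation for the convex formulation.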

    Estimation of the rigid transformation between two cameras from the Fundamental Matrix VS from Homographies.

    Get PDF
    3D reconstruction is an important step in the analytical calculation of the image Jacobian in a visual control process for robots. In a two-camera stereo system, that reconstruction depends on knowledge of the rigid transformation between the two cameras, represented by the rotation and translation between them. These two parameters are normally the result of a calibration of the stereo pair, but they can also be retrieved from the epipolar geometry of the system, or from a homography obtained from features belonging to a planar object in the scene. In this paper, we assess the latter two alternatives, taking as reference a Euclidean reconstruction with image distortion eliminated. We analyze three cases: when the camera's inherent distortion is corrected, when it is left uncorrected, and when Gaussian noise is added to the feature detection.
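The link between the rigid transformation (R, t) and a plane-induced homography that the paper exploits can be written down directly: in normalized image coordinates, a plane with normal n and depth d in the first camera induces H = R + (1/d)·t·nᵀ. The sketch below (toy values, not the paper's data) builds H from a known (R, t) and checks that a point on the plane maps consistently.

```python
import numpy as np

def homography_from_rt(R, t, n, d):
    """Euclidean homography induced by the plane n.T X = d (camera-1
    frame) between two views related by rotation R and translation t:
        H = R + (1/d) * t * n.T   (normalized image coordinates)."""
    return R + np.outer(t, n) / d

# Toy setup: 10-degree rotation about the y axis, small x baseline
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.1, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])   # fronto-parallel plane in camera 1
d = 2.0                          # plane depth

H = homography_from_rt(R, t, n, d)

# A 3D point on the plane projects consistently through H
X = np.array([0.3, -0.2, d])     # satisfies n.T X = d
x1 = X / X[2]                     # normalized coordinates, camera 1
X2 = R @ X + t                    # same point in the camera-2 frame
x2 = X2 / X2[2]
x2_h = H @ x1
x2_h = x2_h / x2_h[2]             # should coincide with x2
```

Decomposing a measured H back into (R, t, n) is the inverse problem the paper evaluates against the fundamental-matrix route; libraries such as OpenCV expose it as `cv2.decomposeHomographyMat`.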

    PWM and PFM for visual servoing in fully decoupled approaches

    Full text link
    In this paper, novel visual servoing techniques based on Pulse Width Modulation (PWM) and Pulse Frequency Modulation (PFM) are presented. In order to apply these pulse modulations, a fully decoupled position-based visual servoing approach (i.e., with a block-diagonal interaction matrix) is considered, controlling translational and rotational camera motions independently. These techniques, working at high frequency, can be used to address the sensor latency problem inherent in visual servoing systems. The ripple expected from concentrating the control action in pulses is quantified and analyzed in a simulated scenario. This high-frequency ripple does not affect system performance, since it is filtered by the manipulator dynamics; on the contrary, it can be seen as a dither signal that minimizes the impact of friction and overcomes backlash. This work was supported in part by the Spanish Government under Grant BES-2010-038486 and Project DPI2013-42302-R.
    Muñoz Benavent, P.; Solanes Galbis, J.E.; Gracia Calandin, L.I.; Tornero Montserrat, J. (2015). PWM and PFM for visual servoing in fully decoupled approaches. Robotics and Autonomous Systems, 65(1), 57-64. doi:10.1016/j.robot.2014.11.011
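The core PWM idea, concentrating a continuous control value into full-amplitude pulses whose duty cycle reproduces its average, can be sketched in a few lines. This is a generic illustration under assumed names and values, not the paper's controller.

```python
def pwm_pulse(u, u_max, n_steps):
    """One PWM period, discretized into n_steps samples: the control is
    applied at full amplitude u_max for a fraction |u|/u_max of the
    period and is zero otherwise, so its average equals u."""
    duty = min(abs(u) / u_max, 1.0)
    on_steps = round(duty * n_steps)
    sign = 1.0 if u >= 0 else -1.0
    return [sign * u_max if k < on_steps else 0.0 for k in range(n_steps)]

# A command of 0.25 with unit amplitude -> 25% duty cycle
pulse = pwm_pulse(u=0.25, u_max=1.0, n_steps=100)
avg = sum(pulse) / len(pulse)   # average recovers the commanded 0.25
```

PFM would instead keep the pulse width fixed and vary the pulse rate with |u|; in both cases the high-frequency content of the pulse train is what the manipulator dynamics filter out, leaving the ripple described in the abstract.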

    Global path-planning for constrained and optimal visual servoing

    Get PDF
    Visual servoing consists of steering a robot from an initial to a desired location by exploiting the information provided by visual sensors. This paper deals with the problem of realizing visual servoing for robot manipulators taking into account constraints such as visibility, workspace (i.e., obstacle avoidance), and joint constraints, while minimizing a cost function such as spanned image area, trajectory length, or curvature. To solve this problem, a new path-planning scheme is proposed. First, a robust object reconstruction is computed from visual measurements, which allows one to obtain feasible image trajectories. Second, the rotation path is parameterized through an extension of the Euler parameters that yields an equivalent expression of the rotation matrix as a quadratic function of unconstrained variables, hence largely simplifying standard parameterizations which involve transcendental functions. Then, polynomials of arbitrary degree are used to complete the parameterization and formulate the desired constraints and costs as a general optimization problem. The optimal trajectory is followed by tracking the image trajectory with an IBVS controller combined with repulsive potential fields in order to fulfill the constraints in real conditions. © 2007 IEEE.
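The key algebraic fact behind the rotation parameterization, that the rotation matrix is a quadratic (trigonometry-free) function of the Euler parameters (w, x, y, z), can be written out explicitly. The sketch below uses the standard quaternion-to-matrix formula with a normalizing denominator so unconstrained (non-unit) parameters are allowed; function and variable names are illustrative.

```python
import math

def rotation_from_quaternion(q):
    """Rotation matrix as a QUADRATIC function of the Euler parameters
    q = (w, x, y, z). Dividing by the squared norm n makes the formula
    valid for unconstrained (non-unit) parameters, with no
    transcendental functions involved."""
    w, x, y, z = q
    n = w*w + x*x + y*y + z*z
    return [
        [(w*w + x*x - y*y - z*z)/n, 2*(x*y - w*z)/n, 2*(x*z + w*y)/n],
        [2*(x*y + w*z)/n, (w*w - x*x + y*y - z*z)/n, 2*(y*z - w*x)/n],
        [2*(x*z - w*y)/n, 2*(y*z + w*x)/n, (w*w - x*x - y*y + z*z)/n],
    ]

# A 90-degree rotation about z: q = (cos 45deg, 0, 0, sin 45deg)
c = math.cos(math.pi / 4)
R = rotation_from_quaternion((c, 0.0, 0.0, c))
```

Because each matrix entry is a ratio of quadratics in free variables, constraints and costs along the rotation path become polynomial, which is what makes the global optimization in the paper tractable compared with angle-based parameterizations.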

    Keeping Features in the Field of View in Eye-In-Hand Visual Servoing: A Switching Approach

    Full text link

    Visual servoing of an under-actuated dynamic rigid-body system: an image-based approach

    Full text link

    Robotic Ball Catching with an Eye-in-Hand Single-Camera System

    Get PDF
    In this paper, a unified control framework is proposed to realize a robotic ball-catching task with only a moving single-camera (eye-in-hand) system, able to catch flying, rolling, and bouncing balls within the same formalism. The thrown ball is visually tracked through a circle detection algorithm. Once the ball is recognized, the camera is forced to follow a baseline in space so as to acquire an initial dataset of visual measurements. A first estimate of the catching point is provided through a linear algorithm. Additional visual measurements are then acquired to constantly refine the current estimate by exploiting a nonlinear optimization algorithm and a more accurate ballistic model. A classic partitioned visual servoing approach is employed to control the translational and rotational components of the camera differently. Experimental results performed on an industrial robotic system prove the effectiveness of the presented solution. A motion-capture system is employed to validate the proposed estimation process against ground truth.
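A "first estimate through a linear algorithm" of the kind the abstract mentions can be illustrated with a simple ballistic fit: with gravity known, the height profile z(t) = z0 + v0·t − g·t²/2 is linear in (z0, v0), so least squares recovers them from a few samples and the catch time follows in closed form. This is a 1D toy sketch with assumed names and values, not the paper's estimator.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def fit_ballistic(ts, zs, g=G):
    """Linear least-squares fit of z(t) = z0 + v0*t - 0.5*g*t**2 from
    height samples; gravity is known, so the model is linear in
    (z0, v0)."""
    ts = np.asarray(ts)
    zs = np.asarray(zs)
    A = np.stack([np.ones_like(ts), ts], axis=1)
    b = zs + 0.5 * g * ts**2          # move the known gravity term over
    (z0, v0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return z0, v0

def catch_time(z0, v0, z_catch, g=G):
    """Time on the descending branch at which the ball crosses the
    catch height z_catch."""
    disc = v0 * v0 + 2.0 * g * (z0 - z_catch)
    return (v0 + np.sqrt(disc)) / g

# Synthetic throw: z0 = 1 m, v0 = 6 m/s upward, sampled at 30 Hz
ts = np.arange(0.0, 0.5, 1.0 / 30.0)
zs = 1.0 + 6.0 * ts - 0.5 * G * ts**2
z0, v0 = fit_ballistic(ts, zs)
t_catch = catch_time(z0, v0, z_catch=1.2)
```

In the paper's setting, the same idea runs in 3D from monocular measurements, and the linear estimate is subsequently refined by nonlinear optimization with a more accurate ballistic model (e.g., including drag and bounces).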