
    Real-Time Visual Servo Control of Two-Link and Three DOF Robot Manipulator

    This project presents experimental results of a position-based visual servoing control process for a 3R robot using two fixed cameras. Visual servoing concerns several fields of research, including vision systems, robotics, and automatic control. The method deals with real-time changes in the relative position of the target object with respect to the robot; good accuracy and the independence of the manipulator servo control structure from the target pose coordinates are additional advantages of this method. The applications of visually guided systems are many, from intelligent homes to the automotive industry, and visual servoing can be used to control many different systems (manipulator arms, mobile robots, aircraft, etc.). Visual servoing systems are generally classified according to the number of cameras, the position of the camera with respect to the robot, and the design of the error function. This project presents an approach for visual robot control in which existing approaches are extended so that the depth and position of the target block or object are estimated during the motion of the robot, by visually tracking the object throughout the trajectory. Vision-based robotics has been a major research area for a long time; however, one of the open and common problems in the area is the need to exchange experiences and ideas, so a number of real-time examples from our own research are also included. Forward and inverse kinematics of the 3-DOF robot are derived; experiments on image processing, object shape recognition, pose estimation of the target block or object in the Cartesian frame, and visual control of the robot manipulator are then described. Experimental results are obtained from a real-time implementation of the visual servo control system and from tests of the 3-DOF robot in the lab.
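
    As a rough, minimal sketch of the position-based law this abstract describes (not the project's actual implementation), the snippet below turns an estimated target pose, such as the two fixed cameras might provide, into a proportional velocity command for the arm; the gain LAMBDA and the frame conventions are assumptions.

    ```python
    import numpy as np

    LAMBDA = 0.5  # proportional visual-servoing gain (illustrative)

    def pbvs_step(ee_pos, ee_rot, target_pos, target_rot):
        """One position-based visual servoing step.

        ee_pos/target_pos: 3-vectors in the robot base frame.
        ee_rot/target_rot: 3x3 rotation matrices in the same frame.
        Returns a 6-vector [v; w] (linear and angular velocity command).
        """
        # Translational error: where the target sits relative to the end effector.
        e_t = target_pos - ee_pos

        # Rotational error as an axis-angle vector extracted from R_err.
        R_err = target_rot @ ee_rot.T
        angle = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
        if abs(angle) < 1e-9:
            e_r = np.zeros(3)
        else:
            axis = np.array([R_err[2, 1] - R_err[1, 2],
                             R_err[0, 2] - R_err[2, 0],
                             R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(angle))
            e_r = angle * axis

        # Exponential-decay PBVS law: command a twist proportional to the error.
        return LAMBDA * np.concatenate([e_t, e_r])
    ```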

    Active Sensing for Dynamic, Non-holonomic, Robust Visual Servoing

    We consider the problem of visually servoing a legged vehicle with unicycle-like nonholonomic constraints subject to second-order fore-aft dynamics in its horizontal plane. We target applications to rugged environments characterized by complex terrain likely to perturb significantly the robot’s nominal dynamics. At the same time, it is crucial that the camera avoid “obstacle” poses where absolute localization would be compromised by even partial loss of landmark visibility. Hence, we seek a controller whose robustness against disturbances and obstacle avoidance capabilities can be assured by a strict global Lyapunov function. Since the nonholonomic constraints preclude smooth point stabilizability, we introduce an extra degree of sensory freedom, affixing the camera to an actuated panning axis mounted on the robot’s back. With smooth stabilizability to the robot-orientation-indifferent goal cycle no longer precluded, we construct a controller and strict global Lyapunov function with the desired properties. We implement several versions of the scheme on a RHex robot maneuvering over slippery ground and document its successful empirical performance. For more information: Kod*La
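
    As a toy illustration of the extra panning degree of freedom (not the paper's Lyapunov-based construction), the sketch below counter-rotates an assumed camera pan axis so that a landmark stays near the optical axis while the unicycle body yaws underneath it; all names and gains are hypothetical.

    ```python
    import numpy as np

    K_PAN = 2.0  # gaze-stabilization gain (illustrative)

    def pan_rate(robot_pose, pan_angle, landmark_xy, body_yaw_rate):
        """Toy gaze controller: command a pan-axis rate that keeps a landmark
        on the camera's optical axis while the body turns.

        robot_pose: (x, y, theta) of the body in the world frame.
        pan_angle:  camera pan angle relative to the body.
        landmark_xy: landmark position in the world frame.
        """
        x, y, theta = robot_pose
        # Bearing of the landmark in the world frame.
        bearing = np.arctan2(landmark_xy[1] - y, landmark_xy[0] - x)
        # Wrapped error between the camera heading (theta + pan) and the landmark bearing.
        err = np.arctan2(np.sin(bearing - theta - pan_angle),
                         np.cos(bearing - theta - pan_angle))
        # Counter the body's yaw and close the remaining bearing error.
        return -body_yaw_rate + K_PAN * err
    ```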

    Methods for visual servoing of robotic systems: A state of the art survey

    This paper surveys the methods used for visual servoing of robotic systems, with the primary focus on differential-drive mobile robots. The three main areas of research are Direct Visual Servoing, stereo vision systems, and artificial intelligence in visual servoing. The standard methods, Image-Based Visual Servoing (IBVS) and Position-Based Visual Servoing (PBVS), are analyzed and compared with the newer Direct Visual Servoing (DVS) method. DVS methods offer better accuracy than IBVS and PBVS but have a smaller convergence domain; because of their high accuracy, they are suitable for integration into hybrid visual servoing systems. Furthermore, the use of stereo systems (two-camera setups) for visual servoing is comprehensively analyzed: compared to alternative methods, their main contribution is accurate depth estimation of image features, which is critical for many visual servoing tasks. The use of artificial intelligence (AI) for visual servoing has also gained popularity over the years; AI techniques give visual servoing controllers the ability to learn from predefined examples or empirical knowledge, which is crucial for deploying robotic systems in real-world, dynamic manufacturing environments. Finally, we also analyze the use of visual odometry in combination with a visual servoing controller to create a more robust and reliable positioning system.
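
    For readers unfamiliar with the IBVS baseline being surveyed, a minimal sketch of the classical image-based law v = -lambda * pinv(L) (s - s*) is given below, using the standard point-feature interaction matrix; the depths are assumed to be supplied externally (e.g., by the stereo estimation the survey discusses), and the gain is illustrative.

    ```python
    import numpy as np

    LAMBDA = 0.8  # convergence gain (illustrative)

    def interaction_matrix(x, y, Z):
        """Standard 2x6 interaction matrix of a normalized image point (x, y) at
        depth Z, mapping the camera twist [vx vy vz wx wy wz] to the image-plane
        velocity of the point."""
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def ibvs_twist(features, desired, depths):
        """Classic IBVS: stack one 2x6 block per tracked point and return the
        camera twist v = -lambda * pinv(L) @ (s - s*).

        features/desired: N x 2 arrays of normalized image coordinates.
        depths: length-N sequence of (estimated) point depths."""
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        e = (np.asarray(features, float) - np.asarray(desired, float)).reshape(-1)
        return -LAMBDA * np.linalg.pinv(L) @ e
    ```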

    Design and modeling of a stair climber smart mobile robot (MSRox)


    Visual servoing with safe interaction using image moments

    The problem of image-based visual servoing for robots working in a cluttered dynamic environment is addressed in this paper. It is assumed that the environment is observed by depth sensors that allow the distance between any moving obstacle and the robot to be measured, while an eye-in-hand camera is used to extract image features. The main idea is to control suitable image moments and to relax a certain number of the robot's degrees of freedom during the interaction phase. If an obstacle approaches the robot, the main visual servoing task is relaxed partially or completely, while the image features are kept in the camera's field of view by controlling the image moments; fuzzy rules are used to set the desired values of the image moments. Besides that, the relaxed redundancy of the robot is exploited to avoid collisions, and after the risk of collision is removed, the main visual servoing task is resumed. The effectiveness of the algorithm is shown in several case studies on a KUKA LWR 4 robot arm.
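
    A minimal sketch of the image-moment bookkeeping behind this scheme, assuming the eye-in-hand camera provides the extracted feature points as image coordinates; the fuzzy rules that set the desired moment values and the task-relaxation logic are replaced here by a simple linear blend, so every name and threshold is a placeholder.

    ```python
    import numpy as np

    def image_moments(points):
        """Zeroth- and first-order moments of a point set: the 'size' surrogate
        m00 and the centroid (xg, yg) used as coarse image features."""
        pts = np.asarray(points, dtype=float)
        m00 = float(len(pts))
        xg, yg = pts.mean(axis=0)
        return m00, xg, yg

    def relaxed_servo(points, desired_centroid, obstacle_distance,
                      d_safe=0.4, gain=0.5):
        """Blend the main servo task with a field-of-view-keeping term: as an
        obstacle gets closer than d_safe the weight of the main task drops toward
        zero, while the feature centroid is pulled toward a desired image location
        so the features stay visible. Returns (main-task weight, 2D image-space
        correction); the linear blend stands in for the paper's fuzzy rules."""
        _, xg, yg = image_moments(points)
        centroid_err = np.asarray(desired_centroid, float) - np.array([xg, yg])
        w_main = float(np.clip(obstacle_distance / d_safe, 0.0, 1.0))
        fov_correction = gain * (1.0 - w_main) * centroid_err
        return w_main, fov_correction
    ```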

    Perception Based Navigation for Underactuated Robots.

    Robot autonomous navigation is a very active field of robotics. In this thesis we propose a hierarchical approach to a class of underactuated robots by composing a collection of local controllers with well understood domains of attraction. We start by addressing the problem of robot navigation with nonholonomic motion constraints and perceptual cues arising from onboard visual servoing in partially engineered environments. We propose a general hybrid procedure that adapts to the constrained motion setting the standard feedback controller arising from a navigation function in the fully actuated case. This is accomplished by switching back and forth between moving "down" and "across" the associated gradient field toward the stable manifold it induces in the constrained dynamics. Guaranteed to avoid obstacles in all cases, we provide conditions under which the new procedure brings initial configurations to within an arbitrarily small neighborhood of the goal. We summarize with simulation results on a sample of visual servoing problems with a few different perceptual models. We document the empirical effectiveness of the proposed algorithm by reporting the results of its application to outdoor autonomous visual registration experiments with the robot RHex guided by engineered beacons. Next we explore the possibility of adapting the resulting first order hybrid feedback controller to its dynamical counterpart by introducing tunable damping terms in the control law. Just as gradient controllers for standard quasi-static mechanical systems give rise to generalized "PD-style" controllers for dynamical versions of those standard systems, we show that it is possible to construct similar "lifts" in the presence of non-holonomic constraints notwithstanding the necessary absence of point attractors. Simulation results corroborate the proposed lift. Finally we present an implementation of a fully autonomous navigation application for a legged robot. The robot adapts its leg trajectory parameters by recourse to a discrete gradient descent algorithm, while managing its experiments and outcome measurements autonomously via the navigation visual servoing algorithms proposed in this thesis.
    Ph.D. Electrical Engineering: Systems
    University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/58412/1/glopes_1.pd
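
    For readers unfamiliar with the navigation-function machinery underlying the fully actuated case, here is a toy sphere-world sketch; the hybrid "down"/"across" switching that handles the nonholonomic constraint is not reproduced, and KAPPA is a placeholder rather than a value from the thesis.

    ```python
    import numpy as np

    KAPPA = 3.0  # navigation-function sharpening parameter (placeholder)

    def nav_fn(q, goal, obstacles):
        """Rimon-Koditschek-style navigation function on a planar sphere world
        (obstacles as (center, radius) pairs): near 0 at the goal, approaching 1
        on obstacle boundaries. Only meaningful in free space, where beta > 0."""
        q, goal = np.asarray(q, float), np.asarray(goal, float)
        gamma = float(np.dot(q - goal, q - goal))
        beta = 1.0
        for c, r in obstacles:
            d = q - np.asarray(c, float)
            beta *= float(np.dot(d, d)) - r * r
        return gamma / (gamma ** KAPPA + beta) ** (1.0 / KAPPA)

    def descent_direction(q, goal, obstacles, h=1e-5):
        """Numerical negative gradient of the navigation function at q; the fully
        actuated gradient controller simply follows this field."""
        q = np.asarray(q, float)
        g = np.zeros_like(q)
        for i in range(q.size):
            e = np.zeros_like(q)
            e[i] = h
            g[i] = (nav_fn(q + e, goal, obstacles) -
                    nav_fn(q - e, goal, obstacles)) / (2.0 * h)
        return -g
    ```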

    Vision-based trajectory tracking algorithm with obstacle avoidance for a wheeled mobile robot

    Wheeled mobile robots are becoming increasingly important in industry as means of transportation, inspection, and operation because of their efficiency and flexibility. The design of efficient algorithms for the navigation of autonomous or quasi-autonomous mobile robots in dynamic environments is a challenging problem that has been the focus of many researchers during the past few decades. Computer vision is perhaps not the most successful sensing modality used in mobile robotics so far (sonar and infra-red sensors, for example, are often preferred), but it is the sensor best able to answer "what" and "where" for the objects a robot is likely to encounter. In this thesis, we use a vision system to navigate the mobile robot along a reference trajectory and a sensor-based obstacle avoidance method to pass the objects located on the trajectory. A tracking control algorithm is also described. Finally, experimental results are presented to verify the tracking and control algorithms.
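
    The thesis's exact control law is not reproduced here, so as a hedged illustration the sketch below uses the classical Kanayama-style posture tracking controller, a common choice for unicycle-type wheeled robots; the gains are illustrative.

    ```python
    import numpy as np

    K_X, K_Y, K_TH = 1.0, 8.0, 4.0  # illustrative tracking gains

    def tracking_control(pose, ref_pose, v_ref, w_ref):
        """Kanayama-style posture tracking for a unicycle.

        pose, ref_pose: (x, y, theta) of the robot and of the reference point.
        v_ref, w_ref:   feedforward linear/angular velocity of the reference.
        Returns the commanded (v, w)."""
        x, y, th = pose
        xr, yr, thr = ref_pose
        # Express the posture error in the robot's body frame.
        ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
        ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
        eth = np.arctan2(np.sin(thr - th), np.cos(thr - th))
        # Feedforward plus error feedback.
        v = v_ref * np.cos(eth) + K_X * ex
        w = w_ref + v_ref * (K_Y * ey + K_TH * np.sin(eth))
        return v, w
    ```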

    Robust Model Predictive Control for Linear Parameter Varying Systems along with Exploration of its Application in Medical Mobile Robots

    This thesis seeks to develop a robust model predictive controller (MPC) for Linear Parameter Varying (LPV) systems. LPV models based on an input-output representation are employed, and we aim to improve robust MPC methods for such systems. This improvement is examined from two perspectives. First, the system must remain stable under uncertainty (in the scheduling signal or due to disturbances) and perform well in both tracking and regulation problems. Second, the proposed method should be practical, i.e., it should have a reasonable computational load and not be conservative. Firstly, an interpolation approach is utilized to minimize the conservativeness of the MPC: the controller is calculated as a linear combination of a set of offline predefined control laws, the coefficients of these offline controllers are derived from a real-time optimization problem, and the control gains are determined to ensure stability and enlarge the terminal set. Secondly, to improve the system's robustness to external disturbances, a free control move is added to the control law. A Recurrent Neural Network (RNN) algorithm is also applied for the online optimization and is shown to have better speed and accuracy than traditional algorithms. The proposed controller is compared with two methods (robust MPC and MPC with an input-output LPV model) in reference tracking and disturbance rejection scenarios; it performs well in both, whereas the two other methods cannot deal with the disturbance. Thirdly, a support vector machine is introduced to identify the input-output LPV model and estimate the output. The estimated model is compared with the outputs of the actual nonlinear system, the identification is shown to be effective, and as a consequence the controller can accurately follow the reference. Finally, an interpolation-based MPC with free control moves is implemented for a wheeled mobile robot in a hospital setting, where an RNN solves the online optimization problem. The controller is compared with a robust MPC and an MPC-LPV controller in terms of reference tracking, disturbance rejection, online computational load, and region of attraction. The results indicate that the proposed method outperforms the alternatives and can navigate quickly and reliably while avoiding obstacles.
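
    The interpolation idea can be pictured with a very small sketch: a family of offline gains is blended online and a free control move is added on top. The convex-combination rule below stands in for the thesis's real-time optimization, and the vertex gains K0 and K1 are placeholders rather than results from the thesis.

    ```python
    import numpy as np

    # Offline vertex gains for the extreme values of the scheduling parameter
    # rho in [0, 1] (placeholders; in the thesis these come from robust MPC
    # synthesis with stability and terminal-set constraints).
    K0 = np.array([[-1.2, -0.8]])
    K1 = np.array([[-2.0, -1.5]])

    def interpolated_gain(rho):
        """Blend the vertex controllers; the coefficients (1 - rho, rho) replace
        the solution of the online optimization problem used in the thesis."""
        rho = float(np.clip(rho, 0.0, 1.0))
        return (1.0 - rho) * K0 + rho * K1

    def control(x, rho, u_free=0.0):
        """u = K(rho) @ x + free control move (the extra degree of freedom added
        for disturbance rejection)."""
        return interpolated_gain(rho) @ np.asarray(x, float) + u_free
    ```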

    Sensor-based control of nonholonomic mobile robots

    The problem of tracking a moving target with a nonholonomic mobile robot, by using sensor-based control techniques, is addressed. Two control design methods, relying on the transverse function approach, are proposed. For the first method, sensory signals are used to calculate an estimate of the relative pose of the robot with respect to the target. This estimate is then used for the calculation of control laws expressed in Cartesian coordinates. An analysis of stability and robustness w.r.t. pose estimation errors is presented. The second method consists in designing the control law directly in the space of sensor signals. Both methods are simulated, with various choices of the control parameters, for a unicycle-type mobile robot equipped with a camera. Finally, experimental results are also reported
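
    As a hedged companion sketch (the paper's transverse-function controllers are considerably more involved), the snippet below tracks a moving target's position with a unicycle by feedback-linearizing an off-axis point; this standard trick regulates position only, not the full relative pose handled by the paper's methods, and the offset and gain are illustrative.

    ```python
    import numpy as np

    D = 0.2  # offset of the controlled point ahead of the wheel axle [m]
    K = 1.0  # proportional gain (illustrative)

    def track_target(robot_pose, target_xy):
        """Drive an off-axis point of the unicycle toward the target position.

        robot_pose: (x, y, theta) in the world frame.
        target_xy:  current position of the moving target.
        Returns the commanded (v, w)."""
        x, y, th = robot_pose
        p = np.array([x + D * np.cos(th), y + D * np.sin(th)])
        p_dot_des = K * (np.asarray(target_xy, float) - p)
        # Invert the 2x2 map from (v, w) to the velocity of the off-axis point.
        J = np.array([[np.cos(th), -D * np.sin(th)],
                      [np.sin(th),  D * np.cos(th)]])
        v, w = np.linalg.solve(J, p_dot_des)
        return v, w
    ```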