10 research outputs found

    Potential problems and switching control for visual servoing

    This paper proposes a potential switching scheme that enlarges the stable region of feature-based visual servoing. Relay images that interpolate between the initial and reference image features are generated using an affine transformation. Artificial potentials defined by the relay images are patched around the reference point of the original potential to enlarge the stable region. Simulations with a simplified configuration and experiments on a 6 DOF robot show the validity of the proposed control scheme.
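A minimal sketch of the relay-feature idea described above, assuming image features are given as 2-D point arrays; the least-squares affine fit and the linear blending toward the identity map are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def relay_features(f_init, f_ref, n_relays):
    """Generate intermediate ("relay") feature sets by interpolating
    the affine map that carries the initial image features onto the
    reference features.

    f_init, f_ref : (N, 2) arrays of image feature points
    Returns a list of n_relays arrays of shape (N, 2).
    """
    n = f_init.shape[0]
    # Homogeneous coordinates so one matrix holds the affine map
    X = np.hstack([f_init, np.ones((n, 1))])          # (N, 3)
    # Least-squares affine fit: f_ref ~ X @ P, with P of shape (3, 2)
    P, *_ = np.linalg.lstsq(X, f_ref, rcond=None)
    # The identity affine map in the same (3, 2) parameterization
    P_id = np.vstack([np.eye(2), np.zeros((1, 2))])
    relays = []
    for k in range(1, n_relays + 1):
        t = k / (n_relays + 1)
        Pt = (1.0 - t) * P_id + t * P                 # blend identity -> P
        relays.append(X @ Pt)                         # relay feature set
    return relays
```

A switching controller would then servo toward each relay feature set in turn, so that every intermediate target lies inside the stable region of the previous one.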

    Control Servo-Visual de un Robot Manipulador Planar Basado en Pasividad

    This work designs a visual servo controller based on the passivity property of the visual system. A regulator with variable control gains is proposed, so that actuator saturation is avoided while the controller retains the ability to correct small-magnitude errors. The design also accounts for L2 performance, giving the controller the capacity to track moving objects with a small control error. Experimental results on an industrial planar manipulator verify that the proposed controller meets its objectives.

    A Pursuit-Rendezvous Approach for Robotic Tracking


    High-Performance Control of an On-Board Missile Seeker Using Vision Information

    Thesis (M.S.) -- Seoul National University, Dept. of Electrical and Computer Engineering, February 2016. Advisor: In-Joong Ha.
This thesis proposes a high-performance controller for an on-board missile seeker using vision information. The seeker controller can approach a moving target without knowing the target's depth. The approach consists of two parts: 1) a time-invariant linear estimator of the target motion, and 2) a nonlinear seeker controller. First, using the parameters of the homography matrix for a moving target, the dynamic equation of the moving target is derived as a time-invariant system, under the assumption that the velocities of both the seeker and the target vary slowly. Based on this dynamic equation, a time-invariant linear estimator in Luenberger-observer form is constructed that accounts for uncertainty in the target size and provides the target-velocity information. Unlike previous works, the proposed estimator does not require any motion of the seeker, such as snaking or accelerating, for estimation convergence, and it guarantees convergence even without the target depth. Next, a nonlinear seeker controller that drives the boresight error to zero is proposed, and a rigorous mathematical convergence analysis demonstrates that it can track the moving target even when the target depth is not given. Finally, simulations compare the proposed controller with a conventional seeker controller to establish its practicality.

    Model free visual servoing in macro and micro domain robotic applications

    This thesis explores model-free visual servoing algorithms by experimentally evaluating their performance on various tasks in both macro and micro domains. Model-free, or so-called uncalibrated, visual servoing needs neither calibration of the system (vision system + robotic system) nor a model of the observed scene, since it estimates the composite (image + robot) Jacobian online. It is robust to parameter changes and disturbances. A model-free visual servoing scheme is tested on a 7 DOF Mitsubishi PA10 robotic arm and on a microassembly workstation developed in our lab. In the macro domain, a new approach for planar shape alignment is presented. The alignment task is performed using bitangent points acquired from the convex hull of a curve. Both calibrated and uncalibrated visual servoing schemes are employed and compared. Furthermore, model-free visual servoing is used for various trajectory-following tasks (square, circle, sine, etc.), with the reference trajectories generated by a linear interpolator that produces midway targets along them. Model-free visual servoing offers particular flexibility in microsystems, since calibrating the optical system is a tedious and error-prone process, and recalibration is required at each focusing level of the optical system. Therefore, micropositioning and three different trajectory-following tasks are also performed in the micro world. Experimental results validate the utility of model-free visual servoing algorithms in both domains.
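The online composite-Jacobian estimation that uncalibrated visual servoing relies on is commonly realized with a Broyden rank-one update. The sketch below shows that standard scheme, not the thesis's specific implementation; the gain, damping, and update-rate values are assumptions for illustration:

```python
import numpy as np

def broyden_update(J, dq, de, alpha=0.5):
    """Rank-one Broyden correction of the composite image-Jacobian
    estimate J, so that J @ dq better predicts the observed feature
    change de.  alpha in (0, 1] damps the correction."""
    denom = float(dq @ dq)
    if denom < 1e-12:          # no joint motion -> no information
        return J
    return J + alpha * np.outer(de - J @ dq, dq) / denom

def servo_step(J, e, gain=0.5, damping=1e-3):
    """One uncalibrated visual-servoing step: a damped least-squares
    joint-velocity command that drives the feature error e to zero."""
    JtJ = J.T @ J + damping * np.eye(J.shape[1])
    return -gain * np.linalg.solve(JtJ, J.T @ e)
```

In a servo loop one alternates `servo_step` (command) and `broyden_update` (learning): starting from a crude guess such as the identity, the Jacobian estimate improves as the robot moves, which is why no prior calibration is needed.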

    Design and implementation of a vision system for microassembly workstation

    Rapid development of micro/nano technologies and the evolution of biotechnology have led to research on assembling micro components into complex microsystems and on manipulating cells, genes, and similar biological components. To develop advanced inspection/handling systems and methods for the manipulation and assembly of micro products and micro components, robust micromanipulation and microassembly strategies can be implemented on a high-speed, repetitive, reliable, reconfigurable, robust, open-architecture microassembly workstation. Due to high accuracy requirements and the specific mechanical and physical laws that govern the microscale world, micromanipulation and microassembly tasks require robust control strategies based on real-time sensory feedback. Vision, as a passive sensor, can yield high-resolution images of micro objects and micro scenes through a stereoscopic optical microscope. Visual data contains useful information for micromanipulation and microassembly tasks and can be processed using various image processing and computer vision algorithms. In this thesis, the initial work on the design and implementation of a vision system for a microassembly workstation is introduced. Both software and hardware issues are considered. Emphasis is put on the implementation of computer vision algorithms and vision-based control techniques, which help build a strong basis for the vision part of the microassembly workstation. The main goal of designing such a vision system is to perform automated micromanipulation and microassembly tasks for a variety of applications. Experiments with teleoperated and semiautomated tasks, which manipulate micro particles manually or automatically using a microgripper or probe as the manipulation tool, show quite promising results.

    Visual and Kinematic Coordinated Control of Mobile Manipulating Unmanned Aerial Vehicles

    Manipulating objects using arms mounted to unmanned aerial vehicles (UAVs) is attractive because UAVs may access many locations that are otherwise inaccessible to traditional mobile manipulation platforms such as ground vehicles. Historically, UAVs have been employed in ways that avoid interaction with the environment at all costs. The recent trend of increasing small UAV lift capacity and the reduction of the weight of manipulator components make the realization of mobile manipulating UAVs imminent. Despite recent work, several major challenges remain to be overcome before it will be common practice to manipulate objects from UAVs. Among these challenges, the constantly moving UAV platform and compliance of manipulator arms make it difficult to position the UAV and end-effector relative to an object of interest precisely enough for reliable manipulation. Solving this challenge will bring UAVs one step closer to being able to perform meaningful tasks such as infrastructure repair, disaster response, law enforcement, and personal assistance. Toward a solution to this challenge, this thesis describes a way forward that uses the UAV as a means to crudely position a manipulator within reach of the end-effector's goal position in the world. The manipulator then performs the fine positioning of the end-effector, rejecting position perturbations caused by UAV motions. An algorithm to coordinate the redundant degrees of freedom of an aerial manipulation system is described that allows the motions of the manipulator to serve as inputs to the UAV's position controller. To demonstrate this algorithm, the manipulator's six degrees of freedom are servoed using visual sensing to drive an eye-in-hand camera to a specified pose relative to a target while treating motions of the host platform as perturbations. Simultaneously, the host platform's degrees of freedom are regulated using kinematic information from the manipulator. 
This ultimately drives the UAV to a position that allows the manipulator to assume a pose relative to the UAV that maximizes reachability, thus facilitating the arm's ability to compensate for undesired UAV motions. Maintaining this loose kinematic coupling between the redundant degrees of freedom of the host UAV and manipulator allows this type of controller to be applied to a wide variety of platforms, including manned aircraft, rather than a single instance of a purpose-built system. As a result of this loose coupling, careful consideration must be given to the manipulator design so that it can achieve useful poses while minimally influencing the stability of the host UAV. Accordingly, the novel application of a parallel manipulator mechanism is described.
Ph.D., Mechanical Engineering -- Drexel University, 201

    Visual servo control on a humanoid robot

    Includes bibliographical references.
    This thesis deals with the control of a humanoid robot based on visual servoing. It seeks to confer a degree of autonomy on the robot in the achievement of tasks such as reaching a desired position, tracking, and/or grasping an object. The autonomy of humanoid robots is crucial to the success of the numerous services such robots can render, given their ability to combine dexterity and mobility in structured, unstructured, or even hazardous environments. To achieve this objective, a humanoid robot is fully modeled and the control of its locomotion, conditioned by postural balance and gait stability, is studied. The presented approach is formulated to account for all the joints of the biped robot. To conform the reference commands from visual servoing to the discrete locomotion mode of the robot, this study exploits a reactive omnidirectional walking pattern generator and a visual task Jacobian redefined with respect to a floating base on the humanoid robot instead of the stance foot. The redundancy problem stemming from the high number of degrees of freedom, coupled with the omnidirectional mobility of the robot, is handled within the task-priority framework, thus allowing configuration-dependent sub-objectives such as improving reachability, improving manipulability, and avoiding joint limits. Beyond a kinematic formulation of visual servoing, this thesis explores a dynamic visual approach and proposes two new visual servoing laws. Lyapunov theory is used first to prove the stability and convergence of the visual closed loop, then to derive a robust adaptive controller for the combined robot-vision dynamics, yielding an ultimately uniformly bounded solution. Finally, all proposed schemes are validated in simulation and experimentally on the humanoid robot NAO.
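The task-priority framework mentioned in the abstract resolves redundancy by projecting lower-priority tasks into the null space of higher-priority ones. Below is a generic two-level sketch of that technique (the damped pseudoinverse and the toy Jacobians in the test are illustrative assumptions, not the thesis's controller):

```python
import numpy as np

def task_priority_velocities(J1, dx1, J2, dx2, damping=1e-4):
    """Two-level task-priority resolution: realize the primary task
    velocity dx1 (in the least-squares sense), then realize the
    secondary task dx2 only within the null space of J1, so the
    secondary task never disturbs the primary one."""
    def dpinv(J):
        # Damped pseudoinverse for numerical robustness near singularities
        return J.T @ np.linalg.inv(J @ J.T + damping * np.eye(J.shape[0]))

    J1p = dpinv(J1)
    q1 = J1p @ dx1                                # primary-task velocity
    N1 = np.eye(J1.shape[1]) - J1p @ J1           # null-space projector of J1
    q2 = dpinv(J2 @ N1) @ (dx2 - J2 @ q1)         # secondary, restricted to N1
    return q1 + N1 @ q2
```

With more levels (e.g. balance, then gaze, then manipulability), the same pattern is applied recursively, each task seeing only the null space left over by all tasks above it.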