15 research outputs found

    2 1/2 D Visual servoing with respect to unknown objects through a new estimation scheme of camera displacement

    Abstract. Classical visual servoing techniques require strong a priori knowledge of the shape and dimensions of the observed objects. In this paper, we show how the 2 1/2 D visual servoing scheme we have recently developed can be used with unknown objects characterized by a set of points. Our scheme is based on estimating the camera displacement from two views, given by the current and desired images. Since vision-based robotics tasks generally need to be performed at video rate, we focus only on linear algorithms. Classical linear methods are based on the computation of the essential matrix. In this paper, we propose a different method, based on the estimation of the homography matrix related to a virtual plane attached to the object. We show that our method provides a more stable estimation when the epipolar geometry degenerates. This is particularly important in visual servoing for obtaining a stable control law, especially near the convergence of the system. Finally, experimental results confirm the improvement in stability, robustness, and behaviour of our scheme with respect to classical methods. Keywords: visual servoing, projective geometry, homography
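The homography-based estimation described above starts from the classical linear step of recovering a homography from point correspondences on a (virtual) plane. As an illustration only, not the authors' full 2 1/2 D scheme, a minimal direct linear transform (DLT) sketch:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: estimate the 3x3 homography H (up to scale) from >= 4
    point correspondences src -> dst, each given as an (N, 2) array."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in the
        # nine entries of H (stacked row-wise into a vector h).
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # h is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale
```

A linear solve of this kind is what makes the method usable at video rate; the paper's contribution is the choice of the virtual-plane homography over the essential matrix for stability near degenerate epipolar geometry.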

    Estimation of the camera pose from image point correspondences through the essential matrix and convex optimization

    Estimating the camera pose in stereo vision systems is an important problem in computer vision and robotics. One popular approach consists of determining the essential matrix that minimizes the algebraic error obtained from image point correspondences. Unfortunately, this search amounts to solving a nonconvex optimization, and existing methods either rely on approximations in order to get rid of the non-convexity or provide a solution that may be affected by local minima. This paper proposes a new approach that addresses this search without these problems. In particular, we show that the sought essential matrix can be obtained by solving a convex optimization built through a suitable reformulation of the considered minimization via appropriate techniques for representing polynomials. Numerical results show that the proposed approach compares favorably with standard methods on both synthetic and real data. © 2009 IEEE.
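The algebraic error minimized here is the classical epipolar residual. As background only (the standard linear baseline, not the paper's convex formulation), the eight-point algorithm estimates the essential matrix and then projects it onto the essential-matrix manifold:

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Linear estimate of the essential matrix E (up to scale and sign)
    from N >= 8 correspondences in *normalized* image coordinates
    (intrinsics removed). x1, x2: (N, 2); constraint: x2h^T E x1h = 0."""
    a, b = x1[:, 0], x1[:, 1]
    c, d = x2[:, 0], x2[:, 1]
    one = np.ones(len(x1))
    # Each row is kron([c, d, 1], [a, b, 1]): the epipolar constraint
    # written as a linear equation in the nine entries of E.
    A = np.column_stack([c * a, c * b, c, d * a, d * b, d, a, b, one])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential manifold: two equal singular values
    # and one zero singular value.
    U, _, Vt2 = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt2
```

The two-stage solve-then-project step is exactly the kind of approximation the paper replaces with a single convex program.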

    Designing image trajectories in the presence of uncertain data for robust visual servoing path-planning

    Path-planning allows one to steer a camera to a desired location while taking into account constraints such as visibility, workspace, and joint limits. Unfortunately, the planned path can differ significantly from the real path due to uncertainty in the available data, with the consequence that some constraints may not be fulfilled by the real path even if they are satisfied by the planned path. In this paper we address the problem of robust path-planning, i.e. computing a path that satisfies the required constraints not only for the nominal model, as in traditional path-planning, but for a whole family of admissible models. Specifically, we consider an uncertain model where the point correspondences between the initial and desired views and the camera intrinsic parameters are affected by unknown random uncertainties with known bounds. The difficulty is that traditional path-planning schemes applied to different models lead to different paths rather than to a common, robust path. To solve this problem we propose a technique based on polynomial optimization where the required constraints are imposed on a number of trajectories corresponding to admissible camera poses and parameterized by a common design variable. The planned image trajectory is then followed using an IBVS controller. Simulations carried out with all typical uncertainties that characterize a real experiment illustrate the proposed strategy and provide promising results. © 2009 IEEE.
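The IBVS controller that follows the planned image trajectory is the classical image-based law v = -λ L⁺ e. A textbook sketch (Chaumette's point-feature interaction matrix; not the paper's specific implementation):

```python
import numpy as np

def interaction_matrix(points, Z):
    """Stack the classical 2x6 interaction matrix of each image point
    (x, y) with depth Z, in normalized image coordinates."""
    rows = []
    for (x, y), z in zip(points, Z):
        rows.append([-1.0 / z, 0.0, x / z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / z, y / z, 1.0 + y * y, -x * y, -x])
    return np.asarray(rows)

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) driving the
    feature vector s toward the desired configuration s_star."""
    e = (s - s_star).reshape(-1)
    L = interaction_matrix(s, Z)
    return -lam * np.linalg.pinv(L) @ e
```

This local controller only guarantees convergence near the planned trajectory, which is precisely why robustifying the planned path against model uncertainty matters.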

    Euclidean Calculation of Feature Points of a Rotating Satellite: A Daisy Chaining Approach

    The occlusion of feature points and/or feature points leaving the field of view of a camera is a significant practical problem that can lead to degraded performance or instability of visual servo control and vision-based estimation algorithms. By assuming that one known Euclidean distance between two feature points in an initial view is available, homography relationships and image geometry are used in this paper to determine the Euclidean coordinates of feature points in the field of view. A new daisy-chaining method is then used to relate the position and orientation of a plane defined by the feature points to other feature-point planes that are rigidly connected. Through these relationships, the Euclidean coordinates of the original feature points can be tracked even as they leave the field of view. This objective is motivated by the desire to track the Euclidean coordinates of feature points on one face of a satellite as it continually rotates and feature points become self-occluded. A numerical simulation is included to demonstrate that the Euclidean coordinates can be tracked even when they leave the field of view. However, the results indicate the need for a method to reconcile any accumulated error when the feature points return to the field of view.
    Nomenclature:
    A = intrinsic camera-calibration matrix
    dj = distance to the j plane along nj
    F j, Fj = frames attached to the j and j planes
    Gj = projective homography matrix of the jth frame
    Hj = Euclidean homography matrix of the jth frame
    I = fixed coordinate frame attached to the camera
    mji, m ji = normalized Euclidean coordinate of the ith feature point of the j and j planes expressed in
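The projective and Euclidean homographies in the nomenclature are related by G = A H A⁻¹, with H = R + t nᵀ/d for points on a plane with normal n at distance d. A small NumPy check of this relation on synthetic data (an illustration of the standard identity, not the paper's satellite setup):

```python
import numpy as np

def euclidean_homography(R, t, n, d):
    """H = R + t n^T / d maps points on the plane {X : n . X = d}
    from camera-1 coordinates to camera-2 coordinates (X2 = R X1 + t)."""
    return R + np.outer(t, n) / d

def projective_homography(A, H):
    """G = A H A^{-1} maps the *pixel* coordinates of the plane's
    points between the two views."""
    return A @ H @ np.linalg.inv(A)
```

For a point on the plane, nᵀX1 = d gives (t nᵀ/d) X1 = t, so H X1 = R X1 + t = X2; multiplying by A on both sides yields the pixel-level map G.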

    ViSP for visual servoing: a generic software platform with a wide class of robot control skills

    Special issue on Software Packages for Vision-Based Control of Motion, P. Oh, D. Burschka (Eds.). ViSP (Visual Servoing Platform), a fully functional modular architecture that allows fast development of visual servoing applications, is described. The platform takes the form of a library which can be divided into three main modules: control processes, canonical vision-based tasks that contain the most classical linkages, and real-time tracking. The ViSP software environment features independence with respect to the hardware, simplicity, extensibility, and portability. ViSP also offers a large library of elementary tasks with various visual features that can be combined together, an image-processing library that allows tracking of visual cues at video rate, a simulator, interfaces with various classical framegrabbers, a virtual 6-DOF robot that allows the simulation of visual servoing experiments, etc. The platform is implemented in C++ under Linux.

    An Investigation of Nonlinear Estimation and System Design for Mechatronic Systems

    This thesis is a collection of two projects in which the author was involved during his master's degree program. The first project involves the estimation of 3D Euclidean coordinates of features from 2D images. A 3D Euclidean position estimation strategy is developed for a static object using a single moving camera whose motion is known. This Euclidean depth estimator has a very simple mathematical structure and is easy to implement. Numerical simulations and experimental results using a mobile robot in an indoor environment are presented to illustrate the performance of the algorithm. The second project describes the design of a test system for the Argon Environment Electrical Study (AEES) conducted by the Department of Energy (DOE). The initial research proposal, safety review, and literature review are presented. Additionally, the test plan and system design are highlighted.
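The thesis's depth estimator is a dynamic observer; a static two-view special case of the same idea (a hypothetical illustration, assuming the camera motion R, t is known and observations are given on the normalized image plane) recovers depths by triangulation: from X2 = R X1 + t and Xi = zi mi, the two depths solve a small linear least-squares problem.

```python
import numpy as np

def depths_from_known_motion(m1, m2, R, t):
    """Given unit-plane observations m1, m2 (3-vectors with last entry 1)
    of one static point from two camera poses related by X2 = R X1 + t,
    solve z2 m2 = z1 R m1 + t for the depths (z1, z2) in least squares."""
    A = np.column_stack([R @ m1, -m2])   # unknowns (z1, z2)
    z, *_ = np.linalg.lstsq(A, -t, rcond=None)
    return z
```

The overdetermined 3x2 system is consistent for noise-free data; with noisy observations the least-squares solve averages the error, which is the static analogue of what the thesis's recursive estimator does over time.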

    High-Performance Control of an On-Board Missile Seeker Using Vision Information

    Thesis (Master's) -- Seoul National University Graduate School, Dept. of Electrical and Computer Engineering, February 2016. This thesis proposes a high-performance controller for an on-board missile seeker using vision information. The seeker controller can track a moving target without knowledge of the target's depth. Our approach consists of two parts: 1) a time-invariant linear estimator of the target motion, and 2) a nonlinear seeker controller. First, using the parameters of the homography matrix for a moving target, we derive the dynamic equation of the moving target as a time-invariant system, under the assumption that the velocities of both the seeker and the target vary slowly. Based on this dynamic equation, a Luenberger-type time-invariant linear estimator is constructed that provides the target velocity while accounting for uncertainty in the target size. Unlike previous works, the proposed estimator does not require any particular motion of the seeker, such as snaking or accelerating, for estimation convergence, and it guarantees convergence even without the target depth; this makes it well suited to a missile seeker controller. We also derive the dynamic equation of the seeker using the vision sensor and design a line-of-sight rate estimator from it together with the estimated target motion; the seeker control command then compensates for the line-of-sight rate. Next, a nonlinear seeker controller that drives the boresight error to zero is proposed. We provide a rigorous mathematical convergence analysis demonstrating that the proposed controller can track the moving target even when the target depth is not given. Furthermore, simulation results comparing the proposed controller with a conventional seeker controller demonstrate its practicability.