
    Depth adaptive zooming visual servoing for a robot with a zooming camera

    To solve the visibility problem and keep the observed object in the field of view (FOV) during visual servoing, a depth-adaptive zooming visual servoing strategy for a manipulator robot with a zooming camera is proposed. First, a zoom control mechanism is introduced into the robot visual servoing system. It dynamically adjusts the camera's field of view to keep all the feature points on the object visible and to obtain high local resolution of the object at the end of visual servoing. Second, an invariant visual servoing method is employed to drive the robot to the desired position under the changing intrinsic parameters of the camera. Finally, a nonlinear adaptive depth estimation scheme in the invariant space, derived using Lyapunov stability theory, is proposed to adaptively estimate the depth of the image features on the object. Three kinds of 4-DOF robot visual positioning simulation experiments are conducted; the results show that the proposed approach achieves higher positioning precision. © 2013 Xin et al.
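    The core loop behind such schemes can be sketched as a classical image-based visual servoing (IBVS) step whose interaction matrix depends on an estimated depth that is updated online. The sketch below is a minimal illustration, not the paper's Lyapunov-derived law: the adaptation rule and the gains `lam` and `gamma` are hypothetical placeholders.

    ```python
    import numpy as np

    def interaction_matrix(x, y, Z):
        """Image Jacobian of one point feature (x, y) at estimated
        depth Z (the classical IBVS point-feature form)."""
        return np.array([
            [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
            [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
        ])

    def ibvs_step(x, y, x_des, y_des, Z_hat, lam=0.5, gamma=0.1):
        """One servo step with a simple adaptive depth update.
        The depth update here is an illustrative gradient-style rule,
        standing in for the paper's Lyapunov-based scheme."""
        e = np.array([x - x_des, y - y_des])          # image-plane error
        L = interaction_matrix(x, y, Z_hat)
        v = -lam * np.linalg.pinv(L) @ e              # camera velocity command
        # push Z_hat along the depth-sensitive (translational) part
        # of the error dynamics
        Z_hat_new = Z_hat + gamma * e @ L[:, :3] @ v[:3]
        return v, max(Z_hat_new, 0.1)                 # keep depth positive
    ```

    Each iteration commands a 6-DOF camera velocity and refines the depth estimate; the clamp on `Z_hat` is a pragmatic guard rather than part of the stability argument.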

    PAMPC: Perception-Aware Model Predictive Control for Quadrotors

    We present the first perception-aware model predictive control framework for quadrotors that unifies control and planning with respect to action and perception objectives. Our framework leverages numerical optimization to compute trajectories that satisfy the system dynamics and require control inputs within the limits of the platform. Simultaneously, it optimizes perception objectives for robust and reliable sensing by maximizing the visibility of a point of interest and minimizing its velocity in the image plane. Considering both perception and action objectives for motion planning and control is challenging due to the possible conflicts arising from their respective requirements. For example, for a quadrotor to track a reference trajectory, it needs to rotate to align its thrust with the direction of the desired acceleration, whereas the perception objective might require minimizing such rotation to maximize the visibility of a point of interest. A model-based optimization framework that can consider both perception and action objectives and couple them through the system dynamics is therefore necessary. Our perception-aware model predictive control framework works in a receding-horizon fashion by iteratively solving a nonlinear optimization problem. It is capable of running in real time, fully onboard our lightweight, small-scale quadrotor using a low-power ARM computer, together with a visual-inertial odometry pipeline. We validate our approach in experiments demonstrating (I) the contradiction between perception and action objectives, and (II) improved behavior in extremely challenging lighting conditions.
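    The coupling of action and perception objectives can be illustrated as a single receding-horizon stage cost: a tracking term plus penalties that keep the projected point of interest near the image center and slow its motion on the image plane. The weights and the quadratic form below are illustrative, not the paper's actual formulation or tuning.

    ```python
    import numpy as np

    def pampc_stage_cost(p_cam, p_ref, s, s_prev, dt,
                         w_action=1.0, w_center=0.5, w_flow=0.1):
        """Illustrative stage cost blending an action objective
        (position tracking) with two perception objectives: keep the
        projected point of interest s = (u, v), in centered image
        coordinates, near the image center, and minimize its velocity
        on the image plane."""
        action = w_action * np.sum((p_cam - p_ref) ** 2)     # tracking error
        visibility = w_center * np.sum(s ** 2)               # off-center penalty
        img_velocity = w_flow * np.sum(((s - s_prev) / dt) ** 2)
        return action + visibility + img_velocity
    ```

    An MPC solver would sum this cost over the horizon and minimize it subject to the quadrotor dynamics, which is where the conflict between rotating for thrust alignment and keeping the point of interest visible gets resolved numerically.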

    Global path-planning for constrained and optimal visual servoing

    Visual servoing consists of steering a robot from an initial to a desired location by exploiting the information provided by visual sensors. This paper deals with the problem of realizing visual servoing for robot manipulators while taking into account constraints such as visibility, workspace (that is, obstacle avoidance), and joint constraints, and minimizing a cost function such as spanned image area, trajectory length, or curvature. To solve this problem, a new path-planning scheme is proposed. First, a robust object reconstruction is computed from visual measurements, which allows one to obtain feasible image trajectories. Second, the rotation path is parameterized through an extension of the Euler parameters that yields an equivalent expression of the rotation matrix as a quadratic function of unconstrained variables, hence largely simplifying standard parameterizations, which involve transcendental functions. Then, polynomials of arbitrary degree are used to complete the parameterization and formulate the desired constraints and costs as a general optimization problem. The optimal trajectory is followed by tracking the image trajectory with an IBVS controller combined with repulsive potential fields in order to fulfill the constraints in real conditions. © 2007 IEEE.
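    The quadratic rotation parameterization can be illustrated with the standard quaternion-to-rotation map: the entries of the rotation matrix are quadratic in the (unit) quaternion components, so no sines or cosines appear. The sketch below shows this well-known construction; the paper's specific extension of the Euler parameters may differ in detail.

    ```python
    import numpy as np

    def rotation_from_quat(q):
        """Rotation matrix from q = (w, x, y, z). After normalization,
        every matrix entry is a quadratic function of the quaternion
        components -- no transcendental functions are needed, which is
        the appeal of such unconstrained parameterizations."""
        w, x, y, z = q / np.linalg.norm(q)
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
    ```

    Because any nonzero 4-vector maps to a valid rotation, an optimizer can treat the four components as unconstrained design variables.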

    Aerial-Ground collaborative sensing: Third-Person view for teleoperation

    Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between them in GPS-denied environments. Thereby, one MAV can support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios. Comment: Accepted for publication in the 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR).

    Designing image trajectories in the presence of uncertain data for robust visual servoing path-planning

    Path-planning allows one to steer a camera to a desired location while taking into account the presence of constraints such as visibility, workspace, and joint limits. Unfortunately, the planned path can be significantly different from the real path due to the presence of uncertainty in the available data, with the consequence that some constraints may not be fulfilled by the real path even if they are satisfied by the planned path. In this paper we address the problem of performing robust path-planning, i.e., computing a path that satisfies the required constraints not only for the nominal model, as in traditional path-planning, but for a whole family of admissible models. Specifically, we consider an uncertain model where the point correspondences between the initial and desired views and the camera intrinsic parameters are affected by unknown random uncertainties with known bounds. The difficulty we face is that traditional path-planning schemes applied to different models lead to different paths rather than to a common, robust path. To solve this problem we propose a technique based on polynomial optimization where the required constraints are imposed on a number of trajectories corresponding to admissible camera poses and parameterized by a common design variable. The planned image trajectory is then followed by using an IBVS controller. Simulations carried out with all typical uncertainties that characterize a real experiment illustrate the proposed strategy and provide promising results. © 2009 IEEE.
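    The "common design variable" idea can be sketched as a polynomial trajectory whose coefficients are shared across all admissible models: each model maps the same coefficients to its own pose path, and the constraints are imposed on every resulting trajectory. The basis, dimensions, and sampling below are illustrative, not the paper's formulation.

    ```python
    import numpy as np

    def common_poly_trajectory(coeffs, T, n=50):
        """Sample a trajectory parameterized by polynomial coefficients
        that act as the common design variable: coeffs has shape
        (degree + 1, dim), and the same coefficients would be reused by
        every admissible model when checking constraints."""
        t = np.linspace(0.0, T, n)[:, None]          # time samples, column
        powers = t ** np.arange(coeffs.shape[0])     # Vandermonde-style basis
        return powers @ coeffs                       # (n, dim) sampled path
    ```

    An optimizer would then adjust `coeffs` until the sampled path satisfies visibility, workspace, and joint constraints for all models in the family simultaneously.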

    Conferring robustness to path-planning for image-based control

    Path-planning has been proposed in visual servoing for reaching the desired location while fulfilling various constraints. Unfortunately, the real trajectory can be significantly different from the reference trajectory due to the presence of uncertainties in the model used, with the consequence that some constraints may not be fulfilled, leading to a failure of the visual servoing task. This paper proposes a new strategy for addressing this problem, where the idea consists of conferring robustness to the path-planning scheme by considering families of admissible models. In order to obtain these families, uncertainty in the form of random variables is introduced on the available image points and intrinsic parameters. Two families are considered: one generated from a given number of admissible models corresponding to extreme values of the uncertainty, and one obtained by estimating the extreme values of the components of the admissible models. Each model in these families identifies a reference trajectory, which is parameterized by design variables that are common to all the models. The design variables are hence determined by imposing that all the reference trajectories fulfill the required constraints. Discussions on the convergence and robustness of the proposed strategy are provided, in particular showing that the satisfaction of the visibility and workspace constraints for the second family ensures the satisfaction of these constraints for all models bounded by this family. The proposed strategy is illustrated through simulations and experiments. © 2011 IEEE.
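    A minimal version of the extreme-value family check: with the focal length only known to lie within bounds, the visibility constraint is imposed for both extreme intrinsic models rather than just the nominal one. The pinhole projection and the interval-bounded focal length are a simplified stand-in for the paper's uncertainty model.

    ```python
    import numpy as np

    def robust_visibility(points_3d, poses, f_bounds, c, img_size):
        """Check visibility over a family of camera models: the focal
        length f is only known to lie in f_bounds, so the constraint is
        imposed for both extreme values. points_3d is (N, 3) in world
        coordinates; poses is a list of (R, t) camera poses; c is the
        principal point; img_size is (width, height) in pixels."""
        w, h = img_size
        for f in f_bounds:                       # extreme-value models
            for R, t in poses:
                P = (R @ points_3d.T).T + t      # points in camera frame
                if np.any(P[:, 2] <= 0):
                    return False                 # behind the camera
                uv = f * P[:, :2] / P[:, 2:3] + c
                if (np.any(uv < 0) or np.any(uv[:, 0] >= w)
                        or np.any(uv[:, 1] >= h)):
                    return False
        return True
    ```

    A path is accepted only if every sampled pose keeps every feature in the field of view for all extreme models, which mirrors the guarantee stated for the second family in the abstract.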