    PAMPC: Perception-Aware Model Predictive Control for Quadrotors

    We present the first perception-aware model predictive control framework for quadrotors that unifies control and planning with respect to action and perception objectives. Our framework leverages numerical optimization to compute trajectories that satisfy the system dynamics and require control inputs within the limits of the platform. Simultaneously, it optimizes perception objectives for robust and reliable sensing by maximizing the visibility of a point of interest and minimizing its velocity in the image plane. Considering both perception and action objectives for motion planning and control is challenging due to the possible conflicts arising from their respective requirements. For example, for a quadrotor to track a reference trajectory, it needs to rotate to align its thrust with the direction of the desired acceleration. However, the perception objective might require minimizing such rotation to maximize the visibility of a point of interest. A model-based optimization framework, able to consider both perception and action objectives and couple them through the system dynamics, is therefore necessary. Our perception-aware model predictive control framework works in a receding-horizon fashion by iteratively solving a non-linear optimization problem. It is capable of running in real time, fully onboard our lightweight, small-scale quadrotor using a low-power ARM computer, together with a visual-inertial odometry pipeline. We validate our approach in experiments demonstrating (i) the conflict between perception and action objectives, and (ii) improved behavior in extremely challenging lighting conditions.
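
    To make the coupling concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of how a stage cost could combine a tracking term with the two perception objectives the abstract names; the weights, the bearing-based visibility term, and the image-motion approximation are all assumptions.

        # Illustrative perception-aware MPC stage cost; all weights and the
        # pinhole-free bearing model below are assumptions, not the paper's code.
        import numpy as np

        def perception_aware_cost(p, v, q_z, p_ref, poi,
                                  w_track=1.0, w_vis=0.5, w_flow=0.1):
            """Stage cost for one horizon step.
            p, v   : quadrotor position and velocity, shape (3,)
            q_z    : camera optical axis in the world frame, unit norm, shape (3,)
            p_ref  : reference position, shape (3,)
            poi    : world position of the point of interest, shape (3,)
            """
            # Action objective: track the reference trajectory.
            track = np.sum((p - p_ref) ** 2)
            # Perception objective 1: keep the point of interest near the
            # optical axis (maximize visibility); beta is the bearing to it.
            beta = (poi - p) / np.linalg.norm(poi - p)
            vis = 1.0 - float(beta @ q_z)
            # Perception objective 2: penalize the point's apparent motion,
            # approximated by the relative velocity orthogonal to the bearing.
            v_perp = v - (v @ beta) * beta
            flow = np.sum(v_perp ** 2)
            return w_track * track + w_vis * vis + w_flow * flow

    A receding-horizon solver would minimize the sum of such stage costs over the prediction horizon, subject to the quadrotor dynamics and input limits.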

    Visual servoing of aerial manipulators

    This chapter describes the classical techniques for controlling an aerial manipulator by means of visual information and presents an uncalibrated image-based visual servo method to drive the aerial vehicle. The proposed technique has the advantage that it makes only mild assumptions about the principal point and skew values of the camera, and it does not require prior knowledge of the focal length, in contrast to traditional image-based approaches.
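
    For context, the classical image-based law the chapter builds on drives the camera with v = -lambda * L^+ * (s - s*). The sketch below shows the calibrated version whose explicit dependence on the focal length the proposed uncalibrated method removes; names and the pinhole interaction model are illustrative assumptions.

        # Classical calibrated IBVS step, for reference only.
        import numpy as np

        def ibvs_velocity(features, features_des, depths, f=1.0, lam=0.5):
            """Classical IBVS: v = -lambda * L^+ * (s - s*).
            features, features_des : (N, 2) current / desired image points
            depths                 : (N,) estimated point depths
            f : focal length (the uncalibrated method avoids needing this)
            """
            rows = []
            for (x, y), Z in zip(features, depths):
                # Interaction matrix of a point feature, pinhole camera.
                rows.append([-f / Z, 0, x / Z, x * y / f, -(f + x * x / f), y])
                rows.append([0, -f / Z, y / Z, f + y * y / f, -x * y / f, -x])
            L = np.array(rows)
            e = (np.asarray(features) - np.asarray(features_des)).reshape(-1)
            # Returned twist: (vx, vy, vz, wx, wy, wz) in the camera frame.
            return -lam * np.linalg.pinv(L) @ e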

    Positioning a camera with respect to planar objects of unknown shape by coupling 2D visual servoing and 3D estimations

    This paper proposes a way to achieve positioning tasks by 2D visual servoing when the desired image of the observed object cannot be precisely described. The object is assumed to be planar and motionless, but no knowledge about its shape or pose is required. First, we treat the case of a threadlike object, and then we show how our approach can be generalized to an object with three particular points. The control law is based on the use of 2D visual servoing and on an estimation of two 3D parameters. We show that this control scheme is not sensitive to the calibration of the camera. We conclude the paper with experimental results on objects of unknown shape. In addition, an algorithm to estimate the depth between the object and the camera is provided, which finally leads to a 3D estimation of the object shape.
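
    The coupling of 2D servoing with 3D estimation rests on the fact that the translational part of a point's image motion scales with inverse depth, while the rotational part does not. A minimal sketch, under an assumed pinhole interaction model (not necessarily the paper's estimator), of a least-squares depth estimate from the measured image velocity and the commanded camera twist:

        # Illustrative depth estimation from image motion; assumes the
        # translational flow is non-zero.
        import numpy as np

        def estimate_depth(x, y, x_dot, y_dot, v, w, f=1.0):
            """Depth of one image point (x, y) from its measured image
            velocity (x_dot, y_dot) and the camera twist: v linear, w angular.
            Model: s_dot = (1/Z) * A(s) @ v + B(s) @ w (pinhole)."""
            vx, vy, vz = v
            wx, wy, wz = w
            # Rotational part of the image motion (depth-independent).
            rot_x = x * y / f * wx - (f + x * x / f) * wy + y * wz
            rot_y = (f + y * y / f) * wx - x * y / f * wy - x * wz
            # Translational part, which scales with 1/Z.
            a = np.array([-f * vx + x * vz, -f * vy + y * vz])
            b = np.array([x_dot - rot_x, y_dot - rot_y])
            inv_Z = float(a @ b) / float(a @ a)  # least-squares fit of 1/Z
            return 1.0 / inv_Z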

    Rotation Free Active Vision

    Incremental Structure from Motion (SfM) algorithms require, in general, precise knowledge of the camera's linear and angular velocities in the camera frame for estimating the 3D structure of the scene. Since an accurate measurement of the camera's own motion may be a non-trivial task in several robotics applications (for instance, when the camera is onboard a UAV), we propose in this paper an active SfM scheme fully independent of the camera angular velocity. This is achieved by considering, as visual features, some rotational invariants obtained from the projection of the perceived 3D points onto a virtual unit sphere (unified camera model). This feature set is then exploited for designing a rotation-free active SfM algorithm able to optimize online the direction of the camera linear velocity in order to improve the convergence of the structure estimation task. As a case study, we apply our framework to the depth estimation of a set of 3D points and discuss several simulations and experimental results illustrating the approach.
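
    The key ingredient is a feature set invariant to camera rotation. A minimal sketch, assuming the pure pinhole special case of the unified model: project image points onto the unit sphere and take pairwise dot products, whose values are unaffected by any camera rotation, so their dynamics depend only on the linear velocity and the point depths.

        # Illustrative rotation-invariant features on the unit sphere.
        import numpy as np

        def sphere_projection(points_px, K):
            """Back-project pixel points onto the unit sphere (pinhole case
            of the unified camera model). points_px: (N, 2); K: 3x3 intrinsics."""
            pts_h = np.column_stack([points_px, np.ones(len(points_px))])
            rays = (np.linalg.inv(K) @ pts_h.T).T
            return rays / np.linalg.norm(rays, axis=1, keepdims=True)

        def rotation_invariants(spherical_pts):
            """Pairwise dot products of the spherical projections: a rotation
            applies the same orthonormal map to every ray, leaving them unchanged."""
            S = spherical_pts @ spherical_pts.T
            i, j = np.triu_indices(len(spherical_pts), k=1)
            return S[i, j]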

    Sensorless torque/force control

    Motion control systems are a core subsystem of the majority of processing systems found in the industrial sector. These systems are concerned with the actuation of all devices in the manufacturing process, such as machines, robots, conveyor systems and pick-and-place mechanisms, so that they satisfy certain motion requirements, e.g., pre-specified reference trajectories are followed while the proper force or torque is delivered to the point of interest at which the process occurs. In general, the aim of force/torque control is to impose the desired force on the environment even if the environment itself is in motion.
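
    A classical route to sensorless force/torque control, sketched here only as an illustration rather than as the chapter's specific method, is to estimate the external torque with a disturbance observer from quantities the drive already measures (current and velocity); all parameters below are assumptions.

        # Illustrative first-order disturbance observer for reaction torque.
        import numpy as np

        def reaction_torque_observer(i_meas, omega, J, Kt, g=50.0, dt=1e-3):
            """Estimate the external torque without a force/torque sensor.
            i_meas : motor current samples (A)
            omega  : motor angular velocity samples (rad/s)
            J, Kt  : rotor inertia (kg m^2) and torque constant (Nm/A)
            g      : observer cutoff (rad/s); dt: sample time (s)
            Model: J * domega/dt = Kt * i - tau_ext, so tau_ext is the
            low-pass filtered residual Kt * i - J * domega/dt, computed
            without explicit differentiation of omega."""
            z = 0.0
            est = []
            for i_k, w_k in zip(i_meas, omega):
                z += g * dt * (Kt * i_k + g * J * w_k - z)
                est.append(z - g * J * w_k)
            return np.array(est)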

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor controllers as well as multi-sensor controllers which combine several sensors.

    Robotic execution for everyday tasks by means of external vision/force control

    In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot which simultaneously observes its hand and the object to be manipulated using an external camera (i.e., the robot head). Task-oriented grasping algorithms [1] are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach [2], based on external control, is used first to guide the robot hand towards the grasp position and then to perform the task while taking external forces into account. The coupling of these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with respect to the object being manipulated, independently of the camera position. This allows the camera to move freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.
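
    A minimal sketch, with illustrative gains and frame conventions, of the external coupling idea: an outer force loop displaces the reference pose fed to the inner position-based visual servo, so contact forces are regulated while the hand stays visually aligned with the object.

        # Illustrative external vision/force coupling; gains are assumptions.
        import numpy as np

        def force_correction(f_meas, f_des, Kf=0.002):
            """Outer force loop: map the force error to a displacement of the
            visual reference (compliance-style correction, in meters)."""
            return Kf * (np.asarray(f_des) - np.asarray(f_meas))

        def pbvs_step(T_hand_obj, T_des, lam=0.5):
            """Inner position-based visual servo on the hand-to-object pose
            estimated from the external camera; returns a translational
            velocity command (rotation would be handled analogously).
            T_hand_obj, T_des : 4x4 homogeneous transforms."""
            t_err = T_des[:3, 3] - T_hand_obj[:3, 3]
            return lam * t_err

        # One control cycle: the force loop shifts the visual goal,
        # then the visual servo tracks the corrected goal.
        #   T_goal = T_des.copy()
        #   T_goal[:3, 3] += force_correction(f_meas, f_des)
        #   v_cmd = pbvs_step(T_hand_obj, T_goal)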