Visual servoing from three points using a spherical projection model
This paper deals with visual servoing from three points. Using the geometric properties of the spherical projection of points, a new decoupled set of six visual features is proposed. The main originality lies in using the distances between the spherical projections of the points to define three features that are invariant to camera rotations; the three remaining features are linearly related to camera rotations. Compared with the classical perspective coordinates of points, the new decoupled set introduces no additional singularities. In addition, using the new set in its non-singular domain, a classical control law is proven to be ideal for rotational motions. These theoretical results, as well as the robustness of the new decoupled control scheme to errors, are illustrated through simulation results.
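The rotation-invariance property used in the abstract above can be checked numerically: under a pure camera rotation, the spherical projections of the points all rotate rigidly, so their pairwise Cartesian distances are unchanged. Below is a minimal sketch (not the authors' implementation; the example points and the rotation are arbitrary):

```python
import numpy as np

def sphere_proj(P):
    """Spherical projection: map a 3D point onto the unit sphere."""
    return P / np.linalg.norm(P)

def rotation_z(theta):
    """Rotation matrix about the camera's optical axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def pairwise_distances(pts):
    """The three rotation-invariant features: distances between projections."""
    s = [sphere_proj(P) for P in pts]
    return [np.linalg.norm(s[i] - s[j]) for i, j in [(0, 1), (0, 2), (1, 2)]]

# Three arbitrary 3D points expressed in the camera frame.
points = [np.array([0.2, 0.1, 1.0]),
          np.array([-0.3, 0.2, 1.5]),
          np.array([0.1, -0.4, 2.0])]

d_before = pairwise_distances(points)
# A pure camera rotation maps each point P to R @ P; the spherical
# projections rotate rigidly, so the distances must not change.
R = rotation_z(0.7)
d_after = pairwise_distances([R @ P for P in points])
assert np.allclose(d_before, d_after)
```

A translation, by contrast, moves the projections non-rigidly on the sphere and changes these distances, which is why the three features respond to translational motion only.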
Distance-based and Orientation-based Visual Servoing from Three Points
This paper is concerned with the use of a spherical-projection model for visual servoing from three points. We propose a new set of six features to control a 6-degree-of-freedom (DOF) robotic system with good decoupling properties. The first part of the set consists of three invariants to camera rotations, built from the Cartesian distances between the spherical projections of the three points. The second part corresponds to the angle-axis representation of a rotation matrix measured from the image of two points. In the theoretical comparison with the classical perspective coordinates of points, the new set does not introduce additional singularities. In addition, using the new set inside its nonsingular domain, a classical control law is proven to be optimal for pure rotational motions. The theoretical results and the robustness of the new control scheme to errors in the point ranges are validated through simulations and experiments on a 6-DOF robot arm.
Generic decoupled image-based visual servoing for cameras obeying the unified projection model
In this paper a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The scheme is based on the surface of object projections onto the unit sphere; such features are invariant to rotational motions, which allows translational motion to be controlled independently of rotational motion. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robot platform.
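The unified projection model referred to above can be sketched in a few lines: a 3D point is first projected onto the unit sphere and then perspectively onto a normalized plane from a point at distance ξ above the sphere centre; ξ = 0 recovers the ordinary pinhole model, while ξ = 1 corresponds to the parabolic catadioptric case. A minimal sketch with illustrative values (not tied to any particular calibrated camera):

```python
import numpy as np

def unified_projection(P, xi):
    """Unified (sphere) camera model: project P onto the unit sphere,
    then perspectively from a point at distance xi above the sphere
    centre onto the normalized image plane."""
    x, y, z = P
    rho = np.linalg.norm(P)  # distance of the 3D point from the centre
    return np.array([x / (z + xi * rho), y / (z + xi * rho)])

P = np.array([0.3, -0.2, 2.0])       # arbitrary 3D point in the camera frame
m_persp = unified_projection(P, 0.0)  # xi = 0: classical pinhole projection
m_cata  = unified_projection(P, 1.0)  # xi = 1: parabolic catadioptric case
assert np.allclose(m_persp, [0.15, -0.1])  # equals (x/z, y/z) as expected
```

A single control scheme written on the sphere therefore covers perspective, fisheye, and catadioptric cameras by changing only ξ and the intrinsic calibration.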
Using a 3DOF Parallel Robot and a Spherical Bat to hit a Ping-Pong Ball
Playing the game of Ping-Pong is a challenge to human abilities since it requires developing skills such as fast reaction capabilities, precision of movement, and high-speed mental responses. These processes include the use of the seven DOF of the human arm, and translational movements through the legs, torso, and other extremities of the body, which are used for developing different game strategies or simply imposing movements that affect the ball, such as spin. Computationally, Ping-Pong requires a huge quantity of joint and visual information to be processed and analysed, which represents a real challenge for a robot. In addition, in order for a robot to perform the task mechanically, it requires a large and dexterous workspace and good dynamic capacities. Although there are commercial robots that are able to play Ping-Pong, the game is still an open task, where there are problems to be solved and simplified. All robotic Ping-Pong players cited in the bibliography used at least four DOF to hit the ball. In this paper, a spherical bat mounted on a 3-DOF parallel robot is proposed; the spherical bat is used to drive the trajectory of the Ping-Pong ball.
Authors: Alberto Trasloheros (Universidad Aeronáutica de Querétaro, México); José María Sebastián (Universidad Politécnica de Madrid / Consejo Superior de Investigaciones Científicas, España); Jesús Torrijos (Consejo Superior de Investigaciones Científicas / Universidad Politécnica de Madrid, España); Ricardo Oscar Carelli Albarracín (CONICET, Instituto de Automática, Universidad Nacional de San Juan, Argentina); Flavio Roberti (CONICET, Instituto de Automática, Universidad Nacional de San Juan, Argentina)
Photometric visual servoing for omnidirectional cameras
2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches rely on geometric features that must be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach was originally developed for images from perspective cameras; in this paper we extend the technique to central cameras. This generalization makes it possible to apply the method to catadioptric cameras and wide-field-of-view cameras. Several experiments were successfully carried out with a fisheye camera to control a 6-degree-of-freedom (DOF) robot, and with a catadioptric camera for a mobile-robot navigation task.
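Photometric visual servoing takes the whole image as the feature vector, so the servoing error is simply the stacked intensity difference between the current and desired images, with no feature extraction or matching. A minimal sketch with toy images (not the paper's omnidirectional formulation; sizes and values are illustrative):

```python
import numpy as np

def photometric_error(I, I_star):
    """Photometric visual-servoing error: the entire image is the feature,
    so the error is the stacked intensity difference I - I*."""
    return (I.astype(float) - I_star.astype(float)).ravel()

# Toy 4x4 "images": desired view vs. a uniformly brighter current view.
I_star = np.arange(16, dtype=float).reshape(4, 4)
I = I_star + 2.0

e = photometric_error(I, I_star)
cost = 0.5 * e @ e  # the quantity the control law drives to zero
assert cost == 32.0  # 0.5 * 16 pixels * 2.0**2
```

In practice the control law relates this error to the camera velocity through an interaction matrix built from the image gradients and the projection model, which is exactly the part the paper generalizes from perspective to central cameras.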
Effective Target Aware Visual Navigation for UAVs
In this paper we propose an effective vision-based navigation method that allows a multirotor vehicle to simultaneously reach a desired goal pose in the environment while constantly facing a target object or landmark. Standard techniques such as Position-Based Visual Servoing (PBVS) and Image-Based Visual Servoing (IBVS) in some cases (e.g., while the multirotor is performing fast maneuvers) do not allow the vehicle to constantly maintain the line of sight with a target of interest. Instead, we compute the optimal trajectory by solving a non-linear optimization problem that minimizes the target re-projection error while meeting the UAV's dynamic constraints. The desired trajectory is then tracked by means of a real-time Non-linear Model Predictive Controller (NMPC), which implicitly allows the multirotor to satisfy the required constraints. We successfully evaluate the proposed approach in many real and simulated experiments, making an exhaustive comparison with a standard approach.
Comment: Conference paper at the European Conference on Mobile Robotics (ECMR) 201
An Innovative Mission Management System for Fixed-Wing UAVs
This paper presents two innovative units linked together to build the main frame of a UAV Mission Management System. The first unit is a Path Planner for small UAVs able to generate optimal paths in a tridimensional environment, producing flyable and safe paths with the lowest computational effort. The second unit is the Flight Management System, based on Nonlinear Model Predictive Control, which tracks the reference path and exploits a spherical camera model to avoid unpredicted obstacles along the path. The control system solves on-line (i.e. at each sampling time) a finite-horizon (state horizon) open-loop optimal control problem with a Genetic Algorithm. This algorithm finds the command sequence that minimizes the tracking error with respect to the reference path, driving the aircraft away from sensed obstacles and towards the desired trajectory.
Rotation Free Active Vision
Incremental Structure from Motion (SfM) algorithms require, in general, precise knowledge of the camera's linear and angular velocities in the camera frame for estimating the 3D structure of the scene. Since an accurate measurement of the camera's own motion may be a non-trivial task in several robotics applications (for instance when the camera is onboard a UAV), we propose in this paper an active SfM scheme fully independent of the camera angular velocity. This is achieved by considering, as visual features, some rotational invariants obtained from the projection of the perceived 3D points onto a virtual unit sphere (unified camera model). This feature set is then exploited to design a rotation-free active SfM algorithm able to optimize online the direction of the camera linear velocity so as to improve the convergence of the structure estimation task. As a case study, we apply our framework to the depth estimation of a set of 3D points and discuss several simulations and experimental results to illustrate the approach.
Dynamic Object Tracking for Quadruped Manipulator with Spherical Image-Based Approach
Accurately estimating and tracking the motion of surrounding dynamic objects is one of the important tasks for the autonomy of a quadruped manipulator. However, with only an onboard RGB camera, it remains challenging for a quadruped manipulator to track the motion of a dynamic object moving with unknown and changing velocities. To address this problem, this manuscript proposes a novel image-based visual servoing (IBVS) approach consisting of three elements: a spherical projection model, a robust super-twisting observer, and a model predictive controller (MPC). The spherical projection model decouples the visual error of the dynamic target into linear and angular components. Then, in the presence of visual error, the robustness of the observer is exploited to estimate the unknown and changing velocities of the dynamic target without depth estimation. Finally, the estimated velocity is fed into the MPC to generate joint torques for the quadruped manipulator to track the motion of the dynamic target. The proposed approach is validated through hardware experiments, and the experimental results illustrate its effectiveness in improving the autonomy of the quadruped manipulator.
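The super-twisting observer named in the abstract above is, in its generic form, a second-order sliding-mode differentiator that recovers the unknown derivative of a measured signal without a model of it. The sketch below is a textbook Levant differentiator applied to a toy signal, not the paper's target-velocity observer; the gains and step size are illustrative (the usual tuning lam1 = 1.5*sqrt(L), lam2 = 1.1*L for a derivative bound L is assumed):

```python
import numpy as np

# Super-twisting (Levant) differentiator for a signal with |x''| <= L.
L, lam1, lam2 = 1.0, 1.5, 1.1
dt, T = 1e-3, 10.0

z0, z1 = 0.0, 0.0  # observer states: estimate of x and of its derivative
for k in range(int(T / dt)):
    t = k * dt
    x = np.sin(t)  # measured signal; its derivative cos(t) is "unknown"
    e = z0 - x
    # Continuous term drives z0 to x; the integral of the sign term
    # makes z1 converge to the true derivative despite no model of x.
    z0 += dt * (-lam1 * np.sqrt(abs(e)) * np.sign(e) + z1)
    z1 += dt * (-lam2 * np.sign(e))

# After convergence, z1 tracks the true derivative cos(t).
assert abs(z1 - np.cos(T)) < 0.1
```

The same mechanism, applied to the decoupled image-space error instead of a scalar signal, is what lets an observer of this family estimate a target's changing velocity without explicit depth measurements.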
- …