Contributions to shared control and coordination of single and multiple robots
The body of work presented in this habilitation deals with the interfacing of a human operator with one or more semi-autonomous robots, also known as the "shared control" problem. The first chapter addresses the possibility of providing visual/vestibular cues to a human operator for the remote control of mobile robots. The second chapter addresses the more classical problem of providing the operator with visual cues or haptic feedback for the control of one or more mobile robots (in particular quadrotor UAVs). The third chapter focuses on some of the algorithmic challenges encountered when developing multi-robot coordination techniques. The fourth chapter introduces a novel mechanical design for an over-actuated quadrotor UAV, with the long-term goal of obtaining 6 degrees of freedom on a classical (but under-actuated) quadrotor platform. Finally, the fifth chapter presents a general framework for active vision that, by optimizing the camera motion, allows the online optimization of the performance (in terms of convergence speed and final accuracy) of vision-based estimation processes.
Rotation Free Active Vision
Incremental Structure from Motion (SfM) algorithms require, in general, precise knowledge of the camera linear and angular velocities in the camera frame for estimating the 3D structure of the scene. Since an accurate measurement of the camera's own motion may be a non-trivial task in several robotics applications (for instance when the camera is onboard a UAV), we propose in this paper an active SfM scheme fully independent from the camera angular velocity. This is achieved by considering, as visual features, some rotational invariants obtained from the projection of the perceived 3D points onto a virtual unitary sphere (unified camera model). This feature set is then exploited for designing a rotation-free active SfM algorithm able to optimize online the direction of the camera linear velocity for improving the convergence of the structure estimation task. As a case study, we apply our framework to the depth estimation of a set of 3D points and discuss several simulations and experimental results for illustrating the approach.
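A minimal sketch (not taken from the paper) of the kind of rotation-invariant feature this abstract alludes to: once the observed points are back-projected onto a unit sphere, their pairwise angles are unchanged by any camera rotation, so features built from them do not depend on the angular velocity. The random geometry and the plain ray normalization used here in place of the full unified camera model are illustrative assumptions.

```python
# Hedged sketch: rotation-invariant features from spherical projection.
# The geometry below is synthetic and the plain normalization stands in
# for the unified camera model; this is not the paper's feature set.
import numpy as np

def bearings(points_cam):
    """Project 3D points (N x 3, camera frame) onto the unit sphere."""
    return points_cam / np.linalg.norm(points_cam, axis=1, keepdims=True)

def pairwise_cosines(dirs):
    """Pairwise dot products of unit bearings: unchanged by camera rotation."""
    g = dirs @ dirs.T
    iu = np.triu_indices(len(dirs), k=1)
    return g[iu]

rng = np.random.default_rng(0)
P = rng.uniform([-1, -1, 2], [1, 1, 5], size=(6, 3))  # points in front of the camera

# Random proper rotation of the camera (pure rotation, no translation):
# the scene expressed in the rotated camera frame is R.T @ p, i.e. P @ R row-wise.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))

f0 = pairwise_cosines(bearings(P))
f1 = pairwise_cosines(bearings(P @ R))
print(np.allclose(f0, f1))  # True: the features ignore the camera rotation
```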
On the Stability of Gated Graph Neural Networks
In this paper, we aim to find the conditions for input-to-state stability (ISS) and incremental input-to-state stability of Gated Graph Neural Networks (GGNNs). We show that this recurrent version of Graph Neural Networks (GNNs) can be expressed as a dynamical distributed system and, as a consequence, can be analysed using model-based techniques to assess its stability and robustness properties. The stability criteria found can then be exploited as constraints during the training process to enforce the internal stability of the neural network. Two distributed control examples, flocking and multi-robot motion control, show that using these conditions increases the performance and robustness of the gated GNNs.
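A hedged sketch of the general pattern described above: a gated, GRU-like graph recurrence viewed as a discrete-time distributed system, with a norm bound on the recurrent weights enforced as a training-time projection. The actual ISS conditions derived in the paper are not reproduced here; the contraction-style bound and all shapes and gains are stand-ins.

```python
# Hedged sketch: a gated graph recurrence with a spectral-norm bound on the
# recurrent weights ("stability criterion as a training constraint").
# The bound ||W||_2 <= gamma < 1 is an illustrative stand-in, not the paper's
# ISS condition; all dimensions and parameter names are assumptions.
import numpy as np

def project_spectral_norm(W, gamma=0.95):
    """Rescale W so its largest singular value does not exceed gamma."""
    s_max = np.linalg.norm(W, 2)
    return W if s_max <= gamma else W * (gamma / s_max)

def ggnn_step(X, A_hat, W_rec, W_in, U_gate):
    """One gated update of node states X (N x d) on a graph with row-normalized
    adjacency A_hat (N x N): a GRU-like blend of old and propagated states."""
    Z = 1.0 / (1.0 + np.exp(-(A_hat @ X @ U_gate)))  # update gate
    H = np.tanh(A_hat @ X @ W_rec + X @ W_in)        # candidate state
    return (1.0 - Z) * X + Z * H

rng = np.random.default_rng(1)
N, d = 8, 4
A = (rng.random((N, N)) < 0.3).astype(float)
A_hat = A / np.maximum(A.sum(1, keepdims=True), 1.0)
W_rec = project_spectral_norm(rng.normal(scale=0.5, size=(d, d)))  # projection step
W_in = rng.normal(scale=0.1, size=(d, d))
U_gate = rng.normal(scale=0.1, size=(d, d))

X = rng.normal(size=(N, d))
for _ in range(50):                  # iterate the recurrence as a dynamical system
    X = ggnn_step(X, A_hat, W_rec, W_in, U_gate)
print(np.abs(X).max())               # inspect that the iterated states remain bounded
```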
Active Structure from Motion for Spherical and Cylindrical Targets
Structure estimation from motion (SfM) is a classical and well-studied problem in computer and robot vision, and many solutions have been proposed to treat it as a recursive filtering/estimation task. However, the issue of actively optimizing the transient response of the SfM estimation error has not received comparable attention. In this paper, we provide an experimental validation of a recently proposed nonlinear active SfM strategy via two concrete applications: 3D structure estimation for a spherical and a cylindrical target. The experimental results fully support the theoretical analysis and clearly show the benefits of the proposed active strategy. Indeed, by suitably acting on the camera motion and estimation gains, it is possible to assign the error transient response and make it equivalent to that of a reference linear second-order system with desired poles.
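For reference, the "second-order system with desired poles" invoked here is the standard error model below; the damping ratio and natural frequency symbols are assumptions of this note, not notation taken from the paper:

```latex
\ddot e + 2\zeta\omega_n\,\dot e + \omega_n^2\, e = 0,
\qquad
s_{1,2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1}.
```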
Controller and Trajectory Optimization for a Quadrotor UAV with Parametric Uncertainty
In this work, we exploit the recent notion of closed-loop state sensitivity to critically compare three typical controllers for a quadrotor UAV, with the goal of evaluating the impact of controller choice, gain tuning, and shape of the reference trajectory in minimizing the sensitivity of the closed-loop system against uncertainties in the model parameters. To this end, we propose a novel optimization problem that takes into account both the shape of the reference trajectory and the controller gains. We then run a large statistical campaign comparing the performance of the three controllers, which provides some interesting insights for the goal of increasing closed-loop robustness against parametric uncertainties.
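As an illustration of the kind of quantity being scored (on a toy 1D double integrator, not the quadrotor models of the paper), the sketch below estimates a closed-loop state sensitivity to a mass parameter by finite differences and compares two PD gain sets; the gains, reference trajectory, and sensitivity metric are all assumptions.

```python
# Hedged sketch: finite-difference closed-loop sensitivity to an uncertain mass
# for a 1D double integrator under PD tracking. Toy stand-in for the paper's
# setting; gains, trajectory, and metric are assumptions.
import numpy as np

def simulate(mass, kp, kd, T=4.0, dt=1e-3):
    """Track x_ref(t) = sin(t) with a PD law designed for a nominal unit mass."""
    x, v = 0.0, 0.0
    for t in np.arange(0.0, T, dt):
        x_ref, v_ref, a_ref = np.sin(t), np.cos(t), -np.sin(t)
        u = a_ref + kp * (x_ref - x) + kd * (v_ref - v)  # nominal mass = 1
        a = u / mass                                     # true (uncertain) mass
        x, v = x + v * dt, v + a * dt
    return np.array([x, v])

def sensitivity(kp, kd, m0=1.0, eps=1e-4):
    """Central-difference sensitivity of the final state w.r.t. the mass."""
    dx = simulate(m0 + eps, kp, kd) - simulate(m0 - eps, kp, kd)
    return np.linalg.norm(dx / (2 * eps))

for kp, kd in [(4.0, 2.0), (25.0, 10.0)]:
    print(f"kp={kp:5.1f} kd={kd:5.1f}  ||dx(T)/dm|| ~ {sensitivity(kp, kd):.4f}")
```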
Predicting direction detection thresholds for arbitrary translational acceleration profiles in the horizontal plane
In previous research, direction detection thresholds have been measured and successfully modeled by exposing participants to sinusoidal acceleration profiles of different durations. In this paper, we present measurements that reveal differences in thresholds depending not only on the duration of the profile, but also on the actual time course of the acceleration. The measurements are further explained by a model based on a transfer function, which is able to predict direction detection thresholds for all types of acceleration profiles. In order to quantify a participant's ability to detect the direction of motion in the horizontal plane, a four-alternative forced-choice task was implemented. Three types of acceleration profiles (sinusoidal, trapezoidal and triangular) were tested for three different durations (1.5, 2.36 and 5.86 s). To the best of our knowledge, this is the first study which varies both quantities (profile and duration) in a systematic way within a single experiment. The lowest thresholds were found for trapezoidal profiles and the highest for triangular profiles. Simulations for frequencies lower than the ones actually measured predict a change from this behavior: sinusoidal profiles are predicted to yield the highest thresholds at low frequencies. This qualitative prediction is only possible with a model that is able to predict thresholds for different types of acceleration profiles. Our modeling approach represents an important advancement, because it allows for a more general and accurate description of perceptual thresholds for simple and complex translational motions.
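A hedged sketch of the prediction principle only: pass unit-peak acceleration profiles of each shape through a dynamic model and map the peak response to a relative threshold (larger response for a unit stimulus means a lower predicted threshold). The leaky-integrator dynamics and time constant below are placeholders, not the transfer function fitted in the paper.

```python
# Hedged sketch: shape-dependent threshold prediction from peak filtered response.
# The leaky integrator is a stand-in for the fitted transfer function; all
# parameters and durations are assumptions.
import numpy as np

def profile(kind, duration, dt=1e-3):
    """Unit-peak acceleration profile of a given shape and duration."""
    t = np.arange(0.0, duration, dt)
    if kind == "sinusoidal":
        a = np.sin(np.pi * t / duration)
    elif kind == "triangular":
        a = 1.0 - np.abs(2.0 * t / duration - 1.0)
    else:  # trapezoidal: ramp up over D/3, hold, ramp down over D/3
        a = np.clip(3.0 * np.minimum(t, duration - t) / duration, 0.0, 1.0)
    return a

def peak_response(a, dt=1e-3, tau=5.0):
    """Peak output of the leaky integrator y' = -y/tau + a (stand-in dynamics),
    roughly the peak velocity for stimuli shorter than tau."""
    y = np.zeros_like(a)
    for k in range(1, len(a)):
        y[k] = y[k - 1] + dt * (-y[k - 1] / tau + a[k])
    return np.abs(y).max()

for kind in ("sinusoidal", "trapezoidal", "triangular"):
    a = profile(kind, duration=2.36)
    print(f"{kind:12s} relative threshold ~ {1.0 / peak_response(a):.2f}")
```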
Learning the Shape of Image Moments for Optimal 3D Structure Estimation
The selection of a suitable set of visual features for an optimal performance of closed-loop visual control or Structure from Motion (SfM) schemes is still an open problem in the visual servoing community. For instance, when considering integral region-based features such as image moments, only heuristic, partial, or local results are currently available for guiding the selection of an appropriate moment set. The goal of this paper is to propose a novel learning strategy able to automatically optimize online the shape of a given class of image moments as a function of the observed scene for improving the SfM performance in estimating the scene structure. As a case study, the problem of recovering the (unknown) 3D parameters of a planar scene from measured moments and known camera motion is considered. The reported simulation results fully confirm the soundness of the approach and its superior performance over more established solutions in increasing the information gain during the estimation task.
An Active Strategy for Plane Detection and Estimation with a Monocular Camera
Plane detection and estimation from visual data is a classical problem in robotic vision. In this work we propose a novel active strategy in which a monocular camera tries to determine whether a set of observed point features belongs to a common plane and, if so, what the associated plane parameters are. The active component of the strategy imposes an optimized camera motion (as a function of the observed scene) able to maximize the convergence in estimating the scene structure. Based on this strategy, two methods are then proposed to solve the plane estimation task: a classical solution exploiting the homography constraint (and, thus, almost completely based on image correspondences across distant frames), and an alternative method fully taking advantage of the scene structure estimated incrementally during the camera motion. The two methods are extensively compared in several case studies by discussing their various pros and cons.
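A minimal sketch of the first ("classical") route mentioned above: fit a homography between two views by DLT and use the transfer error to judge whether the tracked points are coplanar. The synthetic correspondences, intrinsics, and noiseless setting are illustrative assumptions; the paper's incremental structure-based alternative is not sketched here.

```python
# Hedged sketch: homography-based planarity check on synthetic correspondences.
import numpy as np

def fit_homography(p1, p2):
    """Direct Linear Transform: p2 ~ H p1 for N x 2 pixel correspondences."""
    A = []
    for (x, y), (u, v) in zip(p1, p2):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)

def transfer_error(H, p1, p2):
    """Mean reprojection error (pixels) of p1 mapped through H against p2."""
    q = (H @ np.c_[p1, np.ones(len(p1))].T).T
    return np.mean(np.linalg.norm(q[:, :2] / q[:, 2:3] - p2, axis=1))

def project(P, K, R, t):
    """Pinhole projection of world points P (N x 3) seen from pose (R, t)."""
    q = (K @ (P @ R.T + t).T).T
    return q[:, :2] / q[:, 2:3]

rng = np.random.default_rng(2)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P = np.c_[rng.uniform(-1, 1, (20, 2)), np.zeros(20)] + [0, 0, 4]  # coplanar points (z = 4)
p1 = project(P, K, np.eye(3), np.zeros(3))
p2 = project(P, K, np.eye(3), np.array([0.5, 0.1, 0.0]))          # second camera pose

H = fit_homography(p1, p2)
print(f"mean transfer error: {transfer_error(H, p1, p2):.2e} px  (small => coplanar)")
```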
Robust Trajectory Planning with Parametric Uncertainties
In this paper we extend the previously introduced notion of closed-loop state sensitivity by introducing the concept of input sensitivity and by showing how to exploit it in a trajectory optimization framework. This makes it possible to generate an optimal reference trajectory for a robot that minimizes the state and input sensitivities against uncertainties in the model parameters, thus producing inherently robust motion plans. We parametrize the reference trajectories with Bézier curves and discuss how to consider linear and nonlinear constraints in the optimization process (e.g., input saturations). The whole machinery is validated via an extensive statistical campaign that clearly shows the interest of the proposed methodology.
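A hedged sketch of the trajectory parametrization: a Bézier reference with fixed endpoints whose velocity bound (a stand-in for the input-saturation constraints) can be checked directly from the derivative control points; the sensitivity cost that a robust planner would minimize is only indicated. The degree, duration, and control-point values are assumptions.

```python
# Hedged sketch: Bezier-parametrized reference trajectory with a derivative bound.
import math
import numpy as np

def bezier(ctrl, t):
    """Evaluate a Bezier curve with control points ctrl (n+1,) at times t in [0, 1]."""
    n = len(ctrl) - 1
    B = np.array([math.comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)])
    return ctrl @ B

def derivative_ctrl(ctrl, duration):
    """Control points of the time derivative of a Bezier defined over [0, duration]."""
    n = len(ctrl) - 1
    return n * np.diff(ctrl) / duration

duration = 5.0
ctrl = np.array([0.0, 0.2, 1.5, 1.8, 2.0])   # fixed endpoints, free interior points
t = np.linspace(0.0, 1.0, 200)

pos = bezier(ctrl, t)
vel_ctrl = derivative_ctrl(ctrl, duration)
v_peak = np.abs(bezier(vel_ctrl, t)).max()

print(f"endpoints: {pos[0]:.2f} -> {pos[-1]:.2f}, peak |velocity| ~ {v_peak:.2f}")
# Convex-hull property: |velocity| never exceeds the largest derivative control point.
print("velocity bound from control points:", np.abs(vel_ctrl).max())
# A robust planner would now minimize a sensitivity cost over the interior
# control points subject to such bounds (not implemented here).
```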
Modeling direction discrimination thresholds for yaw rotations around an earth-vertical axis for arbitrary motion profiles
Understanding the dynamics of vestibular perception is important, for example, for improving the realism of motion simulation and virtual reality environments or for diagnosing patients suffering from vestibular problems. Previous research has found a dependence of direction discrimination thresholds for rotational motions on the period length (inverse frequency) of a transient (single cycle) sinusoidal acceleration stimulus. However, self-motion is seldom purely sinusoidal, and up to now, no models have been proposed that take into account non-sinusoidal stimuli for rotational motions. In this work, the influence of both the period length and the specific time course of an inertial stimulus is investigated. Thresholds for three acceleration profile shapes (triangular, sinusoidal, and trapezoidal) were measured for three period lengths (0.3, 1.4, and 6.7 s) in ten participants. A two-alternative forced-choice discrimination task was used where participants had to judge if a yaw rotation around an earth-vertical axis was leftward or rightward. The peak velocity of the stimulus was varied, and the threshold was defined as the stimulus yielding 75 % correct answers. In accordance with previous research, thresholds decreased with shortening period length (from ~2 deg/s for 6.7 s to ~0.8 deg/s for 0.3 s). The peak velocity was the determining factor for discrimination: different profiles with the same period length have similar velocity thresholds. These measurements were used to fit a novel model based on a description of the firing rate of semi-circular canal neurons. In accordance with previous research, the estimates of the model parameters suggest that velocity storage does not influence perceptual thresholds.
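For concreteness, a sketch of the 75%-correct read-out typical of such two-alternative forced-choice experiments: fit a cumulative-Gaussian psychometric function to proportion-correct data and invert it at 0.75. The data points below are invented for illustration; only the procedure is meant to be representative, not the paper's canal-based model.

```python
# Hedged sketch: 75%-correct threshold from a cumulative-Gaussian psychometric fit.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def p_correct(v, sigma):
    """2AFC direction discrimination: chance (0.5) at v = 0, saturating at 1."""
    return norm.cdf(v / sigma)

peak_vel = np.array([0.2, 0.4, 0.8, 1.6, 3.2])      # deg/s (illustrative)
prop_ok = np.array([0.55, 0.60, 0.80, 0.95, 1.00])  # proportion correct (illustrative)

(sigma_hat,), _ = curve_fit(p_correct, peak_vel, prop_ok, p0=[1.0])
threshold_75 = sigma_hat * norm.ppf(0.75)            # peak velocity at 75% correct
print(f"fitted sigma = {sigma_hat:.2f} deg/s, 75%-correct threshold = {threshold_75:.2f} deg/s")
```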
- …