448 research outputs found

    Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters

    Dynamic Camera Clusters (DCCs) are multi-camera systems in which one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static cameras. This information is usually provided by motor encoders; however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach to DCC calibration that simultaneously estimates the kinematic parameters of the transformation chain and the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show the extensions required to perform simultaneous online estimation of the joint angles and the vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy comparable to a standard static multi-camera configuration. Comment: ICRA 201
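
    The abstract does not give implementation details, but the core idea (jointly estimating fixed kinematic parameters and per-frame joint angles from visual observations) can be illustrated with a small nonlinear least-squares sketch. Everything below is an illustrative assumption rather than the authors' formulation: the two-link parameterization, the use of 3D point alignment instead of image reprojection, and all function and variable names.

    # Hedged sketch: jointly estimate a 2-DOF gimbal's fixed link parameters and
    # per-frame joint angles from point observations. Illustrative only.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation as R

    def chain_transform(link_params, joint_angles):
        """Compose a static-to-dynamic camera transform from two revolute links.
        link_params: (2, 6) array, each row = [rx, ry, rz, tx, ty, tz] fixed offsets.
        joint_angles: (2,) rotation about each link's local z-axis (no encoder)."""
        T = np.eye(4)
        for (rx, ry, rz, tx, ty, tz), q in zip(link_params, joint_angles):
            link = np.eye(4)
            link[:3, :3] = R.from_euler("xyz", [rx, ry, rz]).as_matrix()
            link[:3, 3] = [tx, ty, tz]
            joint = np.eye(4)
            joint[:3, :3] = R.from_euler("z", q).as_matrix()
            T = T @ link @ joint
        return T

    def residuals(x, world_pts, observed_pts, n_frames):
        """Stack 3D alignment errors over all frames; x holds 12 link parameters
        followed by 2 unknown joint angles per frame."""
        link_params = x[:12].reshape(2, 6)
        errs = []
        for k in range(n_frames):
            q = x[12 + 2 * k: 14 + 2 * k]
            T = chain_transform(link_params, q)
            pred = (T[:3, :3] @ world_pts.T).T + T[:3, 3]
            errs.append((pred - observed_pts[k]).ravel())
        return np.concatenate(errs)

    # Synthetic usage: fit the parameters to simulated observations.
    rng = np.random.default_rng(0)
    true_links = rng.normal(scale=0.1, size=(2, 6))
    true_angles = rng.uniform(-0.5, 0.5, size=(5, 2))
    world_pts = rng.uniform(-1, 1, size=(20, 3))
    obs = np.stack([(chain_transform(true_links, q)[:3, :3] @ world_pts.T).T
                    + chain_transform(true_links, q)[:3, 3] for q in true_angles])
    x0 = np.zeros(12 + 2 * len(true_angles))
    sol = least_squares(residuals, x0, args=(world_pts, obs, len(true_angles)))
    print("final cost:", sol.cost)

    In practice one would use image reprojection errors of calibration-target observations rather than 3D point alignment; the sketch only shows how the fixed link parameters and the per-frame joint angles can share a single optimization.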

    Novel Hyperacute Gimbal Eye for Implementing Precise Hovering and Target Tracking on Board a Quadrotor

    This paper presents a new minimalist bio-inspired artificial eye of only 24 pixels, able to accurately locate a target placed in its small field of view (10°). The eye is mounted on a very light custom-made gimbal system that enables it to faithfully track a moving target. We show here that our gimbal eye can be embedded on board a small quadrotor to achieve accurate hovering with respect to a target placed on the ground. Our airborne oculomotor system was enhanced with a bio-inspired reflex responsible for efficiently locking the robot's gaze onto a target and compensating for the robot's rotations and disturbances. The use of very few pixels made it possible to run the visual processing algorithm at a refresh rate as high as 400 Hz. This high refresh rate, coupled with very fast control of the eye's orientation, allowed the robot to efficiently track a target moving at speeds of up to 200°/s.
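
    As a rough illustration of how a target can be located with only a handful of pixels, the sketch below estimates the target's bearing as a contrast-weighted centroid over a 24-pixel, 10° field of view. The centroid estimator, the one-dimensional pixel layout, and all names are assumptions for illustration; the paper's hyperacuity mechanism is not reproduced here.

    # Hedged sketch: bearing of a dark target from 24 pixels via a contrast-weighted centroid.
    import numpy as np

    N_PIXELS, FOV_DEG = 24, 10.0
    pixel_dirs = np.linspace(-FOV_DEG / 2, FOV_DEG / 2, N_PIXELS)  # each pixel's viewing angle

    def target_bearing(intensities, background=1.0):
        """Contrast-weighted centroid of a dark target against a bright background (degrees)."""
        contrast = np.clip(background - intensities, 0.0, None)
        if contrast.sum() == 0:
            return None                      # no target in the field of view
        return float(np.dot(pixel_dirs, contrast) / contrast.sum())

    # Toy measurement: a dark target at +1.7 degrees, blurred over neighbouring pixels.
    true_angle = 1.7
    intensities = 1.0 - 0.8 * np.exp(-0.5 * ((pixel_dirs - true_angle) / 0.8) ** 2)
    print("estimated bearing (deg):", target_bearing(intensities))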

    Steering by Gazing: An Efficient Biomimetic Control Strategy for Visually-guided Micro-Air Vehicles

    OSCAR 2 is a twin-engine aerial demonstrator equipped with a monocular visual system, which manages to keep its gaze and its heading steadily fixed on a target (a dark edge or a bar) in spite of severe random perturbations applied to its body via a ducted fan. The tethered robot stabilizes its gaze on the basis of two Oculomotor Reflexes (ORs) inspired by studies on animals: a Visual Fixation Reflex (VFR) and a Vestibulo-Ocular Reflex (VOR). One of the key features of this robot is that the eye is mechanically decoupled from the body about the vertical (yaw) axis. To meet the conflicting requirements of high accuracy and fast ocular responses, a miniature (2.4-gram) Voice Coil Motor (VCM) was used, which enables the eye to change orientation within an unusually short rise time (19 ms). The robot, equipped with a high-bandwidth (7 Hz) VOR based on an inertial micro rate gyro, is capable of accurate visual fixation as long as there is light. The robot is also able to pursue a moving target in the presence of erratic gusts of wind. Here we present the two interdependent control schemes driving the eye in the robot and the robot in space without any knowledge of the robot's angular position. This "steering by gazing" control strategy, implemented on this lightweight (100-gram) miniature aerial robot, demonstrates the effectiveness of this biomimetic visual/inertial heading control strategy.
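
    A minimal control-loop sketch of this kind of architecture is given below, assuming a rate-controlled body and simple proportional reflexes: the eye is driven by gyro feedforward (VOR-like) plus visual-error feedback (VFR-like), and the body is steered so as to re-center the eye. The gains, time step, and plant model are invented for illustration and are not the OSCAR 2 controllers.

    # Hedged sketch of a "steering by gazing" loop; all constants are assumptions.
    import numpy as np

    DT = 0.0025               # loop period (assumed 400 Hz)
    K_VFR = 8.0               # visual fixation feedback gain (assumed)
    K_VOR = 1.0               # gyro feedforward gain (assumed)
    K_STEER = 2.0             # heading gain on the eye-in-body angle (assumed)

    def step(state, target_bearing, gust_rate):
        """Advance the eye-in-body angle and the body heading by one control period.
        state: dict with 'eye' and 'heading' in radians; gust_rate: disturbance (rad/s)."""
        gaze = state["heading"] + state["eye"]       # gaze direction in the world frame
        retinal_error = target_bearing - gaze        # what the visual sensor reports
        # Body: turn so the eye returns to its neutral position; add the gust disturbance.
        heading_rate = K_STEER * state["eye"] + gust_rate
        # Eye: VOR-like counter-rotation against the gyro-measured body rate,
        # plus VFR-like feedback that nulls the residual retinal error.
        eye_rate = -K_VOR * heading_rate + K_VFR * retinal_error
        return {"eye": state["eye"] + eye_rate * DT,
                "heading": state["heading"] + heading_rate * DT}

    # Toy run: a 1 s gust rotates the body while the target stays fixed at 0.2 rad.
    state = {"eye": 0.0, "heading": 0.0}
    for k in range(4000):                            # 10 s of simulated flight
        gust = 0.5 if 400 <= k < 800 else 0.0
        state = step(state, target_bearing=0.2, gust_rate=gust)
    print("final gaze error (rad):", 0.2 - (state["heading"] + state["eye"]))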

    Humans use Optokinetic Eye Movements to Track Waypoints for Steering

    It is well established how visual stimuli and self-motion in laboratory conditions reliably elicit retinal-image-stabilizing compensatory eye movements (CEM). Their organization and role in natural-task gaze strategies are much less understood: are CEM applied in active sampling of visual information in human locomotion in the wild? If so, how? And what are the implications for guidance? Here, we directly compare gaze behavior in the real world (driving a car) and in a fixed-base simulation steering task. A strong and quantifiable correspondence between self-rotation and CEM counter-rotation is found across a range of speeds. This gaze behavior is "optokinetic", i.e., optic flow is a sufficient stimulus to elicit it spontaneously in naïve subjects, and vestibular stimulation or stereopsis are not critical. Theoretically, the observed nystagmus behavior is consistent with tracking waypoints on the future path, as predicted by waypoint models of locomotor control, but inconsistent with travel-point models such as the popular tangent point model.

    Decoupling the Eye: A Key toward a Robust Hovering for Sighted Aerial Robots

    Inspired by natural visual systems, in which gaze stabilization is at a premium, we simulated an aerial robot with a decoupled eye to achieve more robust hovering above a ground target despite strong lateral and rotational disturbances. In this paper, two different robots are compared under the same disturbances and displacements. The first robot is equipped with a fixed eye featuring a large field of view (FOV), and the second robot is endowed with a decoupled eye featuring a small FOV (about ±5°). Although this mechanical decoupling increases the mechanical complexity of the robot, this study demonstrates that disturbances are rejected faster and the computational complexity is markedly reduced. Thanks to bio-inspired visuomotor reflexes, the decoupled-eye robot is able to hold its gaze locked onto a distant target and to reject strong disturbances by taking advantage of the small inertia of the decoupled eye.

    Insect inspired visual motion sensing and flying robots

    Flying insects are masters of visual motion sensing: they use dedicated motion-processing circuits at low energy and computational cost. Building on observations of insect visual guidance, we developed visual motion sensors and bio-inspired autopilots dedicated to flying robots. Optic-flow-based visuomotor control systems have been implemented on an increasingly large number of sighted autonomous robots. In this chapter, we present how we designed and constructed local motion sensors and how we implemented bio-inspired visual guidance schemes on board several micro-aerial vehicles. A hyperacute sensor, in which retinal micro-scanning movements are performed via a small piezo-bender actuator, was mounted onto a miniature aerial robot. The OSCAR II robot is able to track a moving target accurately by exploiting the micro-scanning movement imposed on its eye's retina. We also present two interdependent control schemes driving the eye's angular position in the robot and the robot's body angular position with respect to a visual target, without any knowledge of the robot's orientation in the global frame. This "steering-by-gazing" control strategy, implemented on this lightweight (100 g) miniature sighted aerial robot, demonstrates the effectiveness of this biomimetic visual/inertial heading control strategy.
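
    Insect motion-processing circuits of the kind mentioned above are often modeled as correlation-type detectors; a generic Reichardt-style elementary motion detector (EMD) can be sketched as below. The low-pass delay constant, the two-receptor layout, and the toy stimulus are assumptions for illustration and do not reproduce the authors' sensor design.

    # Hedged sketch of a correlation-type elementary motion detector (EMD).
    import numpy as np

    def lowpass(signal, dt, tau):
        """First-order low-pass filter used as the EMD delay element."""
        out = np.zeros_like(signal)
        for i in range(1, len(signal)):
            out[i] = out[i - 1] + (dt / tau) * (signal[i - 1] - out[i - 1])
        return out

    def emd_response(photo_a, photo_b, dt, tau=0.02):
        """Reichardt-style correlator: delayed A times B, minus delayed B times A.
        A positive output indicates motion from receptor A toward receptor B."""
        return lowpass(photo_a, dt, tau) * photo_b - lowpass(photo_b, dt, tau) * photo_a

    # Toy stimulus: a sinusoidal grating drifting across two photoreceptors.
    dt, t = 1e-3, np.arange(0, 1, 1e-3)
    phase_lag = 0.3                              # rad, set by the receptors' angular spacing
    a = np.sin(2 * np.pi * 5 * t)                # receptor A
    b = np.sin(2 * np.pi * 5 * t - phase_lag)    # receptor B sees the grating slightly later
    print("mean EMD output (positive for A-to-B motion):", emd_response(a, b, dt).mean())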

    Identifying Head-Trunk and Lower Limb Contributions to Gaze Stabilization During Locomotion

    The goal of the present study was to determine how multiple, interdependent full-body sensorimotor subsystems respond to a change in gaze stabilization task constraints during locomotion. Nine subjects performed two gaze stabilization tasks while walking at 6.4 km/h on a motorized treadmill: 1) focusing on a central point target; 2) reading numeral characters; both presented 2 m in front of them at eye level. While subjects performed the tasks we measured: temporal parameters of gait; full-body sagittal-plane segmental kinematics of the head, trunk, thigh, shank, and foot; accelerations along the vertical axis at the head and the shank; and the vertical forces acting on the support surface. We tested the hypothesis that, with the increased demands placed on visual acuity during the number recognition task, subjects would modify full-body segmental kinematics to reduce perturbations to the head and thereby perform the task successfully. We found that while reading numeral characters, as compared to the central point target: 1) compensatory head pitch movement was on average 22% greater, even though trunk pitch and trunk vertical translation movement control were not significantly changed; 2) coordination patterns between head and trunk, as reflected by the peak cross-correlation between head pitch and trunk pitch motion as well as between head pitch and vertical trunk translation motion, were not significantly changed; 3) knee joint total movement was on average 11% greater during the period from heel strike to peak knee flexion in the stance phase of the gait cycle; 4) peak acceleration measured at the head was significantly reduced, by an average of 13%, in four of the six subjects, even when the peak acceleration at the shank and the transmissibility of the heel-strike shock wave (measured by the peak head/shank acceleration ratio) remained unchanged. Taken together, these results provide further evidence that the full body contributes to gaze stabilization during locomotion, and that its different functional elements can be modified online to contribute to gaze stabilization under different visual task constraints.
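
    The head-trunk coordination measure referred to above is a peak cross-correlation between two segment-angle time series; a minimal version of such a computation is sketched below. The normalization, lag window, sampling rate, and toy signals are assumptions for illustration, not the study's analysis pipeline.

    # Hedged sketch: peak cross-correlation between two segment-angle signals.
    import numpy as np

    def peak_xcorr(x, y, fs, max_lag_s=0.5):
        """Return (peak correlation, lag in seconds) for zero-mean, unit-variance signals."""
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        full = np.correlate(x, y, mode="full") / len(x)
        lags = np.arange(-len(x) + 1, len(x))
        keep = np.abs(lags) <= int(max_lag_s * fs)
        i = np.argmax(full[keep])
        return full[keep][i], lags[keep][i] / fs

    # Toy example: head pitch is a delayed, noisy copy of trunk pitch (100 Hz sampling).
    fs, t = 100, np.arange(0, 10, 0.01)
    trunk_pitch = np.sin(2 * np.pi * 1.5 * t)
    head_pitch = np.roll(trunk_pitch, 5) + 0.1 * np.random.default_rng(1).normal(size=t.size)
    print(peak_xcorr(head_pitch, trunk_pitch, fs))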

    Differences in gaze anticipation for locomotion with and without vision

    Previous experimental studies have shown spontaneous anticipation of the locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation persists in darkness, but with different characteristics. We asked 10 participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue indicating the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. Whole-body kinematics were recorded by motion capture, along with the participants' right eye movements. We showed that, both in darkness and in light, horizontal gaze anticipates the orientation of the head, which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference is that in light, the orientations of the eye nystagmus and the head are shifted in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self-motion, stabilize the perception of space during locomotion, and simulate the future trajectory, regardless of the visual condition.
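
    One simple way to quantify the horizontal angular anticipation discussed above is to compare head (or gaze) yaw with the instantaneous walking direction derived from the recorded trajectory, as sketched below. The signed-angle convention, the unsmoothed numerical derivative, and the toy circular walk are assumptions for illustration, not the study's method.

    # Hedged sketch: head-over-trajectory anticipation angle from planar motion-capture data.
    import numpy as np

    def anticipation_angle(positions, head_yaw):
        """positions: (N, 2) planar body trajectory; head_yaw: (N,) head yaw in radians.
        Returns the signed angle (radians) between head direction and walking direction."""
        velocity = np.gradient(positions, axis=0)
        walking_dir = np.arctan2(velocity[:, 1], velocity[:, 0])
        diff = head_yaw - walking_dir
        # wrap the difference to (-pi, pi] so large headings do not alias
        return np.arctan2(np.sin(diff), np.cos(diff))

    # Toy example: a circular walk with the head oriented 15 degrees into the turn.
    t = np.linspace(0, 2 * np.pi, 500)
    path = np.column_stack([np.cos(t), np.sin(t)])
    head = (t + np.pi / 2) + np.deg2rad(15)       # tangent direction plus constant anticipation
    print("mean anticipation (deg):", np.rad2deg(anticipation_angle(path, head).mean()))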

    Design and modeling of a stair climber smart mobile robot (MSRox)
