
    One Click Focus with Eye-in-hand/Eye-to-hand Cooperation

    A critical assumption of many multi-view control systems is the initial visibility of the regions of interest from all views. An initialization step is proposed for a hybrid eye-in-hand/eye-to-hand grasping system to overcome this requirement. In this paper, the object of interest is assumed to be within the eye-to-hand field of view, whereas it may not be within the eye-in-hand one. The object model is unknown and no database is used. The object lies in a complex scene with a cluttered background. A method to automatically focus on the object of interest is presented, tested and validated on a multi-view robotic system.

    SUCRe: Leveraging Scene Structure for Underwater Color Restoration

    Underwater images are altered by the physical characteristics of the medium through which light rays pass before reaching the optical sensor. Scattering and wavelength-dependent absorption significantly modify the captured colors depending on the distance of observed elements to the image plane. In this paper, we aim to recover an image of the scene as if the water had no effect on light propagation. We introduce SUCRe, a novel method that exploits the scene's 3D structure for underwater color restoration. By following points in multiple images and tracking their intensities at different distances to the sensor, we constrain the optimization of the parameters in an underwater image formation model and retrieve unattenuated pixel intensities. We conduct extensive quantitative and qualitative analyses of our approach in a variety of scenarios ranging from natural light to deep-sea environments using three underwater datasets acquired from real-world scenarios and one synthetic dataset. We also compare the performance of the proposed approach with that of a wide range of existing state-of-the-art methods. The results demonstrate a consistent benefit of exploiting multiple views across a spectrum of objective metrics. Our code is publicly available at https://github.com/clementinboittiaux/sucre
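The underwater image formation model that SUCRe optimizes is not spelled out in the abstract. As a minimal sketch, assuming the classic single-scattering form I(z) = J·e^(−βz) + B·(1 − e^(−βz)) (direct signal plus backscatter) with illustrative function and parameter names (not SUCRe's actual code or API), observing one scene point at several distances is enough to constrain J, the unattenuated intensity:

```python
import numpy as np
from scipy.optimize import least_squares

def formation_model(J, beta, B, z):
    """Classic underwater model: attenuated direct signal + backscatter."""
    return J * np.exp(-beta * z) + B * (1.0 - np.exp(-beta * z))

def restore_intensity(intensities, distances):
    """Jointly fit (J, beta, B) to observations of one point at several
    distances, then return the unattenuated intensity J."""
    def residuals(p):
        J, beta, B = p
        return formation_model(J, beta, B, distances) - intensities
    sol = least_squares(residuals, x0=[0.5, 0.1, 0.5],
                        bounds=([0, 0, 0], [1, 5, 1]))
    return sol.x[0]

# Synthetic check: a point with true intensity J=0.8 seen at four ranges
z = np.array([1.0, 2.0, 4.0, 8.0])
obs = formation_model(0.8, 0.3, 0.2, z)
print(round(restore_intensity(obs, z), 3))  # ≈ 0.8
```

With only one observation the problem is under-determined (any (J, β, B) fitting a single equation works); tracking the same point across views at different ranges, as the paper does, is what makes the fit well-posed.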

    Feet and legs tracking using a smart rollator equipped with a Kinect

    Clinical evaluation of frailty in the elderly is the first step in deciding the degree of assistance they require. Advances in robotics make it possible to turn a standard assistance device into an augmented device that may enrich the existing tests with new sets of daily measured criteria. In this paper we use a standard four-wheeled rollator, equipped with a Kinect and odometers, for biomechanical gait analysis. This paper focuses on the method we developed to measure and estimate leg and foot positions during an assisted walk. The results are compared with motion capture data as ground truth. Preliminary results obtained on four healthy persons show that relevant data can be extracted for gait analysis. Some criteria are accurate with respect to the ground truth, e.g., foot orientation and ankle angle.

    A new application of smart walker for quantitative analysis of human walking

    This paper presents a new nonintrusive device for everyday gait analysis and health monitoring. The system is a standard rollator equipped with encoders and inertial sensors. The assisted walking of 25 healthy elderly and 23 young adults is compared to develop walking quality indices. The subjects were asked to walk on a straight trajectory and an L-shaped trajectory, respectively. The walking trajectory, which is missing from other gait analysis methods, is calculated from the encoder data. The obtained trajectory and steps are compared with the results of a motion capture system. The gait analysis results show that the new indices obtained from the walker measurements, and not available otherwise, are very discriminating: e.g., the elderly have larger lateral motion and maneuvering area, smaller angular velocity during turning, lower walking accuracy and weaker turning ability, although they have almost the same walking velocity as the young people.
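The abstract says the trajectory is computed from the wheel encoders. A minimal sketch of how this is typically done, assuming standard differential-drive dead reckoning and illustrative parameter names (the paper's actual processing is not shown here):

```python
import math

def encoder_odometry(ticks_left, ticks_right, ticks_per_rev,
                     wheel_radius, wheel_base):
    """Dead-reckon a walker's 2D trajectory from per-interval rear-wheel
    encoder counts. Returns a list of (x, y, heading) poses."""
    x, y, theta = 0.0, 0.0, 0.0
    traj = [(x, y, theta)]
    for nl, nr in zip(ticks_left, ticks_right):
        dl = 2 * math.pi * wheel_radius * nl / ticks_per_rev  # left arc
        dr = 2 * math.pi * wheel_radius * nr / ticks_per_rev  # right arc
        ds = (dl + dr) / 2.0              # midpoint displacement
        dtheta = (dr - dl) / wheel_base   # heading change
        x += ds * math.cos(theta + dtheta / 2.0)
        y += ds * math.sin(theta + dtheta / 2.0)
        theta += dtheta
        traj.append((x, y, theta))
    return traj

# Straight push: equal counts on both wheels -> straight-line trajectory
t = encoder_odometry([100] * 5, [100] * 5, ticks_per_rev=1024,
                     wheel_radius=0.1, wheel_base=0.5)
```

Pure encoder odometry drifts over time, which is presumably why the rollator also carries inertial sensors and why the paper validates the trajectory against motion capture.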

    Localisation et caractérisation d'objets inconnus à partir d'informations visuelles : vers une saisie intuitive pour les personnes en situation de handicap (Localization and characterization of unknown objects from visual information: toward intuitive grasping for people with disabilities)

    The starting point of this study is the development of a robot assistant for the disabled. The robot is a vision-controlled manipulator equipped with two cameras: one is embedded on the gripper and gives a close view of the scene, while the second is remotely located and gives a global view of the scene. The objective is to grasp an a priori unknown object given only a single click on the remote image. We present methods to roughly localize the object and estimate the characteristics needed for grasping. Our work may be seen as an alternative to grasping procedures that use a previously built database. The thesis is divided into two parts: the rough object localization and the estimation of its characteristics. Given the coordinates of the clicked point, the object is known to lie on the view line connecting the remote camera's optical center and the clicked point. The projection of this view line onto the gripper image plane is the epipolar line associated with the clicked point. Epipolar-based visual servoing is used to control the embedded camera to scan this line. Image features are extracted from both the remote and the gripper views and then matched to estimate the 3D position of the object. This method has the advantage of being robust to object motion in the remote frame. At the end of the localization process, the object is included in both fields of view and the estimation of its characteristics is initialized. The rough object shape estimation is treated with a monocular mobile camera. The object shape is approximated by a quadric whose parameters are estimated from the object's projections on a set of images. The object is segmented using an active contour method initialized from the output of the localization process. The better the viewpoints, the more accurate the characteristics estimation.
Finally, an active vision method is developed to automatically select the viewpoints that improve the reconstruction. The best views are chosen so as to maximize the new information.
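The epipolar constraint the thesis exploits can be stated compactly: the clicked pixel p in the remote image maps to the line l' = F·p in the gripper image, and the object must project somewhere on that line. A minimal sketch with a toy two-camera rig (identity intrinsics, pure translation, so the fundamental matrix is the skew matrix of the translation; all names are illustrative, not the thesis code):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that skew(v) @ w = cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]], float)

def epipolar_line(F, click_xy):
    """Epipolar line l' = F p in the gripper image for a clicked pixel p
    in the remote image, returned as (a, b, c) with a*x + b*y + c = 0
    and (a, b) normalized to unit length."""
    l = F @ np.array([click_xy[0], click_xy[1], 1.0])
    return l / np.linalg.norm(l[:2])

# Toy rig: remote camera P = [I|0], gripper camera P' = [I|t] -> F = [t]_x
t = np.array([1.0, 0.0, 0.0])
F = skew(t)

X = np.array([0.4, 0.2, 2.0])      # unknown object's 3D position
click = X[:2] / X[2]               # its pixel in the remote view
proj = (X + t)[:2] / X[2]          # its pixel in the gripper view

a, b, c = epipolar_line(F, click)
print(abs(a * proj[0] + b * proj[1] + c))  # ≈ 0: object lies on the line
```

Scanning this single line with the embedded camera, as the thesis does, reduces the object search from the whole gripper image to a one-dimensional sweep.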

    Can smart rollators be used for gait monitoring and fall prevention?

    Clinical evaluation of frailty in the elderly is the first step in deciding the degree of assistance they require. This evaluation is usually performed once and for all by filling in standard forms with macro-information about standing and walking abilities. Advances in robotics make it possible to turn a standard assistance device into an augmented device. The existing tests could then be enriched by a new set of daily measured criteria derived from the daily use of standard assistance devices. This paper surveys existing smart walkers to determine whether they can be used for gait monitoring and frailty evaluation, focusing on the user-system interaction. Biomechanical gait analysis methods are presented and compared to robotic system designs to highlight their convergences and differences. On the one hand, monitoring devices try to estimate biomechanical features accurately, whereas, on the other hand, walking assistance and fall prevention do not systematically rely on an accurate human model and prefer heuristics on the user-robot state.

    ROV localization using ballasted umbilical equipped with IMUs

    This article describes an affordable and easy-to-set-up cable-based localization technique for remotely operated vehicles (ROVs), which exploits the piecewise-linear shape of an umbilical equipped with a sliding ballast. Each stretched part of the cable is instrumented with a waterproof IMU that measures its orientation. Using the cable's geometry, the vehicle's location can be calculated relative to the other, fixed or moving, end of the cable. Experiments carried out with a robotic system in a water tank demonstrate the reliability of this localization strategy. The study investigates the influence of measurement uncertainties on cable orientation and length, as well as the impact of the IMU locations along the cable on localization precision. The accuracy of the localization method is discussed.
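The geometry behind the method is a chain sum: each stretched cable segment contributes a vector of known length and IMU-measured orientation, and the ROV position is the sum of those vectors from the known end. A minimal 2D sketch (elevation angles only; the paper works in 3D, and all names and the 2D simplification are illustrative assumptions, not the authors' implementation):

```python
import math

def rov_position_from_cable(segment_lengths, segment_angles):
    """Locate the ROV end of a ballasted umbilical modeled as a
    piecewise-linear chain. Each segment has a known length (m) and an
    IMU-measured elevation angle (rad, negative = descending). Returns
    (horizontal offset, depth offset) from the cable's other end."""
    x, z = 0.0, 0.0  # known attachment point (e.g., surface vessel)
    for length, angle in zip(segment_lengths, segment_angles):
        x += length * math.cos(angle)
        z += length * math.sin(angle)
    return x, z

# Two stretched parts: 10 m down at 45° to the sliding ballast,
# then 8 m back up at 30° to the ROV
pos = rov_position_from_cable([10.0, 8.0],
                              [math.radians(-45), math.radians(30)])
```

This additive structure also explains the paper's uncertainty study: an orientation error on a segment near the attachment point shifts every segment after it, so IMU placement along the cable matters for the final precision.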