
    Controlling docking, altitude and speed in a circular high-roofed tunnel thanks to the optic flow

    The new robot we have developed, called BeeRotor, is a tandem rotorcraft that mimics optic-flow-based behaviours previously observed in flies and bees. This tethered miniature robot (80 g), which is autonomous in terms of its computational power requirements, is equipped with a 13.5-g quasi-panoramic visual system consisting of 4 individual visual motion sensors responding to the optic flow generated by photographs of natural scenes, thanks to the bio-inspired "time of travel" scheme. Based on recent findings on insects' sensing abilities and control strategies, the BeeRotor robot was designed to use optic flow to perform complex tasks such as ground and ceiling following, while also automatically adjusting its forward speed on the basis of the ventral or dorsal optic flow. In addition, the BeeRotor robot can perform tricky manoeuvres such as automatic ceiling docking simply by regulating its dorsal or ventral optic flow in a high-roofed tunnel depicting natural scenes. Although it was built as a proof of concept, the BeeRotor robot is one step further towards a fully autonomous micro-helicopter capable of navigating mainly on the basis of the optic flow.
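    The optic-flow regulation scheme the abstract describes reduces, in its simplest form, to a one-line feedback law. Below is a minimal, hypothetical sketch (not the BeeRotor firmware): the translational flow seen by a downward-looking sensor, omega = v/h, is held at a setpoint by adjusting altitude. The gain, setpoint and time step are illustrative placeholders.

```python
# Minimal sketch of ventral optic-flow regulation, assuming a downward
# sensor over flat ground: omega = v / h. Climbing lowers the flow,
# descending raises it, so a proportional loop drives omega to a setpoint.
def ventral_flow(v_forward: float, height: float) -> float:
    """Translational optic flow (rad/s) seen looking straight down."""
    return v_forward / max(height, 1e-6)

def regulate_altitude(height, v_forward, omega_set=2.0, k_p=0.5, dt=0.02):
    """One proportional step: climb when the flow exceeds the setpoint."""
    omega = ventral_flow(v_forward, height)
    error = omega - omega_set            # positive -> ground "looms" too fast
    height += k_p * error * height * dt  # climbing raises h, lowering omega
    return height, omega

# Closed-loop toy run: altitude settles where omega equals the setpoint,
# i.e. h -> v / omega_set (here 3.0 / 2.0 = 1.5 m).
h, v = 1.0, 3.0
for _ in range(500):
    h, omega = regulate_altitude(h, v)
print(f"height {h:.2f} m, ventral flow {omega:.2f} rad/s")
```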

    A direct visual servoing scheme for automatic nanopositioning.

    This paper demonstrates an accurate nanopositioning scheme based on a direct visual servoing process. The technique uses only the pure image signal (photometric information) to design the visual servoing control law. In contrast to traditional visual servoing approaches that use geometric visual features (points, lines, ...), the visual feature used in the control law is the pixel intensity. The proposed approach was tested for accuracy and robustness under several experimental conditions. The results demonstrated good behaviour of the control law and very good positioning accuracy: 89 nm, 14 nm and 0.001 degrees along the x axis, the y axis and the rotation axis of the positioning platform, respectively.
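    A direct (photometric) visual servoing step of the general kind described here takes the raw intensity error as the feature vector and maps it to a camera velocity through the pseudo-inverse of a photometric interaction matrix. The sketch below is a generic illustration, not this paper's implementation; L_I is a random stand-in, whereas a real one is built from image gradients and camera intrinsics.

```python
# Minimal sketch of one direct visual servoing iteration: the error is
# the per-pixel intensity difference, and the command is a damped
# pseudo-inverse control law v = -lambda * pinv(L_I) @ (I - I*).
import numpy as np

def control_step(I, I_star, L_I, lam=0.1):
    """Return a 6-DoF camera velocity from a photometric error."""
    e = (I - I_star).ravel()            # one error entry per pixel
    return -lam * np.linalg.pinv(L_I) @ e

# Toy usage with a hypothetical 32x32 image and random interaction matrix.
rng = np.random.default_rng(0)
I, I_star = rng.random((32, 32)), rng.random((32, 32))
L_I = rng.standard_normal((32 * 32, 6))
print(control_step(I, I_star, L_I))
```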

    A Robust Docking Strategy for a Mobile Robot Using Flow Field Divergence


    A Comparison of Three Methods for Measure of Time to Contact

    Time to Contact (TTC) is a biologically inspired method for obstacle detection and reactive motion control that requires neither scene reconstruction nor 3D depth estimation. Estimating TTC is difficult because it requires a stable and reliable estimate of the rate of change of distance between image features. In this paper we propose a new method to measure time to contact, Active Contour Affine Scale (ACAS). We compare ACAS experimentally and analytically with two other recently proposed methods: Scale Invariant Ridge Segments (SIRS) and Image Brightness Derivatives (IBD). Our results show that ACAS provides a more accurate estimate of TTC when the image flow can be approximated by an affine transformation; SIRS provides an estimate that is generally valid but not always as accurate as ACAS; and IBD systematically over-estimates time to contact.
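    All three methods ultimately estimate the same quantity: if s(t) is the apparent image scale of an approaching object, then TTC = s / (ds/dt). The sketch below illustrates that relation on synthetic scale samples; in ACAS, s would come from the affine scale of an active contour.

```python
# Minimal sketch of scale-based time-to-contact: TTC = s / (ds/dt),
# using a finite difference of two scale measurements dt seconds apart.
def ttc_from_scale(s_prev: float, s_curr: float, dt: float) -> float:
    ds_dt = (s_curr - s_prev) / dt
    if ds_dt <= 0:
        return float("inf")   # receding or static: no contact predicted
    return s_curr / ds_dt

# Sanity check: an object at distance Z approached at speed v has image
# scale s ~ 1/Z, so the estimate should be about Z/v = 10/2 = 5 seconds.
Z, v, dt = 10.0, 2.0, 0.1
s = lambda t: 1.0 / (Z - v * t)
print(ttc_from_scale(s(0.0), s(dt), dt))  # ~ 5 s
```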

    Using robust estimation for visual servoing based on dynamic vision

    The aim of this article is to achieve accurate visual servoing tasks when neither the shape of the observed object nor the final image is known. More precisely, we want to control the orientation of the tangent plane at a certain point on the object corresponding to the centre of a region of interest, and to move this point to the principal point to fulfil a fixation task. To do so, we perform a 3D reconstruction phase during the servoing, based on the measurement of the 2D displacement in the region of interest and of the camera velocity. Since the 2D displacement depends on the scene, we introduce a unified motion model that handles planar as well as non-planar objects. This model is, however, only an approximation, so we propose to use robust estimation techniques, together with a 3D reconstruction based on a discrete approach. Experimental results compare both approaches.
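    The usual way to make such a parametric motion model tolerant of the points where its approximation breaks down is an M-estimator solved by iteratively reweighted least squares. The sketch below shows that generic machinery (not this paper's exact estimator): A and b stand for a hypothetical linear system with one row per image measurement and x the motion parameters.

```python
# Minimal sketch of robust model fitting by IRLS with a Tukey biweight:
# measurements that disagree with the current fit get down-weighted,
# and gross outliers (beyond the cutoff) get zero weight.
import numpy as np

def tukey_weights(r, c=4.685):
    s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale via the MAD
    u = r / (c * s)
    w = (1 - u**2) ** 2
    w[np.abs(u) >= 1] = 0.0                    # reject clear outliers
    return w

def irls(A, b, n_iter=20):
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS start
    for _ in range(n_iter):
        sw = np.sqrt(tukey_weights(b - A @ x)) # sqrt weights scale LS rows
        x = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)[0]
    return x

# Toy usage: recover a line's parameters despite 20% gross outliers.
rng = np.random.default_rng(1)
A = np.c_[rng.uniform(0, 1, 100), np.ones(100)]
b = A @ np.array([2.0, -1.0]) + 0.01 * rng.standard_normal(100)
b[:20] += 5.0                                  # corrupt 20 measurements
print(irls(A, b))                              # ~ [2, -1]
```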

    Estimation du Temps Ă  Collision en Vision Catadioptrique

    Time to contact or time to collision (TTC) is information of the utmost importance for animals as well as for mobile robots, because it enables them to avoid obstacles; it is a convenient way to analyse the surrounding environment. TTC estimation has been studied extensively for perspective images, but those methods do not carry over directly to omnidirectional cameras and must be adapted, because of the distortions of the images they produce. Although many works have shown the interest of omnidirectional cameras for robotic applications such as localization, motion estimation and monitoring, few use omnidirectional images to compute the TTC. This thesis shows that TTC can also be estimated on catadioptric images, which are attractive in robotics because they provide a panoramic field of view at every instant. We propose to exploit, explicitly or implicitly, the optical flow computed on omnidirectional images after a de-rotation step, and we show that the double projection of a 3D point onto the mirror and then onto the camera plane leads to new TTC formulations for catadioptric cameras. The first formulation, a global gradient-based method ("gradient based TTC"), expresses the apparent motion as a function of the TTC and the planar-surface equation as a function of the image coordinates and of the surface normal, and integrates both into the optical flow constraint equation; it is simple and fast, needs no explicit optical flow estimate, and additionally recovers the inclination of the planar surface, but it cannot provide a TTC at each pixel, is valid only for para-catadioptric sensors, applies only to planar surfaces, and requires an initial segmentation of the obstacle. The second formulation, a local method ("TTC map estimation based on optical flow"), uses the apparent motion explicitly to estimate the TTC at every pixel of the image, yielding a time-to-collision map of the environment for obstacles of any shape in any direction, and is valid for all central (single-viewpoint) catadioptric sensors. Results and comparisons on synthetic and real images are given.
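    The per-pixel idea behind the second method can be illustrated with a conventional radial-flow model: after de-rotation, translation produces flow radiating from the focus of expansion, and TTC at a pixel is its radial distance divided by its radial flow. The sketch below uses that simplified pinhole model, not the full catadioptric projection (which requires the mirror model); flow, foe and the units (frames) are illustrative.

```python
# Minimal sketch of a per-pixel TTC map from de-rotated optical flow,
# assuming pure translation: TTC(x) = r(x) / (dr/dt)(x), where r is the
# pixel's distance to the focus of expansion and dr/dt the radial flow.
import numpy as np

def ttc_map(flow, foe):
    """flow: (H, W, 2) flow in px/frame; foe: (x, y) focus of expansion."""
    H, W, _ = flow.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    rx, ry = xs - foe[0], ys - foe[1]          # radial vector from the FOE
    r = np.sqrt(rx**2 + ry**2) + 1e-9
    radial = (flow[..., 0] * rx + flow[..., 1] * ry) / r
    return r / np.maximum(radial, 1e-9)        # TTC in frames
```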

    Controlo visual de robĂ´s manipuladores

    Doctoral thesis in Mechanical Engineering presented to the Instituto Superior Técnico, Universidade Técnica de Lisboa. This thesis addresses the visual control of robotic manipulators, i.e. visual servoing. The state of the art on the subject is presented, together with the computer vision tools needed for its implementation. Six contributions to visual servoing are presented: the development of an experimental apparatus, two dynamic visual servoing controllers, the application of fuzzy filters to kinematic visual servoing, the fuzzy modelling of the robot-camera system, and fuzzy control based on the inverse model. The experimental apparatus has three components, namely a planar robotic manipulator with two degrees of freedom, a 50 Hz vision system, and the software developed to control and interconnect the two previous components; it allowed the real-time experimental validation of the controllers proposed in this thesis. Dynamic visual servoing drives the robot joint actuators directly, in contrast to kinematic visual servoing, which generates the joint velocities needed to drive the robot by means of an inner velocity control loop. The first contribution to dynamic visual servoing is an image-based control law developed specifically for the robot of the experimental apparatus in the eye-in-hand configuration. The second contribution is a position-based control law for the eye-in-hand configuration, applicable to robots with more than two degrees of freedom. Asymptotic stability is demonstrated for both controllers. The application of fuzzy logic to image-based kinematic visual servoing yielded three contributions. With the application of fuzzy filters to path planning and to regulation control, the overall performance of visual servoing is improved: the robot joint velocities decrease at the initial control steps, and their oscillatory behaviour is attenuated when the vision sample time is high. The inverse model of the robot-camera system is obtained by means of fuzzy modelling, and a practical methodology for obtaining this model is presented. The fuzzy inverse model is used directly as the controller of the robot-camera system, delivering the joint velocities needed to drive the robot to the desired position. A fuzzy compensator was also used to compensate for possible mismatches between the obtained model and the real robot-camera system. Funding: Fundação para a Ciência e a Tecnologia.
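    The inverse-model control scheme described above can be illustrated with a generic zero-order Takagi-Sugeno model: rules with Gaussian antecedents over the image-space error blend constant joint-velocity consequents. This is a hypothetical sketch of the general technique, not the thesis's identified model; the centres, widths and consequents below are placeholders that would in practice be identified from robot-camera data.

```python
# Minimal sketch of a zero-order Takagi-Sugeno fuzzy inverse model used
# as a controller: input is an image-error vector, output is a joint
# velocity computed as a firing-strength-weighted average of rule outputs.
import numpy as np

def ts_inverse_model(e, centers, widths, consequents):
    """e: image error; returns the fuzzy-blended joint velocity command."""
    d2 = ((e - centers) / widths) ** 2        # (n_rules, n_inputs)
    w = np.exp(-0.5 * d2.sum(axis=1))         # Gaussian rule firing strengths
    return (w[:, None] * consequents).sum(axis=0) / (w.sum() + 1e-12)

# Toy usage: 3 rules, 2 image-error inputs, 2 joint velocities out.
centers = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])
widths = np.ones((3, 2))
consequents = np.array([[0.5, -0.2], [0.0, 0.0], [-0.5, 0.2]])
print(ts_inverse_model(np.array([0.3, -0.1]), centers, widths, consequents))
```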