170 research outputs found

    Tracking with a pan-tilt-zoom camera for an ACC system

    In this paper, visual perception of the frontal view in intelligent cars is considered. A Pan-Tilt-Zoom (PTZ) camera is used to track preceding vehicles. The aim of this work is to keep the rear-view image of the target vehicle stable in scale and position. An efficient real-time tracking algorithm is integrated; it is a generic and robust approach, particularly well suited to detecting scale changes. The camera rotations and zoom are controlled by visual servoing. The methods presented here were tested on real road sequences within the VELAC demonstration vehicle, and experimental results show the effectiveness of the approach. Future work lies in the development of a visual sensor combining a PTZ camera and a standard camera: the standard camera has a small focal length and is devoted to analysis of the whole frontal scene, while the PTZ camera gives a local view of this scene to increase sensor range and precision.
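The abstract describes servoing the camera's pan, tilt, and zoom so that the target stays centred and constant in apparent size. A minimal sketch of that idea, assuming a simple proportional law and an illustrative measurement interface (the gains, data types, and sign conventions here are assumptions, not the control law used in the paper):

```python
# Hedged sketch: proportional visual servoing of a PTZ camera that keeps a
# tracked vehicle centred and at constant apparent size in the image.
from dataclasses import dataclass

@dataclass
class Target:
    u: float      # horizontal position of target centre in the image (pixels)
    v: float      # vertical position of target centre (pixels)
    width: float  # apparent width of the target (pixels)

def ptz_velocity_command(t: Target, img_w: float, img_h: float,
                         ref_width: float,
                         k_pan=0.002, k_tilt=0.002, k_zoom=0.5):
    """Return (pan_rate, tilt_rate, zoom_rate) driving the image error to zero."""
    e_u = t.u - img_w / 2          # centring error along the image x-axis
    e_v = t.v - img_h / 2          # centring error along the image y-axis
    e_s = ref_width / t.width - 1  # scale error (> 0 means target too small)
    return (-k_pan * e_u, -k_tilt * e_v, k_zoom * e_s)

# Target has drifted off-centre and shrunk: zoom rate comes out positive (zoom in).
pan, tilt, zoom = ptz_velocity_command(Target(u=400, v=240, width=80),
                                       img_w=640, img_h=480, ref_width=100)
```

In a real system these rates would be sent to the PTZ head each frame, closing the loop through the tracker that re-measures `u`, `v`, and `width`.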

    Multi-Focal Visual Servoing Strategies

    Multi-focal vision provides two or more vision devices with different fields of view and measurement accuracies. A main advantage of this concept is the flexible allocation of these sensor resources according to the current situational and task-performance requirements. In particular, vision devices with large fields of view and low accuracies can be used…

    Challenges in flexible microsystem manufacturing: fabrication, robotic assembly, control, and packaging

    Microsystems have been investigated with renewed interest for the last three decades because of the emerging development of microelectromechanical system (MEMS) technology and the advancement of nanotechnology. Applications of microrobots and distributed sensors have the potential to revolutionize micro- and nano-manufacturing, and have other important health applications in drug delivery and minimally invasive surgery. One class of microrobots studied in this thesis, such as the Solid Articulated Four Axis Microrobot (sAFAM), is driven by MEMS actuators, transmissions, and end-effectors realized by 3-dimensional MEMS assembly. Another class studied here, like those competing in the annual IEEE Mobile Microrobot Challenge (MMC), is untethered and driven by external fields, such as magnetic fields generated by a focused permanent magnet. A third class of microsystems studied in this thesis comprises distributed MEMS pressure sensors for robotic-skin applications, manufactured in the cleanroom and packaged in our lab. In this thesis, we discuss the typical challenges associated with the fabrication, robotic assembly, and packaging of these microsystems. For sAFAM, we discuss challenges arising from pick-and-place manipulation under microscopic closed-loop control, as well as the bonding and attachment of silicon MEMS microparts. For the MMC, we discuss challenges arising from cooperative manipulation of microparts that advances the capabilities of magnetic micro-agents. Custom microrobotic hardware configured and demonstrated during this research (such as the NeXus microassembly station) includes micro-positioners, microscopes, and controllers driven via LabVIEW. Finally, we discuss challenges arising in distributed sensor manufacturing: we describe sensor fabrication steps using clean-room techniques on Kapton flexible substrates, and present results of the lamination, interconnection, and testing of such sensors.

    Robotic Mobile Manipulation Experiments at the U.S. Army Maneuver Support Center


    Visual servoing and control of the focal length (Asservissement visuel et commande de la distance focale)

    This article presents the application of now-classical visual servoing techniques to the case where one of the control variables is the focal length. More precisely, we assume that a velocity-controlled zoom is available in addition to the six other controlled motions of the camera. We consider the problem of four points forming a square, and define two functions to be regulated: one interpreted in terms of centering, the other associated with the overall size of the target in the image. We then propose simple nonlinear control laws and study their theoretical behavior. We also show the existence of two solutions, and conclude with some simulation results.
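The two regulated functions described in the abstract can be computed directly from the image coordinates of the four points. A minimal sketch, assuming the centering term is the centroid of the points and the size term is their mean distance from that centroid (the article's exact feature definitions may differ):

```python
# Hedged sketch: the two image features to regulate for four points forming a
# square -- a centring term (centroid) and a global-size term (mean distance
# of the points from the centroid).
import math

def centring_and_size(points, size_ref):
    """points: list of (u, v) image coordinates of the four square corners.
    Returns the centroid (to be driven to the principal point) and the
    size error with respect to a reference size."""
    cu = sum(u for u, _ in points) / len(points)
    cv = sum(v for _, v in points) / len(points)
    size = sum(math.hypot(u - cu, v - cv) for u, v in points) / len(points)
    return (cu, cv), size - size_ref

# A unit square centred at (3, 4), with reference size 1:
centroid, size_err = centring_and_size([(2, 3), (4, 3), (4, 5), (2, 5)], 1.0)
```

A control law would then use camera rotations to drive the centroid to the image centre and the zoom velocity to drive `size_err` to zero, decoupling the two objectives.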

    Visual Servoing

    The goal of this book is to introduce current vision applications by leading researchers around the world, and to offer knowledge that can also be applied widely in other fields. The book collects the main current studies on machine vision worldwide and makes a persuasive case for its applications. The contents demonstrate how machine vision theory is realized in different fields. Beginners will find it easy to follow the developments in visual servoing, while engineers, professors, and researchers can study the chapters and then apply the methods elsewhere.

    Mobile robot navigation using a vision-based approach

    PhD thesis. This study addresses the issue of vision-based mobile robot navigation in a partially cluttered indoor environment using a mapless navigation strategy. The work focuses on two key problems, namely vision-based obstacle avoidance and a vision-based reactive navigation strategy. The estimation of optical flow plays a key role in vision-based obstacle avoidance, yet the current view is that this technique is too sensitive to noise and distortion under real conditions; accordingly, practical applications in real-time robotics remain scarce. This dissertation presents a novel methodology for vision-based obstacle avoidance using a hybrid architecture, which integrates an appearance-based obstacle detection method into an optical flow architecture built on a behavioural control strategy that includes a new arbitration module. This enhances the overall performance of conventional optical-flow-based navigation systems, enabling a robot to move around successfully without experiencing collisions. Behaviour-based approaches have become the dominant methodology for designing robot navigation control strategies. Two different behaviour-based navigation architectures are proposed for the second problem, using monocular vision as the primary sensor together with a 2-D range finder. Both utilize an accelerated version of the Scale Invariant Feature Transform (SIFT) algorithm. The first architecture employs a qualitative control algorithm to steer the robot towards a goal whilst avoiding obstacles, whereas the second employs an intelligent control framework. The latter allows components of soft computing to be integrated into the proposed SIFT-based navigation architecture while conserving the same set of behaviours and system structure as the first architecture. The intelligent framework incorporates a novel distance estimation technique using the scale parameters obtained from the SIFT algorithm.
The technique employs the scale parameters and a corresponding zooming factor as inputs to train a neural network, which then determines the physical distance. Furthermore, a fuzzy controller is designed and integrated into this framework to estimate linear velocity, and a neural-network-based solution is adopted to estimate the steering direction of the robot. As a result, this intelligent approach allows the robot to complete its task successfully, in a smooth and robust manner, without experiencing collisions. MS Robotics Studio software was used to simulate the systems, and a modified Pioneer 3-DX mobile robot was used for real-time implementation. Several realistic scenarios were developed and comprehensive experiments conducted to evaluate the performance of the proposed navigation systems. KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant Feature Transforms, Intelligent framework
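The distance estimator described above maps SIFT scale and zoom factor to a physical distance via a trained neural network. The geometric intuition behind it: under a pinhole model, a feature's image scale grows with focal length (zoom) and shrinks with distance, so distance is roughly proportional to zoom/scale. The sketch below fits that single proportionality constant by least squares on synthetic calibration data; this is a simplified stand-in for the thesis's neural network, and all names and numbers are illustrative assumptions:

```python
# Hedged sketch: calibrate d = c * zoom / scale by least squares, as a
# simplified stand-in for the thesis's neural-network distance estimator.

def calibrate(samples):
    """samples: list of (sift_scale, zoom_factor, true_distance_m) tuples.
    Fit the constant c in d = c * zoom / scale by least squares."""
    xs = [zoom / scale for scale, zoom, _ in samples]
    ds = [d for *_, d in samples]
    return sum(x * d for x, d in zip(xs, ds)) / sum(x * x for x in xs)

def estimate_distance(c, sift_scale, zoom_factor):
    """Predict physical distance from a feature's SIFT scale and the zoom."""
    return c * zoom_factor / sift_scale

# Synthetic calibration data generated with c = 2.0 (illustrative only):
c = calibrate([(4.0, 1.0, 0.5), (2.0, 1.0, 1.0), (1.0, 2.0, 4.0)])
```

A neural network, as used in the thesis, can additionally absorb lens nonlinearities and scale-measurement noise that this single-constant model cannot.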