
    Highly precise micropositioning task using a direct visual servoing scheme.

    This paper demonstrates a precise micropositioning scheme based on a direct visual servoing process. The technique uses only the pure image signal (photometric information) to design the control law. In contrast to traditional visual servoing approaches that use geometric visual features (points, lines, ...), the visual feature used in the control law is nothing but the pixel luminance. The proposed approach was tested in terms of precision and robustness under several experimental conditions. The results demonstrate good behavior of the control law and very good positioning precision: 89 nanometers, 14 nanometers, and 0.001 degrees along the x axis, the y axis, and the rotation axis of the positioning platform, respectively.
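    As a rough illustration of the idea, the sketch below shows the generic photometric visual servoing law in which the velocity command is computed from the raw intensity error, v = -lambda * L^+ (I - I*). It is a minimal sketch, not the paper's implementation; the interaction matrix L and the numeric setup are placeholders.

```python
# Minimal sketch of one photometric (direct) visual servoing step, assuming a
# precomputed interaction matrix L mapping stage velocities to intensity changes.
# Names and numbers are illustrative, not the paper's implementation.
import numpy as np

def photometric_control_step(I, I_star, L, lam=0.5):
    """One velocity command from the pure-image error e = I - I*.

    I, I_star : (H, W) current and desired grayscale images.
    L         : (H*W, n_dof) interaction matrix linking each pixel's
                intensity variation to the n_dof stage velocities.
    lam       : control gain.
    """
    e = (I - I_star).reshape(-1, 1)      # photometric error, one row per pixel
    v = -lam * np.linalg.pinv(L) @ e     # classic law v = -lambda * L^+ e
    return v.ravel()                     # velocity command for the stage axes

# Toy usage: 3-DOF stage (x, y, theta) with random placeholder data.
H, W, n_dof = 32, 32, 3
rng = np.random.default_rng(0)
L = rng.standard_normal((H * W, n_dof))
I_star = rng.random((H, W))
I = I_star + 0.01 * rng.standard_normal((H, W))
print(photometric_control_step(I, I_star, L))
```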

    Model free visual servoing in macro and micro domain robotic applications

    This thesis explores model-free visual servoing algorithms by experimentally evaluating their performance on various tasks in both the macro and micro domains. Model-free, or so-called uncalibrated, visual servoing needs neither calibration of the system (vision system + robotic system) nor a model of the observed scene, since it estimates the composite (image + robot) Jacobian online. It is robust to parameter changes and disturbances. A model-free visual servoing scheme is tested on a 7-DOF Mitsubishi PA10 robotic arm and on a microassembly workstation developed in our lab. In the macro domain, a new approach for planar shape alignment is presented. The alignment task is performed based on bitangent points acquired from the convex hull of a curve. Both calibrated and uncalibrated visual servoing schemes are employed and compared. Furthermore, model-free visual servoing is used for various trajectory-following tasks (square, circle, sine, etc.), with the reference trajectories generated by a linear interpolator that produces midway targets along them. Model-free visual servoing can provide more flexibility in microsystems, since calibration of the optical system is a tedious and error-prone process and recalibration is required at each focusing level of the optical system. Therefore, micropositioning and three different trajectory-following tasks are also performed in the micro domain. Experimental results validate the utility of model-free visual servoing algorithms in both domains.
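    Online estimation of the composite Jacobian in uncalibrated visual servoing is commonly done with a Broyden-style rank-one update; the sketch below shows that generic estimator together with the resulting control step. It is an assumed, simplified illustration, not necessarily the thesis's exact estimator.

```python
# Generic Broyden-style online estimate of the composite (image + robot)
# Jacobian for model-free visual servoing. Illustrative sketch only.
import numpy as np

def broyden_update(J, dq, ds, alpha=0.1):
    """Rank-one correction of the Jacobian estimate J from one motion sample.

    dq : (n,) joint (or stage) displacement applied at this step.
    ds : (m,) observed change of the image features.
    """
    dq = dq.reshape(-1, 1)
    ds = ds.reshape(-1, 1)
    denom = float(dq.T @ dq)
    if denom < 1e-12:              # no motion, nothing to learn
        return J
    return J + alpha * (ds - J @ dq) @ dq.T / denom

def control_step(J, s, s_star, lam=0.3):
    """Joint velocity driving the image features s toward the reference s_star."""
    e = (s - s_star).reshape(-1, 1)
    return (-lam * np.linalg.pinv(J) @ e).ravel()
```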

    3D reconstruction, classification and mechanical characterization of microstructures

    Modeling and classifying 3D microstructures are important steps in precise micro-manipulation. This thesis explores visual reconstruction and classification algorithms for 3D microstructures used in micromanipulation. Mechanical characterization of microstructures has also been considered. In particular, the visual reconstruction algorithm (shape from focus, SFF) uses a 2D image sequence of a microscopic object captured at different focusing levels to create a 3D range image. The visual classification algorithm then takes the range image as input and applies a curvature-based segmentation method, HK segmentation, which is based on differential geometry. The object is segmented into surface patches according to the curvature of its surface. It is shown that the visual reconstruction algorithm works successfully for synthetic and real image data. The range images are used to classify the surfaces of the micro objects according to their curvatures in the HK segmentation algorithm. Also, a mechanical property characterization technique for cells and embryos is presented. A zebrafish embryo chorion is mechanically characterized using cell boundary deformation. The elastic modulus and developmental stage of the embryo are obtained successfully using visual information. In addition, a calibrated image-based visual servoing algorithm is experimentally evaluated on various tasks in the micro domain. Experimental results on optical system calibration and image-based visual servoing in micropositioning and trajectory-following tasks are presented.
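    Shape from focus typically selects, for each pixel, the frame of the focus stack with the highest local focus measure. The sketch below uses a squared-Laplacian focus energy with a local averaging window; the window size and the particular focus measure are assumptions, not the thesis's exact settings.

```python
# Generic shape-from-focus sketch: per pixel, pick the stack index with the
# largest local focus measure. Illustrative, not the thesis's implementation.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def shape_from_focus(stack, window=9):
    """stack : (N, H, W) grayscale images taken at N focus levels.
    Returns an (H, W) range image of best-focus indices."""
    focus = np.empty(stack.shape, dtype=float)
    for k, img in enumerate(stack):
        lap = laplace(img.astype(float))                    # second-derivative response
        focus[k] = uniform_filter(lap * lap, size=window)   # local focus energy
    return np.argmax(focus, axis=0)                         # depth index per pixel
```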

    A Laser-based Dual-arm System for Precise Control of Collaborative Robots

    Collaborative robots offer increased interaction capabilities at relatively low cost, but in contrast to their industrial counterparts they inevitably lack precision. Moreover, in addition to the robots' own imperfect models, day-to-day operations entail various sources of errors that, despite being small, rapidly accumulate. This happens as tasks change and robots are re-programmed, often requiring time-consuming calibrations. These aspects strongly limit the application of collaborative robots in tasks demanding high precision (e.g. watch-making). We address this problem by relying on a dual-arm system with laser-based sensing to measure relative poses between objects of interest and compensate for pose errors coming from robot proprioception. Our approach leverages prior knowledge of object 3D models in combination with point cloud registration to efficiently extract relevant poses and compute corrective trajectories. This results in high-precision assembly behaviors. The approach is validated in a needle threading experiment, with a 150 µm thread and a 300 µm hole, and in a USB insertion task using two 7-axis Panda robots.
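    The core of registering an observed point cloud against a known 3D model is a best-fit rigid transform; the sketch below shows that step for corresponding point pairs (Kabsch/SVD). Real pipelines such as ICP add correspondence search on top, so this is only a simplified illustration of the registration used to derive corrective poses.

```python
# Best-fit rigid transform between corresponding 3D point sets (Kabsch/SVD).
# Illustrative core of a point cloud registration step, not the paper's pipeline.
import numpy as np

def rigid_transform(P, Q):
    """Return R (3x3), t (3,) minimizing sum ||R @ P_i + t - Q_i||^2."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])      # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```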

    Robotic Micromanipulation and Microassembly using Mono-view and Multi-scale visual servoing.

    This paper investigates sequential robotic micromanipulation and microassembly in order to build 3-D microsystems and devices. A mono-view, multiple-scale 2-D visual control scheme is implemented for that purpose. The imaging system used is a photon video microscope endowed with an active zoom, enabling it to work at multiple scales. It is modelled by a non-linear projective method in which the relation between the focal length and the zoom factor is explicitly established. A distributed robotic system (xy system, z system) with a two-finger gripping system is used in conjunction with the imaging system. The experimental results demonstrate the relevance of the proposed approaches. The tasks were performed with the following accuracy: 1.4 µm for the positioning error and 0.5° for the orientation error.
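    To illustrate the general shape of a projection model whose focal length depends on the zoom factor, the sketch below uses a pinhole model with an assumed polynomial focal-length/zoom relation; the coefficients and principal point are placeholders, not the paper's identified parameters.

```python
# Illustrative pinhole projection with a zoom-dependent focal length.
# The polynomial f(zoom) is an assumed placeholder relation.
import numpy as np

def focal_length(zoom, coeffs=(2.0e3, 1.5e3, 0.0)):
    """Assumed polynomial mapping zoom factor -> focal length (pixels)."""
    return np.polyval(coeffs, zoom)

def project(X_cam, zoom, cx=320.0, cy=240.0):
    """Project 3D camera-frame points (N, 3) to pixel coordinates (N, 2)."""
    f = focal_length(zoom)
    u = f * X_cam[:, 0] / X_cam[:, 2] + cx
    v = f * X_cam[:, 1] / X_cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```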

    Modeling and experimental validation of a parallel microrobot for biomanipulation

    The main purpose of this project is the development of a geometrical model of a commercial micropositioner (SmarPod 115.25, SmarAct GmbH). The SmarPod has parallel kinematics, is vacuum-compatible, and is employed for precise and accurate positioning of samples under a SEM microscope in various applications. Geometrical modeling is the preliminary step toward fully understanding, and possibly improving, the robot's closed-loop behaviour in terms of task precision when the manufacturer does not provide sufficient documentation; in this case the robotic system is a "black box" from which information must be extracted. This step is essential in order to improve, in turn, the reliability of bio-microsystem manipulation and characterization. Having a detailed model of the microrobot becomes essential to cope with the typical lack of sensing at the microscale, as it allows a precise and adequate 3D reconstruction of the manipulation set-up in appropriate software. The roles of Virtual Reality (VR) and of simulation, carried out here in the Blender environment, are also asserted as essential tools in microsystem task planning. Blender is a professional, free and open-source 3D computer graphics package, and it proves to be a basic instrument for validating the microrobot's model, and even for simplifying it in the case of complex system geometries.

    Modeling, simulation and control of microrobots for the microfactory.

    Future assembly technologies will involve higher levels of automation in order to satisfy increased microscale or nanoscale precision requirements. Traditionally, assembly using a top-down robotic approach has been well studied and applied to the microelectronics and MEMS industries, but less so in nanotechnology. With the boom of nanotechnology since the 1990s, newly designed products with new materials, coatings, and nanoparticles are gradually entering everyone's lives, while the industry has grown into a billion-dollar volume worldwide. Traditionally, nanotechnology products are assembled using bottom-up methods, such as self-assembly, rather than top-down robotic assembly. This is due to considerations of volume handling of large quantities of components, and the high cost associated with top-down manipulation requiring precision. However, bottom-up manufacturing methods have certain limitations, such as components needing to have predefined shapes and surface coatings, and the number of assembly components being limited to very few. For example, in the case of self-assembly of nano-cubes with an origami design, post-assembly manipulation of the cubes in large quantities and at reasonable cost is still challenging. In this thesis, we envision a new paradigm for nanoscale assembly, realized with the help of a wafer-scale microfactory containing large numbers of MEMS microrobots. These robots will work together to enhance the throughput of the factory, while their cost will be reduced compared to conventional nanopositioners. To fulfill the microfactory vision, numerous challenges related to design, power, control, and nanoscale task completion by these microrobots must be overcome. In this work, we study two classes of microrobots for the microfactory: stationary microrobots and mobile microrobots. For the stationary microrobots in our microfactory application, we have designed and modeled two different types of microrobots, the AFAM (Articulated Four Axes Microrobot) and the SolarPede. The AFAM is a millimeter-size robotic arm working as a nanomanipulator for nanoparticles with four degrees of freedom, while the SolarPede is a light-powered, centimeter-size robotic conveyor in the microfactory. For mobile microrobots, we have introduced the world's first laser-driven micrometer-size locomotor in dry environments, called the ChevBot, to prove the concept of the motion mechanism. The ChevBot is fabricated using MEMS technology in the cleanroom, followed by a microassembly step. We showed that it can perform locomotion with pulsed laser energy on a dry surface. Based on the knowledge gained with the ChevBot, we refined its fabrication process to remove the assembly step and increase its reliability. We designed and fabricated a steerable microrobot, the SerpenBot, in order to achieve controllable behavior with the guidance of a laser beam. Through modeling and experimental study of the characteristics of this type of microrobot, we proposed and validated a new type of deep learning controller, the PID-Bayes neural network controller. The experiments showed that the SerpenBot can achieve closed-loop autonomous operation on a dry substrate.
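    For context only, the sketch below shows the classical discrete PID step that such a controller builds on; the thesis's PID-Bayes neural network gain adaptation itself is not reproduced here, and the class below is a generic textbook form rather than the thesis's controller.

```python
# Plain discrete PID step, shown as the classical baseline only.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """Return the control output for the current tracking error."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```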

    The optimal use of vision as part of the manipulation of micron-sized objects

    This project concerns the development and integration of a manipulation setup for sub-millimeter objects and is part of a CTI project named "Manipulating Microscale Objects with Nanoscale Precision". As a first step toward manipulating microscale objects, a setup adapted to sub-millimeter objects must be built in order to perform experiments and validate some assumptions and choices. In the Laboratoire de Systèmes Robotiques (LSRO), ultra-high-precision parallel robots are developed, in particular the Delta3, a micromanipulator with three degrees of freedom (XYZ) and a range of 4 mm. This robot might be used within this project. The goal of this project is to develop an interface between the vision system, the robot, and the user that allows measuring the position repeatability of different microscale objects during a manipulation task.
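    One way such an interface could report position repeatability is the ISO 9283-style figure computed from repeated vision-measured positions of the same commanded pose; the sketch below is an illustrative assumption, not the project's actual metric.

```python
# Illustrative positioning-repeatability figure from repeated measurements:
# RP = mean radial deviation from the barycenter + 3 * its standard deviation.
import numpy as np

def positioning_repeatability(positions):
    """positions : (N, 2 or 3) vision-measured positions of the same commanded pose."""
    p = np.asarray(positions, dtype=float)
    centroid = p.mean(axis=0)
    radial = np.linalg.norm(p - centroid, axis=1)   # per-trial deviation
    return radial.mean() + 3.0 * radial.std(ddof=1)
```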

    Force Sensing and Control in Micromanipulation

    Ph.D. thesis (Doctor of Philosophy).

    Microrobotics and Micromechatronics for the Realization of Complex and Precise Micro-Assembly Tasks.

    This document presents a synthesis of my scientific contributions to the fields of microrobotics and micromechatronics, as well as the transfers carried out toward both industry and teaching. The work is oriented toward performing complex, precise, and automated micro-assembly tasks through a microrobotic approach and is applied in particular to MOEMS. The micrometer scale under consideration induces numerous specificities, which translate into a notable deficit of knowledge about the behavior of systems at this scale. For this reason, a first part of the work is dedicated to the study and multiphysics modeling of microrobotic and micromechatronic systems. In a second part of the work, this knowledge led to the proposal of new measurement and actuation principles, and also to the development of complex, instrumented, and integrated microsystems (micro optical bench, microgripper, compliant platforms). Finally, original control laws and assembly strategies were proposed, notably a hybrid dynamic force-position control combining an external hybrid controller with an impedance controller. This makes it possible to master the dynamics of contact/non-contact transitions, which are critical at the micrometer scale, and also to automate complex micro-assembly processes. All of this work has been experimentally validated, allowing the achieved performance to be precisely quantified (positioning accuracy, cycle time, robustness, ...). The perspectives of this work concern compact and integrated microrobotic and micromechatronic systems useful for high-dynamics micro-assembly as well as for the assembly of nanophotonic components.
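    As a generic illustration of the impedance part of a hybrid force-position scheme, the sketch below implements a one-axis impedance relation in which the commanded acceleration makes the contact behave like a mass-spring-damper around a reference; the structure and parameters are assumptions, not the document's controller.

```python
# Minimal one-axis impedance law (illustrative sketch only):
# m*a + b*(v - v_ref) + k*(x - x_ref) = f_ext - f_ref, solved for the
# commanded acceleration a.
def impedance_accel(x, x_ref, v, v_ref, f_ext, f_ref, m=0.01, b=2.0, k=50.0):
    """Return the commanded acceleration for the desired contact impedance."""
    return (f_ext - f_ref - b * (v - v_ref) - k * (x - x_ref)) / m
```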