34 research outputs found

    Visual Servoing Schemes for Automatic Nanopositioning Under Scanning Electron Microscope.

    This paper presents two visual servoing approaches for nanopositioning in a scanning electron microscope (SEM). The first approach uses the total pixel intensities of an image as visual measurements for designing the control law; the positioning error and the platform control are directly linked to the intensity variations. The second approach is a frequency-domain method that uses the Fourier transform to compute the relative motion between images. Here, the control law is designed to minimize the error, i.e., the 2D motion between the current and desired images, by controlling the movement of the positioning platform. Both methods are validated under different experimental conditions on a task of positioning silicon microparts with a piezo-positioning platform. The obtained results demonstrate the efficiency and robustness of the developed methods.
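    The frequency-domain method described above can be illustrated with a minimal phase-correlation sketch (a standard technique; the paper's actual control law and SEM imaging pipeline are not reproduced here, and the synthetic texture is illustrative):

```python
import numpy as np

def phase_correlation_shift(ref, cur):
    """Estimate the integer-pixel translation taking `cur` to `ref`
    from the peak of the normalized cross-power spectrum."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past half the image size wrap around to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, ref.shape))

# synthetic check: a random texture shifted by (3, -5) pixels
rng = np.random.default_rng(0)
img = rng.random((64, 64))
moved = np.roll(img, shift=(3, -5), axis=(0, 1))
dy, dx = phase_correlation_shift(moved, img)
```

    In a servoing loop, the recovered 2D shift would serve as the error driving the platform toward the desired view.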

    A direct visual servoing scheme for automatic nanopositioning.

    This paper demonstrates an accurate nanopositioning scheme based on a direct visual servoing process. The technique uses only the pure image signal (photometric information) to design the visual servoing control law. In contrast to traditional visual servoing approaches that use geometric visual features (points, lines, etc.), the visual feature used in the control law is the pixel intensity itself. The proposed approach has been tested for accuracy and robustness under several experimental conditions. The obtained results demonstrate good behavior of the control law and very good positioning accuracy: 89 nm, 14 nm, and 0.001 degrees in the x, y, and rotational axes of a positioning platform, respectively.
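    The core of such a photometric control law can be sketched for a toy 2-DoF translation case (the Gaussian scene, gain, and motion model are illustrative, not the paper's setup): the error is the raw intensity difference, and the interaction matrix rows are the negative image gradients.

```python
import numpy as np

# Desired view: a smooth synthetic blob standing in for the image
# at the goal position (purely illustrative scene).
y, x = np.mgrid[0:64, 0:64]
desired = np.exp(-((y - 32.0) ** 2 + (x - 32.0) ** 2) / 60.0)

# Current view: the same scene seen 2 pixels away on the platform.
current = np.roll(desired, 2, axis=0)

# Photometric error: the raw intensity difference, one entry per pixel.
e = (current - desired).ravel()

# Interaction matrix for a pure image-plane translation: each row is
# the negative intensity gradient at that pixel.
gy, gx = np.gradient(current)
L = -np.column_stack([gy.ravel(), gx.ravel()])

lam = 1.0                                  # control gain
v = -lam * (np.linalg.pinv(L) @ e)         # velocity command (rows, cols)
# v[0] comes out negative, i.e. a motion back toward the desired position
```

    No feature extraction or matching is needed: the pixel intensities themselves drive the control.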

    3D Spectral Domain Registration-Based Visual Servoing

    This paper presents a spectral domain registration-based visual servoing scheme that works on 3D point clouds. Specifically, we propose a 3D model/point cloud alignment method, which works by finding a global transformation between reference and target point clouds using spectral analysis. A 3D Fast Fourier Transform (FFT) in R^3 is used for the translation estimation, and real spherical harmonics in SO(3) are used for the rotation estimation. Such an approach allows us to derive a decoupled 6 degrees of freedom (DoF) controller, where we use gradient ascent optimisation to minimise the translational and rotational costs. We then show how this methodology can be used to regulate a robot arm to perform a positioning task. In contrast to existing state-of-the-art depth-based visual servoing methods that require either dense depth maps or dense point clouds, our method works well with partial point clouds and can effectively handle larger transformations between the reference and target positions. Furthermore, the use of spectral data (instead of spatial data) for transformation estimation makes our method robust to sensor-induced noise and partial occlusions. We validate our approach by performing experiments using point clouds acquired by a robot-mounted depth camera. The obtained results demonstrate the effectiveness of our visual servoing approach. (Accepted to the 2023 IEEE International Conference on Robotics and Automation, ICRA'23.)
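    The translation part of such a spectral registration can be sketched by voxelizing the clouds and locating the peak of the 3-D cross-power spectrum (grid size and synthetic cloud are illustrative; the rotation estimate via spherical harmonics is omitted):

```python
import numpy as np

def voxelize(points, grid=32, lo=-1.0, hi=1.0):
    """Bin a point cloud into a binary occupancy volume."""
    idx = np.clip(((points - lo) / (hi - lo) * grid).astype(int), 0, grid - 1)
    vol = np.zeros((grid, grid, grid))
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol

def fft_translation(ref_vol, tgt_vol):
    """Voxel translation taking tgt_vol to ref_vol, from the peak of
    the 3-D normalized cross-power spectrum."""
    cross = np.fft.fftn(ref_vol) * np.conj(np.fft.fftn(tgt_vol))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifftn(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array([p if p <= s // 2 else p - s
                     for p, s in zip(peak, ref_vol.shape)])

rng = np.random.default_rng(1)
cloud = rng.uniform(-0.7, 0.7, size=(500, 3))        # sparse, partial cloud
step = 2.0 / 32                                      # world units per voxel
moved = cloud + np.array([3, -2, 4]) * step          # translated copy
est = fft_translation(voxelize(moved), voxelize(cloud))
```

    Because the estimate comes from the dominant spectral peak rather than point-to-point matches, it degrades gracefully with sparse or partially occluded clouds.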

    Visual Servoing-Based approach for efficient autofocusing in Scanning Electron Microscope.

    Fast and reliable autofocusing methods are essential for performing automatic nano-object positioning tasks using a scanning electron microscope (SEM). Various autofocusing algorithms have been proposed in the literature that use a sharpness measure to compute the best focus. Most of them are based on iterative search, applying the sharpness function over the entire focus range to find the in-focus image. In this paper, a new, fast, and direct autofocusing method is presented, based on the idea of traditional visual servoing: the focus step is controlled using an adaptive gain. The visual control law is validated using a normalized-variance sharpness function. The obtained experimental results demonstrate the performance of the proposed autofocusing method in terms of accuracy, speed, and robustness.
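    The search principle can be sketched with the normalized-variance score and an adaptive focus step (the contrast-fade blur model and the in-focus position z = 5.0 are purely illustrative stand-ins for the SEM, not the paper's control law):

```python
import numpy as np

def normalized_variance(img):
    """Sharpness score: intensity variance normalized by the mean."""
    return img.var() / (img.mean() + 1e-12)

# Stand-in for the SEM: a fixed texture whose contrast fades with defocus.
rng = np.random.default_rng(0)
texture = rng.random((64, 64))

def capture(z, z_focus=5.0):
    blur = abs(z - z_focus)
    return texture.mean() + (texture - texture.mean()) / (1.0 + blur)

# Hill-climb on the sharpness score with an adaptive step: keep moving
# while sharpness improves, reverse and halve the step on overshoot.
z, step = 2.3, 1.0
s_best = normalized_variance(capture(z))
for _ in range(60):
    s_new = normalized_variance(capture(z + step))
    if s_new > s_best:
        z, s_best = z + step, s_new
    else:
        step = -step / 2.0
```

    Unlike an exhaustive sweep over the whole focus range, the step adapts as the score changes, so far fewer images need to be acquired.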

    Shearlet-based vs. Photometric-based Visual Servoing for Robot-assisted Medical Applications

    This paper deals with the development of a vision-based controller for robot-assisted medical applications. It concerns the use of shearlet coefficients of ultrasound (US) images as visual signal inputs and the design of the associated interaction matrix. The proposed controller was validated both in simulation and on an experimental test bench consisting of a robotic arm holding a US probe in contact with a realistic abdominal phantom. The proposed control scheme was also compared to the photometry-based visual servoing approach in order to evaluate its efficiency under different conditions of use (nominal and unfavorable).

    Positioning accuracy characterization of assembled microscale components for micro-optical benches

    This paper deals with measuring the positioning accuracy of microscale components used in the assembly of Micro-Optical Benches (MOB). The MOB concept is presented to explain how to build optical MEMS based on out-of-plane micro-assembly of microcomponents. The micro-assembly platform is then presented and used to successfully assemble MOBs. This platform includes a laser sensor that enables measurement of a microcomponent's position after its assembly. The measurement set-up and procedure are described and applied to several micro-assembly sets. The measurement system provides results with a maximum deviation of less than +/- 0.005°. Based on this measurement system and micro-assembly procedure, the article shows that positioning errors down to 0.009° can be obtained. These results clearly show that micro-assembly is a viable way to manufacture complex, heterogeneous, 3D optical MEMS with very good optical performance.

    Workshop on "Control issues in the micro / nano - world".

    During the last decade, the need for systems with micro/nanometer accuracy and fast dynamics has grown rapidly. Such systems occur in applications including 1) micromanipulation of biological cells, 2) micro-assembly of MEMS/MOEMS, 3) micro/nanosensors for environmental monitoring, and 4) nanometer-resolution imaging and metrology (AFM and SEM). The scale and requirements of such systems present a number of challenges to control system design that will be addressed in this workshop. Working in the micro/nano-world involves displacements from nanometers to tens of microns. Because of this precision requirement, environmental conditions such as temperature, humidity, and vibration can generate noise and disturbances in the same range as the displacements of interest. So-called smart materials, e.g., piezoceramics, magnetostrictive materials, shape-memory alloys, and electroactive polymers, have been used for actuation or sensing in the micro/nano-world. They allow high-resolution positioning compared to hinge-based systems. However, these materials exhibit hysteresis nonlinearity and, in the case of piezoelectric materials, drift (called creep) in response to constant inputs. In the case of oscillating micro/nano-structures (cantilevers, tubes), these nonlinearities and vibrations strongly degrade performance. Many MEMS and NEMS applications involve gripping, feeding, or sorting operations, where sensor feedback is necessary for their execution. Sensors that are readily available, e.g., interferometers, triangulation lasers, and machine vision, are bulky and expensive. Sensors that are compact and convenient for packaging, e.g., strain gages and piezoceramic charge sensors, have limited performance or robustness.
    To account for these difficulties, new control-oriented techniques are emerging, such as the combination of two or more 'packageable' sensors, the use of feedforward control techniques that do not require sensors, and the use of robust controllers that account for the sensor characteristics. The aim of this workshop is to provide a forum for specialists to present an overview of the different approaches to control system design for the micro/nano-world and to initiate collaborations and joint projects.
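    The hysteresis mentioned above is commonly modeled from backlash ("play") operators, the building block of Prandtl-Ishlinskii models for piezo actuators; a minimal sketch (the radius and the triangular input sweep are illustrative):

```python
def play_operator(inputs, r, y0=0.0):
    """Backlash (play) operator of radius r: the output follows the input
    only once the input has moved more than r away from the output."""
    y, out = y0, []
    for u in inputs:
        y = min(max(y, u - r), u + r)   # clamp output into [u - r, u + r]
        out.append(y)
    return out

# A triangular voltage sweep: the rising and falling branches differ,
# which is the hysteresis loop observed on piezoelectric actuators.
loop = play_operator([0, 1, 2, 3, 2, 1, 0], r=1)
```

    At input 1 the output is 0 on the way up but 2 on the way down; weighted sums of such operators with different radii reproduce measured piezo loops.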

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, applications of robots in industrial automation have increased considerably, and there is growing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although visual servoing has advanced significantly, some challenges remain in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto the 2D image, which occurs in the camera, creates one source of uncertainty; another lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera is another issue influencing control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots in which the adaptive law deals with the uncertainties of the monocular camera in the eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method can increase the system response speed and improve the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle situations where the image features leave the camera's FOV.
    The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS, but also ensures that servoing succeeds in the case of feature loss. Next, to deal with external disturbances and uncertainties due to the depth of the features, a third control method is designed that combines proportional-derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, and the SMC handles the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth method, semi-offline trajectory planning, is developed to perform IBVS tasks with a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles, and the profile parameters are determined such that the velocity profile takes the robot to its desired position; this is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation caused by the camera's FOV. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
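    The feature-reconstruction idea can be sketched with a constant-velocity Kalman filter per image feature (the noise levels, time step, and straight-line feature track are illustrative assumptions, not the thesis' exact model):

```python
import numpy as np

class FeatureKF:
    """Constant-velocity Kalman filter for one image feature (u, v).
    State: [u, v, du, dv]. When the feature leaves the field of view,
    only the prediction step runs, so servoing can continue on the
    reconstructed position."""
    def __init__(self, uv0, dt=1.0):
        self.x = np.array([uv0[0], uv0[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)
        self.Q = np.eye(4) * 1e-3
        self.R = np.eye(2) * 1e-1

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# The feature moves at constant velocity; it is "visible" for the first
# 10 frames, then reconstructed from the motion model alone.
kf = FeatureKF((0.0, 0.0))
true = lambda k: np.array([2.0 * k, 1.0 * k])
for k in range(1, 11):
    kf.predict()
    kf.update(true(k))
for k in range(11, 16):
    est = kf.predict()          # feature outside the FOV: predict only
```

    The reconstructed position `est` stands in for the lost measurement, so the IBVS error can still be computed during the occlusion.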

    MICROCANTILEVER-BASED FORCE SENSING, CONTROL AND IMAGING

    This dissertation presents a distributed-parameters-based modeling framework for microcantilever (MC)-based force sensing and control, with applications to nanomanipulation and imaging. Because MCs are widely used in nanoscale force sensing and atomic force microscopy, with nano-Newton to pico-Newton force measurement requirements, precise modeling of the involved MCs is essential. Along this line, a distributed-parameters modeling framework is proposed, followed by a modified robust controller with perturbation estimation to target the problem of delay in nanoscale imaging and manipulation. It is shown that the proposed nonlinear model-based controller can stabilize such a nanomanipulation process in a very short time compared to available conventional methods. Such modeling and control development could pave the way towards MC-based manipulation and positioning. The first application of the MC-based force sensors (here a piezoresistive MC) in this dissertation is MC-based mass sensing, with applications to the detection of biological species. MC-based sensing has recently attracted extensive interest in many chemical and biological applications due to its sensitivity, wide applicability, and low cost. By measuring the stiffness of MCs experimentally, the effect of adsorption of target molecules can be quantified. To measure an MC's stiffness, an in-house nanoscale force sensing setup is designed and fabricated that utilizes a piezoresistive MC to measure the force acting on the MC's tip with nano-Newton resolution. In the second application, the proposed MC-based force sensor is utilized to achieve fast-scan, laser-free atomic force microscopy (AFM). Tracking control of piezoelectric actuators in various applications, including scanning probe microscopes, is limited by sudden step discontinuities within time-varying continuous trajectories.
    For this, a switching control strategy is proposed for effective tracking of such discontinuous trajectories. A new spiral path planning method is also proposed, which improves the scanning rate of the AFM. Implementation of the proposed modeling and controller in a laser-free AFM setup yields high-quality images of surfaces with stepped topographies at frequencies up to 30 Hz. As the last application of the MC-based force sensors, a nanomanipulator (the MM3A®) is utilized for nanomanipulation purposes. Control and manipulation at the nanoscale have recently received widespread attention in technologies such as fabricating electronic chipsets, testing and assembly of MEMS and NEMS, and micro-injection and manipulation of chromosomes and genes. To overcome the lack of a position sensor on this particular manipulator, a fused vision-force feedback robust controller is proposed. The effects of the image and force feedbacks are individually discussed and analyzed for use in the developed fused vision-force feedback control framework, in order to achieve ultra-precise positioning and optimal performance.
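    The stiffness-to-force relation underlying such sensing follows rectangular-beam theory, k = E·w·t³ / (4·L³), with tip force F = k·δ; a sketch with illustrative silicon dimensions (not the dissertation's actual MC):

```python
# Rectangular cantilever spring constant from Euler-Bernoulli beam theory:
#   k = E * w * t**3 / (4 * L**3)
# The numbers below are illustrative, typical of piezoresistive MCs.
E = 169e9        # Young's modulus of silicon, Pa
L = 300e-6       # cantilever length, m
w = 30e-6        # width, m
t = 1e-6         # thickness, m

k = E * w * t**3 / (4 * L**3)        # stiffness, N/m (~0.047 N/m here)
force = k * 10e-9                    # force for a 10 nm tip deflection, N
```

    With these dimensions a 10 nm deflection corresponds to a sub-nano-Newton force, which is why such soft cantilevers reach the nano- to pico-Newton range quoted above.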