13 research outputs found

    Modular architecture of the microfactories for automatic micro-assembly.

    The construction of a new generation of MEMS that includes micro-assembly steps in the current microfabrication process is a major challenge. New production means, called micromanufacturing systems, must be developed to perform these assembly steps. The classical "top-down" approach, which consists of a functional analysis and a definition of the task sequences, is insufficient for micromanufacturing systems: the technical and physical constraints of the microworld (e.g. adhesion phenomena) must be taken into account in order to design reliable micromanufacturing systems. This paper presents a new method for designing micromanufacturing systems. Our approach combines the general "top-down" approach with a "bottom-up" approach that takes technical constraints into account. The method makes it possible to build a modular architecture for micromanufacturing systems. To obtain this modular architecture, we have devised an original technique for identifying modules and a technique for associating them. This work has been used to design the controller of an experimental robotic micro-assembly station.

    Vision Based Automatic Calibration of Microrobotic System

    During the last decade, the advancement of microrobotics has provided a powerful tool for micromanipulation in various fields, including living-cell manipulation, MEMS/MOEMS assembly, and micro-/nanoscale material characterization. Several dexterous micromanipulation systems have been developed and demonstrated. Nowadays, research on micromanipulation has shifted its scope from conceptual system development to industrial applications. Consequently, the future development of this field lies in the industrial applicability of systems that aim to convert micromanipulation techniques into mass-manufacturing processes. To achieve this goal, the automatic microrobotic system, as the core of the process chain, plays a significant role. This thesis focuses on the calibration procedure for positioning control, one of the fundamental issues in automatic microrobotic system development. A novel vision-based procedure for three-dimensional (3D) calibration of micromanipulators is proposed. Two major issues in the proposed calibration approach - vision system calibration and manipulator kinematic calibration - are investigated in detail in this thesis. For the stereo vision measurement system, the calibration principle and algorithm are presented. Additionally, the manipulator kinematic calibration is carried out in four steps: kinematic modeling, data acquisition, parameter estimation, and compensation implementation. The procedures are presented with two typical models: the matrix model and the polynomial model. Finally, verification and evaluation experiments are conducted on the microrobotic fiber characterization platform of the Micro- and Nano Systems Research Group (MST) at Tampere University of Technology. The results demonstrate that the proposed calibration models are able to reduce the prediction error to below 2.59 micrometres. With these models, the pose error, compensated by the feed-forward compensator, can be reduced to less than 5 µm. The proposed approach also demonstrates its feasibility in calibrating decoupled motions, reducing the undesired movement from 28 µm to 8 µm (for a 4800 µm desired movement).
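
The four-step calibration described above can be illustrated with a minimal single-axis sketch, assuming (purely for illustration) that the polynomial model captures the positioning error as a function of the commanded position and that compensation is applied feed-forward by subtracting the predicted error. All data below are synthetic; the function names are not from the thesis.

```python
import numpy as np

def fit_error_model(commanded, measured, degree=2):
    """Parameter estimation: least-squares polynomial fit of error = measured - commanded."""
    return np.polyfit(commanded, measured - commanded, degree)

def compensated_command(coeffs, target):
    """Compensation implementation: subtract the predicted error from the target."""
    return target - np.polyval(coeffs, target)

# Data acquisition (simulated): a hypothetical stage with a small quadratic error.
cmd = np.linspace(0.0, 4800.0, 25)            # commanded positions (um)
meas = cmd + 1e-6 * cmd**2 + 0.002 * cmd      # "measured" positions (um)

coeffs = fit_error_model(cmd, meas)
corrected = compensated_command(coeffs, 2400.0)
reached = corrected + 1e-6 * corrected**2 + 0.002 * corrected
residual = abs(reached - 2400.0)  # far below the uncompensated ~10.6 um error
```

The same scheme extends axis-by-axis to 3D once the stereo vision system supplies the measured positions.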

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperation, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore ways of performing teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD), which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the human subject's frame of reference and that of the displayed image. The advantage of this approach is that users need not wear any device, providing minimal intrusiveness and accommodating the users' eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in military reconnaissance, among other areas. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which involves the motion sensor, projector, cameras, and robotic arm. Given the purpose of the system, the calibration accuracy must be kept within the millimetre level. Follow-up research on the HRD focused on high-accuracy 3D reconstruction of the replica using commodity devices for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a 3D sensing system based on structured-light scanning, with accuracy within 1 millimetre, that is robust to global illumination and surface reflection. Extensive user studies prove the performance of our proposed algorithm. To compensate for the lack of synchronization between the local and remote stations caused by latency introduced during data sensing and communication, a one-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a group of linear equations with a smoothing coefficient ranging from 0 to 1. This predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts, such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image-enhancement framework that utilizes an auxiliary color-plus-depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
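
The one-step-ahead prediction with a smoothing coefficient can be sketched as follows, assuming (as an illustration, not the thesis implementation) that each prediction is a linear blend of the current operator input and the previous prediction, with alpha in [0, 1]:

```python
def predict_next(prev_prediction, current_input, alpha):
    """One-step-ahead prediction: blend current input with the previous prediction."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * current_input + (1.0 - alpha) * prev_prediction

# Feed a stream of (hypothetical) operator inputs through the predictor.
inputs = [0.0, 1.0, 2.0, 3.0, 4.0]
pred = inputs[0]
for u in inputs[1:]:
    pred = predict_next(pred, u, alpha=0.5)
```

At alpha = 1 the predictor passes the input through unchanged; at alpha = 0 it holds its initial value, so alpha trades responsiveness against smoothness, and choosing it by minimizing a cost function follows naturally.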

    Design and Development of Sensor Integrated Robotic Hand

    Most automated systems that use robots as agents employ a few sensors, according to need. However, there are situations where the tasks carried out by the end-effector, or by the robot hand, need multiple sensors. To make the best use of these sensors and behave autonomously, the hand requires a set of appropriate types of sensors integrated in a proper manner. The present research work aims at developing a sensor-integrated robot hand that can collect information related to the assigned tasks, assimilate it correctly, and then perform the task action as appropriate. The development process involves selecting sensors of the right types and specifications, locating them at proper places in the hand, checking their functionality individually, and calibrating them for the envisaged process. Since the sensors need to be integrated so that they perform in the desired manner collectively, an integration platform is created using an NI PXIe-1082. A set of algorithms is developed for achieving the integrated model. The entire process is first modelled and simulated offline for possible modification, in order to ensure that all the sensors contribute towards the autonomy of the hand for the desired activity. This work also involves the design of a two-fingered gripper. The design is made in such a way that it is capable of carrying out the desired tasks and can accommodate all the sensors within its fold. The developed sensor-integrated hand has been put to work and its performance has been tested. This hand can be very useful for part-assembly work in industry for parts of any shape, with a limit on part size in mind. The broad aim is to design, model, simulate and develop an advanced robotic hand. Sensors for picking up contact, pressure, force, torque, position, and surface profile/shape, using suitable sensing elements in the robot hand, are introduced. The hand is a complex structure with a large number of degrees of freedom and multiple sensing capabilities, apart from the associated sensing assistance from other organs. The present work is envisaged to add multiple sensors to a two-fingered robotic hand having motion capabilities and constraints similar to those of the human hand. Although there has been a good amount of research and development in this field during the last two decades, a lot remains to be explored and achieved. The objective of the proposed work is to design, simulate and develop a sensor-integrated robotic hand. Potential applications are proposed for industrial environments and the healthcare field. The industrial applications include electronic assembly tasks, lighter inspection tasks, etc. Applications in healthcare could be in the areas of rehabilitation and assistive techniques. The work also aims to establish the requirements of the robotic hand for the target application areas and to identify the suitable kinds and models of sensors that can be integrated into the hand control system. The functioning of the motors in the robotic hand and the integration of appropriate sensors for the desired motion are explained for the control of the various elements of the hand. Additional sensors, capable of collecting external information and information about the object to be manipulated, are explored. Processes are designed using various software and hardware tools, such as MATLAB for mathematical computation, the OpenCV library, and a LabVIEW 2013 DAQ system, as applicable; they are validated theoretically and finally implemented to develop an intelligent robotic hand. The multiple smart sensors are installed on a standard six-degree-of-freedom industrial robot, a KAWASAKI RS06L articulated manipulator, with either the two-finger pneumatic SCHUNK robotic hand or the designed prototype, and the robot control programs are integrated in a manner that allows easy application of grasping in industrial pick-and-place operations where the characteristics of the object can vary or are unknown. The effectiveness of the recommended structure is proven by experiments involving calibration of the sensors and manipulator. The dissertation concludes with a summary of the contributions and the scope for further work.
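
A minimal sketch of one sensor-driven grasp primitive of the kind described above: close the two fingers in small increments until the force sensor reports contact above a threshold. The callables `read_force` and `step_close`, the threshold, and the simulated finger are all hypothetical stand-ins; no specific DAQ or gripper API is implied.

```python
def close_until_contact(read_force, step_close, force_limit=2.0, max_steps=100):
    """Return True once the contact force reaches force_limit, False on timeout."""
    for _ in range(max_steps):
        if read_force() >= force_limit:
            return True
        step_close()
    return False

# Simulated finger: force stays zero until the 10th closing step, then ramps.
class FakeFinger:
    def __init__(self):
        self.pos = 0
    def read_force(self):
        return max(0.0, (self.pos - 10) * 0.5)
    def step_close(self):
        self.pos += 1

finger = FakeFinger()
grasped = close_until_contact(finger.read_force, finger.step_close)
```

Such a contact-triggered loop is what lets the same grasp routine handle objects whose size and stiffness vary or are unknown.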

    Affordable flexible hybrid manipulator for miniaturised product assembly

    Miniaturised assembly systems are capable of assembling parts a few millimetres in size with an accuracy of a few micrometres. Reducing the size and cost of such a system while increasing its flexibility and accuracy is a challenging issue. The introduction of hybrid manipulation, also called coarse/fine manipulation, within an assembly system is the solution investigated in this thesis. A micro-motion stage (MMS) is designed to be used as the fine positioning mechanism of the hybrid assembly system. MMSs often integrate compliant micro-motion stages (CMMSs) to achieve higher performance than conventional MMSs. CMMSs are mechanisms that transmit an output force and displacement through the deformation of their structure. Although widely studied, the design and modelling techniques for these mechanisms still need to be improved and simplified. Firstly, the linear modelling of CMMSs is evaluated and two polymer prototypes are fabricated and characterised. It is found that polymer-based designs have a low fabrication cost but are not suitable for the construction of a micro-assembly system. A simplified nonlinear model is then derived and integrated within an analytical model, allowing for the full characterisation of the CMMS in terms of stiffness and range of motion. An aluminium CMMS is fabricated based on the optimisation results from the analytical model and is integrated within an MMS. The MMS is controlled using dual-range positioning to achieve a low-cost positioning accuracy better than 2 µm within a workspace of 4.4 × 4.4 mm². Finally, a hybrid manipulator is designed to assemble mobile-phone cameras and sensors automatically. A conventional robot manipulator is used to pick and place the parts in coarse mode, while the aluminium-CMMS-based MMS is used for fine alignment of the parts. A high-resolution vision system is used to locate the parts on the substrate and to measure the relative position of the manipulator above the MMS using a calibration grid with square patterns. The overall placement accuracy of the assembly system is ±24 µm at 3σ and can reach 2 µm, for a total cost of less than £50k, thus demonstrating the suitability of hybrid manipulation for desktop-size miniaturised assembly systems. The precision of the existing system could be significantly improved by making the manipulator stiffer (i.e. preloaded bearings…) and adjustable to compensate for misalignment. Further improvement could also be made to the calibration of the vision system. The system could be scaled either up or down using the same architecture while adapting the controllers to the scale. Engineering and Physical Sciences Research Council (EPSRC).
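
The dual-range scheme can be sketched in one dimension, assuming (for illustration only) a coarse robot that lands within a few tens of micrometres of the target and a fine stage that corrects the measured residual inside its ±2200 µm half-workspace. All motion and measurement functions below are hypothetical stand-ins, not the thesis code.

```python
def dual_range_move(target, coarse_move, fine_move, measure, fine_range=2200.0):
    """Coarse move first, then fine correction of the vision-measured residual."""
    coarse_move(target)
    residual = target - measure()
    if abs(residual) > fine_range:
        raise ValueError("residual exceeds the fine stage workspace")
    fine_move(residual)
    return measure()

# Toy model: the coarse robot misses by a fixed 24 um bias; the fine stage
# applies corrections with 1 um resolution.
state = {"pos": 0.0}
coarse = lambda t: state.__setitem__("pos", t - 24.0)
fine = lambda d: state.__setitem__("pos", state["pos"] + round(d))
final = dual_range_move(1000.0, coarse, fine, lambda: state["pos"])
```

The check against `fine_range` reflects the key design constraint: the coarse stage only has to deliver the part inside the fine stage's small workspace, which is what keeps the overall system cheap.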

    High power laser systems with actuated beam delivery

    Get PDF

    Microoptical multi aperture imaging systems

    The miniaturization of digital single-aperture imaging systems is currently reaching physical and technical limits. Miniaturization reduces both the resolving power and the signal-to-noise ratio. A way out is shown by the principles of the smallest vision systems known in nature: compound eyes. The parallel arrangement of a large number of optical channels makes it possible, despite the small size, to transfer a large amount of information from an extended field of view. The goal is to analyze the advantages of natural compound eyes and to adapt them in order to overcome the current limits of digital camera miniaturization. Through the synergy of optics, optoelectronics, and image processing, miniaturization is pursued while achieving practically relevant parameters. To this end, a systematic classification of known and novel principles of multi-aperture imaging systems was carried out. A fundamental understanding of the advantages, disadvantages, and scaling behaviour of the various approaches enabled the detailed investigation of the two most promising system classes. For the design of the multi-aperture optics, a combination of classical optical-design approaches and new semi-automated ray-tracing simulation and optimization methods was applied. The size of the optics, comparable to that of natural compound eyes, allowed the use of micro-optical fabrication processes at the wafer scale. Prototypes were examined experimentally, and the simulated system parameters were confirmed using measurement methods adapted to the multi-aperture arrangements. The presented solutions demonstrate fundamentally new approaches in the field of high-resolution, miniaturized imaging optics that achieve the smallest overall track lengths for a given resolving power. They are thus able to overcome the scaling limits of single-aperture imaging optics.

    Advanced LIDAR-based techniques for autonomous navigation of spaceborne and airborne platforms

    The main goal of this PhD thesis is the development and performance assessment of innovative techniques for the autonomous navigation of aerospace platforms that exploit data acquired by electro-optical sensors. Specifically, the attention is focused on active LIDAR systems, since they provide a globally higher degree of autonomy than passive sensors. Two areas of research are addressed: the autonomous relative navigation of multi-satellite systems and the autonomous navigation of Unmanned Aerial Vehicles. The overall aim is to provide solutions that improve estimation accuracy, computational load, and overall robustness and reliability with respect to the techniques available in the literature. In the space field, missions like on-orbit servicing and active debris removal require a chaser satellite to perform autonomous orbital maneuvers in close proximity to an uncooperative space target. In this context, a complete pose-determination architecture is proposed that relies exclusively on three-dimensional measurements (point clouds) provided by a LIDAR system, together with knowledge of the target geometry. Customized solutions are envisaged at each step of the pose-determination process (acquisition, tracking, refinement) to ensure an adequate accuracy level while limiting the computational load with respect to other approaches in the literature. Specific strategies are also foreseen to ensure process robustness by autonomously detecting algorithm failures. Performance analysis is carried out by means of a simulation environment conceived to realistically reproduce LIDAR operation, the target geometry, and multi-satellite relative dynamics in close proximity. An innovative method is also presented for designing target-monitoring trajectories that are reliable for on-orbit servicing and active debris removal applications, since they satisfy both safety and observation requirements. The problem of localization and mapping for Unmanned Aerial Vehicles is also tackled, since it is of the utmost importance to provide autonomous safe navigation capabilities in mission scenarios that foresee flights in complex environments, such as GPS-denied or otherwise challenging ones. Specifically, original solutions are proposed for the localization and mapping steps based on the integration of LIDAR and inertial data. Here too, particular attention is paid to computational load and robustness. Algorithm performance is evaluated through offline simulations carried out on experimental data gathered with a purposely conceived setup in an indoor test scenario.
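
The core of pose determination from a point cloud and a known target geometry can be illustrated with the standard SVD-based Kabsch alignment, assuming known point correspondences. This shows only the alignment step; the thesis pipeline (acquisition, tracking, refinement) is more elaborate, and the synthetic data below are illustrative.

```python
import numpy as np

def rigid_align(model, cloud):
    """Return R, t minimizing ||R @ p + t - q|| over corresponding rows (Kabsch)."""
    cm, cc = model.mean(axis=0), cloud.mean(axis=0)
    H = (model - cm).T @ (cloud - cc)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cm
    return R, t

# Synthetic check: rotate a model cloud 30 degrees about z and translate it.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
model = np.random.default_rng(0).normal(size=(50, 3))
cloud = model @ R_true.T + t_true
R_est, t_est = rigid_align(model, cloud)
```

In practice the correspondences are unknown and noisy, which is why the full process wraps an alignment step like this inside acquisition and tracking stages with failure detection.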