111 research outputs found
Visual guidance of unmanned aerial manipulators
The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Yet it was only in recent years that research in aerial robotics was mature enough to allow active interactions with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms.
The main objective of this thesis is to formalize the concept of the aerial manipulator and to present guidance methods, based on visual information, that provide such vehicles with autonomous functionalities.
A key competence to control an aerial manipulator is the ability to localize it in the environment.
Traditionally, this localization has required an external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods with on-board sensors, imported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, a handicap for vehicles where size, payload, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, light-weight and high-rate sensors.
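The thesis does not detail its estimator here, but the idea of fusing a high-rate inertial sensor with a slower absolute position reference can be sketched, for one axis, as a simple predictor-corrector. This is a hypothetical illustration only; the gains, update rates, and sensor arrangement are assumptions, not the thesis's design.

```python
def fuse(x0, v0, pos_meas, acc_meas, dt, gain_p=0.2, gain_v=0.1):
    """Predictor-corrector for one axis: integrate the accelerometer at
    every tick, correct with a position fix whenever one is available.
    pos_meas entries are None on ticks without a fix."""
    x, v = x0, v0
    out = []
    for z, a in zip(pos_meas, acc_meas):
        # high-rate prediction from the accelerometer
        x += v * dt + 0.5 * a * dt * dt
        v += a * dt
        if z is not None:
            # low-rate correction; these fixed gains are untuned placeholders
            err = z - x
            x += gain_p * err
            v += gain_v * err / dt
        out.append(x)
    return out
```

A full design would replace the fixed gains with a Kalman filter whose gains follow from the sensor noise models; the prediction/correction structure is the same.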
Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they offer the possibility to satisfy not only mobility requirements but also other tasks simultaneously and hierarchically, prioritizing them depending on their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve platform stability, or increase arm operability.
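The hierarchical control described above has a standard velocity-level formulation: the secondary task is projected into the null space of the primary one, so it can never disturb it. A minimal sketch using the recursive prioritized solution (a textbook form, not necessarily the exact control law derived in the thesis):

```python
import numpy as np

def task_priority_velocity(J1, xd1, J2, xd2):
    """Joint velocities that realise the primary task velocity xd1 exactly,
    and the secondary task xd2 only within the primary task's null space."""
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1          # null-space projector of task 1
    # secondary task solved in the remaining degrees of freedom only
    return J1p @ xd1 + N1 @ np.linalg.pinv(J2 @ N1) @ (xd2 - J2 @ J1p @ xd1)
```

When the secondary task conflicts with the primary one, `J2 @ N1` loses rank and the conflicting component is simply dropped, which is exactly the prioritization behaviour described above.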
The main contributions of this research work are threefold: (1) a localization technique enabling autonomous navigation, specifically designed for aerial platforms with size, payload, and computational restrictions; (2) control commands that drive the vehicle using visual information (visual servoing); and (3) the integration of the visual servo commands into a hierarchical control law that exploits the robot's redundancy to accomplish secondary tasks during flight. These tasks are specific to aerial manipulators and are also provided.
All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
Map-Based Localization for Unmanned Aerial Vehicle Navigation
Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and in GNSS-denied or GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. Many solutions exist for localization in GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not to the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; the amount of data, and the time spent collecting it, are thus reduced because there is no need to re-observe the same areas multiple times. This dissertation proposes a solution for fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available), for UAV navigation in structured environments.
Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed in tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation proposes to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while reducing the number of tracking losses throughout the image sequence.
It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model-tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance. Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy over ViSP. The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and achieved 2 cm positional accuracies in large indoor environments where no result was attainable with ViSP.
The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame; initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method uses vertical line matching to register the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%, while the number of incorrect matches was reduced by 80%.
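Geometric hashing itself is a classical technique: model features are stored in a hash table indexed by basis-invariant coordinates, and recognition votes for (model, basis) pairs. Below is a minimal 2D, similarity-invariant sketch with point features. The dissertation matches vertical lines against model views; this toy version only illustrates the voting mechanism, and the quantization step `q` is an arbitrary choice.

```python
from collections import defaultdict

def basis_coords(p, o, b):
    """Coordinates of p in the frame with origin o, first axis e1 = b - o
    and second axis perpendicular to e1; invariant to similarity transforms."""
    e1 = (b[0] - o[0], b[1] - o[1])
    e2 = (-e1[1], e1[0])
    d = e1[0] * e1[0] + e1[1] * e1[1]
    v = (p[0] - o[0], p[1] - o[1])
    return ((v[0] * e1[0] + v[1] * e1[1]) / d,
            (v[0] * e2[0] + v[1] * e2[1]) / d)

def build_table(model, q=0.05):
    """Offline: index every model point by every ordered basis pair."""
    table = defaultdict(list)
    for i, o in enumerate(model):
        for j, b in enumerate(model):
            if i == j:
                continue
            for k, p in enumerate(model):
                if k in (i, j):
                    continue
                a, c = basis_coords(p, o, b)
                table[(round(a / q), round(c / q))].append((i, j))
    return table

def vote(table, scene, oi, bi, q=0.05):
    """Online: pick a scene basis pair and vote for matching model bases."""
    votes = defaultdict(int)
    o, b = scene[oi], scene[bi]
    for k, p in enumerate(scene):
        if k in (oi, bi):
            continue
        a, c = basis_coords(p, o, b)
        for basis in table.get((round(a / q), round(c / q)), []):
            votes[basis] += 1
    return votes
```

A winning (model, basis) hypothesis is then typically verified by re-projecting the model, which is where accuracy figures like those above come from.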
Vision-Based Control of Flexible Robot Systems
This thesis covers the control of flexible robot systems using a camera as a measurement device. To this end, the estimation of the dynamic state variables of a flexible-link robot from camera measurements is examined, an algorithm for estimating these state variables is proposed, and the approach is tested on two flexible-link application examples.
Flexible robots can exhibit very complex dynamic behavior during operation, which can lead to induced vibrations. Since the vibrations and their derivatives are not all measurable, the estimation of the state variables plays a significant role in the state-feedback control of flexible-link robots. A vision sensor (i.e., a camera) provides a contact-less way to measure the deflection of a flexible robot arm. Using a vision sensor, however, introduces new effects such as limited accuracy and time delay, which are the main inherent problems of applying vision sensors in this context. These effects and related compensation approaches are studied in this thesis. An indirect method for sensing link deflection (i.e., the system states) is presented. It uses a vision system consisting of a CCD camera and an image processing unit.
The main purpose of this thesis is to develop an estimation approach that combines suitable, easy-to-realize measurement devices with improved reliability. It includes designing two state estimators: the first for the traditional sensor type (negligible noise and time delay), and the second for the camera measurement, accounting for the dynamic error due to the time delay. The estimation approach is first applied to a single-link flexible robot; the dynamic model of the flexible link is derived using a finite element method. Based on the suggested estimation approach, the first observer estimates the vibrations using a strain gauge (fast and complete dynamics), and the second observer estimates the vibrations using vision data (slow dynamical parts). In order to achieve an optimal estimation, a proper combination of the two estimated dynamical parts of the system dynamics is described. The simulation results for the estimations based on vision measurements show that the slow dynamical states can be estimated and that the observer can compensate the dynamic errors caused by the time delay. It is also observed that an optimal estimation can be attained by combining the slow dynamical estimated states with those of the fast observer based on strain gauge measurements.
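The principle of compensating the camera's time delay with a model can be illustrated on a single undamped vibration mode: given the delayed measurement and a known modal frequency, the state can be propagated forward in closed form. This is a sketch of the principle only; the observers in the thesis additionally handle noise, damping and multiple modes.

```python
import math

def advance(x, v, omega, dt):
    """Closed-form propagation of an undamped oscillator state (x, v)
    by dt, for the dynamics xdd = -omega**2 * x (phase-space rotation)."""
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    return (c * x + (s / omega) * v,
            -omega * s * x + c * v)

def compensate_delay(x_meas, v_meas, omega, delay):
    """A vision measurement reflects the state 'delay' seconds ago;
    predict it forward to the current time using the oscillator model."""
    return advance(x_meas, v_meas, omega, delay)
```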
Based on the suggested estimation approach, a vision-based control for an elastic ship-mounted crane is designed to regulate the motion of the payload. For the observer and controller design, a linear dynamic model of the elastic ship-mounted crane, incorporating a finite element technique for modeling the flexible link, is employed. In order to estimate the dynamic state variables and the unknown disturbance, two state observers are designed: the first estimates the state variables using camera measurements (augmented Kalman filter), and the second uses potentiometer measurements (PI observer). To realize a multi-model approach for the elastic ship-mounted crane, a variable gain controller and variable gain observers are designed. The variable gain controller generates the damping required to control the system based on the estimated states and the roll angle. Simulation results show that the variable gain observers can adequately estimate the states and the unknown disturbance acting on the payload. It is further observed that the variable gain controller can effectively reduce the payload pendulations. Experiments are conducted using the camera to measure the link deflection of a scaled elastic ship-mounted crane system. The results show that the variable gain controller, based on the combined state observers, mitigated the vibrations of the system and the swinging of the payload.
The material presented above is embedded into an interrelated thesis. A concise introduction to the vision-based control and state estimation problems is given in the first chapter. An extensive survey of available visual servoing algorithms, covering both rigid and flexible robot systems, is also presented. The conclusions of the work and suggestions for future research are provided in the last chapter of this thesis.
From plain visualisation to vibration sensing: using a camera to control the flexibilities in the ITER remote handling equipment
Thermonuclear fusion is expected to play a key role in the energy market during the second half of this century, reaching 20% of the electricity generation by 2100. For many years, fusion scientists and engineers have been developing the various technologies required to build nuclear power stations allowing a sustained fusion reaction. To the maximum possible extent, maintenance operations in fusion reactors are performed manually by qualified workers in full accordance with the "as low as reasonably achievable" (ALARA) principle. However, the option of hands-on maintenance becomes impractical, difficult or simply impossible in many circumstances, such as high biological dose rates. In this case, maintenance tasks will be performed with remote handling (RH) techniques.
The International Thermonuclear Experimental Reactor (ITER), to be commissioned in southern France around 2025, will be the first fusion experiment producing more power from fusion than the energy necessary to heat the plasma. Its main objective is “to demonstrate the scientific and technological feasibility of fusion power for peaceful purposes”. However, ITER represents an unequalled challenge in terms of RH system design, since it will be much more demanding and complex than any other remote maintenance system previously designed.
The introduction of man-in-the-loop capabilities in the robotic systems designed for ITER maintenance would provide useful assistance during inspection, e.g. by providing the operator the ability and flexibility to locate and examine unplanned targets, or during handling operations, e.g. by making peg-in-hole tasks easier. Unfortunately, most transmission technologies able to withstand the very specific and extreme environmental conditions existing inside a fusion reactor are based on gears, screws, cables and chains, which make the whole system very flexible and subject to vibrations. This effect is further increased as structural parts of the maintenance equipment are generally lightweight and slender structures, due to the size of the reactor and its arduous accessibility.
Several methodologies aiming at avoiding or limiting the effects of vibrations on RH system performance have been investigated over the past decade. These methods often rely on the use of vibration sensors such as accelerometers. However, a review of the market shows that there is no commercial off-the-shelf (COTS) accelerometer that meets the very specific requirements for vibration sensing in the ITER in-vessel RH equipment (resilience to a high total integrated dose, high sensitivity). The customisation and qualification of existing products, or the investigation of new concepts, might be considered; however, these options would inevitably involve high development costs.
While an extensive amount of work has been published on the modelling and control of flexible manipulators in the 1980s and 1990s, the possibility to use vision devices to stabilise an oscillating robotic arm has only been considered very recently and this promising solution has not been discussed at length. In parallel, recent developments on machine vision systems in nuclear environment have been very encouraging. Although they do not deal directly with vibration sensing, they open up new prospects in the use of radiation tolerant cameras.
This thesis aims to demonstrate that vibration control of remote maintenance equipment operating in harsh environments such as ITER can be achieved without any extra sensor besides the on-board rad-hardened cameras that will inevitably be used to provide real-time visual feedback to the operators. In other words, it is proposed to consider the radiation-tolerant vision devices as full sensors providing quantitative data that can be processed by the control scheme, and not merely as plain video feedback providing qualitative information. The work conducted within the present thesis has confirmed that methods based on the tracking of visual features from an unknown environment are effective candidates for the real-time control of vibrations. Oscillations induced at the end effector are estimated by exploiting a simple physical model of the manipulator. Using a camera mounted in an eye-in-hand configuration, this model is adjusted using direct measurement of the tip oscillations with respect to the static environment.
The primary contribution of this thesis consists of implementing a markerless tracker to determine the velocity of a tip-mounted camera in an untrimmed environment, in order to stabilise an oscillating long-reach robotic arm. In particular, this method involves modifying an existing online interaction-matrix estimator to make it self-adjusting, and deriving a multimode dynamic model of a flexible rotating beam. An innovative vision-based method using sinusoidal regression to sense low-frequency oscillations is also proposed and tested. Finally, the problem of online estimation of the image-capture delay for visual servoing applications with high dynamics is addressed, and an original approach based on the concept of cross-correlation is presented and experimentally validated.
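The sinusoidal-regression idea is easy to state: at a known (or separately identified) oscillation frequency, fitting amplitude, phase and offset is a linear least-squares problem. A minimal sketch follows; the known-frequency assumption and the 3-parameter model are simplifications, not the exact method proposed in the thesis.

```python
import math

def fit_sinusoid(t, y, omega):
    """Least-squares fit of y ~ A*sin(omega*t) + B*cos(omega*t) + C at a
    known frequency omega, solved via the 3x3 normal equations."""
    cols = [[math.sin(omega * ti) for ti in t],
            [math.cos(omega * ti) for ti in t],
            [1.0] * len(t)]
    # normal equations G p = r, with G the Gram matrix of the columns
    G = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    r = [sum(a * b for a, b in zip(ci, y)) for ci in cols]
    # forward elimination (G is symmetric positive definite here)
    for i in range(3):
        for j in range(i + 1, 3):
            f = G[j][i] / G[i][i]
            G[j] = [gj - f * gi for gj, gi in zip(G[j], G[i])]
            r[j] -= f * r[i]
    # back substitution
    p = [0.0] * 3
    for i in (2, 1, 0):
        p[i] = (r[i] - sum(G[i][j] * p[j] for j in range(i + 1, 3))) / G[i][i]
    return p  # (A, B, C); oscillation amplitude is hypot(A, B)
```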
A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation
The application of laser technologies in surgical interventions has been accepted in the clinical
domain due to their atraumatic properties. In addition to manual application of fibre-guided
lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours
has become established in ENT surgery. However, TLM requires many years of surgical training
for tumour resection in order to preserve the function of adjacent organs and thus preserve the
patient’s quality of life. The positioning of the microscopic laser applicator outside the patient
can also impede a direct line-of-sight to the target area due to anatomical variability and limit
the working space. Further clinical challenges include positioning the laser focus on the tissue
surface, imaging, planning and performing laser ablation, and motion of the target area during
surgery. This dissertation aims to address the limitations of TLM through robotic approaches and
intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no
highly integrated platform for endoscopic delivery of focused laser radiation is available to date.
Likewise, there are no known devices that incorporate scene information from endoscopic imaging
into ablation planning and execution. For focusing of the laser beam close to the target tissue, this
work first presents miniaturised focusing optics that can be integrated into endoscopic systems.
Experimental trials characterise the optical properties and the ablation performance. A robotic
platform is realised for manipulation of the focusing optics. This is based on a variable-length
continuum manipulator. The latter enables movements of the endoscopic end effector in five
degrees of freedom with a mechatronic actuation unit. The kinematic modelling and control of the
robot are integrated into a modular framework that is evaluated experimentally. The manipulation
of focused laser radiation also requires precise adjustment of the focal position on the tissue. For
this purpose, visual, haptic and visual-haptic assistance functions are presented. These support
the operator during teleoperation to set an optimal working distance. Advantages of visual-haptic
assistance are demonstrated in a user study. The system performance and usability of the overall
robotic system are assessed in an additional user study. Analogous to a clinical scenario, the
subjects follow predefined target patterns with a laser spot. The mean positioning accuracy of the
spot is 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser
ablation. Experiments confirm a positive effect of proposed automation concepts on non-contact
laser surgery.
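The kinematics of a variable-length continuum section are commonly described under the constant-curvature assumption. The following sketch of the forward map from arc parameters to tip position is a textbook model, not necessarily the exact parameterisation used by this platform.

```python
import math

def cc_tip(kappa, phi, length):
    """Tip position of one constant-curvature section: kappa = curvature,
    phi = bending-plane angle about the base axis, length = arc length."""
    if abs(kappa) < 1e-9:                            # straight-line limit
        return (0.0, 0.0, length)
    r = 1.0 / kappa
    x_plane = r * (1.0 - math.cos(kappa * length))   # in-plane lateral offset
    z = r * math.sin(kappa * length)                 # height along base axis
    return (x_plane * math.cos(phi), x_plane * math.sin(phi), z)
```

Inverting this map (arc parameters from a desired tip pose) and mapping arc parameters to actuator lengths are the two further steps a controller for such a manipulator typically needs.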
Intraoperative Navigation Systems for Image-Guided Surgery
Recent technological advancements in medical imaging equipment have resulted in
a dramatic improvement of image accuracy, now capable of providing useful information
previously not available to clinicians. In the surgical context, intraoperative
imaging provides a crucial value for the success of the operation.
Many nontrivial scientific and technical problems need to be addressed in order to
efficiently exploit the different information sources nowadays available in advanced
operating rooms. In particular, it is necessary to provide: (i) accurate tracking of
surgical instruments, (ii) real-time matching of images from different modalities, and
(iii) reliable guidance toward the surgical target. Satisfying all of these requisites
is needed to realize effective intraoperative navigation systems for image-guided
surgery.
Various solutions have been proposed and successfully tested in the field of image-guided navigation systems over the last ten years; nevertheless, several problems still arise in most applications regarding the precision, usability and capabilities of existing systems. Identifying and solving these issues represents an urgent scientific challenge.
This thesis investigates the current state of the art in the field of intraoperative
navigation systems, focusing in particular on the challenges related to efficient and
effective usage of ultrasound imaging during surgery.
The main contributions of this thesis to the state of the art are:
Techniques for automatic motion compensation and therapy monitoring applied
to a novel ultrasound-guided surgical robotic platform in the context of
abdominal tumor thermoablation.
Novel image-fusion based navigation systems for ultrasound-guided neurosurgery
in the context of brain tumor resection, highlighting their applicability
as off-line surgical training instruments.
The proposed systems, which were designed and developed in the framework of
two international research projects, have been tested in real or simulated surgical
scenarios, showing promising results toward their application in clinical practice
Development of a Robotic Positioning and Tracking System for a Research Laboratory
Measurement of residual stress using neutron or synchrotron diffraction relies on the accurate alignment of the sample in relation to the gauge volume of the instrument. Automatic sample alignment can be achieved using kinematic models of the positioning system provided the relevant kinematic parameters are known, or can be determined, to a suitable accuracy.
The main problem addressed in this thesis is improving the repeatability and accuracy of sample positioning for strain scanning, through the use of techniques from robotic calibration theory to generate kinematic models of both off-the-shelf and custom-built positioning systems. The approach is illustrated using a positioning system in use on the ENGIN-X instrument at the UK's ISIS pulsed neutron source, comprising a traditional XYZΩ table augmented with a triple-axis manipulator. Accuracies better than 100 microns were achieved for this compound system. Although discussed here in terms of sample positioning systems, these methods are entirely applicable to other moving instrument components such as beam-shaping jaws and detectors.
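Reduced to its simplest form, the calibration idea is to identify kinematic error parameters by least squares from commanded versus externally measured positions. A toy sketch for a single prismatic axis with a scale and offset error follows; real kinematic calibration, as in this thesis, identifies many coupled parameters of the full kinematic model rather than two per axis.

```python
def calibrate_axis(commanded, measured):
    """Least-squares fit of measured ~ scale * commanded + offset for one
    positioner axis; returns (scale, offset)."""
    n = len(commanded)
    sx = sum(commanded)
    sy = sum(measured)
    sxx = sum(c * c for c in commanded)
    sxy = sum(c * m for c, m in zip(commanded, measured))
    scale = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - scale * sx) / n
    return scale, offset
```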
Several factors can lead to inaccurate positioning on a neutron or synchrotron diffractometer. It is therefore essential to validate the positioning accuracy, especially during experiments which require a high level of accuracy. In this thesis, a stereo camera system is developed to monitor the sample and other moving parts of the diffractometer. The camera metrology system is designed to measure the positions of retroreflective markers attached to any object being monitored. A fully automated camera calibration procedure is developed with an emphasis on accuracy. The potential accuracy of this system is demonstrated, and problems that limit accuracy are discussed. It is anticipated that the camera system would be used to correct the positioning system when the error is small, or to notify the user when it is significant.
Workshop on "Robotic assembly of 3D MEMS".
Proceedings of a workshop proposed at IEEE IROS'2007. The increase of MEMS functionalities often requires the integration of the various technologies used for mechanical, optical and electronic subsystems in order to achieve a single system. These different technologies usually have process incompatibilities, so the whole microsystem cannot be obtained monolithically and therefore requires microassembly steps. Microassembly of MEMS based on micrometric components is one of the most promising approaches to achieving high-performance MEMS. Moreover, microassembly also makes it possible to develop suitable MEMS packaging as well as 3D components, although microfabrication technologies are usually able to create only 2D and "2.5D" components. The study of microassembly methods is consequently of high importance for the growth of MEMS technologies. Two approaches are currently being developed for microassembly: self-assembly and robotic microassembly. In the first, the assembly is highly parallel, but the efficiency and the flexibility remain low. The robotic approach has the potential to reach precise and reliable assembly with high flexibility. The proposed workshop focuses on this second approach and surveys the corresponding microrobotic issues. Beyond the microfabrication technologies, performing MEMS microassembly requires micromanipulation strategies, an understanding of microworld dynamics, and attachment technologies. The design and fabrication of the microrobot end-effectors, as well as of the assembled micro-parts, require the use of microfabrication technologies. Moreover, new micromanipulation strategies are necessary to handle and position micro-parts with sufficiently high accuracy during assembly. The dynamic behaviour of micrometric objects also has to be studied and controlled. Finally, after positioning the micro-part, attachment technologies are necessary.
Visual Tracking and Motion Estimation for an On-orbit Servicing of a Satellite
This thesis addresses visual tracking of a non-cooperative as well as a partially cooperative satellite, to enable close-range rendezvous between a servicer and a target satellite. Visual tracking and estimation of the relative motion between a servicer and a target satellite are critical abilities for rendezvous and proximity operations such as repairing and deorbiting. For this purpose, Lidar has been widely employed in cooperative rendezvous and docking missions. Despite its robustness to harsh space illumination, Lidar is heavy, has rotating parts, and consumes more power, thus conflicting with the stringent requirements of satellite design. On the other hand, inexpensive on-board cameras can provide an effective solution, working at a wide range of distances. However, space lighting conditions are particularly challenging for image-based tracking algorithms because of direct sunlight exposure and the glossy surface of the satellite, which creates strong reflections and image saturation, leading to difficulties in tracking procedures. In order to address these difficulties, the relevant literature is examined in the fields of computer vision and satellite rendezvous and docking. Two classes of problems are identified, and solutions, implemented on a standard computer, are provided. Firstly, in the absence of a geometric model of the satellite, the thesis presents a robust feature-based method with prediction capability in case of insufficient features, relying on a point-wise motion model. Secondly, a robust model-based hierarchical localization method is employed to handle the change of image features over a range of distances and to localize an attitude-controlled (partially cooperative) satellite. Moreover, the thesis presents a pose tracking method addressing ambiguities in edge matching, and a pose detection algorithm based on appearance model learning.
For the validation of the methods, real camera images and ground-truth data, generated with a laboratory test bed that reproduces space conditions, are used. The experimental results indicate that camera-based methods provide robust and accurate tracking for the approach of malfunctioning satellites, in spite of the difficulties associated with specularities and direct sunlight. Exceptional lighting conditions associated with the sun angle are also discussed, with the aim of achieving a fully reliable localization system for a given mission.