10 research outputs found
Visual servoing for low-cost SCARA robots using an RGB-D camera as the only sensor
Visual servoing with a simple two-step hand–eye calibration for robot arms in the Selective Compliance Assembly Robot Arm (SCARA) configuration is proposed, along with a method for simple vision-based grasp planning. The proposed approach is designed for low-cost, vision-guided robots,
where tool positioning is achieved by visual servoing using marker tracking and depth information provided by an RGB-D camera, without encoders or any other sensors. The calibration is based on identification of the dominant horizontal plane in the camera field of view, and an
assumption that all robot axes are perpendicular to the identified plane. Along with the plane parameters, one rotational movement of the shoulder joint provides sufficient information for visual servoing. The grasp planning is based on bounding boxes of simple objects detected in
the RGB-D image, which provide sufficient information for robot tool positioning, gripper orientation, and opening width. The developed methods are experimentally tested using a real robot arm. The accuracy of the proposed approach is analysed by measuring positioning accuracy as well as by performing grasping experiments.
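The bounding-box grasp rule summarised above (tool position, gripper orientation, and opening width derived from a detected box) can be sketched roughly as follows. This is an illustrative reconstruction assuming a top-down grasp and a planar object footprint; all function names and parameters are assumptions, not the authors' implementation:

```python
import numpy as np

def plan_grasp_from_bbox(points_xy: np.ndarray, max_opening: float = 0.08):
    """Derive a top-down grasp from the 2-D footprint of a detected object.

    points_xy: (N, 2) object points projected onto the table plane (metres).
    Returns (centre, yaw, opening_width) or None if the object is too wide.
    """
    # PCA of the footprint gives the object's principal axis in the plane.
    centre = points_xy.mean(axis=0)
    cov = np.cov((points_xy - centre).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]      # long side of the box
    minor = eigvecs[:, np.argmin(eigvals)]      # short side -> grasp axis

    # Oriented bounding-box extent along the short axis = required opening.
    proj_minor = (points_xy - centre) @ minor
    width = proj_minor.max() - proj_minor.min()
    if width > max_opening:
        return None                             # object too wide to grasp

    yaw = float(np.arctan2(major[1], major[0])) # gripper aligned with major axis
    opening = min(width + 0.01, max_opening)    # 1 cm clearance, clamped
    return centre, yaw, opening
```

The gripper closes across the short side of the oriented box, which is the standard heuristic for simple convex objects.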
Intelligent exploration of objects by a robotic manipulator
The end goal of this dissertation is to develop an autonomous exploration
robot capable of choosing the Next Best View, the view that reveals the
greatest amount of information about a given volume.
The exploration solution is based on a robotic manipulator, an RGB-D sensor,
and ROS. The manipulator provides movement while the sensor evaluates the
scene in its Field of View. Using an OcTree implementation to reconstruct
the environment, the portions of the defined exploration volume where no
information has been gathered yet are segmented. This segmentation (or
clustering) aids the pose-sampling operation by ensuring that all
generated poses are plausible. Ray casting is performed, based either on the
sensor's resolution or on the characteristics of the unknown scene, to assess the
pose quality. The pose estimated to reveal the largest amount of unknown
space is the one chosen to be visited next, i.e., the Next Best View.
The exploration ends when all unknown voxels have been evaluated or when
those that remain cannot be measured from any reachable pose.
Two case studies are presented to test the performance and adaptability of
this work. The developed system is able to explore a given scene about
which it initially has no information. The solution provided is not only
adaptable to changes in the environment during the exploration, but also
portable to manipulators other than the one used in the development.
Mestrado em Engenharia Mecânica
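The Next Best View scoring described above, ray casting through an occupancy map to count the unknown voxels a candidate pose would observe, can be sketched as follows. This is a minimal illustration on a dense voxel grid rather than the dissertation's OcTree; all names, the cell labels, and the step size are assumptions:

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2   # assumed occupancy labels

def information_gain(origin, directions, grid, voxel_size=0.05, max_range=1.0):
    """Score a candidate view: count distinct unknown voxels its rays reach.

    origin: (3,) camera position in metres; directions: (M, 3) unit rays;
    grid: 3-D integer occupancy array indexed as floor(point / voxel_size).
    Rays pass through FREE space, count UNKNOWN cells, and stop at OCCUPIED
    cells or the volume boundary.
    """
    step = voxel_size / 2.0                  # sub-voxel stepping along each ray
    n_steps = int(max_range / step)
    seen_unknown = set()
    for d in directions:
        for i in range(1, n_steps + 1):
            p = origin + d * (i * step)
            idx = tuple(np.floor(p / voxel_size).astype(int))
            if any(k < 0 or k >= s for k, s in zip(idx, grid.shape)):
                break                        # ray left the exploration volume
            if grid[idx] == OCCUPIED:
                break                        # ray is blocked (occlusion)
            if grid[idx] == UNKNOWN:
                seen_unknown.add(idx)
    return len(seen_unknown)
```

The Next Best View is then simply the candidate pose maximising this gain over the sampled set.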
Agent and object aware tracking and mapping methods for mobile manipulators
The age of the intelligent machine is upon us. These machines exist in our factories, our warehouses, our military, our hospitals, on our roads, and on the moon. Most of them we call robots. When placed in a
controlled or known environment such as an automotive factory or a distribution warehouse they perform their given roles with exceptional efficiency, achieving far more than is within reach of a humble human being. Despite the remarkable success of intelligent machines in such domains, they have yet to make a full-hearted deployment into our homes. The missing link between the robots we have now and the robots that are soon to come to our houses is perception.
Perception as we mean it here refers to a level of understanding beyond the collection and aggregation of sensory data. Much of the available sensory information is noisy and unreliable: our homes contain many reflective surfaces, repeating textures on large flat surfaces, and many disruptive moving elements, including humans. These environments change over time, with objects frequently moving within and between rooms.
This idea of change in an environment is fundamental to robotic applications, as in most cases we expect robots to be effectors of such change. We can identify two particular challenges that must be solved for robots to make the jump to less structured environments: how to manage noise and disruptive elements in observational data, and how to understand the world as a set of changeable elements (objects) that move over time within a wider environment. In this thesis we look at one possible approach to solving each of these problems.
For the first challenge we use proprioception aboard a robot with an articulated arm to handle difficult
and unreliable visual data caused both by the robot and the environment. We use sensor data aboard the robot to improve the pose tracking of a visual system when the robot moves rapidly, with high jerk, or when observing a scene with little visual variation.
For the second challenge, we build a model of the world on the level of rigid objects, and relocalise them both as they change location between different sequences and as they move. We use semantics, image keypoints, and 3D geometry to register and align objects between sequences, showing how their position has moved between disparate observations.
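Registering matched 3-D keypoints of an object between two sequences, as described above, is commonly done with a least-squares rigid transform (the Kabsch/Umeyama solution). The sketch below shows that standard building block, not the thesis code; correspondences are assumed already established, e.g. by descriptor matching:

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of matched 3-D keypoints.
    Solves argmin_{R,t} sum ||R @ src_i + t - dst_i||^2 via SVD (Kabsch).
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

The recovered (R, t) directly expresses how an object's pose has changed between two observations of it.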
Calibration of spatial relationships between multiple robots and sensors
Classic hand-eye calibration methods have been limited to single robots and sensors. Recently, a new calibration formulation for multiple robots has been proposed that solves for the extrinsic calibration parameters of each robot simultaneously instead of sequentially. Existing solutions to this new problem required data with correspondence, but Ma, Goh and Chirikjian (MGC) proposed a probabilistic method that eliminates the need for correspondence. In this thesis, the literature on the various robot-sensor calibration problems and solutions is surveyed, and the MGC method is reviewed in detail. Lastly, comparisons with other methods are carried out using numerical simulations to draw some conclusions.
Hand-Eye and Robot-World Calibration by Global Polynomial Optimization
The need to relate measurements made by a camera to a different known coordinate system arises in many engineering applications. Historically, it appeared for the first time in connection with cameras mounted on robotic systems. This problem is commonly known as hand-eye calibration. In this paper, we present several formulations of hand-eye calibration that lead to multivariate polynomial optimization problems. We show that the method of convex linear matrix inequality (LMI) relaxations can be used to effectively solve these problems and to obtain globally optimal solutions. Further, we show that the same approach can be used for simultaneous hand-eye and robot-world calibration. Finally, we validate the proposed solutions using both synthetic and real datasets.
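For context, the rotational part of the classical hand-eye equation AX = XB admits a simple linear least-squares solution on rotation logarithms (in the spirit of Park and Martin), which globally optimal methods like the above are typically compared against. A hedged sketch of that rotation-only baseline, with all names assumed and the translation part omitted:

```python
import numpy as np

def log_SO3(R):
    """Matrix logarithm of a rotation: returns the axis-angle vector theta*k."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def hand_eye_rotation(As, Bs):
    """Rotation part R_X of AX = XB from relative-motion pairs.

    As: relative gripper rotations; Bs: matching relative camera rotations.
    Since A_i = R_X B_i R_X^T implies log(A_i) = R_X log(B_i), R_X is the
    least-squares rotation mapping the b-vectors onto the a-vectors (Kabsch).
    """
    a = np.array([log_SO3(A) for A in As])   # (N, 3) gripper motion axes
    b = np.array([log_SO3(B) for B in Bs])   # (N, 3) camera motion axes
    H = b.T @ a                              # cross-covariance of axis vectors
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

At least two motion pairs with non-parallel rotation axes are needed for a unique solution.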
A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation
The application of laser technologies in surgical interventions has been accepted in the clinical
domain due to their atraumatic properties. In addition to manual application of fibre-guided
lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours
has become established in ENT surgery. However, TLM requires many years of surgical training
for tumour resection in order to preserve the function of adjacent organs and thus preserve the
patient’s quality of life. The positioning of the microscopic laser applicator outside the patient
can also impede a direct line-of-sight to the target area due to anatomical variability and limit
the working space. Further clinical challenges include positioning the laser focus on the tissue
surface, imaging, planning and performing laser ablation, and motion of the target area during
surgery. This dissertation aims to address the limitations of TLM through robotic approaches and
intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no
highly integrated platform for endoscopic delivery of focused laser radiation is available to date.
Likewise, there are no known devices that incorporate scene information from endoscopic imaging
into ablation planning and execution. For focusing of the laser beam close to the target tissue, this
work first presents miniaturised focusing optics that can be integrated into endoscopic systems.
Experimental trials characterise the optical properties and the ablation performance. A robotic
platform is realised for manipulation of the focusing optics. This is based on a variable-length
continuum manipulator. The latter enables movements of the endoscopic end effector in five
degrees of freedom with a mechatronic actuation unit. The kinematic modelling and control of the
robot are integrated into a modular framework that is evaluated experimentally. The manipulation
of focused laser radiation also requires precise adjustment of the focal position on the tissue. For
this purpose, visual, haptic and visual-haptic assistance functions are presented. These support
the operator during teleoperation to set an optimal working distance. Advantages of visual-haptic
assistance are demonstrated in a user study. The system performance and usability of the overall
robotic system are assessed in an additional user study. Analogous to a clinical scenario, the
subjects follow predefined target patterns with a laser spot. The mean positioning accuracy of the
spot is 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser
ablation. Experiments confirm a positive effect of proposed automation concepts on non-contact
laser surgery.
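The kinematics of a single continuum segment, as used in manipulators of the kind described above, is commonly modelled under the constant-curvature assumption. The sketch below illustrates that standard model, mapping arc parameters to the tip position; it is a generic textbook formulation, not necessarily the dissertation's exact model, and all names are assumed:

```python
import numpy as np

def constant_curvature_tip(kappa, phi, length):
    """Tip position of one constant-curvature continuum segment.

    kappa: curvature (1/m); phi: bending-plane angle about the base z-axis
    (rad); length: arc length (m). The segment bends as a circular arc of
    radius 1/kappa; kappa -> 0 degenerates to a straight segment along z.
    """
    if abs(kappa) < 1e-9:
        return np.array([0.0, 0.0, length])        # straight segment
    r = 1.0 / kappa
    x_plane = r * (1.0 - np.cos(kappa * length))   # in-plane lateral offset
    z = r * np.sin(kappa * length)                 # height of the arc tip
    return np.array([np.cos(phi) * x_plane, np.sin(phi) * x_plane, z])
```

A variable-length segment, as in the platform above, simply treats `length` as an additional actuated degree of freedom alongside `kappa` and `phi`.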