19 research outputs found

    Towards the Embedding of On-Line Hand-Eye Calibration into Visual Servoing

    Get PDF
    This work is related to the visual servoing of a robot hand-mounted camera. The control law of this robotic system requires determining the position and orientation of a camera reference frame with respect to the control reference frame. Hand-eye calibration consists in determining this transformation (i.e. a rotation and a translation). In this paper we provide an on-line calibration method in two stages: first, an initial estimate is computed from two self-calibration movements; then, the estimated hand-eye transformation is updated using the controlled motions of the robot. As the latter generate low-amplitude rotations, the classical formulations are no longer efficient. The homogeneous matrix equation AX = XB appearing in usual approaches is therefore reformulated as a linear system and used as the basis of a Kalman filter that runs in parallel with the visual servoing. Preliminary simulation results show the good behaviour of the method.
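The paper's linear Kalman reformulation is not spelled out in the abstract. As a point of comparison, a classical closed-form treatment of AX = XB (rotation via an orthogonal Procrustes fit of rotation-vector pairs, translation via stacked least squares) can be sketched as below; the function name and the Procrustes route are illustrative assumptions, not the paper's method:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def solve_ax_xb(As, Bs):
    """Closed-form hand-eye solve for AX = XB from relative-motion pairs.

    As, Bs: lists of 4x4 homogeneous motions. Since R_A = R_X R_B R_X^T,
    the rotation vectors satisfy alpha_i = R_X beta_i.
    """
    alphas = np.array([Rotation.from_matrix(A[:3, :3]).as_rotvec() for A in As])
    betas = np.array([Rotation.from_matrix(B[:3, :3]).as_rotvec() for B in Bs])
    # Orthogonal Procrustes fit of R_X to the rotation-vector pairs
    M = alphas.T @ betas
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det(R_X) = +1
    R_X = U @ D @ Vt
    # Translation: stack (R_Ai - I) t_X = R_X t_Bi - t_Ai, solve by least squares
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

This route works well with large, well-distributed rotations; the paper's point is precisely that it degrades under the low-amplitude rotations produced during servoing, motivating the linear Kalman-filter formulation instead.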

    Facial Recognition in Public Areas

    Get PDF
    Information security is nowadays both significant and difficult, so there are a number of ways to improve it, especially in public areas such as airports, railway stations, universities, and ATMs, where security cameras are already common. In this paper we present how facial recognition can be used in public areas such as airports, toll gates, and offices. We compare, or match, the face of a person we want to detect against video recorded through CCTV. There are several algorithms for detecting faces in video, such as Haar cascades, eigenfaces, and Fisherfaces; an open-source computer vision library is used for facial recognition.

    A Graph-based Optimization Framework for Hand-Eye Calibration for Multi-Camera Setups

    Full text link
    Hand-eye calibration is the problem of estimating the spatial transformation between a reference frame, usually the base of a robot arm or its gripper, and the reference frame of one or multiple cameras. Generally, this calibration is solved as a non-linear optimization problem; what is rarely done, however, is to exploit the underlying graph structure of the problem itself. In fact, the hand-eye calibration problem can be seen as an instance of the Simultaneous Localization and Mapping (SLAM) problem. Inspired by this fact, in this work we present a pose-graph approach to the hand-eye calibration problem that extends a recent state-of-the-art solution in two different ways: i) by formulating the solution for eye-on-base setups with one camera; ii) by covering multi-camera robotic setups. The proposed approach has been validated in simulation against standard hand-eye calibration methods, and a real application is shown. In both scenarios, the proposed approach outperforms all alternative methods. With this paper we release an open-source implementation of our graph-based optimization framework for multi-camera setups.
    Comment: This paper has been accepted for publication at the 2023 IEEE International Conference on Robotics and Automation (ICRA).
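The abstract does not spell out the pose-graph formulation. A minimal nonlinear least-squares sketch of the closely related eye-on-base problem (robot poses A_i, camera-to-target observations B_i, unknown base-to-camera X and gripper-to-target Z, constraint A_i Z = X B_i) might look like the following; the rotation-vector parameterization and solver choice are assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def to_mat(p):
    """6-vector (rotation vector, translation) -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(p[:3]).as_matrix()
    T[:3, 3] = p[3:]
    return T

def residuals(params, As, Bs):
    # Unknowns: X = base->camera, Z = gripper->target (eye-on-base).
    # Each pose pair contributes the 6-DoF error of A_i Z = X B_i.
    X, Z = to_mat(params[:6]), to_mat(params[6:])
    res = []
    for A, B in zip(As, Bs):
        D = np.linalg.inv(A @ Z) @ (X @ B)  # identity when the constraint holds
        res.append(Rotation.from_matrix(D[:3, :3]).as_rotvec())
        res.append(D[:3, 3])
    return np.concatenate(res)

def calibrate(As, Bs, x0):
    """Refine X and Z (stacked as a 12-vector) from an initial guess x0."""
    return least_squares(residuals, x0, args=(As, Bs)).x
```

A full pose-graph treatment additionally shares nodes between cameras and weights edges by measurement uncertainty; this sketch shows only the core residual structure.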

    A Comparative Review of Hand-Eye Calibration Techniques for Vision Guided Robots

    Get PDF
    Hand-eye calibration enables proper perception of the environment in which a vision-guided robot operates and allows the scene to be mapped into the robot's frame. Proper hand-eye calibration is crucial when sub-millimetre perceptual accuracy is needed: in robot-assisted surgery, for example, a poorly calibrated robot could damage surrounding vital tissues and organs, endangering the life of a patient. A great deal of research has gone into ways of accurately calibrating the hand-eye system of a robot, with different levels of success, challenges, resource requirements, and complexity. As such, academics and industrial practitioners face the challenge of choosing which algorithm meets the implementation requirements given the identified constraints. This review aims to give a general overview of the strengths and weaknesses of the different hand-eye calibration algorithms available, to help academics and industrial practitioners make an informed design decision, as well as to suggest possible areas of research based on the identified challenges. We also discuss different calibration targets, an important part of the calibration process that is often overlooked in the design stage.

    Automatic Robot Hand-Eye Calibration Enabled by Learning-Based 3D Vision

    Full text link
    Hand-eye calibration, a fundamental task in vision-based robotic systems, aims to estimate the transformation matrix between the coordinate frame of the camera and the robot flange. Most approaches to hand-eye calibration rely on external markers or human assistance. We propose Look at Robot Base Once (LRBO), a novel methodology that addresses the hand-eye calibration problem without external calibration objects or human support, using the robot base itself. From point clouds of the robot base, a transformation matrix from the coordinate frame of the camera to the robot base is established as I = AXB. To this end, we exploit learning-based 3D detection and registration algorithms to estimate the location and orientation of the robot base. The robustness and accuracy of the method are quantified by a ground-truth-based evaluation, and the accuracy is compared with that of other 3D-vision-based calibration methods. To assess the feasibility of our methodology, we carried out experiments using a low-cost structured-light scanner across varying joint configurations and groups of experiments. The proposed hand-eye calibration method achieved a translation deviation of 0.930 mm and a rotation deviation of 0.265 degrees. Additionally, the 3D reconstruction experiments demonstrated a rotation error of 0.994 degrees and a position error of 1.697 mm. Moreover, our method can be completed in 1 second, the fastest among 3D hand-eye calibration methods. Code is released at github.com/leihui6/LRBO.
    Comment: 17 pages, 19 figures, 6 tables, submitted to MSS

    Calibration and 3D Mapping for Multi-sensor Inspection Tasks with Industrial Robots

    Get PDF
    Quality inspections are an essential part of ensuring that the manufacturing process runs smoothly and that the final product meets high standards. Industrial robots have emerged as a key tool in conducting quality inspections, allowing for precision and consistency in the inspection process. By utilizing advanced inspection technologies, industrial robots can detect defects and anomalies in products at a faster pace than human inspectors, improving production efficiency. With the ability to automate repetitive and tedious inspection tasks, industrial robots can also reduce the risk of human error and increase product quality. As technology continues to advance, the use of industrial robots for quality inspections is becoming more widespread across industrial sectors, ranging from automotive and manufacturing to aerospace. The drawback of such a large variety of inspection tasks is that industrial inspections usually require specific robotic setups and appropriate sensors, making every inspection very specific and custom-built. For this reason, this thesis gives an overview of a general inspection framework that solves the problem of creating customized inspection workcells by proposing general software modules that can be easily configured to address each specific inspection scenario. In particular, this thesis focuses on the problems of hand-eye calibration, i.e. accurately computing the position of the sensor in the workcell with respect to the robot frame, and data mapping, which is used to map sensor data onto the 3D model representation of the inspected object. For hand-eye calibration we propose two techniques that accurately solve for the position of the sensor in multiple robotic setups. Both consider the eye-on-base and eye-in-hand robot-sensor configurations, which is how we distinguish whether the sensor is mounted at a fixed place in the workcell or on the end-effector of the robot manipulator, respectively. Moreover, one of the main contributions of this thesis is a general hand-eye calibration approach that, thanks to a unified pose-graph optimization formulation, can also handle inspection setups in which multiple sensors are involved (e.g., multi-camera networks). Finally, this thesis proposes a general method that takes advantage of a precise and accurate hand-eye calibration result to address the data mapping problem for multi-purpose inspection robots. This approach has been applied in multiple inspection setups, ranging from the automotive to the aerospace and manufacturing industries. Most of the contributions presented in this thesis are available as open-source software packages. We believe that this fosters collaboration, enables precise repeatability of our experiments, and facilitates future research on the calibration of complex industrial robotic setups.

    A Visual Velocity Impedance Controller

    Get PDF
    Successful object-insertion systems allow the object to translate and rotate to accommodate contact forces. Compliant controllers are used in robotics to provide this accommodation, and the impedance controller is one of the most researched and well-known compliant controllers used for assembly. The velocity-filtered visual impedance controller is introduced as a compliant controller that improves upon the impedance controller: it introduces a filter on the velocity impedance and a gain derived from the stiffness. The velocity impedance controller was found to be stable over larger ranges of stiffness values than the position-based impedance controller, making it more accurate and stable with respect to external forces. It was also found to have a better compliant response when tested on various insertion geometries in various configurations, including a key insertion acting against gravity. Finally, a novel kinetic-friction-cone compliance model is introduced for the velocity impedance controller. The new compliance model was found to provide a more reliable insertion than the standard insertion model by increasing the error tolerance for failure.
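The abstract does not give the controller's equations. A generic first-order velocity-resolved impedance (admittance) step, where the commanded velocity accommodates external force, can be sketched as below; the gains, the explicit Euler discretization, and the omission of the thesis' velocity filter and stiffness-derived gain are all simplifying assumptions:

```python
def admittance_step(v, f_ext, M=1.0, B=5.0, dt=0.001):
    """One Euler step of M * dv/dt + B * v = f_ext.

    v: current commanded velocity, f_ext: measured external force,
    M: virtual inertia, B: virtual damping, dt: control period.
    """
    return v + dt * (f_ext - B * v) / M
```

Under a constant contact force the velocity settles to f_ext / B, i.e. the end-effector yields to contact instead of fighting it, which is the accommodation behaviour the abstract describes.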

    Automatic multi-camera hand-eye calibration for robotic workcells

    Get PDF
    Human-robot collaboration (HRC) is an increasingly successful research field, widely investigated for several industrial tasks. Collaborative robots can physically interact with humans in a shared environment while guaranteeing a high level of human safety throughout the working process. This can be achieved through a vision system, equipped with a single camera or a multi-camera system, which provides the manipulator with essential information about the surrounding workspace and human behaviour, ensuring collision avoidance with objects and human operators. However, in order to guarantee human safety and an effective working system in which the robot arm is aware of the surrounding environment and can monitor operator motions, a reliable hand-eye calibration is needed. A further improvement for a truly safe human-robot collaboration scenario can be provided by multi-camera hand-eye calibration: the presence of more sensors ensures a constant and more reliable view of the robot arm and its whole workspace, improving human safety and giving the robot a greater ability to avoid collisions. This thesis focuses on the development of an automatic multi-camera calibration method for robotic workcells that guarantees high human safety and an accurate working system. The proposed method has two main properties. It is automatic, since it exploits the robot arm, with a planar target attached to its end-effector, to accomplish the image acquisition phase necessary for the calibration, which is generally carried out with manual procedures; this removes inaccurate human intervention as much as possible and speeds up the whole calibration process. The second main feature is that our approach enables the calibration of a multi-camera system suitable for robotic workcells that are larger than those commonly considered in the literature. Our multi-camera hand-eye calibration method was tested in several experiments with the Franka Emika Panda robot arm and with different sensors (Microsoft Kinect V2, Intel RealSense depth camera D455, and Intel RealSense LiDAR camera L515) in order to prove its flexibility and to determine which hardware devices achieve the highest calibration accuracy. Accurate results are generally achieved with our method even in large robotic workcells where cameras are placed at a distance of d = 3 m from the robot arm, achieving a reprojection error lower than 1 pixel, whereas other state-of-the-art methods cannot even guarantee a proper calibration at these distances. Moreover, our method is compared against other single- and multi-camera calibration techniques, and the proposed calibration process achieves the highest accuracy with respect to other methods found in the literature, which mainly focus on the calibration between a single camera and the robot arm.
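The sub-pixel reprojection errors reported above follow the standard definition: project known 3D points through the calibrated pose and intrinsics, then measure the mean pixel distance to the observed detections. A minimal sketch (pinhole model; the function name and hypothetical intrinsics K are illustrative):

```python
import numpy as np

def reprojection_error(K, R, t, pts3d, pts2d):
    """Mean pixel distance between projected 3D points and observed pixels.

    K: 3x3 intrinsics, (R, t): world-to-camera pose,
    pts3d: Nx3 world points, pts2d: Nx2 observed pixel coordinates.
    """
    P = R @ pts3d.T + t[:, None]   # 3xN points in the camera frame
    uv = K @ P
    uv = uv[:2] / uv[2]            # perspective division
    return float(np.mean(np.linalg.norm(uv.T - pts2d, axis=1)))
```

A calibration with a reprojection error below 1 pixel, as claimed for the 3 m workcell experiments, means this mean distance stays under one pixel across the validation detections.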

    Solving the nearest rotation matrix problem in three and four dimensions with applications in robotics

    Get PDF
    Embargo applied from the date of the defence until 31/5/2022. Since the map from quaternions to rotation matrices is a 2-to-1 covering map, it cannot be smoothly inverted. As a consequence, it is sometimes erroneously assumed that all inversions must contain singularities that arise in the form of quotients whose divisor can be arbitrarily small. This misconception was clarified when we found a new division-free conversion method, a result that triggered the research work presented in this thesis. At first glance, the matrix-to-quaternion conversion does not seem a relevant problem; indeed, most researchers consider it a well-solved problem whose revision is unlikely to provide new insight in any area of practical interest. Nevertheless, we show in this thesis how solving the nearest-rotation-matrix problem in the Frobenius norm can be reduced to a matrix-to-quaternion conversion. Many problems, such as hand-eye calibration, camera pose estimation, location recognition, and image stitching, require finding the nearest proper orthogonal matrix to a given matrix, so the matrix-to-quaternion conversion becomes of paramount importance. While a rotation in 3D can be represented using a quaternion, a rotation in 4D can be represented using a double quaternion; as a consequence, the computation of the nearest rotation matrix in 4D, using our approach, essentially follows the same steps as in the 3D case. Although the 4D case might seem of theoretical interest only, we show in this thesis its practical relevance thanks to a little-known mapping between 3D displacements and 4D rotations. In this thesis we focus on obtaining closed-form solutions, in particular those that require only the four basic arithmetic operations, because they can easily be implemented on microcomputers with limited computational resources. Moreover, closed-form methods are preferable for at least two reasons: they provide the most meaningful answer, because they permit analyzing the influence of each variable on the result; and their computational cost, in terms of arithmetic operations, is fixed and assessable beforehand. We have derived closed-form methods specifically tailored to solving the hand-eye calibration and point-cloud registration problems, which outperform all previous approaches.
    Postprint (published version)
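For context, the nearest-rotation-matrix problem the thesis addresses is usually stated as: given a (noisy) 3x3 matrix, find the proper orthogonal matrix closest to it in the Frobenius norm. The thesis solves it through a division-free matrix-to-quaternion conversion; the sketch below instead uses the standard SVD baseline, shown only to make the problem concrete:

```python
import numpy as np

def nearest_rotation(M):
    """Nearest matrix in SO(3) to M in the Frobenius norm, via SVD.

    (The thesis derives a division-free quaternion-based alternative;
    this SVD projection is the textbook baseline for the same problem.)
    """
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det = +1
    return U @ D @ Vt
```

The projected matrix is exactly orthogonal with unit determinant, and projecting a matrix that is already a rotation returns it unchanged.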