A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts
This paper presents a multi-robot system for manufacturing personalized medical stent grafts. The proposed system adopts a modular design, which includes: a (personalized) mandrel module, a bimanual sewing module, and a vision module. The mandrel module incorporates the personalized geometry of patients, while the bimanual sewing module adopts a learning-by-demonstration approach to transfer human hand-sewing skills to the robots. The human demonstrations were first observed by the vision module and then encoded using a statistical model to generate the reference motion trajectories. During autonomous robot sewing, the vision module coordinates the multi-robot collaboration. Experimental results show that the robots can adapt to generalized stent designs. The proposed system can also be used for other manipulation tasks, especially for flexible production of customized products and where bimanual or multi-robot cooperation is required.
Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial Informatics. Key words: modularity, medical device customization, multi-robot system, robot learning, visual servoing, robot sewing
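The abstract does not name the statistical model used to encode the demonstrations (work in this space often uses Gaussian mixture models with Gaussian mixture regression). As a purely hypothetical illustration of the encoding step, the sketch below pools several time-aligned demonstrations and extracts a reference trajectory by Gaussian kernel (Nadaraya-Watson) regression; every name and parameter here is an assumption, not the paper's implementation.

```python
import numpy as np

def encode_demonstrations(demos, n_ref=100, bandwidth=0.05):
    """Encode time-aligned demonstrations into a reference trajectory.

    demos: list of (T_i, D) arrays, each a demonstration sampled over
    normalized time [0, 1]. Returns an (n_ref, D) mean trajectory via
    Gaussian kernel regression over the pooled samples.
    """
    ts, xs = [], []
    for d in demos:
        d = np.asarray(d, dtype=float)
        ts.append(np.linspace(0.0, 1.0, len(d)))  # normalized timestamps
        xs.append(d)
    t = np.concatenate(ts)            # (N,)  pooled times
    x = np.vstack(xs)                 # (N, D) pooled samples

    t_ref = np.linspace(0.0, 1.0, n_ref)
    # Gaussian weight of every pooled sample at every reference instant.
    w = np.exp(-0.5 * ((t_ref[:, None] - t[None, :]) / bandwidth) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ x                      # (n_ref, D) reference trajectory

# Three noisy demonstrations of the same (hypothetical) 1-D stroke.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, np.pi, 80))[:, None]
demos = [base + 0.02 * rng.standard_normal(base.shape) for _ in range(3)]
ref = encode_demonstrations(demos)
```

The regression averages out demonstration noise while preserving the shared shape, which is the role the paper assigns to its statistical model.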
MULTI-RATE VISUAL FEEDBACK ROBOT CONTROL
This thesis addresses two characteristic problems in visual feedback robot control: 1) sensor latency; 2) providing suitable trajectories both for the robot and for the measurement in the image. All the approaches presented in this work are analyzed and implemented on a 6-DOF industrial robot manipulator and/or a wheeled robot.
Focusing on the sensor latency problem, this thesis proposes the use of dual-rate high order holds within the control loop of robots. In this sense, the main contributions are:
- Dual-rate high-order holds based on primitive functions for robot control (Chapter 3): analysis of system performance with and without this non-conventional multi-rate control technique. In addition, as a consequence of using dual-rate holds, this work derives and validates multi-rate controllers, in particular dual-rate PIDs.
- Asynchronous dual-rate high-order holds based on primitive functions with time-delay compensation (Chapter 3): generalization of asynchronous dual-rate high-order holds by incorporating an input-signal time-delay compensation component, thus improving the inter-sample estimates computed by the hold. An analysis of the properties of these dual-rate holds with time-delay compensation is provided, comparing their estimates with those of the equivalent dual-rate holds without compensation, together with their implementation and validation within the control loop of a 6-DOF industrial robot manipulator.
- Multi-rate nonlinear high-order holds (Chapter 4): generalization of the concept of dual-rate high-order holds to nonlinear estimation models that incorporate information about the plant to be controlled and the controller(s) and sensor(s) used, obtained through machine learning techniques. To derive such a nonlinear hold, a methodology independent of the particular machine learning technique is described, although it is validated using artificial neural networks. Finally, an analysis of the properties of these new holds is carried out, comparing them with their equivalents based on primitive functions, together with their implementation and validation within the control loop of an industrial robot manipulator and a wheeled robot.
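The dual-rate hold idea above can be sketched minimally, assuming a polynomial ("primitive function") estimation model: a signal measured at the slow (sensor) rate is reconstructed at the fast (control) rate by causally extrapolating the most recent slow samples. The function and its parameters are illustrative, not the thesis implementation.

```python
import numpy as np

def dual_rate_hold(slow_samples, N, order=1):
    """Dual-rate high-order hold: reconstruct a fast-rate signal from
    slow-rate measurements taken every N fast periods.

    order: 0 -> zero-order hold (repeat last value); 1 -> first-order
    hold (linear extrapolation from the last two slow samples), etc.
    Causal: each inter-sample estimate uses only past slow samples.
    """
    y = np.asarray(slow_samples, dtype=float)
    out = np.empty(len(y) * N)
    for k in range(len(y)):
        # Fit a polynomial of the given order through the most recent
        # slow samples available at slow instant k.
        i0 = max(0, k - order)
        pts_t = np.arange(i0, k + 1) * N
        pts_y = y[i0:k + 1]
        coef = np.polyfit(pts_t, pts_y, deg=min(order, k))
        for j in range(N):
            out[k * N + j] = np.polyval(coef, k * N + j)
    return out

# A ramp sampled every 4 fast periods: the first-order hold recovers
# the inter-sample values exactly once two slow samples are available.
slow = np.array([0.0, 4.0, 8.0, 12.0])
fast = dual_rate_hold(slow, N=4, order=1)
```

Higher orders trade extrapolation accuracy on smooth signals against noise sensitivity, which is the tradeoff the thesis's analysis chapters study.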
With respect to the problem of providing suitable trajectories for the robot and for the measurement in the image, this thesis presents the novel reference features filtering control strategy and its generalization from a multi-rate point of view. The main contributions in this regard are:
- Reference features filtering control strategy (Chapter 5): a new control strategy is proposed to significantly enlarge the set of reachable tasks in robot visual feedback control. The main idea is to use optimal trajectories produced by a nonlinear EKF predictor-smoother (ERTS), based on the Rauch-Tung-Striebel (RTS) algorithm, as new feature references for an underlying visual feedback controller. Both a description of the algorithm and its implementation and validation on an industrial robot manipulator are provided.
- Dual-rate reference features filtering control strategy (Chapter 5): a generalization of the reference features filtering approach from a multi-rate point of view, with a dual Kalman-smoother step based on the ratio between the sensor and controller frequencies. This reduces the computational cost of the original algorithm while also addressing the sensor latency problem. The implementation algorithms, together with their analysis, are described.
Solanes Galbis, JE. (2015). Multi-rate visual feedback robot control [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57951
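The reference filtering strategy pairs a forward Kalman predictor with a Rauch-Tung-Striebel smoothing pass. The sketch below shows that combination for a scalar linear random-walk model rather than the thesis's nonlinear EKF formulation; the model, noise values, and names are assumptions made for illustration.

```python
import numpy as np

def kalman_rts(z, q=1e-3, r=1e-2):
    """Forward Kalman filter + Rauch-Tung-Striebel smoother for the
    scalar random-walk model x_k = x_{k-1} + w_k, z_k = x_k + v_k.

    Returns the smoothed trajectory, usable as filtered feature
    references for an underlying visual feedback controller.
    """
    n = len(z)
    xf = np.zeros(n); Pf = np.zeros(n)   # filtered estimates
    xp = np.zeros(n); Pp = np.zeros(n)   # one-step predictions
    x, P = z[0], r
    for k in range(n):
        # Predict: state carries over, covariance grows by q.
        xp[k], Pp[k] = x, P + q
        # Update with measurement z[k].
        K = Pp[k] / (Pp[k] + r)
        x = xp[k] + K * (z[k] - xp[k])
        P = (1 - K) * Pp[k]
        xf[k], Pf[k] = x, P
    # Backward RTS pass refines each estimate with future information.
    xs = xf.copy(); Ps = Pf.copy()
    for k in range(n - 2, -1, -1):
        C = Pf[k] / Pp[k + 1]
        xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + C**2 * (Ps[k + 1] - Pp[k + 1])
    return xs

rng = np.random.default_rng(1)
true = np.linspace(0.0, 1.0, 200)            # slowly drifting feature
z = true + 0.1 * rng.standard_normal(200)    # noisy image measurements
ref = kalman_rts(z)
```

Because the smoother uses both past and future measurements, the resulting references are less noisy than the raw image measurements, which is what makes them suitable targets for the underlying controller.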
Real-time robotic tasks for cyber-physical avatars
Although modern robots can perform complex tasks using sophisticated algorithms specialized to a particular task and environment, creating robots capable of completing tasks in unstructured environments without human guidance (e.g., through teleoperation) remains a challenge. In this research, we present a framework to meet this challenge for a "cyber-physical avatar," defined as a semi-autonomous robotic system that adjusts to an unstructured environment and performs physical tasks subject to critical timing constraints while under human supervision. This thesis first realizes a cyber-physical avatar that integrates three key technologies: (1) whole-body compliant control, (2) skill acquisition through machine learning (neuroevolution methods and deep learning), and (3) vision-based control through visual servoing. Body-compliant control is essential for operator safety because avatars perform cooperative tasks in close proximity to humans; machine learning enables "programming" avatars so that they can be used by non-experts for a large array of tasks, some unforeseen, in an unstructured environment; and visual servoing is indispensable for facilitating feedback control in human-avatar interaction. This thesis proposes and demonstrates a systematically incremental approach to automating robotic tasks by decomposing a non-trivial task into stages, each of which may be automated by integrating the aforementioned techniques. We design and implement controllers for two semi-autonomous robots that integrate these three techniques for grasping and pick-and-place tasks. While a general theory is beyond reach, we present a study of the tradeoffs between three design metrics for robotic task systems: (1) the amount of training effort required for the robots to perform the task, (2) the time available to complete the task once the command is given, and (3) the quality of the result of the performed task.
The tradeoff study in this design space uses the imprecise computation model as a framework to evaluate two specific types of tasks: (1) grasping an unknown object and (2) placing the object in a target position. We demonstrate the generality of our integration methodology by applying it to two different robots, Dreamer and Hoppy. Our approach is evaluated by the robots' performance in trading off task completion time, training time, and task completion success rate, in an environment similar to those in the recent Amazon Picking Challenge.
High Speed Neuromorphic Vision-Based Inspection of Countersinks in Automated Manufacturing Processes
Countersink inspection is crucial in various automated assembly lines, especially in the aerospace and automotive sectors. Advancements in machine vision have introduced automated robotic inspection of countersinks using laser scanners and monocular cameras. Nevertheless, these sensing pipelines require the robot to pause at each hole for inspection due to high latency and measurement uncertainty under motion, leading to prolonged execution times for the inspection task. The neuromorphic vision sensor, on the other hand, has the potential to expedite the countersink inspection process, but the unorthodox output of neuromorphic technology prohibits the use of traditional image processing techniques. Therefore, novel event-based perception algorithms need to be introduced. We propose a countersink detection approach based on event-based motion compensation and the mean-shift clustering principle. In addition, our framework presents a robust event-based circle detection algorithm to precisely estimate the depth of the countersink specimens. The proposed approach expedites the inspection process by a factor of 10 compared to conventional countersink inspection methods. The work in this paper was validated in over 50 trials on three countersink workpiece variants. The experimental results show that our method provides a precision of 0.025 mm for countersink depth inspection despite the low resolution of commercially available neuromorphic cameras.
Comment: 14 pages, 11 figures, 7 tables, submitted to Journal of Intelligent Manufacturing
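The clustering step the abstract names, mean-shift, can be sketched as follows: each (motion-compensated) event coordinate is iteratively moved to the Gaussian-weighted mean of its neighborhood, so events belonging to the same hole collapse onto a common mode. The data and parameters are illustrative, not the paper's pipeline.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=30):
    """Mean-shift with a Gaussian kernel: shift every query point toward
    the local density mean over the (fixed) data set until points in the
    same basin converge to a common mode.

    points: (N, 2) array, e.g. motion-compensated event coordinates.
    Returns (N, 2) mode positions; events of one hole share a mode.
    """
    data = np.asarray(points, dtype=float)
    shifted = data.copy()
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            d2 = np.sum((data - p) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth**2))   # Gaussian weights
            shifted[i] = (w[:, None] * data).sum(axis=0) / w.sum()
    return shifted

# Two synthetic event clusters, standing in for two countersink holes.
rng = np.random.default_rng(2)
a = rng.normal([0.0, 0.0], 0.3, (50, 2))
b = rng.normal([10.0, 10.0], 0.3, (50, 2))
modes = mean_shift(np.vstack([a, b]), bandwidth=1.0)
```

Unlike k-means, mean-shift does not need the number of holes in advance; the number of distinct modes found is the number of clusters.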
Image-Guided Robot-Assisted Techniques with Applications in Minimally Invasive Therapy and Cell Biology
There are several situations where tasks can be performed better robotically than manually. Among these are situations (a) where high accuracy and robustness are required, (b) where difficult or hazardous working conditions exist, and (c) where very large or very small motions or forces are involved. Recent advances in technology have resulted in smaller robots with higher accuracy and reliability. As a result, robotics is finding more and more applications in biomedical engineering. Medical robotics and cell micro-manipulation are two of these applications, involving interaction with delicate living organs at very different scales. The availability of a wide range of imaging modalities, from ultrasound and X-ray fluoroscopy to high-magnification optical microscopes, makes it possible to use imaging as a powerful means to guide and control robot manipulators. This thesis comprises three parts, focusing on three applications of image-guided robotics in biomedical engineering:
Vascular catheterization: a robotic system was developed to insert a catheter through the vasculature and guide it to a desired point via visual servoing. The system provides shared control with the operator to perform a task semi-automatically or through master-slave control. It allows control of the catheter tip with high accuracy while reducing X-ray exposure to the clinicians and providing a more ergonomic situation for the cardiologists.
Cardiac catheterization: a master-slave robotic system was developed to accurately control a steerable catheter to touch and ablate faulty regions on the inner walls of a beating heart in order to treat arrhythmia. The system facilitates touching and making contact with a target point in a beating heart chamber through master-slave control with coordinated visual feedback.
Live neuron micro-manipulation: a microscope image-guided robotic system was developed to provide shared control over multiple micro-manipulators to touch cell membranes in order to perform patch-clamp electrophysiology.
Image-guided robot-assisted techniques with master-slave control were implemented for each case to provide shared control between a human operator and a robot. The results show increased accuracy and reduced operation time in all three cases.
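Several of the systems above rely on visual servoing. A minimal sketch of classical image-based visual servoing for point features follows, using the textbook control law v = -lambda * pinv(L) @ e, not the thesis's specific controllers; the function name and parameters are assumptions.

```python
import numpy as np

def ibvs_velocity(features, targets, Z, lam=0.5):
    """One image-based visual servoing step.

    features, targets: (N, 2) normalized image-plane points (x, y);
    Z: (N,) point depths. Returns the commanded camera velocity
    (vx, vy, vz, wx, wy, wz) that drives the feature error to zero.
    """
    e = (np.asarray(features, float) - np.asarray(targets, float)).ravel()
    L = []
    for (x, y), z in zip(np.asarray(features, float), Z):
        # Classical interaction-matrix rows for a point feature.
        L.append([-1 / z, 0.0, x / z, x * y, -(1 + x * x), y])
        L.append([0.0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    L = np.asarray(L)
    # Proportional law on the feature error via the pseudo-inverse.
    return -lam * np.linalg.pinv(L) @ e

# With features already at their targets, the commanded velocity is zero.
pts = np.array([[0.1, 0.0], [-0.1, 0.0], [0.0, 0.1]])
v = ibvs_velocity(pts, pts, Z=np.ones(3))
```

Three non-collinear points give a 6x6 interaction matrix, enough to constrain all six camera degrees of freedom in the generic case.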
A mosaic of eyes
Autonomous navigation is a traditional research topic in intelligent robotics and vehicles, which requires a robot to perceive its environment through onboard sensors such as cameras or laser scanners, to enable it to drive to its goal. Most research to date has focused on the development of a large and smart brain to gain autonomous capability for robots. There are three fundamental questions to be answered by an autonomous mobile robot: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires a massive spatial memory and considerable computational resources to accomplish perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for our real-life applications, such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then taking decisions accordingly. They may encounter the following difficulties
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for those who use SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions at robotics conferences: Do robots need SLAM? And is SLAM solved?
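The de-facto standard formulation the survey refers to is maximum a posteriori estimation over a factor graph. A toy instance, assuming a 1-D pose graph with odometry factors and one loop closure: each factor contributes one row to a linear least-squares problem that is solved in closed form (in real, nonlinear SLAM this corresponds to one Gauss-Newton step).

```python
import numpy as np

# Four 1-D poses; each edge measures the displacement x_j - x_i.
edges = [
    (0, 1, 1.0),
    (1, 2, 1.1),
    (2, 3, 0.9),
    (0, 3, 3.1),   # loop closure: disagrees slightly with odometry (3.0)
]
n = 4

# Build the linear system A x = b: one row per factor, plus a prior
# row anchoring x0 = 0 to fix the gauge freedom.
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for r, (i, j, d) in enumerate(edges):
    A[r, i], A[r, j], b[r] = -1.0, 1.0, d
A[-1, 0], b[-1] = 1.0, 0.0

# Least-squares solution spreads the loop-closure residual over the
# trajectory instead of letting odometry error accumulate.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Here the final pose lands between the pure-odometry estimate (3.0) and the loop-closure measurement (3.1), illustrating how the graph fuses conflicting constraints.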