2,360 research outputs found

    Managing Competing Concerns in Digital Innovation: Examining Welfare Technology in Denmark


    Calibration and 3D Mapping for Multi-sensor Inspection Tasks with Industrial Robots

    Quality inspections are an essential part of ensuring that the manufacturing process runs smoothly and that the final product meets high standards. Industrial robots have emerged as a key tool for conducting quality inspections, allowing for precision and consistency in the inspection process. By utilizing advanced inspection technologies, industrial robots can detect defects and anomalies in products at a faster pace than human inspectors, improving production efficiency. With the ability to automate repetitive and tedious inspection tasks, industrial robots can also reduce the risk of human error and increase product quality. As technology continues to advance, the use of industrial robots for quality inspections is becoming more widespread across industrial sectors, ranging from automotive and manufacturing to aerospace. The drawback of such a large variety of inspection tasks is that industrial inspections usually require specific robotic setups and appropriate sensors, making every inspection very specific and custom-built. For this reason, this thesis gives an overview of a general inspection framework that solves the problem of creating customized inspection workcells by proposing general software modules that can be easily configured to address each specific inspection scenario. In particular, this thesis focuses on the problems of Hand-eye Calibration, that is, accurately computing the position of the sensor in the workcell with respect to the robot frame, and Data Mapping, which is used to map sensor data onto the 3D model representation of the inspected object. For Hand-eye Calibration we propose two techniques that accurately solve for the position of the sensor in multiple robotic setups. Both consider the eye-on-base and eye-in-hand robot-sensor configurations, which is how we discriminate whether the sensor is mounted at a fixed place in the workcell or on the end-effector of the robot manipulator, respectively. Moreover, one of the main contributions of this thesis is a general hand-eye calibration approach that, thanks to a unified pose-graph optimization formulation, is also capable of handling inspection setups in which multiple sensors are involved (e.g., multi-camera networks). Finally, this thesis proposes a general method that takes advantage of a precise and accurate hand-eye calibration result to address the problem of Data Mapping for multi-purpose inspection robots. This approach has been applied in multiple inspection setups, ranging from the automotive to the aerospace and manufacturing industries. Most of the contributions presented in this thesis are available as open-source software packages. We believe that this will foster collaboration, enable precise repeatability of our experiments, and facilitate future research on the calibration of complex industrial robotic setups.
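
    As background for the hand-eye calibration problem discussed above, the sketch below illustrates the classical eye-in-hand formulation (solving AX = XB for the fixed camera-to-gripper transform) using OpenCV's calibrateHandEye. It is a minimal stand-in, not the thesis's unified pose-graph method, and the pose lists R_gripper2base, t_gripper2base, R_target2cam, and t_target2cam are assumed inputs collected at several robot poses.

# Minimal eye-in-hand calibration sketch (classical AX = XB); not the thesis's
# pose-graph approach. Assumes OpenCV >= 4.1 and NumPy. The four inputs are
# lists of 3x3 rotations and 3x1 translations, one entry per robot pose.
import cv2
import numpy as np

def calibrate_eye_in_hand(R_gripper2base, t_gripper2base,
                          R_target2cam, t_target2cam):
    # Solve for the fixed camera-to-gripper transform X in A_i X = X B_i.
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    X = np.eye(4)
    X[:3, :3] = R_cam2gripper
    X[:3, 3] = t_cam2gripper.ravel()
    return X  # homogeneous camera-to-gripper transform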

    Risk-aware Path and Motion Planning for a Tethered Aerial Visual Assistant in Unstructured or Confined Environments

    This research aims at developing path and motion planning algorithms for a tethered Unmanned Aerial Vehicle (UAV) to visually assist a teleoperated primary robot in unstructured or confined environments. The emerging state of practice for nuclear operations, bomb squads, disaster robots, and other domains with novel tasks or highly occluded environments is to use two robots: a primary and a secondary that acts as a visual assistant, overcoming the perceptual limitations of the primary robot's sensors by providing an external viewpoint. However, the benefits of using an assistant have been limited for at least three reasons: (1) users tend to choose suboptimal viewpoints; (2) only ground robot assistants are considered, ignoring the rapid evolution of small unmanned aerial systems for indoor flying; (3) introducing a whole crew for the second teleoperated robot is not cost-effective, may introduce further teamwork demands, and could therefore lead to miscommunication. This dissertation proposes to use an autonomous tethered aerial visual assistant to replace the secondary robot and its operating crew. Along with a pre-established theory of viewpoint quality based on affordances, this dissertation aims at defining and representing robot motion risk in unstructured or confined environments. Based on those theories, a novel high-level path planning algorithm is developed to enable risk-aware planning, which balances the tradeoff between viewpoint quality and motion risk in order to provide safe and trustworthy visual assistance flight. The planned flight trajectory is then realized on a tethered UAV platform. The perception and actuation are tailored to the tethered agent in the form of a low-level motion suite, including a novel tether-based localization model with negligible computational overhead, motion primitives for the tethered airframe based on position and velocity control, and two different approaches to negotiating the tether in complex, obstacle-occupied environments. The proposed research provides a formal reasoning of motion risk in unstructured or confined spaces, contributes to the field of risk-aware planning with a versatile planner, and opens up a new regime of indoor UAV navigation: tethered indoor flight, which ensures battery duration and provides a failsafe in case of vehicle malfunction. It is expected to increase teleoperation productivity and reduce costly errors in scenarios such as safe decommissioning and nuclear operations in the Fukushima Daiichi facility.
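
    To make the viewpoint-quality versus motion-risk tradeoff concrete, the sketch below shows a minimal risk-aware selection rule that scores candidate viewpoints by quality minus weighted path risk. It is an illustrative stand-in under assumed quality(), risk(), and candidates inputs, not the dissertation's planner.

# Illustrative risk-aware selection rule balancing viewpoint quality against
# motion risk; a hedged stand-in, not the dissertation's planner.
# `candidates` is an assumed list of (viewpoint, path-to-viewpoint) pairs,
# quality(v) scores a viewpoint in [0, 1], risk(path) accumulates per-step
# risk along the flight path, and `lam` weights the tradeoff.
def select_viewpoint(candidates, quality, risk, lam=0.5):
    best, best_score = None, float("-inf")
    for viewpoint, path in candidates:
        score = quality(viewpoint) - lam * risk(path)
        if score > best_score:
            best, best_score = (viewpoint, path), score
    return best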

    Human-robot interaction and computer-vision-based services for autonomous robots

    Imitation Learning (IL), or robot Programming by Demonstration (PbD), covers methods by which a robot learns new skills through human guidance and imitation. PbD takes its inspiration from the way humans learn new skills by imitation in order to develop methods by which new tasks can be transferred to robots. This thesis is motivated by the generic question of "what to imitate?", which concerns the problem of how to extract the essential features of a task. To this end, we adopt an Action Recognition (AR) perspective in order to allow the robot to decide what has to be imitated or inferred when interacting with a human. The proposed approach is based on a well-known method from natural language processing: namely, Bag of Words (BoW). This method is applied to large databases in order to obtain a trained model. Although BoW is a machine learning technique used in various fields of research, in action classification for robot learning it is far from accurate. Moreover, it focuses on the classification of objects and gestures rather than actions. Thus, in this thesis we show that the method is suitable in action classification scenarios for merging information from different sources or different trials. This thesis makes three contributions: (1) it proposes a general method for dealing with action recognition and thus contributes to imitation learning; (2) the methodology can be applied to large databases which include different modes of action capture; and (3) the method is applied specifically in a real international innovation project called Vinbot.
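
    As an illustration of the Bag of Words idea applied to action classification, the sketch below builds a k-means visual vocabulary over local descriptors, represents each sequence as a normalized word histogram, and trains a linear SVM with scikit-learn. The data layout (train_descs, labels) is an assumption, and this is not the thesis's implementation.

# Minimal bag-of-words action-classification sketch with scikit-learn;
# an illustration of the BoW idea only. `train_descs` is an assumed list of
# (n_i x d) local-descriptor arrays, one per training sequence, and `labels`
# gives the action class of each sequence.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bow_histogram(descs, vocab):
    words = vocab.predict(descs)                       # nearest visual word
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / max(hist.sum(), 1)                   # normalized histogram

def train_bow_classifier(train_descs, labels, n_words=100):
    vocab = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(train_descs))
    X = np.array([bow_histogram(d, vocab) for d in train_descs])
    clf = SVC(kernel="linear").fit(X, labels)
    return vocab, clf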

    Symbiotic interaction between humans and robot swarms

    Comprising a potentially large team of autonomous cooperative robots that locally interact and communicate with each other, robot swarms provide a natural diversity of parallel and distributed functionalities, high flexibility, potential for redundancy, and fault tolerance. The use of autonomous mobile robots is expected to increase in the future, and swarm robotic systems are envisioned to play important roles in tasks such as search and rescue (SAR) missions, transportation of objects, surveillance, and reconnaissance operations. To robustly deploy robot swarms in the field with humans, this research addresses the fundamental problems in the relatively new field of human-swarm interaction (HSI). Four core classes of problems have been addressed for proximal interaction between humans and robot swarms: interaction and communication, swarm-level sensing and classification, swarm coordination, and swarm-level learning. The primary contribution of this research aims to develop a bidirectional human-swarm communication system for non-verbal interaction between humans and heterogeneous robot swarms. The guiding field of application is SAR missions. The core challenges and issues in HSI include: How can human operators interact and communicate with robot swarms? Which interaction modalities can be used by humans? How can human operators instruct and command robots from a swarm? Which mechanisms can be used by robot swarms to convey feedback to human operators? Which types of feedback can swarms convey to humans? In this research, to start answering these questions, hand gestures have been chosen as the interaction modality for humans, since gestures are simple to use, easily recognized, and possess spatial-addressing properties. To facilitate bidirectional interaction and communication, a dialogue-based interaction system is introduced which consists of: (i) a grammar-based gesture language with a vocabulary of non-verbal commands that allows humans to efficiently provide mission instructions to swarms, and (ii) a swarm-coordinated multi-modal feedback language that enables robot swarms to robustly convey swarm-level decisions, status, and intentions to humans using multiple individual and group modalities. The gesture language allows humans to select and address single and multiple robots from a swarm, provide commands to perform tasks, specify spatial directions and application-specific parameters, and build iconic grammar-based sentences by combining individual gesture commands. Swarms convey different types of multi-modal feedback to humans using on-board lights, sounds, and locally coordinated robot movements. The swarm-to-human feedback conveys to humans the swarm's understanding of the recognized commands, allows swarms to assess their decisions (i.e., to correct mistakes made by humans in providing instructions, and errors made by swarms in recognizing commands), and guides humans through the interaction process. The second contribution of this research addresses swarm-level sensing and classification: How can robot swarms collectively sense and recognize hand gestures given as visual signals by humans? Distributed sensing, cooperative recognition, and decision-making mechanisms have been developed to allow robot swarms to collectively recognize visual instructions and commands given by humans in the form of gestures. These mechanisms rely on decentralized data fusion strategies and multi-hop message passing algorithms to robustly build swarm-level consensus decisions. Measures have been introduced in the cooperative recognition protocol which provide a trade-off between the accuracy of swarm-level consensus decisions and the time taken to build swarm decisions. The third contribution of this research addresses swarm-level cooperation: How can humans select spatially distributed robots from a swarm, and how can the robots understand that they have been selected? How can robot swarms be spatially deployed for proximal interaction with humans? With the introduction of spatially-addressed instructions (pointing gestures), humans can robustly address and select spatially-situated individuals and groups of robots from a swarm. A cascaded classification scheme is adopted in which the robot swarm first identifies the selection command (e.g., individual or group selection), and then the robots coordinate with each other to identify whether they have been selected. To obtain better views of gestures issued by humans, distributed mobility strategies have been introduced for the coordinated deployment of heterogeneous robot swarms (i.e., ground and flying robots) and for reshaping the spatial distribution of swarms. The fourth contribution of this research addresses the notion of collective learning in robot swarms. The questions that are answered include: How can robot swarms learn the hand gestures given by human operators? How can humans be included in the loop of swarm learning? How can robot swarms cooperatively learn as a team? Online incremental learning algorithms have been developed which allow robot swarms to learn individual gestures and grammar-based gesture sentences supervised by human instructors in real time. Humans provide different types of feedback (i.e., full or partial feedback) to swarms for improving swarm-level learning. To speed up the learning rate of robot swarms, cooperative learning strategies have been introduced which enable individual robots in a swarm to intelligently select locally sensed information and share (exchange) selected information with other robots in the swarm. The final contribution is a systemic one: it aims at building a complete HSI system for potential use in real-world applications by integrating the algorithms, techniques, mechanisms, and strategies discussed in the contributions above. The effectiveness of the global HSI system is demonstrated in the context of a number of interactive scenarios using emulation tests (i.e., performing simulations using gesture images acquired by a heterogeneous robotic swarm) and by performing experiments with real robots using both ground and flying robots.
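
    As a simplified picture of how a swarm could fuse local observations into a swarm-level decision, the sketch below iteratively averages per-robot class-probability vectors with neighbors until the robots agree on the most probable gesture class. It is a hedged stand-in for the decentralized data fusion and multi-hop message passing described above, with local_beliefs and neighbors as assumed inputs, and is not the thesis's protocol.

# Hedged sketch of swarm-level consensus on a gesture class via iterative
# neighbor averaging of per-robot class-probability vectors; an illustrative
# stand-in, not the cooperative recognition protocol described above.
# `local_beliefs` is an assumed dict {robot_id: probability vector} and
# `neighbors` maps each robot_id to the ids of robots within comms range.
import numpy as np

def swarm_consensus(local_beliefs, neighbors, iterations=10):
    beliefs = {r: np.asarray(b, dtype=float) for r, b in local_beliefs.items()}
    for _ in range(iterations):
        updated = {}
        for r, b in beliefs.items():
            stack = [b] + [beliefs[n] for n in neighbors[r]]
            avg = np.mean(stack, axis=0)
            updated[r] = avg / avg.sum()               # renormalize
        beliefs = updated
    # After repeated averaging the robots hold (approximately) the same
    # belief; each reports the most probable class as the swarm decision.
    return {r: int(np.argmax(b)) for r, b in beliefs.items()}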