
    Design, evaluation, and control of NeXus: a multiscale additive manufacturing platform with integrated 3D printing and robotic assembly.

    Additive manufacturing (AM) technology is an emerging approach to creating three-dimensional (3D) objects and has seen numerous applications in medical implants, transportation, aerospace, energy, consumer products, and other fields. Compared with manufacturing by forming and machining, additive manufacturing enables faster, more economical, and more efficient production of complex parts. However, additive manufacturing is limited in print strength and dimensional tolerance, and traditional 3D printing hardware platforms offer limited flexibility: most restrict both part geometry and the materials that can be processed. In addition, for multiscale and complex products, samples must be printed, fabricated, and transferred among different additive manufacturing platforms in different locations, which leads to high cost, long process times, and low product yield. This thesis investigates methods to design, evaluate, and control the NeXus, a novel custom robotic platform for multiscale additive manufacturing with integrated 3D printing and robotic assembly. The NeXus can be used to prototype miniature devices and systems, such as wearable MEMS sensor fabrics, microrobots for wafer-scale microfactories, tactile robot skins, next-generation energy storage (solar cells), nanostructure plasmonic devices, and biosensors. The NeXus has the flexibility to fixture, position, transport, and assemble components across a wide spectrum of length scales (macro-meso-micro-nano, 1 m to 100 nm) and provides unparalleled additive process capabilities such as 3D printing through both aerosol jetting and ultrasonic bonding and forming, thin-film photonic sintering, fiber loom weaving, and in-situ Micro-Electro-Mechanical System (MEMS) packaging and interconnect formation. The NeXus system has a footprint of around 4 m x 3.5 m x 2.4 m (X-Y-Z) and includes two industrial robotic arms, precision positioners, multiple manipulation tools, and additive manufacturing and packaging capabilities. The design of the NeXus platform adopted Lean Robotic Micromanufacturing (LRM) design principles and simulation tools to mitigate development risks. Because the NeXus has more than 50 degrees of freedom (DOF) across its instruments, precise evaluation of the custom robots and positioners is indispensable before employing them in complex, multiscale applications. Integrating and controlling multifunctional instruments is a further challenge in the NeXus system due to differing communication protocols and compatibility constraints. Thus, the NeXus system is controlled by a National Instruments (NI) LabVIEW real-time operating system (RTOS) running on an NI PXI controller with a LabVIEW State Machine User Interface (SMUI), and was programmed to synchronize the various instruments and sequence the additive manufacturing processes for different tasks. The operation sequence of each robot and its associated tools must be organized safely to avoid collisions and tool damage during robot motion. This thesis also describes in detail two demonstrators realized with the NeXus system: skin tactile sensor arrays and electronic textiles. The skin tactile sensor is fabricated on the automated NeXus manufacturing line, combining pattern design, precise calibration, and synchronization of an Aerosol Jet printer with a custom positioner. The electronic textiles are fabricated by combining cleanroom MEMS fabrication techniques with the collaboration of multiple NeXus robots, including two industrial robotic arms and a custom high-precision positioner for the deterministic alignment process.
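
    The synchronization and sequencing logic behind the LabVIEW State Machine User Interface can be illustrated in a language-agnostic way. The following Python sketch is a hypothetical, much-simplified sequencer for one NeXus task: it steps through named process states, advances only when the corresponding instrument driver reports success, and drops to a safe state on any failure. The state names and the instrument driver classes are assumptions made for illustration, not the thesis's LabVIEW implementation.

```python
from enum import Enum, auto


class State(Enum):
    CALIBRATE_POSITIONER = auto()
    PRINT_PATTERN = auto()
    SINTER = auto()
    SAFE_STOP = auto()
    DONE = auto()


class StateMachineSequencer:
    """Minimal sequencer: one state at a time, advance only when the
    instrument driver reports success, otherwise drop to SAFE_STOP."""

    def __init__(self, positioner, printer, sinterer):
        # Hypothetical instrument drivers, each exposing run(step) -> bool.
        self.devices = {
            State.CALIBRATE_POSITIONER: positioner,
            State.PRINT_PATTERN: printer,
            State.SINTER: sinterer,
        }
        self.sequence = [State.CALIBRATE_POSITIONER,
                         State.PRINT_PATTERN,
                         State.SINTER]

    def run(self):
        for state in self.sequence:
            if not self.devices[state].run(state.name):
                print(f"{state.name} failed -> SAFE_STOP")
                return State.SAFE_STOP
            print(f"{state.name} completed")
        return State.DONE


class DummyDevice:
    def run(self, step_name):
        # Stand-in for a real instrument driver (positioner, aerosol jet
        # printer, photonic sintering lamp); always reports success here.
        return True


if __name__ == "__main__":
    seq = StateMachineSequencer(DummyDevice(), DummyDevice(), DummyDevice())
    print("final state:", seq.run().name)
```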

    High-precision grasping and placing for mobile robots

    This work presents a manipulation system for multiple labware items in life science laboratories using H20 mobile robots. The H20 robot is equipped with a Kinect V2 sensor to identify the required labware on the workbench and estimate its position. Local feature recognition based on the SURF algorithm is used, applied both to the labware to be grasped and to the workbench holder. Different grippers and labware containers are designed to handle labware of different weights and to realize safe transportation.
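
    As a rough illustration of the SURF-based recognition step (not the system's actual code), the sketch below matches a stored labware template against a workbench image using OpenCV's SURF detector and Lowe's ratio test. It assumes an opencv-contrib build with the non-free xfeatures2d module enabled; the image file names and the Hessian threshold are illustrative.

```python
import cv2

# SURF lives in the non-free xfeatures2d module of opencv-contrib builds.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

template = cv2.imread("labware_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
scene = cv2.imread("workbench_frame.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file
if template is None or scene is None:
    raise SystemExit("provide a labware template and a workbench image")

# Detect keypoints and compute descriptors in both images.
kp_t, des_t = surf.detectAndCompute(template, None)
kp_s, des_s = surf.detectAndCompute(scene, None)

# k-NN matching with Lowe's ratio test to keep only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des_t, des_s, k=2):
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])

print(f"{len(good)} good matches; enough matches suggest the labware is present")
```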

    Aspects of User Experience in Augmented Reality


    Development of the huggable social robot Probo: on the conceptual design and software architecture

    This dissertation presents the development of a huggable social robot named Probo. Probo embodies a stuffed imaginary animal, providing a soft touch and a huggable appearance. Probo's purpose is to serve as a multidisciplinary research platform for human-robot interaction focused on children. As a social robot, Probo is classified as a social interface supporting non-verbal communication, and its social skills are thereby limited to a reactive level. To close the gap with higher levels of interaction, an innovative system for shared control with a human operator is introduced. The software architecture defines a modular structure to incorporate all systems into a single control center. This control center is accompanied by a 3D virtual model of Probo, which simulates all motions of the robot and provides visual feedback to the operator. Additionally, the model allows us to advance user testing and evaluation of newly designed systems. The robot reacts to basic input stimuli that it perceives during interaction. These input stimuli, which can be referred to as low-level perceptions, are derived from vision analysis, audio analysis, touch analysis, and object identification. The stimuli influence the attention and homeostatic systems, which are used to define the robot's point of attention, current emotional state, and corresponding facial expression. The recognition of these facial expressions has been evaluated in various user studies. To evaluate the collaboration of the software components, a social interactive game for children, Probogotchi, has been developed. To facilitate interaction with children, Probo has an identity and a corresponding history. Safety is ensured through Probo's soft embodiment and intrinsically safe actuation systems. To convey the illusion of life in a robotic creature, tools for the creation and management of motion sequences are put into the hands of the operator. All motions generated by operator-triggered systems are combined with the motions originating from the autonomous reactive systems; the resulting motion is then smoothed and transmitted to the actuation systems. With future applications to come, Probo is an ideal platform for creating a friendly companion for hospitalised children.
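
    The final step described above, blending operator-triggered motion with the autonomous reactive motion and smoothing the result before it reaches the actuation systems, can be sketched as a simple per-joint filter. The blend weight and smoothing factor below are illustrative assumptions, not values from the dissertation.

```python
def blend_and_smooth(operator_targets, reactive_targets, weight=0.6, alpha=0.2):
    """Blend two per-joint target sequences and low-pass filter the result.

    operator_targets, reactive_targets: joint angles, one value per control
    tick. weight favours the operator input; alpha is the exponential-
    smoothing factor applied before the value is sent to the actuators.
    (Both constants are illustrative assumptions.)
    """
    smoothed = []
    previous = None
    for op, re in zip(operator_targets, reactive_targets):
        blended = weight * op + (1.0 - weight) * re
        previous = blended if previous is None else alpha * blended + (1.0 - alpha) * previous
        smoothed.append(previous)
    return smoothed


# Example: the operator raises an eyebrow joint while the reactive system holds it low.
print(blend_and_smooth([0, 10, 20, 30], [0, 0, 5, 5]))
```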

    Distributed framework for a multi-purpose household robotic arm

    Final degree project carried out in collaboration with the Institut de Robòtica i Informàtica Industrial. The concept of household robotic servants has been in our minds for ages, and domestic appliances are far more robotised than they used to be. At present, manufacturers are starting to introduce small, human-interactive household robots to the market. Any human-interactive device has safety, durability, and simplicity constraints, which are especially strict when it comes to robots. Indeed, we are still far from a multi-purpose intelligent household robot, but human-interactive robots and artificial intelligence research have evolved considerably, and demonstration prototypes are a proof of what can be done. This project contributes to research in human-interactive robots, as the robotic arm and hand used are specially designed for human-interactive applications. The present study provides a distributed framework for an arm and a hand based on the robotics YARP protocol, using the WAM arm and the BarrettHand, as well as a basic modular client application complemented with vision. Firstly, two device drivers and a network interface are designed and implemented to control the WAM arm and the BarrettHand over the network. The drivers allow abstract access to each device, providing three ports: a command requests port, a state requests port, and an asynchronous replies port. Secondly, each driver is encapsulated by YARP devices that publish real-time monitoring feedback and motion control to the network through what is called a network wrapper. In particular, the network wrapper for the WAM arm and BarrettHand provides a state port, a command port, a Remote Procedure Call (RPC) port, and an asynchronous notifications port. The state port provides the WAM position and orientation feedback at 50 Hz, which represents a maximum blindness of one centimetre. This first part of the project sets the foundations of a distributed, complete robot, whose design enables processing and power payload to be shared by different workstations. Moreover, users are able to work with the robot remotely over Ethernet and wireless through a clear, understandable local interface within YARP. In addition to the distributed robotic framework, a client software framework with vision is also supplied. The client framework establishes a general software shell for further development and is organized into the basic, separate robotic branches: control, vision, and planning. The vision module supports distributed image grabbing for mobile robotics and shared memory for fixed, local vision. In order to incorporate environment interaction and robot autonomy with the planner, hand-eye transformation matrices have been obtained to perform object grasping and manipulation. The image processing is based on OpenCV libraries and provides object recognition with Scale Invariant Feature Transform (SIFT) feature matching, the Hough transform, and polygon approximation algorithms. Grasping and path planning use pre-defined grasps which take into account the size, shape, and orientation of the target objects. The proof-of-concept applications feature a household robotic arm with the ability to tidy randomly distributed common kitchen objects to specified locations, with real-time robot monitoring and basic control. The device modularity introduced in this project, a philosophy of decoupling communication, local device access, and the components themselves, was successful. Thanks to the abstract access and decoupling, the demonstration applications provided were easily deployed to test the arm's performance and its remote control and monitoring. Moreover, both resulting frameworks are arm-independent, and the design is currently being adopted by other projects' devices within the IRI.
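
    To make the port layout concrete, here is a hypothetical, much-reduced sketch of a state publisher in the spirit of the WAM network wrapper, written against the YARP 3.x Python bindings (the actual framework is a C++ device-driver design). The port name, the placeholder pose function, and the loop structure are illustrative; the 50 Hz rate matches the feedback rate stated above.

```python
import time
import yarp

yarp.Network.init()

# State port of the (hypothetical) network wrapper: streams position and
# orientation feedback. The real wrapper additionally exposes command,
# RPC, and asynchronous notification ports.
state_port = yarp.BufferedPortBottle()
state_port.open("/wam/state:o")  # illustrative port name


def read_arm_pose():
    # Placeholder for the driver call that queries the WAM arm.
    return [0.4, 0.0, 0.3, 0.0, 0.0, 0.0]  # x, y, z, roll, pitch, yaw


try:
    while True:
        bottle = state_port.prepare()
        bottle.clear()
        for value in read_arm_pose():
            bottle.addFloat64(value)
        state_port.write()
        time.sleep(0.02)  # ~50 Hz state feedback, as described above
except KeyboardInterrupt:
    state_port.close()
    yarp.Network.fini()
```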

    Application-driven visual computing towards Industry 4.0 (2018)

    245 p. The thesis gathers contributions in three fields: 1. Interactive Virtual Agents (IVAs): autonomous, modular, scalable, ubiquitous, and engaging for the user. These IVAs can interact with users in a natural way. 2. Immersive VR/AR environments: VR for production planning, product design, process simulation, testing, and verification. The Virtual Operator shows how VR and co-bots can work together in a safe environment, while in the Augmented Operator, AR presents relevant information to the worker in a non-intrusive way. 3. Interactive management of 3D models: online management and visualization of multimedia CAD models through automatic conversion of CAD models to the Web. Web3D technology enables visualization of and interaction with these models on low-power mobile devices. In addition, these contributions have made it possible to analyse the challenges posed by Industry 4.0. The thesis has contributed a proof of concept for some of those challenges in human factors, simulation, visualization, and model integration.