
    An adaptive compliance Hierarchical Quadratic Programming controller for ergonomic human–robot collaboration

    This paper proposes a novel Augmented Hierarchical Quadratic Programming (AHQP) framework for multi-tasking control in Human-Robot Collaboration (HRC) that integrates human-related parameters to optimize ergonomics. The aim is to combine parameters typical of industrial applications (e.g. cycle times, productivity) with those of human comfort (e.g. ergonomics, preference) to identify an optimal trade-off. The augmentation removes the dependency on a fixed end-effector reference trajectory, which instead becomes part of the optimization variables and can be used to define a feasible workspace region in which physical interaction can occur. We then demonstrate that integrating the proposed AHQP into HRC permits the addition of human ergonomics and preference. To achieve this, we develop a human ergonomics function based on the mapping of an ergonomics score, compatible with the AHQP formulation. This makes it possible to identify, at the control level, the optimal Cartesian pose that satisfies the active objectives and constraints, which are now linked to human ergonomics. In addition, we build an adaptive compliance framework that integrates both human preferences and intentions, which we finally test in several collaborative experiments using the redundant MOCA robot. Overall, we achieve improved human ergonomics and health conditions, aiming at a potential reduction of work-related musculoskeletal disorders.
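The abstract does not reproduce the AHQP formulation itself, but the core idea behind hierarchical task control, that lower-priority objectives are resolved only where they do not disturb higher-priority ones, can be sketched with a generic two-level lexicographic least-squares solve using nullspace projection. This is a standard textbook construction, not the authors' actual controller; the task matrices `J1`, `J2` and targets `b1`, `b2` are illustrative placeholders:

```python
import numpy as np

def prioritized_solve(J1, b1, J2, b2):
    """Two-level prioritized (lexicographic) least-squares.

    Task 1 (J1 x = b1) is solved first; task 2 (J2 x = b2) is then
    resolved only within the nullspace of task 1, so it can never
    degrade the higher-priority residual.
    """
    # Best least-squares solution for the primary task.
    x1 = np.linalg.pinv(J1) @ b1
    # Projector onto the nullspace of J1: directions that leave
    # the primary task untouched.
    N1 = np.eye(J1.shape[1]) - np.linalg.pinv(J1) @ J1
    # Correct toward the secondary target using only those directions.
    return x1 + np.linalg.pinv(J2 @ N1) @ (b2 - J2 @ x1)
```

With a redundant system (more variables than primary-task rows), both levels can often be satisfied exactly; with conflicting tasks, the secondary one is only approximated, which is the property hierarchical QP controllers enforce level by level.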

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework for a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.

    sCAM: An Untethered Insertable Laparoscopic Surgical Camera Robot

    Fully insertable robotic imaging devices represent a promising future for minimally invasive laparoscopic vision. Emerging research efforts in this field have resulted in several proof-of-concept prototypes. One common drawback of these designs derives from their clumsy tethering wires, which not only cause operational interference but also reduce camera mobility. Meanwhile, these insertable laparoscopic cameras are manipulated without any pose information or haptic feedback, which results in open-loop motion control and raises concerns about surgical safety caused by inappropriate use of force. This dissertation proposes, implements, and validates an untethered insertable laparoscopic surgical camera (sCAM) robot. Contributions presented in this work include: (1) feasibility of an untethered fully insertable laparoscopic surgical camera, (2) camera-tissue interaction characterization and force sensing, (3) pose estimation, visualization, and feedback with the sCAM, and (4) robotic-assisted closed-loop laparoscopic camera control. Borrowing the principle of spherical motors, camera anchoring and actuation are achieved through transabdominal magnetic coupling in a stator-rotor manner. To avoid tethering wires, laparoscopic vision and control communication are realized with dedicated wireless links running on onboard power. A non-invasive indirect approach is proposed to provide real-time camera-tissue interaction force measurement, which, assisted by camera-tissue interaction modeling, predicts stress distribution over the tissue surface. Meanwhile, the camera pose is remotely estimated and visualized using complementary filtering based on onboard motion sensing.
Facilitated by the force measurement and pose estimation, robotic-assisted closed-loop control has been realized in a double-loop control scheme with shared autonomy between surgeons and the robotic controller. The sCAM brings robotic laparoscopic imaging one step further toward less invasiveness and more dexterity. Initial ex vivo test results have verified the functions of the implemented sCAM design and the proposed force measurement and pose estimation approaches, demonstrating the technical feasibility of a tetherless insertable laparoscopic camera. Robotic-assisted control has shown its potential to free surgeons from low-level, intricate camera manipulation workload and to improve precision and intuitiveness in laparoscopic imaging.
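The abstract attributes the remote pose estimation to complementary filtering on onboard motion sensing. The dissertation's actual filter is not given here, but a minimal single-axis tilt version of the technique illustrates the idea: the integrated gyroscope rate is trusted at high frequency and the (noisy but drift-free) accelerometer-derived angle at low frequency. The gain `alpha` and the sensor inputs below are illustrative assumptions, not the sCAM's parameters:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step of a complementary filter for tilt estimation.

    angle       -- previous angle estimate (rad)
    gyro_rate   -- angular rate from the gyroscope (rad/s)
    accel_angle -- angle inferred from the accelerometer (rad)
    dt          -- time step (s)
    alpha       -- blend gain: high-pass weight on the gyro path
    """
    # Gyro path: propagate the previous estimate by the measured rate.
    # Accel path: pull the estimate toward the absolute reference.
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Run in a loop at the IMU rate, the gyro term tracks fast motion while the small accelerometer weight slowly cancels the gyro's integration drift.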

    Medical Robotics

    The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows for unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will no doubt expand over the next decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery. In MIS, the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery times. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results and practical experiences in this expanding area. Many chapters in the book concern advanced research in this growing field. The book provides critical analysis of clinical trials and assessment of the benefits and risks of the application of these technologies. This book is certainly a small sample of the research activity on Medical Robotics going on around the globe as you read it, but it surely covers a good deal of what has been done in the field recently, and as such it works as a valuable source for researchers interested in the involved subjects, whether they are currently “medical roboticists” or not.

    Development and assessment of a contactless 3D joystick approach to industrial manipulator gesture control

    This paper explores a novel design of ergonomic gesture control with visual feedback for the UR3 collaborative robot, aiming to allow users with little to no familiarity with robots to complete basic tasks and programming. The principle behind the design mirrors that of a 3D joystick but utilises the Leapmotion device to track the user's hands, removing any need for a physical joystick or buttons. The Rapid Upper Limb Assessment (RULA) ergonomic tool was used to inform the design and ensure the system was safe for long-term use. The developed system was assessed using the RULA tool for an ergonomic score and through an experiment in which 19 voluntary participants completed a basic task with both the gesture system and the UR3's RTP (Robot Teach Pendant), then filled out SUS (System Usability Scale) questionnaires to compare the usability of the two systems. The task involved controlling the robot to pick up a pipe and insert it into a series of slots of decreasing diameter, allowing both the speed and accuracy of each system to be compared. The experiment found that even those with no previous robot experience were able to complete the tasks after only a brief description of how the gesture system works. Despite beating the RTP's ergonomic score, the system narrowly lost on average usability scores. However, as a contactless gesture system it has other advantages over the RTP, and through this experiment many potential improvements were identified, paving the way for future work assessing the significance of including the visual feedback and comparing this system against other gesture-based systems.
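The SUS questionnaire used above is scored with a fixed published rule: ten items rated 1 to 5, where odd-numbered (positively worded) items contribute `response - 1`, even-numbered (negatively worded) items contribute `5 - response`, and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch of that scoring:

```python
def sus_score(responses):
    """Score one SUS questionnaire: a list of 10 ratings on a 1-5 scale.

    Odd items (positive statements):  item score = response - 1
    Even items (negative statements): item score = 5 - response
    The item scores (each 0-4) sum to 0-40; scaling by 2.5 maps
    the result onto a 0-100 usability scale.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, strong agreement (5) with every positive item and strong disagreement (1) with every negative item yields the maximum score of 100.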

    A socio-technical approach for assistants in human-robot collaboration in industry 4.0

    The introduction of disruptive Industry 4.0 technologies into the workplace, integrated through human cyber-physical systems, confronts operators with new challenges. These are reflected in increased physical, sensory, and cognitive demands on the operator's capabilities. In this research, the cognitive demands are of greatest interest. From this perspective, assistants are presented as a possible solution, understood not as tools but as sets of functions that amplify human capabilities: exoskeletons and collaborative robots for physical capabilities, virtual and augmented reality for sensory capabilities, and chatbots and softbots for cognitive capabilities. This raises the questions: How can Operator 4.0 assistance systems be developed in the context of industrial manufacturing? In which capacities does the operator need more assistance? Starting from the current systematization paradigm, different approaches are used within the context of the Industry 4.0 workspace. The functional resonance analysis method (FRAM) is used to model the workspace from a sociotechnical-system approach, in which the relationships between the components are as important as the functions to be carried out by the human-robot team. Using simulators for both robots and robotic systems, the variability of the human-robot team's behavior is analyzed. Furthermore, from the perspective of cognitive systems engineering, the workspace can be studied as a joint cognitive system, where cognition is understood as distributed, in a symbiotic relationship between the human and technological agents.
The implementation of a case study as a human-robot collaborative workspace makes it possible to evaluate the performance of the human-robot team, the impact on the operator's cognitive abilities, and the level of collaboration achieved, through a set of metrics and methods proven in other areas, such as cognitive systems engineering, human-machine interaction, and ergonomics. We conclude by discussing the findings and the outlook regarding future research questions and possible developments.

    Development of an intelligent personal assistant to empower operators in industry 4.0 environments

    Double-degree Master's dissertation with UTFPR - Universidade Tecnológica Federal do Paraná. Industry 4.0 brings a high level of automation to industrial environments and changes the way companies operate, both in operational aspects and in human relations. It is important to define the role of the millions of operators affected by this new socioeconomic paradigm, integrating new technologies and empowering the workforce to take advantage of aspects such as the flexibility and versatility that human operators bring to production lines. To advance this objective, this work proposes the development of an intelligent personal assistant, using concepts from human-in-the-loop cyber-physical systems and context awareness, to assist operators during manufacturing tasks, providing the information necessary to carry out operations and verifying their accuracy in order to inform operators about possible errors. The implementation is divided into two parts. The first part focuses on an application that supports real-time operations found in industry, such as pick-and-place in warehouses and the assembly of complex equipment on an assembly line. Through an interface, the instruction is given and, using artificial vision techniques with images from an IntelRealsense camera, the system verifies whether the operation is being performed correctly. This information is combined in a context-awareness algorithm, fulfilling the requirement of an intelligent personal assistant and providing feedback to the operator so that tasks are performed efficiently and with a lower incidence of errors. The second part covers the training of these operators in an immersive environment through virtual reality equipment such as the Oculus Go.
The immersive scenario, developed in Unity3D, uses the real workbench as a model, making it possible to perform this training in any environment and removing the need to use real equipment, which could be damaged by the user's inexperience. The results present the validation tests performed in these two parts, commenting on the strengths, challenges and failures found in the system in general. These results are also qualitatively compared with traditional applications of the proposed case studies in order to demonstrate the fulfillment of the objectives proposed in this work. Finally, a usability test is presented, which provides data on weak points in the user experience for possible improvements in future work.

    An integrated augmented reality method to assembly simulation and guidance

    Doctor of Philosophy (Ph.D.) thesis.

    Smart Technologies for Precision Assembly

    This open access book constitutes the refereed post-conference proceedings of the 9th IFIP WG 5.5 International Precision Assembly Seminar, IPAS 2020, held virtually in December 2020. The 16 revised full papers and 10 revised short papers, presented together with 1 keynote paper, were carefully reviewed and selected from numerous submissions. The papers address topics such as assembly design and planning; assembly operations; assembly cells and systems; human-centred assembly; and assistance methods in assembly.