1,421 research outputs found

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications into daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interaction and collaboration; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in AR and robotics research, offering insight into the recent state of the art and prospects for improvement.

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical, industrial & commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR: haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to the automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable telepresence and tactile interaction with virtual artefacts from either remote or simulated environments. Automation, machine learning, and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can serve as input to autonomous systems for customised training and automated difficulty levels that match individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One conclusion of this research is that an enhanced portable framework does not yet exist but is needed: combining the automation of these core technologies into a reusable automation framework for VR training would be beneficial.
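
    As a concrete illustration of the assessment-driven adaptation described above, the loop can be reduced to a controller that raises or lowers task difficulty from trainee scores. The Python sketch below is a minimal, hypothetical example: the class name, thresholds, and score scale are assumptions, not taken from any of the surveyed systems.

```python
# Minimal sketch of data-driven difficulty adaptation for VR training.
# Thresholds, class names, and the 0..1 scoring scale are illustrative assumptions.

class AdaptiveTrainer:
    def __init__(self, difficulty=1, min_level=1, max_level=5):
        self.difficulty = difficulty
        self.min_level = min_level
        self.max_level = max_level

    def update(self, assessment_score):
        """Raise or lower difficulty from a normalised trainee score in [0, 1]."""
        if assessment_score > 0.85 and self.difficulty < self.max_level:
            self.difficulty += 1   # trainee is comfortable: increase challenge
        elif assessment_score < 0.5 and self.difficulty > self.min_level:
            self.difficulty -= 1   # trainee is struggling: reduce challenge
        return self.difficulty


trainer = AdaptiveTrainer()
for score in [0.9, 0.92, 0.4, 0.7]:   # scores from successive training sessions
    level = trainer.update(score)
    print(f"score={score:.2f} -> difficulty level {level}")
```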

    Symbiotic human-robot collaborative assembly


    Smart Navigation in Surgical Robotics

    Minimally invasive surgery, and laparoscopic surgery in particular, has brought a major change in how surgical interventions in the abdomen are performed. Laparoscopic surgery has since evolved towards even less invasive techniques, such as Single Port Access Surgery. This technique consists of making a single incision through which the instruments and the laparoscopic camera are introduced via a single multi-port trocar. Its main advantages are a shorter hospital stay for the patient and better cosmetic results, since the trocar is usually inserted through the navel, leaving the scar hidden there. However, the fact that the instruments are introduced through the same trocar makes the intervention more complicated for the surgeon, who needs specific skills for this type of procedure. This thesis addresses the problem of navigating surgical instruments with teleoperated robotic platforms in single-port surgery. Specifically, it proposes a navigation method with a virtual remote centre of rotation that coincides with the insertion point of the instruments (the fulcrum point). This point is estimated from the forces exerted by the abdomen on the surgical instruments, measured by force sensors placed at the base of the instruments. Because these instruments also interact with soft tissue inside the abdomen, which would distort the estimate of the insertion point, a method is needed to detect this situation. To this end, a tissue-interaction detector based on hidden Markov models was trained to recognise four generic gestures. The thesis also proposes the use of haptic guidance to improve the surgeon's experience when using teleoperated robotic platforms; specifically, Learning from Demonstration is used to generate forces that can guide the surgeon through specific tasks. The proposed navigation method has been implemented on the CISOBOT surgical platform developed by the Universidad de Málaga. The experimental results validate both the proposed navigation method and the soft-tissue interaction detector. In addition, a preliminary study of the haptic guidance system was carried out using a generic peg-insertion task, and the experiments show that the proposed method is valid for this task and similar ones.
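
    The insertion-point estimation described above can be pictured with the underlying mechanics: if the abdominal wall applies a force at the fulcrum, the torque measured at the instrument base satisfies tau = r x F, so r can be recovered by least squares over several wrench samples. The Python sketch below illustrates that idea under ideal, noise-free assumptions; the function names and synthetic data are hypothetical and not taken from the CISOBOT implementation.

```python
# Minimal sketch of estimating the insertion (fulcrum) point from wrench
# measurements at the instrument base. Sample data are synthetic and illustrative.
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def estimate_fulcrum(forces, torques):
    """Least-squares estimate of the point r (sensor frame) where the abdominal
    wall applies force, using tau_i = r x F_i for each sample."""
    A = np.vstack([-skew(f) for f in forces])   # tau = r x F = -[F]_x r
    b = np.hstack(torques)
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r

# Synthetic check: forces whose lines of action pass through a known fulcrum.
true_r = np.array([0.0, 0.05, 0.30])            # metres, in the sensor frame
rng = np.random.default_rng(0)
forces = rng.normal(size=(20, 3))
torques = [np.cross(true_r, f) for f in forces]
print(estimate_fulcrum(forces, torques))        # ~ [0.0, 0.05, 0.30]
```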

    The development of a human-robot interface for industrial collaborative system

    Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, a number of manufacturing applications involve complex tasks and inconstant components, which prohibits the use of fully automated solutions in the foreseeable future. A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution as an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot performs simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of the human hands. Robots in such a system will operate as “intelligent assistants”. In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface requires effective means of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in close proximity. The system is developed in conjunction with a small-scale collaborative robot system which has been integrated using off-the-shelf components. The system should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI was developed using a combination of hardware integration and software development. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
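
    A gesture command interface of the kind described above ultimately reduces to mapping recognised gestures onto robot commands and reflecting the result back to the operator. The sketch below is a hypothetical illustration only; the gesture labels, command names, and the ConsoleRobot stand-in are assumptions, not the interface developed in this thesis.

```python
# Minimal sketch of a gesture-to-command mapping for a collaborative robot.
# Gesture labels, command names, and the ConsoleRobot stand-in are illustrative.

GESTURE_COMMANDS = {
    "open_palm": "pause",      # stop motion and wait for the operator
    "thumbs_up": "resume",     # continue the current task
    "point_left": "handover",  # bring the part to the handover position
    "fist": "retract",         # move clear of the shared workspace
}

class ConsoleRobot:
    """Stand-in for a real robot controller, used only to make the sketch runnable."""
    def show_status(self, message):
        print(f"[status display] {message}")
    def execute(self, command):
        print(f"[robot] executing '{command}'")

def handle_gesture(gesture, robot):
    """Translate a recognised gesture into a robot command and report the status."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        robot.show_status("UNRECOGNISED GESTURE")  # feedback the operator sees at a glance
        return
    robot.show_status(f"EXECUTING: {command.upper()}")
    robot.execute(command)

handle_gesture("open_palm", ConsoleRobot())
```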

    Robot learning from demonstration of force-based manipulation tasks

    One of the main challenges in robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such an interaction may be aimed at assisting, helping or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and user-friendly way to transfer knowledge from humans to robots. This dissertation addresses this topic and its application to an unexplored field, namely learning force-based manipulation tasks. In this kind of scenario, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, etc. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system is able to deal with force perceptions. The first issue this thesis tackles is extracting the input information that is relevant for learning the task at hand, also known as the "what to imitate?" problem. Here, the proposed solution takes into consideration that the robot actions are a function of sensory signals; in other words, the importance of each perception is assessed through its correlation with the robot movements. A mutual information analysis is used for selecting the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the perception selection module proposed here automatically chooses the data the robot needs to learn a given task. Having selected the relevant input information for the task, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance sequential information, uncertainty, constraints, etc. This is the next problem addressed in this thesis. Here, a probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of this framework are: (i) it is able to deal with the noise and uncertainty of force signals because of its probabilistic formulation; (ii) it exploits the sequential information embedded in the model for managing perceptual aliasing and time discrepancies; and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. Therefore, the resulting learning structure is able to robustly encode and reproduce different manipulation tasks. The thesis then goes a step further by proposing a novel whole framework for learning impedance-based behaviors from demonstrations. The key aspects here are that this new structure merges vision and force information to encode the data compactly, and it allows the robot to exhibit different behaviors by shaping its compliance level over the course of the task. This is achieved by a parametric probabilistic model whose Gaussian components are the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffness of the springs composing this system is estimated, allowing the robot to shape its compliance. This approach makes it possible to extend the learning paradigm to fields beyond common trajectory following. The proposed frameworks are tested in three scenarios, namely (a) the ball-in-box task, (b) drink pouring, and (c) a collaborative assembly, where the experimental results evidence the importance of using force perceptions as well as the usefulness and strengths of the methods.
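
    The perception-selection step described above (the "what to imitate?" problem) can be pictured as ranking candidate sensory channels by their mutual information with the demonstrated actions. The sketch below uses scikit-learn's mutual_info_regression on synthetic data purely as an illustration; the library choice, channel names, and data are assumptions, not the thesis implementation.

```python
# Minimal sketch of ranking sensory channels by mutual information with the
# demonstrated robot action. Data are synthetic and illustrative.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
n = 500
force_z = rng.normal(size=n)        # relevant: the action depends on it
torque_x = rng.normal(size=n)       # relevant
ambient_temp = rng.normal(size=n)   # irrelevant sensory channel
action = 0.8 * force_z - 0.5 * torque_x + 0.1 * rng.normal(size=n)

X = np.column_stack([force_z, torque_x, ambient_temp])
mi = mutual_info_regression(X, action)
for name, score in zip(["force_z", "torque_x", "ambient_temp"], mi):
    print(f"{name:>12}: MI = {score:.3f}")   # keep the channels with highest MI
```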

    Haptic feedback in teleoperation in Micro- and Nano-Worlds.

    Robotic systems have been developed to handle very small objects, but their use remains complex and necessitates long-duration training. Simulators, such as molecular simulators, can provide access to large amounts of raw data, but only highly trained users can interpret the results of such systems. Haptic feedback in teleoperation, which provides force feedback to an operator, appears to be a promising solution for interaction with such systems, as it allows intuitiveness and flexibility. However, several issues arise when implementing teleoperation schemes at the micro- and nanoscale, owing to the complex force fields that must be transmitted to users and the scaling differences between the haptic device and the manipulated objects. Major advances in this technology have been made in recent years. This chapter reviews the main systems in this area and highlights how some fundamental issues in teleoperation for micro- and nano-scale applications have been addressed. The chapter considers three types of teleoperation: (1) direct (manipulation of real objects); (2) virtual (use of simulators); and (3) augmented (combining real robotic systems and simulators). Remaining issues that must be addressed for further advances in teleoperation for micro- and nanoworlds are also discussed, including: (1) comprehension of the phenomena that dictate the behavior of very small objects (< 500 micrometers); and (2) design of intuitive 3-D manipulation systems. Design guidelines for realizing an intuitive haptic feedback teleoperation system at the micro- and nanoscale are proposed.
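
    The scaling problem mentioned above can be made concrete with the simplest direct coupling: master motion is scaled down to the slave, and the measured micro/nano-scale forces are scaled up to the haptic device. The gains and update function below are illustrative assumptions, not design values from the chapter.

```python
# Minimal sketch of a scaled bilateral coupling between a haptic master device
# and a micro-scale slave manipulator. Gain values are illustrative only.

POSITION_SCALE = 1e-6   # 1 cm of hand motion -> 10 nm of slave motion (example)
FORCE_SCALE = 1e6       # nanonewton-range interaction forces amplified for the hand

def teleoperation_step(master_position_m, slave_force_n):
    """One cycle of a direct (position-forward, force-backward) coupling."""
    slave_command_m = POSITION_SCALE * master_position_m   # scale motion down
    feedback_force_n = FORCE_SCALE * slave_force_n         # scale forces up
    return slave_command_m, feedback_force_n

# Example: 2 cm master displacement, 50 nN adhesion force measured at the tool tip.
cmd, fb = teleoperation_step(0.02, 50e-9)
print(f"slave command: {cmd:.2e} m, operator feedback: {fb:.2e} N")
```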

    A System for Human-Robot Teaming through End-User Programming and Shared Autonomy

    Many industrial tasks, such as sanding, installing fasteners, and wire harnessing, are difficult to automate due to task complexity and variability. We instead investigate deploying robots in an assistive role for these tasks, where the robot assumes the physical task burden and the skilled worker provides both the high-level task planning and the low-level feedback necessary to complete the task effectively. In this article, we describe the development of a system for flexible human-robot teaming that combines state-of-the-art methods in end-user programming and shared autonomy, and its implementation in sanding applications. We demonstrate the use of the system in two types of sanding tasks, situated in aircraft manufacturing, that highlight two potential workflows within the human-robot teaming setup. We conclude by discussing challenges and opportunities in human-robot teaming identified during the development, application, and demonstration of our system.
    Comment: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24), March 11-14, 2024, Boulder, CO, US
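
    One common way to realise the shared-autonomy side of such a system is to blend the worker's input with the robot's autonomous command into a single motion command; the sketch below shows a simple linear arbitration as an illustration. The blending rule, variable names, and sanding-flavoured example values are assumptions, not the authors' implementation.

```python
# Minimal sketch of shared-autonomy command blending. Values are illustrative.
import numpy as np

def blend_commands(human_velocity, robot_velocity, arbitration=0.5):
    """Linear arbitration between operator input and autonomous behaviour.

    arbitration = 1.0 -> full manual control; 0.0 -> full autonomy.
    """
    return arbitration * human_velocity + (1.0 - arbitration) * robot_velocity

# Example: worker nudges the tool sideways while the robot follows its planned coverage path.
human_v = np.array([0.02, 0.00, 0.00])    # m/s, operator correction
robot_v = np.array([0.00, 0.05, -0.01])   # m/s, planned coverage motion
print(blend_commands(human_v, robot_v, arbitration=0.3))
```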