
    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotics systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotic research, offering insights into the recent state of the art and prospects for improvement.

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in the control and collaboration of manipulation task behaviors. However, this remains a significant challenge given that many WIMP-style tools require superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has nonetheless garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six-degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates; and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and perform teleoperative control.
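
    The abstract's task specification as "an unordered list of goal predicates" can be made concrete with a small sketch. This is purely illustrative: the predicate and object names below are hypothetical and do not reproduce the thesis's actual interface.

```python
# Illustrative sketch of an unordered goal-predicate specification for a
# 'Blocks World' pick-and-place task; names are hypothetical, not the
# thesis's actual API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Predicate:
    name: str    # e.g. "on", "at"
    args: tuple  # grounded semantic 'Object'/'Location' symbols

# Unordered goal: the planner may satisfy these predicates in any feasible order.
goal = {
    Predicate("on", ("red_block", "blue_block")),
    Predicate("at", ("blue_block", "table_corner")),
    Predicate("gripper_empty", ()),
}

def satisfied(state: set, goal: set) -> bool:
    """The task is complete once every goal predicate holds in the state."""
    return goal <= state
```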

    Intuitive Instruction of Industrial Robots : A Knowledge-Based Approach

    With more advanced manufacturing technologies, small and medium-sized enterprises can compete with low-wage labor by providing customized and high-quality products. For small production series, robotic systems can provide a cost-effective solution. However, for robots to be able to perform on par with human workers in manufacturing industries, they must become flexible and autonomous in their task execution, and swift and easy to instruct. This will enable small businesses with short production series or highly customized products to use robot coworkers without consulting expert robot programmers. The objective of this thesis is to explore programming solutions that can reduce the programming effort of sensor-controlled robot tasks. The robot motions are expressed using constraints, and multiple simple constrained motions can be combined into a robot skill. The skill can be stored in a knowledge base together with a semantic description, which enables reuse and reasoning. The main contributions of the thesis are 1) the development of ontologies for knowledge about robot devices and skills, 2) a user interface that provides simple programming of dual-arm skills for non-experts and experts, 3) a programming interface for task descriptions in unstructured natural language in a user-specified vocabulary, and 4) an implementation where low-level code is generated from the high-level descriptions. The resulting system greatly reduces the number of parameters exposed to the user, is simple to use for non-experts, and reduces the programming time for experts by 80%. The representation is described on a semantic level, which means that the same skill can be used on different robot platforms. The research is presented in seven papers, the first describing the knowledge representation and the second the knowledge-based architecture that enables skill sharing between robots. The third paper presents the translation from high-level instructions to low-level code for force-controlled motions. The following two papers evaluate the simplified programming prototype for non-expert and expert users. The last two present how program statements are extracted from unstructured natural language descriptions.
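
    To make the skill representation concrete, here is a minimal sketch assuming a skill is a sequence of constrained motions plus a semantic description stored for reuse; the field names are illustrative and do not reproduce the thesis's ontologies.

```python
# Minimal sketch: a skill as constrained motions plus a semantic description.
# Field names are illustrative assumptions, not the thesis's actual ontology.
from dataclasses import dataclass, field

@dataclass
class ConstrainedMotion:
    frame: str        # task frame the constraint is expressed in
    constraint: str   # e.g. "force_z = 5 N" or "velocity_z = 0.02 m/s"
    termination: str  # condition ending this motion segment

@dataclass
class Skill:
    name: str
    semantics: dict                              # annotation enabling reuse/reasoning
    motions: list = field(default_factory=list)  # composed constrained motions

peg_insert = Skill(
    name="peg_in_hole",
    semantics={"action": "insert", "object": "peg", "target": "hole"},
    motions=[
        ConstrainedMotion("tool", "velocity_z = 0.02 m/s", "contact_detected"),
        ConstrainedMotion("tool", "force_z = 5 N", "depth >= 10 mm"),
    ],
)
```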

    SurfaceCast: Ubiquitous, Cross-Device Surface Sharing

    Real-time online interaction is the norm today. Tabletops and other dedicated interactive surface devices with direct input and tangible interaction can enhance remote collaboration and open up new interaction scenarios based on mixed physical/virtual components. However, they are only available to a small subset of users, as they usually require identical bespoke hardware for every participant, are complex to set up, and need custom scenario-specific applications. We present SurfaceCast, a software toolkit designed to merge multiple distributed, heterogeneous end-user devices into a single, shared mixed-reality surface. Supported devices include regular desktop and laptop computers, tablets, and mixed-reality headsets, as well as projector-camera setups and dedicated interactive tabletop systems. This device-agnostic approach provides a fundamental building block for the exploration of a far wider range of usage scenarios than previously feasible, including future clients using our provided API. In this paper, we discuss the software architecture of SurfaceCast, present a formative user study and a quantitative performance analysis of our framework, and introduce five example application scenarios which we enhance through the multi-user and multi-device features of the framework. Our results show that the hardware- and content-agnostic architecture of SurfaceCast can run on a wide variety of devices with sufficient performance and fidelity for real-time interaction.
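
    Since the abstract mentions an API for future clients without showing it, the following is a hypothetical sketch only, assuming a client that normalizes local input and forwards it to a shared surface session; the host, port, and message format are all assumptions.

```python
# Hypothetical client sketch: SurfaceCast's real API is not shown in the
# abstract, so every name below is an assumption about what a
# device-agnostic client might look like.
import json
import socket

def send_touch_event(sock: socket.socket, x: float, y: float) -> None:
    """Forward a normalized local touch point to the shared surface session."""
    msg = json.dumps({"type": "touch", "x": x, "y": y}).encode()
    sock.sendall(len(msg).to_bytes(4, "big") + msg)  # length-prefixed frame

# Coordinates normalized to [0, 1] let heterogeneous devices (tablet,
# tabletop, headset) address one logical surface space.
with socket.create_connection(("surfacecast.example.org", 9000)) as sock:
    send_touch_event(sock, 0.25, 0.75)
```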

    Human-Robot Collaborations in Industrial Automation

    Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer's door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT). These engineers will also need to design how these connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations.

    A socio-technical approach for assistants in human-robot collaboration in industry 4.0

    The introduction of disruptive Industry 4.0 technologies in the workplace, integrated through human cyber-physical systems, confronts operators with new challenges. These are reflected in increased demands on the operator's physical, sensory, and cognitive capabilities. In this research, the cognitive demands are of greatest interest. From this perspective, assistants are presented as a possible solution, not as tools but as sets of functions that amplify human capabilities: exoskeletons and collaborative robots for physical capabilities, virtual and augmented reality for sensory capabilities, and chatbots and softbots for cognitive capabilities. The need then arises to ask: How can Operator 4.0 assistance systems be developed in the context of industrial manufacturing? In which capacities does the operator need the most assistance? Starting from the current systematization paradigm, different approaches are used within the context of the Industry 4.0 workspace. The functional resonance analysis method (FRAM) is used to model the workspace from a sociotechnical-system approach, in which the relationships between the components, i.e., the functions carried out by the human-robot team, are the most important aspect. Using simulators for both robots and robotic systems, the behavioral variability of the human-robot team is analyzed. Furthermore, from the perspective of cognitive systems engineering, the workspace can be studied as a joint cognitive system, where cognition is understood as distributed, in a symbiotic relationship between the human and technological agents. The implementation of a case study as a human-robot collaborative workspace allows evaluating the performance of the human-robot team, the impact on the operator's cognitive abilities, and the level of collaboration achieved in the human-robot team, through a set of metrics and methods proven in other areas, such as cognitive systems engineering, human-machine interaction, and ergonomics. We conclude by discussing the findings and the outlook regarding future research questions and possible developments.
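
    As a concrete illustration of the FRAM modeling step, the sketch below encodes one workspace function with FRAM's six standard aspects (input, output, precondition, resource, time, control); the example function and its couplings are invented for illustration and are not the paper's actual model.

```python
# Sketch of a FRAM function with its six standard aspects; the concrete
# human-robot workspace function below is illustrative, not the paper's model.
from dataclasses import dataclass, field

@dataclass
class FramFunction:
    name: str
    inputs: list = field(default_factory=list)         # what the function transforms
    outputs: list = field(default_factory=list)        # what it produces
    preconditions: list = field(default_factory=list)  # what must hold beforehand
    resources: list = field(default_factory=list)      # what it consumes or needs
    time: list = field(default_factory=list)           # temporal constraints
    control: list = field(default_factory=list)        # what supervises or regulates it

pick_part = FramFunction(
    name="robot picks part",
    inputs=["part available on tray"],
    outputs=["part at handover point"],
    resources=["collaborative robot", "gripper"],
    control=["safety monitor"],
)
# Couplings (and hence variability propagation) arise wherever one
# function's output is another function's input or precondition.
```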

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
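
    As a rough illustration of subtask-level DRL, the sketch below trains a policy for a navigation subtask in a gym-style simulator. "LumenNav-v0" is a hypothetical environment id, and PPO is one possible algorithm; the thesis's actual environments and algorithms may differ.

```python
# Hedged sketch of subtask-level DRL training; "LumenNav-v0" is a
# hypothetical environment id standing in for a simulated intraluminal
# navigation subtask, and PPO is only one plausible algorithm choice.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("LumenNav-v0")         # hypothetical navigation environment
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=200_000)  # train the subtask policy in simulation

# The surgeon retains supervisory control: the learned policy only proposes
# low-level gestures, which a safety layer can veto before execution.
obs, info = env.reset()
action, _ = model.predict(obs, deterministic=True)
```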

    Efficient and intuitive teaching of redundant robots in task and configuration space

    Emmerich C. Efficient and intuitive teaching of redundant robots in task and configuration space. Bielefeld: Universität Bielefeld; 2016.

    A major goal of current robotics research is to enable robots to become co-workers that learn from and collaborate with humans efficiently. This is of particular interest for small and medium-sized enterprises, where small batch sizes and frequent changes in production needs demand high flexibility in the manufacturing processes. A commonly adopted approach to accomplish this goal is the utilization of recently developed lightweight, compliant and kinematically redundant robot platforms in combination with state-of-the-art human-robot interfaces. However, the increased complexity of these robots is not well reflected in most interfaces, as the work at hand points out. Plain kinesthetic teaching, a typical attempt to enable lay users to program a robot by physically guiding it through a motion demonstration, not only imposes a high cognitive load on the tutor, particularly in the presence of strong environmental constraints; it also neglects the possible reuse of (task-independent) constraints on the redundancy resolution, as these have to be demonstrated repeatedly or modeled explicitly, reducing the efficiency of these methods when targeted at non-expert users. In contrast, this thesis promotes a different view, investigating human-robot interaction schemes not only from the learner's but also from the tutor's perspective. A two-staged interaction structure is proposed that enables lay users to transfer their implicit knowledge about task and environmental constraints incrementally and independently of each other to the robot, and to reuse this knowledge by means of assisted programming controllers. In addition, a path-planning approach is derived by properly exploiting the knowledge transfer, enabling autonomous navigation in a possibly confined workspace without any cameras or other external sensors. All derived concepts are implemented and evaluated thoroughly on a system prototype utilizing the 7-DoF KUKA Lightweight Robot IV. Results of a large user study conducted in the context of this thesis attest that the staged interaction reduces the complexity of teaching redundant robots and show that teaching redundancy resolutions is feasible even for non-expert users. Utilizing properly tailored machine learning algorithms, the proposed approach is completely data-driven. Hence, apart from a required forward kinematic mapping of the manipulator, the entire approach is model-free, allowing the derived concepts to be implemented on a variety of currently available robot platforms.
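
    The "redundancy resolution" the abstract refers to can be grounded in the classical nullspace-projection scheme for kinematically redundant arms: a primary task-space velocity is tracked via the Jacobian pseudoinverse, while a secondary, task-independent joint-velocity preference (such as one taught by a user) is projected into the Jacobian's nullspace. The sketch below shows that textbook scheme, not the thesis's specific data-driven implementation; the Jacobian is a placeholder.

```python
# Classical nullspace redundancy resolution for a 7-DoF arm (textbook scheme,
# not the thesis's data-driven method): q_dot = J+ x_dot + (I - J+ J) q_dot_0.
import numpy as np

def redundant_velocity(J: np.ndarray, x_dot: np.ndarray,
                       q_dot_pref: np.ndarray) -> np.ndarray:
    """Track x_dot in task space; realize q_dot_pref in the nullspace."""
    J_pinv = np.linalg.pinv(J)
    nullspace = np.eye(J.shape[1]) - J_pinv @ J  # projector onto ker(J)
    return J_pinv @ x_dot + nullspace @ q_dot_pref

J = np.random.randn(6, 7)   # placeholder 6x7 Jacobian for a 7-DoF manipulator
x_dot = np.array([0.0, 0.0, 0.05, 0.0, 0.0, 0.0])  # desired tool velocity
q_dot_pref = np.zeros(7)    # e.g. a taught, task-independent preference
print(redundant_velocity(J, x_dot, q_dot_pref))
```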

    Cellulo: Tangible Haptic Swarm Robots for Learning

    Robots are steadily becoming one of the significant 21st-century learning technologies that aim to improve education within both formal and informal environments. Such robots, called Robots for Learning, have so far been utilized as constructionist tools or social agents that aid learning from distinct perspectives. This thesis presents a novel approach to Robots for Learning that explores new added values by investigating uses for robots in educational scenarios beyond those commonly tackled: we develop a platform from scratch to be "as versatile as pen and paper", namely composed of easy-to-use objects that feel like they belong in the learning ecosystem while being seamlessly usable across many activities that help teach a variety of subjects. Following this analogy, we design our platform as many low-cost, palm-sized tangible robots that operate on printed paper sheets, controlled by readily available mobile computers such as smartphones or tablets. From the learners' perspective, our robots are thus physical and manipulable points of hands-on interaction with learning activities, where they play the role of both abstract and concrete objects that are otherwise not easily represented. We realize our novel platform in four incremental phases, each consisting of a development stage and multiple subsequent validation stages. First, we develop accurately positioned tangibles, characterize their localization performance, and test the learners' interaction with our tangibles in a playful activity. Second, we integrate mobility into our tangibles to make them full-blown robots, characterize their locomotion performance, and test the emerging notion of moving vs. being moved in a learning activity. Third, we enable haptic feedback capability on our robots, measure their range of usability, and test them within a complete lesson that highlights this newly developed affordance. Fourth, we develop the means of building swarms with our haptic-enabled tangible robots and test the final form of our platform in a lesson co-designed with a teacher. Our effort thus includes the participation of more than 370 child learners over the span of these phases, leading to initial insights into this novel Robots for Learning avenue. Besides its main contributions to education, this thesis further contributes to a range of research fields related to our technological developments, such as positioning systems, robotic mechanism design, haptic interfaces, and swarm robotics.
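
    To illustrate the platform architecture (palm-sized robots localizing on printed sheets, commanded from a mobile device), here is a hypothetical sketch; the platform's actual API is not shown in the abstract, so all names below are assumptions, with the transport layer omitted.

```python
# Hypothetical sketch of commanding swarm robots to poses in the printed
# sheet's coordinate frame; names are assumptions, transport layer omitted.
from dataclasses import dataclass

@dataclass
class PaperPose:
    x_mm: float       # position on the printed activity sheet, millimetres
    y_mm: float
    theta_deg: float  # heading on the sheet

def goto(robot_id: str, pose: PaperPose) -> None:
    """Send a goal pose to one robot (stand-in for the real command channel)."""
    print(f"{robot_id} -> ({pose.x_mm} mm, {pose.y_mm} mm, {pose.theta_deg} deg)")

# A tablet app could drive the whole swarm in the same paper frame; haptic
# feedback would then render forces when learners push a robot off its goal.
for rid in ("cellulo_1", "cellulo_2", "cellulo_3"):
    goto(rid, PaperPose(100.0, 150.0, 90.0))
```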